In the interest of user-friendliness, does anyone know a way to overload ‘optimize’ so that it works conveniently for scalar input without crashing?
[please note I am not asking for opinions as to how good/bad an idea it would be - let’s assume I am willing to undertake any ‘risks’].
Of course, I can get it to work if I redefine my objective function to take a one-dimensional Array argument, wrap my starting point the same way, and then tell optimize that I do not want Nelder-Mead (see the sketch below). But let’s assume I would like to use optimize frequently and conveniently for scalar input and don’t want to go through those steps every time; I would like (for my own use) to make the routine more user-friendly in this regard. Can anyone suggest a convenient way to do this?
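For concreteness, the manual route looks something like this (just a sketch with an illustrative objective, not my actual problem):

using Optim

f(x) = abs2(x[1] - 3.0)          # objective takes a 1-element Array
res = optimize(f, [1.0], BFGS()) # starting point wrapped in an Array
Optim.minimizer(res)[1]          # unwrap the scalar answer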
[PS: I don’t mean the Brent’s method solution, which apparently works - I want something fast.]
I am not sure you are aware of the possible pitfalls: curiously, multivariate methods can break down in surprising ways in 1D and can easily yield suboptimal performance. Optim also has GoldenSection(); see
https://julianlsolvers.github.io/Optim.jl/stable/#user/minimization/#minimizing-a-univariate-function-on-a-bounded-interval
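For a bounded univariate problem, the call is simply (a minimal sketch; the interval and objective are illustrative):

using Optim

# Univariate minimization on a bounded interval; Brent() is the default,
# GoldenSection() can be requested explicitly.
res = optimize(x -> (x - 2)^2, 0.0, 5.0, GoldenSection())
Optim.minimizer(res)  # ≈ 2.0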
That said, you can always write a wrapper like
using Optim

function univariate_optimize(f, x0, args...; kwargs...)
    # Wrap the scalar objective and starting point as 1-dimensional,
    # then unwrap the result.
    opt = Optim.optimize(x -> f(x[1]), [x0], args...; kwargs...)
    @assert Optim.converged(opt)
    Optim.minimizer(opt)[1]
end
univariate_optimize(x -> abs2(x), 1.0, BFGS(); autodiff = :forward)
You just have to make a decision about what to extract from Optim.MultivariateOptimizationResults, or write a conversion routine to Optim.UnivariateOptimizationResults.
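For example, if you also want the objective value, a variant could return both (a sketch; the name univariate_optimize2 and the named-tuple shape are just one choice):

using Optim

# Variant returning both the scalar minimizer and the minimum value.
function univariate_optimize2(f, x0, args...; kwargs...)
    opt = Optim.optimize(x -> f(x[1]), [x0], args...; kwargs...)
    @assert Optim.converged(opt)
    (minimizer = Optim.minimizer(opt)[1], minimum = Optim.minimum(opt))
end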
Thanks - I searched but didn’t see that one.