I am not sure you are aware of the possible pitfalls: multivariate methods can break down in surprising ways in 1D and can easily yield suboptimal performance. Optim also has dedicated univariate methods such as GoldenSection(); see the Optim.jl documentation on univariate optimization.
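If the problem really is one-dimensional, the bracketed univariate interface is usually the simpler route. A minimal sketch (the bracket -5..5 is an arbitrary choice for illustration):

using Optim

# Derivative-free univariate minimization over a bracket [lower, upper];
# Brent() is the default method, GoldenSection() is the alternative mentioned above.
res = Optim.optimize(abs2, -5.0, 5.0, GoldenSection())
Optim.minimizer(res)  # ≈ 0.0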
That said, you can always write a wrapper like
using Optim

# Wrap the scalar in a 1-element vector so any multivariate method applies,
# then unwrap the converged result.
function univariate_optimize(f, x0, args...; kwargs...)
    opt = Optim.optimize(x -> f(x[1]), [x0], args...; kwargs...)
    @assert Optim.converged(opt)
    Optim.minimizer(opt)[1]
end

univariate_optimize(x -> abs2(x), 1.0, BFGS(); autodiff = :forward)
You just have to decide what to extract from Optim.MultivariateOptimizationResults, or write a conversion routine to Optim.UnivariateOptimizationResults.
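For example, one way to keep more of the result around (the helper name univariate_optimize_full and the NamedTuple fields are my own choices, not an Optim convention):

using Optim

# Variant of the wrapper above that returns several fields of the multivariate
# result instead of only the scalar minimizer.
function univariate_optimize_full(f, x0, args...; kwargs...)
    opt = Optim.optimize(x -> f(x[1]), [x0], args...; kwargs...)
    (minimizer  = Optim.minimizer(opt)[1],
     minimum    = Optim.minimum(opt),
     converged  = Optim.converged(opt),
     iterations = Optim.iterations(opt))
end

univariate_optimize_full(abs2, 1.0, BFGS(); autodiff = :forward)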