Optimization of a univariate function using Optim.jl

Dear all,

I am trying to deepen my knowledge of the Optim.jl package and, in the near future, of Optimization.jl.

In the course of my research, I have developed a method for estimating the noise in a signal. This methodology involves solving a set of univariate optimization problems.

My first approach was to use Brent's method, since it is the recommended approach for this kind of problem. Below is an MWE (adapted from the Optim.jl documentation) that is representative of the difficulties I am struggling with.

using Optim

f(x) = 2x^2 + 3x + 1

# Bounded univariate optimization (Brent's method is the default) - success
res = optimize(f, -2, 1)
xopt = Optim.minimizer(res)

# xopt = -0.75
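
For completeness, the same bounded interface also accepts the solver as an explicit argument; Brent() is the default and GoldenSection() is the other univariate method shipped with Optim.jl, as far as I can tell:

res_b  = optimize(f, -2.0, 1.0, Brent())
res_gs = optimize(f, -2.0, 1.0, GoldenSection())
# both should give xopt ≈ -0.75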

Then I tried to use the Nelder-Mead method, and it failed completely. Here are my attempts.

# First attempt
res2 = optimize(f, [0.]) # Error: f([0.]) hits [0.]^2, which has no method for a Vector

# Second attempt
g(x) = @. 2x^2 + 3x + 1
res3 = optimize(g, [0.]) # Error: g returns a 1-element Vector, but the objective must return a scalar

I think the problem is quite logical, if my understanding is correct: the function to minimize must return a scalar (Float64), yet when using Nelder-Mead the initial point must be a vector. With that in mind, I attempted the following, more esoteric approach.

h(x) = (@. 2x^2 + 3x + 1)[1]
res4 = optimize(h, [0.])
xopt4 = Optim.minimizer(res4)[1]

# xopt4 = -0.7500000000000002

Although this approach is successful, it feels a bit unnatural.
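
An equivalent, slightly more direct variant along the same lines (just writing the objective on the first component of the vector) also works, but it has the same hand-rolled feel:

h2(x) = 2x[1]^2 + 3x[1] + 1
res5 = optimize(h2, [0.])
# Optim.minimizer(res5)[1] ≈ -0.75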

My question is: is there a more idiomatic way of minimizing a univariate function using NelderMead() or any other optimization technique (LBFGS, …)?

Thank you !

I guess that in the univariate case there are more straightforward optimization methods than Nelder-Mead, essentially line search methods: LineSearches.jl.
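
If you want to stay within Optim.jl, a gradient-based solver such as LBFGS (which uses a line search from LineSearches.jl internally) also handles the one-element-vector formulation. A rough sketch, assuming forward-mode autodiff for the gradient (names are just for the example):

using Optim, LineSearches

f1(x) = 2x[1]^2 + 3x[1] + 1   # scalar-valued objective of a 1-element vector

res = optimize(f1, [0.0], LBFGS(linesearch = LineSearches.BackTracking());
               autodiff = :forward)
# only(Optim.minimizer(res)) ≈ -0.75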

What about

res2 = optimize(f ∘ only, [0.])  # ?
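
Base.only returns the single element of a one-element collection (and errors otherwise), so f ∘ only is exactly the vector-in, scalar-out objective Nelder-Mead expects:

res2 = optimize(f ∘ only, [0.])
xopt2 = only(Optim.minimizer(res2))
# xopt2 ≈ -0.75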

Of course you’re right, but I wanted to test the interface to see to what extent I can write generic code for both univariate and multivariate optimization.

Thanks for pointing out LineSearches.jl.