Help with global optimizers in NLopt and GalacticOptim

This is a very simple optimization problem, but for some reason I cannot get NLopt to converge:

using GalacticOptim
using NLopt

rosenbrock(x, p) =  (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p  = [1.0, 100.0]
f = OptimizationFunction(rosenbrock)
prob = GalacticOptim.OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])

sol = GalacticOptim.solve(prob, NLopt.GN_CRS2_LM())

I’m trying to use Controlled Random Search (CRS) with local mutation described here, but the program seems to go on and on (for at least 30 minutes) and never outputs an answer (perhaps it is a convergence problem?). Is there an issue with how I’ve set up the problem, or is that algorithm just a bad choice for it?

Specify a tolerance:

sol = GalacticOptim.solve(prob, NLopt.GN_CRS2_LM(), reltol = 1e-6)

See solver options for details.

EDIT: reltol = 1e-15 will give you a proper solution


Perhaps the problem is that the solution is at the upper bound you provided. I don’t have experience with the algorithms you’re using, but the following code finds the solution very quickly.

using Optim
rosenbrock(x, p) =  (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2

function main()
    x0 = zeros(2)
    p  = [1.0, 100.0]
    obj = x -> rosenbrock(x, p)
    lb = fill(-100.0, 2)
    ub = fill(100.0, 2)
    # SAMIN needs box constraints plus an iteration limit
    Optim.optimize(obj, lb, ub, x0, SAMIN(), Optim.Options(iterations = 10^6))
end
main()
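
To inspect the result programmatically rather than just printing it, the usual Optim accessors apply (a quick sketch; res is just an illustrative name):

res = main()
Optim.minimizer(res)   # best point found, should be close to [1.0, 1.0]
Optim.minimum(res)     # objective value at that point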

You didn’t specify any termination condition.

(This is pretty hard to do well for non-convex global optimization: it is often impossible to know for certain whether you have converged to a given tolerance. I usually just set an upper bound on the number of function evaluations and keep doubling it until I run out of patience or the answer stops changing for a long while.)
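
For the example at the top of the thread, that strategy might look roughly like this (a sketch; I’m assuming GalacticOptim forwards a maxiters keyword to NLopt’s evaluation limit, so check the solver options page):

sol = GalacticOptim.solve(prob, NLopt.GN_CRS2_LM(), maxiters = 10_000)
# if the answer is still changing, double the budget and rerun
sol = GalacticOptim.solve(prob, NLopt.GN_CRS2_LM(), maxiters = 20_000)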


Is there a nice way we can query NLopt for which conditions need to be given for each optimizer? I know that some algorithms require bound constraints, others require a termination condition, etc., but I don’t know of a nice way to throw users an error for this case.

Basically every optimizer prefers that you specify a termination condition. (Some of them will halt automatically when machine precision is reached, but this isn’t guaranteed.)

(The global optimizers also require bound constraints, and will throw an error if you don’t supply them, but these are not termination conditions.)
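
If you drop down to the low-level NLopt.jl interface, both requirements are explicit in the setup (a minimal sketch for the Rosenbrock example above, with illustrative values):

using NLopt

opt = Opt(:GN_CRS2_LM, 2)
opt.lower_bounds = [-1.0, -1.0]   # global algorithms need finite box bounds
opt.upper_bounds = [1.0, 1.0]
opt.maxeval = 10_000              # termination condition: stop after this many evaluations
# gradient-free algorithm, so the grad argument is ignored
opt.min_objective = (x, grad) -> (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
minf, minx, ret = NLopt.optimize(opt, zeros(2))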


I assumed there would be default termination conditions. Aren’t there defaults?

No. What defaults would you pick?

Say 1000 iterations, tolerances of 1e-8, as in Optim.

I would find it pretty unexpected if my optimizer suddenly quit after 1000 iterations… that choice seems very problem dependent, and I’d prefer that the user give it explicitly.

A default tolerance of 1e-8 ≈ sqrt(eps) is more reasonable, but a little thought will reveal that this will essentially never terminate for general nonconvex global optimization in more than 1 variable, since you would essentially have to search the entire parameter space to that precision to ensure you have found the global optimum to 8 significant digits. (Hopefully you find it sooner than that, but you can’t be sure that any given local optimum is global so it is hard to decide when to stop.)


So you terminate after a certain number of iterations, rather than setting a target tolerance. I tried that approach with my real application, but the output was very disappointing. I am minimizing a norm that apparently has many local minima. With the local methods in Optim, after trying many initial points, I have reached an objective value of 1e-7; with CRS2-LM, running overnight for 8 hours, I only reached something like 200.0. Following the suggestion on the NLopt website of using the output of the global optimizer as the starting point of a local optimizer, I do not reach that minimum of 1e-7, but another point with a larger objective value.

Do you have any tips? Perhaps the initial box for the global optimizer is too large? Perhaps I should try another global optimizer?

As an aside, I suppose those global optimizers have “long memory” in the sense that they remember the best point reached, as opposed to local optimizers that only remember the last point they reached. Is that so?

If local optimizers do well on your problem, I would tend to recommend a multistart algorithm like MLSL for global optimization.
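
For the toy problem at the top of the thread, that could look roughly like this (a sketch; I’m assuming GalacticOptim’s NLopt wrapper accepts the local optimizer via a local_method keyword, and that an AD backend must be attached to the OptimizationFunction for a gradient-based local method, so check the docs):

using GalacticOptim, NLopt

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
f = OptimizationFunction(rosenbrock, GalacticOptim.AutoForwardDiff())
prob = GalacticOptim.OptimizationProblem(f, zeros(2), [1.0, 100.0], lb = [-1.0, -1.0], ub = [1.0, 1.0])

# MLSL runs many local searches from quasi-random starting points and keeps the best result
sol = GalacticOptim.solve(prob, NLopt.G_MLSL_LM(), local_method = NLopt.LD_LBFGS(), maxiters = 10_000)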
