How to set max_iterations for Optim.jl

I have the following code:

    using Optim, LineSearches

    lower = [-20, -20, -20, -20, -20, -20.0, -20, -20, -20, -20, -20, -20]
    upper = [ 20,  20,  20,  20,  20,  20.0,  20,  20,  20,  20,  20,  20]
    initial_x = [-1.4665866297620287, -3.5561543609716884, -5.328280757163652, -5.825432425624137, -4.06758438870819, 1.3365850018520555, 0.34726455461348643, 0.8708697538110506, 1.2180971705802224, 1.077432049937649, 0.20510584981238655, -1.6322406908860976]
    inner_optimizer = BFGS(linesearch=LineSearches.BackTracking(order=3)) # GradientDescent()
    results = optimize(test_initial_condition, lower, upper, initial_x, Fminbox(inner_optimizer), Optim.Options(iterations=10000))
    params = Optim.minimizer(results)

The output is:

     * Status: failure

     * Candidate solution
        Final objective value:     1.750779e-01

     * Found with
        Algorithm:     Fminbox with BFGS

     * Convergence measures
        |x - x'|               = 7.29e-07 ≰ 0.0e+00
        |x - x'|/|x'|          = 7.23e-08 ≰ 0.0e+00
        |f(x) - f(x')|         = 0.00e+00 ≤ 0.0e+00
        |f(x) - f(x')|/|f(x')| = 0.00e+00 ≤ 0.0e+00
        |g(x)|                 = 2.36e+01 ≰ 1.0e-08

     * Work counters
        Seconds run:   10  (vs limit Inf)
        Iterations:    1000
        f(x) calls:    255906
        ∇f(x) calls:   16789
This is not too bad, but I know that the minimum is zero and the solver stops at 0.175, so the result could be better. The solver stops after 1000 iterations. Why is it ignoring the option Optim.Options(iterations=10000)?

The complete example can be found at: https://github.com/ufechner7/KiteViewer/blob/sim/test/test_optim.jl

This seems like a problem of local minima. As you can see from the convergence report, the objective function stopped decreasing along the search direction, so the algorithm stopped.

I don't think that this is the case. If I use the result as the starting point for a new optimization, the result improves (see the sketch below). Furthermore, it says: * Status: failure.

I don't think it would indicate a failure if it had merely found a local minimum.
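
For illustration, the restart I mean looks roughly like this (a minimal sketch, assuming the same test_initial_condition, bounds, inner_optimizer and results as in the first post):

    # restart from the candidate solution of the previous run
    restart_x = Optim.minimizer(results)
    results2  = optimize(test_initial_condition, lower, upper, restart_x,
                         Fminbox(inner_optimizer), Optim.Options(iterations=10000))
    @show Optim.minimum(results2)    # smaller than Optim.minimum(results) in my runs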

The BFGS method uses an approximation of the Hessian matrix if it is not provided. If you feed the result in again, this matrix is obviously reset, so the algorithm may find a new search direction with the fresh Hessian estimate (I believe it starts from the identity matrix). I think it failed because the norm of the gradient is not small, but along the search direction the algorithm cannot find an x' such that f(x') is lower than f(x). There are some references about such convergence failures here, but they are not clear to me.
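
You can check this yourself by evaluating the gradient at the candidate solution; a small sketch, assuming test_initial_condition is smooth and ForwardDiff-compatible (otherwise use a finite-difference estimate):

    using ForwardDiff, LinearAlgebra

    x_cand = Optim.minimizer(results)
    g = ForwardDiff.gradient(test_initial_condition, x_cand)
    @show norm(g, Inf)    # large, consistent with |g(x)| ≈ 2.36e+01 in the report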

I am now using the solver :LN_BOBYQA from NLopt. With this solver I can achieve a final objective value as low as 0.0001699; it then terminates with ROUNDOFF_LIMITED. But it is slow, it needs about 10 minutes.

See: https://github.com/ufechner7/KiteViewer/blob/sim/test/test_nlopt.jl
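
For reference, a minimal sketch of how such a BOBYQA setup can look with NLopt.jl; the tolerance below is just a placeholder, the actual settings are in the linked file:

    using NLopt

    opt = Opt(:LN_BOBYQA, length(initial_x))
    opt.lower_bounds  = lower
    opt.upper_bounds  = upper
    opt.xtol_rel      = 1e-12                        # placeholder tolerance
    # BOBYQA is derivative-free; the grad argument is ignored
    opt.min_objective = (x, grad) -> test_initial_condition(x)

    (minf, minx, ret) = NLopt.optimize(opt, initial_x)
    @show minf ret                                   # ret can be :ROUNDOFF_LIMITED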

It's a pity that no solver from Optim.jl is able to achieve this accuracy.

I think that this can be achieved by combining the outer_iterations = ... option with iterations = .... Here is my example:

    optimize(myfunction, theta_lower, theta_upper, theta_initial,
             Fminbox(),
             Optim.Options(outer_iterations = 1500,
                           iterations = 10000,
                           show_trace = true,
                           show_every = 50,
                           f_tol = my_f_tol,
                           g_tol = my_g_tol))
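
If I understand it correctly, with Fminbox the iterations option limits each inner optimization, while outer_iterations caps the outer (barrier) loop, which I believe defaults to 1000 and is what cut the run off above. Adapted to the call from the question, it would look roughly like this:

    results = optimize(test_initial_condition, lower, upper, initial_x,
                       Fminbox(inner_optimizer),
                       Optim.Options(outer_iterations = 1500,
                                     iterations = 10000))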