Optimality gap of Global solver in NLopt

Hi all,

I have an extremely hard NLP that I want to solve via @NLopt. I can get the code running, but there is no output until the algorithm terminates. Now I have two questions:

  • Is it possible to extract the optimality gap from the global-optimization solvers in this package?

  • If not, is it possible to at least see the iterations? (It seems that NLopt has a “verbose” parameter that controls what is displayed in MATLAB, but not in Julia?)

Thanks a lot.

No. Its global-optimization methods do not compute a bound on the optimum (other than the value that they return).

Sure, just put a println statement or some other output into your objective function.
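For instance, a minimal sketch with NLopt.jl (the quadratic objective, bounds, and the GN_DIRECT choice here are placeholders, not from this thread):

```julia
using NLopt

# Placeholder objective; `grad` is ignored because gradient-free
# global algorithms such as GN_DIRECT never request a gradient.
function myobjective(x::Vector, grad::Vector)
    f = (x[1] - 1)^2 + (x[2] + 2)^2
    println("f(", x, ") = ", f)   # print every evaluation
    return f
end

opt = Opt(:GN_DIRECT, 2)          # global algorithm over 2 variables
opt.lower_bounds = [-5.0, -5.0]
opt.upper_bounds = [5.0, 5.0]
opt.maxeval = 1000                # stop after 1000 evaluations
opt.min_objective = myobjective

minf, minx, ret = optimize(opt, [0.0, 0.0])
```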

Thanks for your reply.

Thanks, good to know.

But this would give me every objective function evaluation. I am guessing that in each iteration of the algorithm many evaluations are performed to select a good point or local optimum before continuing to the next iteration. Is it possible to get only the values of the “good” solution in each iteration?

FWIW, optimality gaps are computed by most solvers that are accessible from JuMP. For global non-convex nonlinear optimization you might give Couenne, Alpine, or Baron a try.

Thanks, but JuMP is not practical for me as the nonlinearity in the objective function comes from integration over a parameter, which is not a valid type of input function for JuMP.

Sure. Just have a variable keeping track of the best objective function value so far, and only print when your return value is better than this.
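A sketch of that best-so-far pattern, again with a placeholder objective and bounds:

```julia
using NLopt

best = Ref(Inf)   # best objective value seen so far

function myobjective(x::Vector, grad::Vector)
    f = (x[1] - 1)^2 + (x[2] + 2)^2   # placeholder objective
    if f < best[]                      # only report improvements
        best[] = f
        println("improved to ", f, " at x = ", x)
    end
    return f
end

opt = Opt(:GN_DIRECT, 2)
opt.lower_bounds = [-5.0, -5.0]
opt.upper_bounds = [5.0, 5.0]
opt.maxeval = 1000
opt.min_objective = myobjective
optimize(opt, [0.0, 0.0])
```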


Maybe I am not asking my question correctly, because what you are suggesting sounds like I would have to adapt the code of the solvers themselves. Let me try again.
Currently, it seems to me that NLopt doesn’t let you see the iterations the solvers perform. The iterations can be due to branching or whatever method the solver is using. Is that right, or can I set a parameter value in NLopt to see the iterations?

Hope this time the question is clear.

Right, there is no API to expose the internals of the solver iterations (which depend on the algorithm). (But in most algorithms an improvement in the objective function essentially corresponds to an iteration.)

Why do you care? Outputting the current best value of the objective function is sufficient to give a progress display, and unless you understand the details of the algorithm the internal iteration counters won’t mean much to you…

As the convergence to the optimal value/solution is asymptotic, and there is no upper bound, it is good to check the improvement we get in each iteration to at least have some rough idea of whether we are close to the optimum. Furthermore, it is also necessary when we want to compare algorithms and assess their performance. Some may start out fast and then converge slowly, while others may converge at a more even pace.

i.e. you just want to watch progress, so printing out the objective value on every call (or just the best value so far) should suffice.

(Personally, I prefer to print the result of every call, so that I can see if the algorithm is spinning its wheels without making progress.)

Furthermore, it is also necessary when we want to compare algorithms to know their performance

For comparing algorithms especially, it is meaningless to compare internal iteration counters because they mean completely different things in different algorithms. You just want to compare the best objective value so far versus the number of calls, and again this can be done easily in your objective function.
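A sketch of such a comparison; the two algorithms, the objective, and the bounds are illustrative only:

```julia
using NLopt

# Build an objective that records (call count, best value so far),
# so different algorithms can be compared on equal terms.
function make_traced_objective()
    ncalls = Ref(0)
    best = Ref(Inf)
    trace = Tuple{Int,Float64}[]
    function obj(x::Vector, grad::Vector)
        ncalls[] += 1
        f = (x[1] - 1)^2 + (x[2] + 2)^2   # placeholder objective
        if f < best[]
            best[] = f
            push!(trace, (ncalls[], f))
        end
        return f
    end
    return obj, trace
end

for alg in (:GN_DIRECT, :GN_CRS2_LM)   # two global algorithms to compare
    obj, trace = make_traced_objective()
    opt = Opt(alg, 2)
    opt.lower_bounds = [-5.0, -5.0]
    opt.upper_bounds = [5.0, 5.0]
    opt.maxeval = 500
    opt.min_objective = obj
    optimize(opt, [0.0, 0.0])
    println(alg, " best-so-far vs. calls: ", trace)
end
```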


I know Optim has a show_trace option, but to some extent I agree with Steven here: it’s much, much easier for the user to log these values than to bake it into the software. It’s literally just closing over a couple of arrays and pushing to them in the objective call. What this does not allow you to do is monitor internal things such as step lengths, search directions, etc. It just gets hairy very quickly to maintain this, and it’s probably not a very common user request either way. I probably lean towards supporting it, but I certainly also understand why someone would choose not to. NLopt also has a very diverse set of algorithms, and it’s not clear that they share a lot of features, so that makes the maintenance burden even worse.
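For reference, that closure-over-arrays pattern might look like this with NLopt.jl (names and the objective are illustrative):

```julia
using NLopt

# Arrays captured by the objective; every evaluation gets pushed here.
fvals = Float64[]
xhist = Vector{Float64}[]

function logged_objective(x::Vector, grad::Vector)
    f = (x[1] - 1)^2 + (x[2] + 2)^2   # placeholder objective
    push!(fvals, f)
    push!(xhist, copy(x))   # copy: the solver may reuse the buffer
    return f
end

opt = Opt(:GN_DIRECT, 2)
opt.lower_bounds = [-5.0, -5.0]
opt.upper_bounds = [5.0, 5.0]
opt.maxeval = 200
opt.min_objective = logged_objective
optimize(opt, [0.0, 0.0])

# fvals and xhist now hold the full evaluation history for later analysis.
```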


@stevengj and @pkofod: Thank you for your replies.