NLopt returns :FORCED_STOP when objective function has wrong signature (missing gradient arg)


I just faced a small usability issue using NLopt.jl. I believe the issue is common enough (e.g. a similar forum post from Oct 2020) that it may suggest better error catching.

To give a bit of background: I had some prior experience with NLopt last year, then a long pause, and yesterday I started studying a fresh optimization problem from scratch (not starting from a working code template). So I was partly an NLopt newcomer, and I got stuck with a :FORCED_STOP return code.

It took me some hours of instrumenting my cost functions (with prints, logging of parameter values in a global array…) to find the silly cause of the error. Here is a simplified version of my problem which shows the issue: a seemingly innocent, simple unconstrained derivative-free optimization problem:

using NLopt

function obj_badsig(x)  # note: takes a single argument
    J = sum(x.^2)
    return J
end

x0 = [1., 1., 1.]
xmax = [2., 2., 2.]
nx = length(x0)

opt = Opt(:LN_NELDERMEAD, nx)

opt.lower_bounds = -xmax
opt.upper_bounds = xmax

opt.min_objective = obj_badsig
opt.xtol_rel = 1e-4

(minf, minx, ret) = optimize(opt, x0)

This yields (0.0, [1.0, 1.0, 1.0], :FORCED_STOP). And in fact the objective function is never called.
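
In hindsight, a quick check would have exposed the problem: calling the objective directly with the two arguments that NLopt.jl passes to callbacks immediately raises a MethodError (a sketch; here similar(x0) just stands in for the gradient buffer):

obj_badsig(x0, similar(x0))  # MethodError: obj_badsig only accepts one argument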

And the solution is to reread the NLopt.jl README more carefully: a gradient argument is required in the objective's signature, even when working with gradient-free algorithms:

function obj(x::Vector, grad::Vector)
    J = sum(x.^2)
    return J
end
which works.
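
For gradient-based algorithms the same two-argument signature is used, and the README's convention is to fill grad in place only when it is non-empty. A minimal sketch for this particular objective:

function obj_grad(x::Vector, grad::Vector)
    if length(grad) > 0
        grad .= 2 .* x   # in-place gradient of sum(x.^2)
    end
    return sum(x.^2)
end

For a derivative-free algorithm like :LN_NELDERMEAD, grad is simply empty and the branch is never taken.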

So my question is: is there a better way for NLopt.jl to catch wrong signatures of objective (and constraint) functions and report a more informative error message than :FORCED_STOP, for absent-minded users like me? One possible approach is sketched below.
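
For instance (just a sketch of one possible approach, not an existing NLopt.jl API), the code that registers the callback could verify the method signature up front with hasmethod and fail early with a clear message; check_objective_signature is a hypothetical helper:

function check_objective_signature(f)
    # Hypothetical guard: fail early if f cannot accept (x, grad)
    if !hasmethod(f, Tuple{Vector{Float64}, Vector{Float64}})
        error("NLopt objective must accept two arguments (x::Vector, grad::Vector)")
    end
    return f
end

opt.min_objective = check_objective_signature(obj_badsig)  # errors immediately, before optimize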


Hello @stevengj, do you believe I should open an issue on this topic of uncaught wrong signatures? Or is it just a matter of requiring users to read NLopt.jl's docs more carefully?


NLopt catches all errors and returns :FORCED_STOP. The right fix would be to communicate which error caused the :FORCED_STOP.
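
In the meantime, a user-side workaround (again just a sketch, not part of NLopt.jl) is to wrap the callback so the underlying exception is logged before NLopt converts it into :FORCED_STOP:

function debuggable(f)
    # Wrap a callback so any exception it raises is logged before
    # NLopt swallows it and forces a stop
    return (x, grad) -> begin
        try
            f(x, grad)
        catch err
            @error "NLopt callback raised an exception" exception = (err, catch_backtrace())
            rethrow()
        end
    end
end

opt.min_objective = debuggable(obj_badsig)  # the MethodError is now visible in the log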