Is there a way to debug an NLopt optimization?

I am trying to debug my code with the Juno debugger, and I would like to step through part of the optimization line by line. However, when I try @enter, the REPL tells me I can only call that with a function. Is there any way to get a closer look at the inner workings of the optimization? My function works fine on its own, but I am getting strange results when I run it through NLopt.

NLopt is an external C library, so the Juno debugger will not be able to access that code.

(A common cause of problematic convergence in NLopt is an inaccurate gradient … it is easy to get this wrong! … if you are using a gradient-based algorithm, or discontinuities in your function or its derivative if you are using a derivative-free algorithm that internally assumes differentiability.)
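If you do have a hand-coded gradient, one quick sanity check is to compare it against central finite differences. A minimal sketch (the names f, g!, check_gradient, and the step size h are hypothetical, not part of NLopt):

using Printf

# f is the objective, g! writes the analytic gradient into g in place.
# Large discrepancies between the two columns flag a buggy gradient.
function check_gradient(f, g!, x; h = 1e-6)
    g = similar(x)
    g!(g, x)
    for i in eachindex(x)
        xp = copy(x); xp[i] += h
        xm = copy(x); xm[i] -= h
        fd = (f(xp) - f(xm)) / (2h)   # central finite difference
        @printf("i = %3d: analytic = % .6e, finite diff = % .6e\n", i, g[i], fd)
    end
end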


Ah okay, gotcha. Unfortunately I’ve tried all the algorithms, and even the derivative-free ones like COBYLA give me a minimum of 0.0 after 1 or 2 iterations. It’s weird because even if I change or delete huge sections of my function nothing changes, and furthermore when I plug the minimization vector back into the function, I don’t get zero. So I’m not sure what’s going on.

Thank you though!

If you want to calculate gradients, you can use ForwardDiff and DiffResults to generate them:

using ForwardDiff, DiffResults

function nlopt_form(f, xx, gg)
    if length(gg) > 0
        # Gradient-based algorithm: compute value and gradient in one pass
        df = DiffResults.GradientResult(xx)
        df = ForwardDiff.gradient!(df, f, xx)
        gg .= DiffResults.gradient(df)
        return DiffResults.value(df)
    else
        # Derivative-free algorithm: only the value is needed
        return f(xx)
    end
end

min_objective!(opt, (x, grad) -> nlopt_form(f, x, grad))
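For a complete example of the wrapper in use (it relies on the nlopt_form definition above; the quadratic objective and the LD_LBFGS choice are placeholders):

using NLopt, ForwardDiff, DiffResults

f(x) = sum(abs2, x .- 1)                  # placeholder smooth objective
opt = Opt(:LD_LBFGS, 3)                   # any gradient-based algorithm
min_objective!(opt, (x, grad) -> nlopt_form(f, x, grad))
minf, minx, ret = optimize(opt, zeros(3)) # should find minx ≈ [1, 1, 1]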

Thank you, I don’t know if a derivative exists for my function but I will definitely try it out.

Firstly, I am minimizing a function that involves both complex and real arrays, with a lot of operations and definitions that are specific to my problem. I don’t know how to reduce it to an example I can post, and I can’t post the whole function for privacy reasons, so I understand if I cannot get much help on this.

My function seems to be working as expected. When I optimize it to find the minimum, however, I get something like the output below, when I am expecting a small negative value (or really any value other than zero every single time):

 12.126196 seconds (17.56 M allocations: 850.970 MiB, 4.54% gc time)
got 0.0 at [0.0752977, 0.120637, 0.0793536, -0.0835715, -0.17881, -0.421116, -0.0358142, -0.326447, 0.125564, 0.170543, 0.465881, 0.147937, 0.0309477, 0.04438, -0.22023, 0.109589, 0.483408, -0.132619, 0.235039, -0.0263986, -0.164046, 0.00920599, -0.0972819, 0.185368, -0.373212, -0.068289, -0.355533, 0.345202, -0.321877, -0.0296487, 0.0965427, -0.186317, 0.430853, 0.279356, -0.186008, 0.333952, -0.392721, -0.101361, -0.00896927, 0.149327, 0.306786, -0.215247, -0.106167, 0.367069, -0.381538, 0.0530803, -0.040668, 0.0265114, -0.446107, -0.164597, 0.0933045, 0.120752, 0.222864, -0.211655, -0.193802, 0.272776, -0.379973, 0.347921, 0.0671, 0.00366182, 0.0436334, 0.157879, 0.0223028, 0.311515, -0.0437446, 0.317073, 0.142629, -0.371994, -0.0134165, 0.329916, -0.149653, -0.483149, 0.646226, 0.0365353, 0.116795, -0.163173, 0.0900926, 0.388191, 0.211555, 0.191873, 0.0766339, -0.128054] after 1 iterations (returned FORCED_STOP)

Every time. I can delete huge parts of my function and I still get 0.0 after 1 or 2 iterations. I have tried every algorithm. I have tried zeroing out different sections of my function, but nothing I do to the function ever has an effect on the result of the optimization, and the function itself works fine when I check it with minimization vectors from old runs before this problem appeared.

The NLopt code I am running looks like this, with Ef(x::Vector, grad::Vector) being my function:

using NLopt

D = 5
Sigma = D
x0 = randn(5*D^2 + 2) / Sigma
opt2f = Opt(:LN_COBYLA, 5*D^2 + 2)
lower_bounds!(opt2f, -10.0)
upper_bounds!(opt2f, 10.0)
xtol_rel!(opt2f, 1e-6)
min_objective!(opt2f, Ef)
@time (minf, minx, ret) = optimize(opt2f, x0)
numevals = opt2f.numevals # the number of function evaluations
println("got $minf at $minx after $numevals iterations (returned $ret)")

Furthermore, I know that NLopt is generally working, because it accurately minimizes the example problem in the GitHub README and any other problem I pass to it, even a function very similar to the one I have now.

If anyone has any ideas, please help. This has been plaguing me for a month. It could just be the function, in which case I don’t expect much help without a working example, but if something seems off or familiar to anyone about the NLopt result, let me know.

Thanks

Probably it’s something simple like your function not returning the objective value. Does your function have a return statement? Did you try adding a println statement to see what you are returning?


Yeah, I have a return statement, and I’m not quite sure what you mean about the print statement, but I’ve used the @show macro within my function to check that the output is what I want.

I agree it’s probably something ridiculously simple that I haven’t seen. I’ll get a minx from NLopt, which NLopt says minimizes the function so that minf = 0, but when I plug that minx back into my function, I get a number of varying size, but never zero.

All I can suggest is to boil it down to a minimal working example illustrating the problem that you can post.


Okay, I’ll work on that.

It may also be useful to look at the return code ret of the minimizer.
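For instance, reusing the opt2f, x0, and Ef names from the snippet above (a sketch, not tested against your code):

minf, minx, ret = optimize(opt2f, x0)
if ret == :FORCED_STOP
    # NLopt catches exceptions thrown inside the objective and reports
    # FORCED_STOP; calling the objective directly reproduces the error.
    Ef(minx, Float64[])
end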


The ret is what always gives me a FORCED_STOP.

I found out that taking the trace of the matrices I create in my function is what gives the zero after 2 iterations with the FORCED_STOP. Is it possible that if the trace of my complex matrices has an imaginary part, NLopt wouldn’t know how to handle it and would then give an error?

NLopt expects your objective to return a real value. If you return a complex value, there will be an exception that causes a forced stop.
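As a minimal illustration (the 2×2 complex matrix built from an 8-element parameter vector is hypothetical): returning tr(M) of a complex matrix throws when NLopt converts the result to a real number, whereas a real quantity such as real(tr(M * M')) optimizes normally:

using LinearAlgebra, NLopt

function Ef_real(x, grad)
    M = reshape(complex.(x[1:4], x[5:8]), 2, 2)
    # tr(M) alone is Complex and would trigger FORCED_STOP;
    # tr(M * M') is the sum of squared magnitudes, which is real.
    return real(tr(M * M'))
end

opt = Opt(:LN_COBYLA, 8)
xtol_rel!(opt, 1e-6)
min_objective!(opt, Ef_real)
minf, minx, ret = optimize(opt, randn(8))   # converges toward 0 at x = 0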


Thank you. This is my problem.

Note also that this is not a peculiarity of NLopt. “Optimization” of a complex-valued function is not meaningful because the complex numbers are not ordered. So if your objective function is complex-valued then you need to rethink your problem formulation.


Very true, it should not be complex.

When I run an optimization (everything is working now), I get a minf that is much higher than any value of the function I am optimizing. I know this because I have a lot of @show commands printing various intermediate values in my function, and none of those values are anywhere near the minf that NLopt returns. Is there a way to find out what NLopt is considering to be minf?

Probably the smaller values of your objective are from iterations that violate your constraints.
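A small sketch of that effect (the toy problem is hypothetical): minimizing x² subject to x ≥ 1, COBYLA may evaluate infeasible points with smaller objective values, but the reported minf comes from a feasible iterate:

using NLopt

opt = Opt(:LN_COBYLA, 1)
min_objective!(opt, (x, g) -> x[1]^2)
inequality_constraint!(opt, (x, g) -> 1.0 - x[1], 1e-8)  # enforce x >= 1
xtol_rel!(opt, 1e-8)
minf, minx, ret = optimize(opt, [2.0])
# minf ≈ 1.0 at minx ≈ [1.0]; evaluations with x < 1 returned smaller
# values but violated the constraint.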

When I put minx back into my function, I get a reasonable result, but it is consistently a couple of orders of magnitude lower than the minf value NLopt returns for that same minx.

Then your function is not deterministic. NLopt uses the function value that you return.
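For reference, a non-deterministic objective reproduces exactly that symptom; this toy noisy function is purely illustrative:

using NLopt

noisy(x, grad) = sum(abs2, x) + 0.1 * rand()   # randomness makes repeated calls differ

opt = Opt(:LN_COBYLA, 2)
xtol_rel!(opt, 1e-6)
maxeval!(opt, 500)
min_objective!(opt, noisy)
minf, minx, ret = optimize(opt, [1.0, 1.0])
noisy(minx, Float64[])   # generally differs from the minf reported above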