Is there a way to debug an NLopt optimization?

Without a minimal working example, it’s not possible to provide you with much more help.

Okay, that’s fair. I was just confused because my function should be deterministic and behaves deterministically in every other way, so I’ll assume this discrepancy means something is wrong with my function if you haven’t heard of this happening before. It just makes no sense to me where this minf is coming from when I can plug minx back in and get a different value. I’ll work on a working example, but I don’t think I can come up with a minimal working example that exhibits this strange behavior without giving you my whole function, which I cannot do.

What value do you get if you call Ef(minx) (or Ef at any other fixed point, as long as it’s the same one each time) several times in a row? Do you get the same result?
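For instance, something along these lines (Ef and minx being your objective and the minimizer NLopt returned):

```julia
# Evaluate the objective repeatedly at the same point; for a
# deterministic function every entry should be identical.
vals = [Ef(minx) for _ in 1:10]
all(==(vals[1]), vals)   # should return true
```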

Also, at 127 degrees of freedom, your parameter space is pretty big. Can you reduce the number of free parameters to make it easier for yourself to check the intermediate steps? Can you tell us what operations you are doing within your objective function?

When I call Ef(minx) I get the same return value every time. My function is very consistent when I call it with different vectors, even older vectors whose output I already know, and they are consistent. I just don’t know where minf is coming from. The Ef(minx) values are reasonable; it’s just minf that I can’t trace.

This is actually the smallest number of dof I will have; unfortunately, I’ll only increase it from here. My function works with Kronecker products of complex arrays and anti-Hermitian arrays and their conjugates, and I am then always dealing with the traces of these arrays, which are always real.
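For concreteness, here is a toy version of the kind of quantity I mean (the matrices are random stand-ins, not my actual operators):

```julia
using LinearAlgebra

# K1 ⊗ K2 and K3 ⊗ K4 are Hermitian when each Ki is anti-Hermitian,
# so the trace of their product is real in exact arithmetic.
M1, M2, M3, M4 = (rand(ComplexF64, 2, 2) for _ in 1:4)
K1, K2, K3, K4 = M1 - M1', M2 - M2', M3 - M3', M4 - M4'
t = tr(kron(K1, K2) * kron(K3, K4))
@show t   # real up to floating-point roundoff in the imaginary part
```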

As far as I understand from the docs, when there are only lower and upper bounds, as in the OP’s question, the function is not evaluated outside the constraints.


It looks like you solved the problem in this post: what was it?

Essentially, I was passing a complex-valued return value to the optimizer. The imaginary parts were something like 10^-15, but I needed to apply real() to the return value before NLopt could handle it, and the same for the constraints. As for the problem of getting a different minf than when I plugged minx back into the function, I rewrote my problem so that the constraints were satisfied automatically, and the discrepancy went away.
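In code, the fix amounted to something like this. Ef_raw is a hypothetical stand-in for my actual objective (which I can’t share), and the algorithm, bounds, and dimensions here are placeholders:

```julia
using NLopt, LinearAlgebra

# Hypothetical stand-in for my objective: mathematically real, but
# computed as a Complex whose imaginary part is floating-point roundoff.
function Ef_raw(x)
    A = reshape(complex.(x[1:4], x[5:8]), 2, 2)
    B = reshape(complex.(x[9:12], x[13:16]), 2, 2)
    K, L = A - A', B - B'               # anti-Hermitian
    return tr(kron(K, K) * kron(L, L))  # Hermitian * Hermitian => real trace
end

# The fix: NLopt expects a real scalar, so strip the spurious imaginary
# part from the return value (and likewise from any constraints).
objective(x::Vector, grad::Vector) = real(Ef_raw(x))

opt = Opt(:LN_COBYLA, 16)               # derivative-free, bounds only
opt.lower_bounds = fill(-1.0, 16)
opt.upper_bounds = fill(1.0, 16)
opt.xtol_rel = 1e-8
opt.min_objective = objective
minf, minx, ret = optimize(opt, 0.1 .* ones(16))
```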


I will say, I am still getting the minf discrepancy problem, but the discrepancy is now much smaller: the minf reported by NLopt is very close to the minimum I get when I plug minx back into the function I optimize. I believe the reported minf might come from a previous iteration, because it’s always a little larger than the value I get by plugging minx back in.
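A quick way to check that guess, sketched against the toy setup above, is to log every value the optimizer actually evaluates and compare with the reported minf:

```julia
# Record every objective value NLopt evaluates; if the reported minf
# shows up in this log, it really was produced by some earlier iterate.
const evals = Float64[]
function logged_objective(x::Vector, grad::Vector)
    v = real(Ef_raw(x))
    push!(evals, v)
    return v
end

opt.min_objective = logged_objective
minf, minx, ret = optimize(opt, 0.1 .* ones(16))
@show minf minimum(evals) real(Ef_raw(minx))
```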