All I can suggest is to boil it down to a minimal working example illustrating the problem that you can post.
Okay, I’ll work on that.
It may also be useful to look at the return code `ret` of the minimizer.
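For reference, a minimal sketch of what that looks like in NLopt.jl (the algorithm choice and the toy quadratic objective here are just placeholders, not your problem):

```julia
using NLopt

opt = Opt(:LN_COBYLA, 2)          # derivative-free algorithm, 2 parameters
opt.xtol_rel = 1e-8
opt.min_objective = (x, grad) -> (x[1] - 1.0)^2 + (x[2] + 2.0)^2

(minf, minx, ret) = optimize(opt, [0.0, 0.0])
@show ret                          # e.g. :XTOL_REACHED on success, :FORCED_STOP if the objective threw
```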
The `ret` is what always gives me a forced_stop.
I found out that taking the trace of the matrices that I create in my function is what gives the zero after 2 iterations with the forced stop. Is it possible that if the trace of my complex matrices has an imaginary part, NLopt wouldn’t know how to handle it and then gives an error?
NLopt expects your objective to return a real value. If you return a complex value, there will be an exception that causes a forced stop.
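For example (a hedged sketch, the matrix below is invented just to show the pattern): even when the trace is real in exact arithmetic, `tr()` still returns a `Complex{Float64}`, so you need to strip the imaginary part before returning.

```julia
using LinearAlgebra

# Sketch of an objective built from complex matrices: the trace is real
# mathematically, but the returned value is still Complex{Float64},
# which NLopt cannot handle.
function objective(x::Vector, grad::Vector)
    A = x[1] * [0.0 1.0im; 1.0im 0.0] + x[2] * I   # some complex matrix built from the parameters
    val = tr(A * A')                               # Complex{Float64} with a vanishing imaginary part
    return real(val)                               # hand NLopt a plain Float64
end
```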
Thank you. This is my problem.
Note also that this is not a peculiarity of NLopt. “Optimization” of a complex-valued function is not meaningful because the complex numbers are not ordered. So if your objective function is complex-valued then you need to rethink your problem formulation.
Very true, it should not be complex.
When I run an optimization (everything is working now), I get a `minf` which has a much higher value than the value of the function I am optimizing. I know this because I have a lot of `@show` commands showing various values of equations in my function. None of those values are anywhere near the value of `minf` that NLopt returns, and I was wondering if there is a way to find out what NLopt is considering to be `minf`.
Probably the smaller values of your objective are from iterations that violate your constraints.
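One way to check that (a sketch; `Ef` and `mycon` below are stand-ins for whatever objective and constraint you actually use) is to log every evaluation together with its constraint value, so you can see which of the small objective values were actually feasible:

```julia
using NLopt

Ef(x) = sum(abs2, x)                 # placeholder for your real objective
mycon(x) = 1.0 - x[1]                # placeholder constraint, feasible when <= 0

opt = Opt(:LN_COBYLA, 2)
opt.min_objective = (x, grad) -> begin
    f, c = Ef(x), mycon(x)
    @show f, c                       # every value NLopt sees, with its constraint value
    return f
end
inequality_constraint!(opt, (x, grad) -> mycon(x), 1e-8)
(minf, minx, ret) = optimize(opt, [2.0, 2.0])
```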
When I put `minx` back into my function, I get a reasonable result, but it is consistently a couple of orders of magnitude lower than the `minf` value NLopt returns for that same `minx`.
Then your function is not deterministic. NLopt uses the function value that you return.
Without a minimal working example, it’s not possible to provide you with much more help.
Okay, that’s fair. I was just confused because my function should be deterministic and acts deterministic in every other way. So I’ll assume this discrepancy must mean there is something wrong with my function if you haven’t heard of this happening before. It just makes no sense to me where this `minf` is coming from if I can plug `minx` back in and get different values. I’ll work on getting a working example, but I don’t think I’ll be able to come up with a minimal working example that exhibits this strange behavior without just giving you my whole function, which I cannot do.
What value do you get if you call `Ef(minx)` (or any other value, but always the same one) several times in a row? Do you get the same result?
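Something along these lines (assuming the `minf` and `minx` returned by `optimize` and your `Ef` are in scope):

```julia
vals = [Ef(minx) for _ in 1:5]   # repeated evaluations at the same point
@show vals                       # should all be identical if Ef is deterministic
@show minf - vals[1]             # the discrepancy against what NLopt reported
```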
Also, at 127 degrees of freedom, your parameter space is pretty big. Can you reduce the number of free parameters to make it easier for yourself to check the intermediate steps? Can you tell what operations you are doing within your objective function?
When I call `Ef(minx)` I get the return value of my function every time; my function is very consistent when calling it with different vectors, even older vectors whose output I already know, and those are consistent. I just don’t know where `minf` is coming from. The `Ef(minx)` values are reasonable; it’s just `minf` that I can’t trace.
This is actually the smallest number of dof that I will have; I’ll only increase it from here, unfortunately. My function works with Kronecker products of complex arrays and anti-Hermitian arrays and their conjugates, and I am then always dealing with the traces of these arrays, which are always real.
As far as I understand from the docs, the function is not evaluated outside the constraints when there are only lower and upper bounds, as in the OP’s question.
Essentially, I was passing a complex-valued return value to the optimizer. My imaginary parts were something like 10^-15, but I needed to apply real() to the return value before NLopt could handle it, and the same with the constraints. As for the problem of getting a different `minf` than when I plugged `minx` back into the function, I rewrote my problem so that the constraints were automatically satisfied, and that problem went away.
I will say, I am still getting the `minf` discrepancy problem, but the discrepancy is now much, much smaller: the `minf` reported by NLopt is very close to the minimum I get when I plug `minx` back into the function I optimize. I believe the reported `minf` might be from a previous iteration, because it’s always a little larger than the minimum I get plugging `minx` back in.
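A quick way to quantify that (a sketch; it assumes `opt`, `x0`, and `Ef` are the optimizer object, starting point, and objective from your own setup):

```julia
(minf, minx, ret) = optimize(opt, x0)
@show ret                        # which stopping criterion fired
@show minf, Ef(minx)             # stored optimum vs. a fresh evaluation at minx
@show abs(minf - Ef(minx))       # size of the discrepancy
```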