I have a simple model which I know has multiple optimal solutions. When I run Ipopt on this model, I find that the optimizer gives me different results every time. The question is: is there a way to control this randomness? Thanks
Here are just examples of two consecutive starts:
The first run stops after 950 iterations and gives the following messages; the second run, however, stops after only 752 iterations.
Moreover, I also encounter stranger randomness: sometimes Ipopt can solve the problem, and sometimes it reports it as infeasible…
Thanks awasserman. I tried that, but it doesn't work… I also tried to set all the starts to nothing:
```julia
# With JuMP loaded, MOI is available; x and c stand for each
# variable and constraint of the model, respectively.
MOI.set(model, MOI.VariablePrimalStart(), x, nothing)
MOI.set(model, MOI.ConstraintPrimalStart(), c, nothing)
MOI.set(model, MOI.ConstraintDualStart(), c, nothing)
MOI.set(model, MOI.NLPBlockDualStart(), nothing)
```
None of them helps…
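For reference, the JuMP-level way to clear starts is, as far as I know, set_start_value; a minimal sketch with a toy model (passing nothing unsets the start in recent JuMP versions):

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, x >= 0, start = 1.0)

for v in all_variables(model)
    set_start_value(v, nothing)  # unsets the primal start
end
start_value(x)  # -> nothing
```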
I think I know roughly why this happens: I don't use either start or set_start_value for any of my variables, and my problem should have multiple optimal solutions (even global ones). I originally wanted to understand how JuMP initializes my problem, and it seemed to involve some random generation process that I could not decipher.
Yes, odow. That's another problem with my model: it is not easy to initialize :(
Yesterday I tried to build a minimal example for you, and I found something interesting:
In my original model, I use a function _add_node! to build up the model:
```julia
function _add_node!(model::JuMP.Model, load::AbstractLoad)::Nothing
    # Keep a running counter of loads in the model's object dictionary.
    key = :nloads
    if !haskey(object_dictionary(model), key)
        id = model[key] = 1
    else
        id = model[key] += 1
    end
    # T_min, T_max, T_init are assumed to be in scope.
    T = @variable(model, base_name = "T$id", lower_bound = T_min,
                  upper_bound = T_max, start = T_init)
    # ... (the rest of the node's variables and constraints) ...
    return nothing
end
```
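For context, a toy sketch of how this helper gets called; AbstractLoad, T_min, T_max, and T_init come from my real code, so placeholder definitions are used here:

```julia
using JuMP

abstract type AbstractLoad end
struct DummyLoad <: AbstractLoad end     # placeholder load type
T_min, T_max, T_init = 0.0, 100.0, 20.0  # placeholder bounds/start

model = JuMP.Model()
_add_node!(model, DummyLoad())  # creates T1, model[:nloads] == 1
_add_node!(model, DummyLoad())  # creates T2, model[:nloads] == 2
```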
In the light version (hard-coded), it looks like:
```julia
T1 = @variable(model, base_name = "T1", lower_bound = T_min, upper_bound = T_max, start = T_init)
T2 = @variable(model, base_name = "T2", lower_bound = T_min, upper_bound = T_max, start = T_init)
```
and so on…
Then I found out that my light version doesn't produce any randomness anymore!
That means it has something to do with my original model formulation. I'll study it first before reporting new findings.
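One thing I plan to check, sketched below with placeholder model names: whether the two versions hand the variables to Ipopt in the same order, since a permutation alone could steer the solver to a different optimum.

```julia
# model_original is built via _add_node!; model_light is the
# hard-coded version (both names are placeholders).
@show all_variables(model_original)
@show all_variables(model_light)
```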
Do you already have some clues about what could go wrong? Thanks in advance!
I’m not aware of anything in Ipopt’s algorithm that’s non-deterministic, which means that running the same problem (same data inputs) on the same machine should give you the same results.
If you see some non-determinism, I would look outside Ipopt’s core code, which could include:
- The linear solver you're using (the default is MUMPS, but another one can be selected via Ipopt's linear_solver option), especially if you use multiple cores.
- The exact input that's passed to Ipopt. For instance, permuting the variables gives an equivalent problem, but it may yield a different solution. Something to look out for is whether anything iterates over a non-ordered dictionary (last time I checked, iterating over the key-value pairs of a Dict is done in a non-deterministic fashion); see the sketch below.
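A small sketch of the dictionary point (toy keys; OrderedDict from OrderedCollections.jl is one way to pin the iteration order):

```julia
using OrderedCollections

d = Dict("T$i" => i for i in 1:5)
collect(keys(d))    # iteration order need not match insertion order

od = OrderedDict("T$i" => i for i in 1:5)
collect(keys(od))   # always the insertion order: T1, T2, …, T5
```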
One way to check this is to export the Ipopt model to a file and then call Ipopt directly on that file (without going through JuMP). This will also rule out anything in the Julia part of your code.
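A sketch of that export step (the file name is arbitrary; write_to_file infers the format from the extension, and a .nl file can be fed straight to the ipopt executable if it was built with the AMPL solver library):

```julia
using JuMP

# `model` stands for the JuMP model in question.
write_to_file(model, "my_model.nl")
# Then, outside Julia:  ipopt my_model.nl
```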