Need help understanding Ipopt's error statuses

I’m trying to use JuMP and Ipopt for the optimization problem below. I keep getting errors like “Infeasible_Problem_Detected”, “Search_Direction_Becomes_Too_Small”, and “Restoration_Failed” for various datasets.

I read the Ipopt documentation about these errors at https://www.coin-or.org/Ipopt/documentation/node36.html, but I’m still having trouble understanding what in my data could cause them, because my other datasets fit to global optima with no problem. Sometimes changing the start values of the variables is enough to go from one of these errors to a global optimum, but it would still be nice to know more about their cause. I don’t have a math background, so I’m not sure what exactly happens mathematically to produce these errors.

Also, in some cases where I got “Search_Direction_Becomes_Too_Small”, I plotted the predicted data and it actually fit the observed values pretty closely.

Let me know if I need to post actual data. Thanks!

using JuMP, Ipopt

# Use concrete element types; containers with abstract element types like
# Vector{Integer} or Vector{AbstractFloat} are slow in Julia.
cycs = Vector{Int}(some_vector_1)
obs_fluos = Vector{Float64}(some_vector_2)

# cycs2fit (the cycles to fit) and idc2fit (the corresponding indices into
# cycs/obs_fluos) are defined elsewhere and omitted here.

m = Model(solver=IpoptSolver(max_iter=200))

@variable(m, fb, start=0)
@variable(m, eu0 >= 0, start=1000)
@variable(m, d0 >= 0, start=1)
@variable(m, inh >= 0, start=1e-6)
@variable(m, f[cycs2fit])
@variable(m, eu[cycs2fit] >= 0)
@variable(m, d[cycs2fit] >= 0)

@constraint(m, f_constr[cyc in cycs2fit], f[cyc] == fb + d[cyc])
@NLconstraint(m, eu_constr_01, eu[1] == eu0 / (1 + inh * d0))
@NLconstraint(m, d_constr_01, d[1] == d0 + d0 * eu[1] / (eu[1] + d0))
@NLconstraint(m, eu_constr_2p[cyc in cycs2fit[2:end]], eu[cyc] == eu[cyc-1] / (1 + inh * d[cyc-1]))
@NLconstraint(m, d_constr_2p[cyc in cycs2fit[2:end]], d[cyc] == d[cyc-1] + 1 / (1 / d[cyc-1] + 1 / eu[cyc]))

@NLobjective(m, Min, sum((f[cycs[idx]] - obs_fluos[idx]) ^ 2 for idx in idc2fit))

status = solve(m)

Also, dumb question: what do these terms mean?
Dual infeasibility
Restoration phase

I tried to find these definitions online but haven’t succeeded.

I only know one of them: “Dual Infeasibility” refers to the dual problem, which is an alternative way of formulating your optimization problem: instead of searching for the values of your decision variables, it searches for a lower bound on the optimal objective value. If the dual problem is infeasible, that means there is no lower bound on the objective. And if there’s no lower bound on the objective, there must be some direction in which your decision variables can be moved that makes the objective decrease forever. Other solvers might report that as “unbounded”.

An example of a trivially dual-infeasible problem is:

minimize x
s.t. x <= 1

x can go as low as it wants, so there’s no way to put a lower bound on the objective value.
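To make this concrete, here’s a minimal sketch of that same problem in JuMP (using the same old-style JuMP/Ipopt API as in your post); since no lower bound exists, the solve cannot return an optimal status:

using JuMP, Ipopt

m = Model(solver=IpoptSolver())
@variable(m, x)            # unbounded below
@constraint(m, x <= 1)     # only an upper bound
@objective(m, Min, x)      # objective can decrease forever
status = solve(m)          # will not be :Optimal; Ipopt's iterates diverge toward -Inf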

(note: the above is based on what I know from convex optimization, but the basic ideas should hold for the nonlinear case as well)

Thanks for the explanation! Does Ipopt transform the primal problem into its dual problem for every optimization problem, or just for some? Why is that necessary? And I’m just guessing: does “restoration” mean transforming the dual problem back into the primal problem? Thanks!

Read http://cepac.cheme.cmu.edu/pasilectures/biegler/ipopt.pdf for a full explanation of all of Ipopt’s terminology and algorithms. (Short answer to your guess: no. The restoration phase has nothing to do with the dual; it’s a fallback mode in which Ipopt temporarily sets the objective aside and just tries to reduce the constraint violation, so it can get back to a point from which the regular algorithm can make progress. “Restoration_Failed” means even that fallback couldn’t find a less-infeasible point.)

I suspect what’s happening here is that the solver is running into numerical difficulties due to your problem formulation. Division can be especially hard to deal with when the denominators get very small. Try reformulating your equality constraints to avoid division, by multiplying both sides by the denominator:

@NLconstraint(m, eu_constr_01, eu[1] * (1 + inh * d0) == eu0)
@NLconstraint(m, d_constr_01, d[1] * (eu[1] + d0) == d0 * (2 * eu[1] + d0))
@NLconstraint(m, eu_constr_2p[cyc in cycs2fit[2:end]], eu[cyc] * (1 + inh * d[cyc-1]) == eu[cyc-1])
@NLconstraint(m, d_constr_2p[cyc in cycs2fit[2:end]], d[cyc] * (eu[cyc] + d[cyc-1]) == d[cyc-1] * (2 * eu[cyc] + d[cyc-1]))
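If you do need to keep a division form somewhere, another option (a sketch, not something from your post) is to declare the denominator variables with a small strictly positive lower bound instead of >= 0, so Ipopt never evaluates right at the singularity:

@variable(m, eu[cycs2fit] >= 1e-8)  # strictly positive lower bound keeps
@variable(m, d[cycs2fit] >= 1e-8)   # denominators away from zero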

Thanks for the insight! By “numerical difficulties”, do you mean the numbers become too small or too large to be represented by the computing infrastructure? Thanks!

Not exactly. When the function, gradient, and Hessian values that result from your problem formulation vary by too many orders of magnitude, the calculations Ipopt does are less accurate and it has a harder time converging. The Newton steps in the interior point method can become especially ill-conditioned. Some of this is expected because of the barrier terms that come from inequality constraints, but different formulations can make it more or less of a problem, both in the number of iterations the optimization takes and in the robustness of convergence from different initial conditions.
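If you want to watch this happen, a sketch (using real Ipopt options; these particular values are just Ipopt's documented defaults) is to keep the per-iteration log on and let Ipopt apply its automatic gradient-based scaling:

m = Model(solver=IpoptSolver(
    max_iter = 200,
    print_level = 5,                        # per-iteration log: objective, inf_pr, inf_du, step sizes
    nlp_scaling_method = "gradient-based",  # Ipopt's default automatic scaling
))

Watching the inf_pr (primal infeasibility) and inf_du (dual infeasibility) columns often shows exactly where the solve stalls.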

Thanks a lot!

I tried the no-division formulation. It works better on some variants of the model I showed above but worse on others, but I take your point that it does make a difference.

Also, I noticed that for this model and its variants, the objective value sometimes goes down fast at first, then lingers for many iterations without decreasing much, then goes down fast again, then lingers again, back and forth, and eventually either fits to a global optimum or ends with :Error due to “Search_Direction_Becomes_Too_Small”. Could this be what you described, i.e. numerical difficulties causing it to linger and making it hard to converge?