OptimizationMOI Ipopt violating inequality constraint

I was just trying out Percival.jl (which is an augmented Lagrangian solver) and getting outstanding results:

  • It converged and produced the result I’ve seen mentioned several times in this thread [0.099688, 6.941804, 6.817457].
  • I initialized it from dozens of random points and it worked every time, giving almost exactly the same solution.

Unfortunately, I ran into DomainError with -0.01658982851308019 at the point

[0.2967843570401516, 2.8936982042385653, 4.6280258820705225]

Well, guess I could open up Nocedal’s and Boyd’s books and some papers (including yours, thanks!) and… get to work implementing a freaking feasible interior-point solver. I can’t believe I have to code my own solver to fit a fancy GARCH model…

IPOPT is just not guaranteed to respect your inequality constraints.

@odow, @dpo, @ChrisRackauckas, sorry for the ping, but I think I got some interesting results.

TL;DR: when operating under JuMP, IPOPT can detect that there was an evaluation error, while under OptimizationMOI and NLPModelsIpopt it cannot.


JuMP constructs a different optimization problem:

  • JuMP has one fewer nonzero in the inequality constraint Jacobian than the others.
  • JuMP has one more nonzero in the Lagrangian Hessian than the others.

However, IPOPT reports identical objective function values under OptimizationMOI, NLPModelsIpopt and JuMP:

iter  OptimizationMOI NLPModelsIpopt JuMP
   0  1.8621407e+00   1.8621407e+00  1.8621407e+00
   1  1.6724881e+00   1.6724881e+00  1.6724881e+00
   2  1.5562440e+00   1.5562440e+00  1.5562440e+00
   3  1.5191827e+00   1.5191827e+00  1.5191827e+00
   4  1.5107919e+00   1.5107919e+00  1.5107919e+00
   5  1.5046485e+00   1.5046485e+00  1.5046485e+00
   6  1.5073755e+00   1.5073755e+00  1.5073755e+00
   7  1.5089724e+00   1.5089724e+00  1.5089724e+00
   8  1.5062711e+00   1.5062711e+00  1.5062711e+00
   9  1.5072838e+00   1.5072838e+00  1.5072838e+00
  10  ERROR           ERROR          1.5066406e+00 Warning: SOC step rejected due to evaluation error

Exactly when OptimizationMOI and NLPModelsIpopt let the DomainError propagate, JuMP seems to inform IPOPT about it. This is also seen in this comment: OptimizationMOI Ipopt violating inequality constraint - #19 by odow.

Thus, IPOPT indeed steps outside the feasible region, but it can recover from this situation and continue the optimization process.

Full reproducible code and its output are here: JuMP can use IPOPT to optimize a function, but OptimizationMOI.jl and NLPModelsIpopt.jl cannot · GitHub.


So it looks like OptimizationMOI and NLPModelsIpopt should probably signal to IPOPT that there was an evaluation error? JuMP does this, and it leads to a solution. Or, since JuMP builds its own representation of the optimization problem, maybe it passes that entire representation to IPOPT somehow.

The point is, when operating under JuMP, IPOPT can detect that there was an evaluation error, while under OptimizationMOI and NLPModelsIpopt it cannot. This could probably be fixed?
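In the meantime, a user-level workaround might be to catch the error yourself and return NaN, since IPOPT treats a NaN function value as a failed evaluation (this is essentially what the NaNMath route below achieves). A sketch, where objective stands in for the actual function from the gist:

# hypothetical shim: a DomainError at an infeasible trial point becomes
# NaN, which IPOPT registers as an evaluation error and recovers from
function safe_objective(x)
    try
        return objective(x)   # `objective` is a placeholder, not a real API
    catch err
        err isa DomainError && return NaN
        rethrow()
    end
end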

JuMP uses NaNMath when it computes derivatives to avoid DomainErrors:
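A quick demonstration of the difference, using the exact value from the DomainError above (REPL output abbreviated):

julia> log(-0.01658982851308019)           # Base.log
ERROR: DomainError with -0.01658982851308019

julia> import NaNMath

julia> NaNMath.log(-0.01658982851308019)   # NaN instead of an exception
NaN

IPOPT treats a NaN function value as an evaluation error at that trial point, not as a fatal failure.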


Indeed, with import NaNMath: log in place, it looks like I don’t have to write my own solver after all (see the sketch after this list):

  • Now all three… optimization packages (not sure what to call these, actually) obtain a result without erroring out:
    • OptimizationMOI: [0.09968760243429249, 6.941809365897346, 6.817463190450336]
    • NLPModelsIpopt: [0.09968760243429249, 6.941809365897346, 6.817463190450336]
    • JuMP: [0.09968760243429328, 6.941809365897269, 6.817463190450249]
  • With each package, IPOPT produces Warning: SOC step rejected due to evaluation error after iterations 9, 11 and 14, again as in OptimizationMOI Ipopt violating inequality constraint - #19 by odow.
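For reference, the entire fix is the one-line import at the top of the script, so that every unqualified log call in the likelihood resolves to the NaN-returning version. A minimal sketch with a stand-in objective (the real one is in the gist; this loop only mirrors its variance recursion):

import NaNMath: log   # shadow Base.log: log of a negative number now
                      # returns NaN instead of throwing a DomainError

function negloglik(params, data)
    a, b, dt = params
    variance = abs2(data[1]) * one(a)   # generic init so AD dual types flow through
    nll = zero(a)
    for t in 1:length(data)-1
        # the gist's variance recursion; at an infeasible trial point
        # variance can dip slightly below zero…
        variance = (1 - dt * a) * variance + dt * abs2(data[t]) + dt * b
        # …and log(variance) then yields NaN, which IPOPT treats as a
        # failed evaluation, rejecting the step instead of crashing
        nll += log(variance) + abs2(data[t+1]) / variance
    end
    return nll / 2
end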

However, this bit remains different between these packages:

Number of nonzeros in equality constraint Jacobian...:        0 0 0
Number of nonzeros in inequality constraint Jacobian.:        3 3 2
Number of nonzeros in Lagrangian Hessian.............:        6 6 7

Total number of variables............................:        3 3 3
                     variables with only lower bounds:        2 2 2
                variables with lower and upper bounds:        1 1 1
                     variables with only upper bounds:        0 0 0
Total number of equality constraints.................:        0 0 0
Total number of inequality constraints...............:        1 1 1
        inequality constraints with only lower bounds:        0 0 0
   inequality constraints with lower and upper bounds:        0 0 0
        inequality constraints with only upper bounds:        1 1 1

Columns correspond to OptimizationMOI, NLPModelsIpopt and JuMP, respectively. JuMP still reports one fewer nonzero in the inequality constraint Jacobian and one more in the Lagrangian Hessian. Not sure why that is, or whether it can cause any issues.


Looks like import NaNMath: log saved the day, question mark?


There are only 2 non-zeros in the sparse Jacobian, so JuMP gets this right. I assume the others are using some dense formulation. JuMP computes a sparse Hessian, but it can add multiple terms for each non-zero. Ipopt sums duplicate indices in the Hessian matrix, so this can result in more, easier-to-compute terms instead of fewer, more complicated ones.
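The duplicate-summing convention is easy to see with SparseArrays, which combines repeated (row, column) indices the same way Ipopt does:

using SparseArrays

# two entries for the same (1, 1) position, as JuMP may emit for the Hessian
I = [1, 1, 2]
J = [1, 1, 2]
V = [2.0, 3.0, 4.0]

sparse(I, J, V, 2, 2)   # duplicates are summed: entry (1, 1) becomes 5.0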

Hey @ForceBru,

As @dpo said, IPOPT doesn’t have guarantees for nonlinear constraints. However, it handles bound constraints and linear constraints very well. So you might want to try a change of variables that turns the problematic inequality constraints into bounds or linear constraints.

Starting from your gist, you can replace dt with a new variable idt (standing in for 1/dt), so that

a - 1/dt <= 0

would become

a - idt <= 0

and

variance[t+1] = (1 - dt * a) * variance[t] + dt * data[t]^2 + dt * b

would become

variance[t+1] = (1 - 1/idt * a) * variance[t] + 1/idt * data[t]^2 + 1/idt * b

(you can optionally multiply both sides of this last expression by idt to get rid of the divisions).

By the way, if that’s another issue: for your strict bound constraints a > 0, you could write a >= eps(), since strict inequalities are usually not handled by solvers.
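Putting both suggestions together, a minimal JuMP sketch (variable names follow this post; the bound on b, the objective and the data are assumptions standing in for the gist’s model):

using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, a >= eps())      # strict a > 0 written as a >= eps()
@variable(model, b >= eps())      # assuming b is also constrained positive
@variable(model, idt >= eps())    # idt stands in for 1/dt

# the nonlinear constraint a - 1/dt <= 0 is now linear in (a, idt)
@constraint(model, a - idt <= 0)

# in the variance recursion, replace every dt with 1/idt, e.g.
# variance[t+1] = (1 - a/idt) * variance[t] + data[t]^2/idt + b/idt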
