The solution deteriorates if the domain of the decision variable is improperly set when using Ipopt

We illustrate the issue with the following convex minimization problem.

convex_f(x) = x * log(x)    # ⚠️ This convex function is NOT defined at 0
f_gradient(x) = log(x) + 1  # derivative of convex_f

import JuMP
import Ipopt
function optimise(lb_of_x)
    CP = JuMP.Model(() -> Ipopt.Optimizer()) # CP stands for Convex Program
    JuMP.@variable(CP, x >= lb_of_x)
    JuMP.@objective(CP, Min, convex_f(x))
    JuMP.optimize!(CP)
    @assert JuMP.termination_status(CP) == JuMP.LOCALLY_SOLVED
    return JuMP.value(x)
end
function assess(x)
    @info "x = $x, f = $(convex_f(x)), ∇f = $(f_gradient(x))"
end

# ❌ The following solution returned by Ipopt is unexpected, since the domain of `x` is improperly set
x = optimise(0)
assess(x) # [ Info: x = 0.2627101793829332, f = -0.35116570400222863, ∇f = -0.3367038339628263

# ✅ The following solution is the expected optimal solution.
x = optimise(1e-6)
assess(x) # [ Info: x = 0.36787944368298275, f = -0.36787944117144233, ∇f = 6.827074683357637e-9
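
As a quick analytic cross-check (not part of the solver runs above): the minimizer of `x -> x * log(x)` over positive reals solves `log(x) + 1 = 0`, i.e. `x* = 1/e`, which is exactly the solution of the second run:

```julia
# Analytic check: ∇f(x) = log(x) + 1 vanishes at x* = exp(-1),
# and f(x*) = -1/e, matching the second Ipopt run above.
xstar = exp(-1)             # ≈ 0.36787944117144233
fstar = xstar * log(xstar)  # ≈ -0.36787944117144233
gstar = log(xstar) + 1      # ≈ 0
```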

Does this constitute a bug in Ipopt?
When it reports LOCALLY_SOLVED, I expect ∇f = 0 at the returned solution, yet here ∇f = -0.3367038339628263.
@odow

This is related to the automatic scaling that Ipopt applies.

If you solve the original model, you’ll see that Ipopt claims success, but reports

Overall NLP error.......:   3.3680175321150538e-09    3.3680175321150535e-01

So Ipopt solved the scaled problem to optimality, but the unscaled error is very high! I don't know whether this should be classified as a bug; the same thing happens with LP solvers.
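
Here is a rough sketch of where the 1e-8 ratio between the scaled and unscaled values comes from, assuming Ipopt's documented gradient-based scaling with the default options `nlp_scaling_max_gradient = 100` and `nlp_scaling_min_value = 1e-8` (the exact internals may differ):

```julia
# Hedged sketch of Ipopt's gradient-based objective scaling:
#   scale ≈ clamp(nlp_scaling_max_gradient / ‖∇f(x0)‖∞, nlp_scaling_min_value, 1)
# At the default start x0 = 0 the gradient log(x0) + 1 is -Inf, so the raw
# factor 100/Inf = 0 is clipped up to 1e-8, which matches the ratio between
# the "(scaled)" and "(unscaled)" objective values Ipopt reports.
f_gradient(x) = log(x) + 1
function obj_scale(x0; max_gradient = 100.0, min_value = 1e-8)
    return clamp(max_gradient / abs(f_gradient(x0)), min_value, 1.0)
end
obj_scale(0.0)  # 1.0e-8 (clipped: the gradient is infinite at the start)
obj_scale(1.0)  # 1.0    (|∇f(1)| = 1, no down-scaling needed)
```

With a scale factor this small, the scaled convergence tolerance is met long before the unscaled one.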

For this problem, you could turn off scaling:

julia> using JuMP, Ipopt

julia> function main(; nlp_scaling_method = nothing)
           model = Model(Ipopt.Optimizer)
           if nlp_scaling_method !== nothing
               set_attribute(model, "nlp_scaling_method", nlp_scaling_method)
           end
           @variable(model, x >= 0)
           @objective(model, Min, x * log(x))
           optimize!(model)
           assert_is_solved_and_feasible(model)
           return value(x)
       end
main (generic function with 1 method)

julia> main()

******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit https://github.com/coin-or/Ipopt
******************************************************************************

This is Ipopt version 3.14.17, running with linear solver MUMPS 5.7.3.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        1

Total number of variables............................:        1
                     variables with only lower bounds:        1
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0 -4.6051666e-02 0.00e+00 1.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1 -2.4280022e-01 0.00e+00 1.00e-02  -1.0 1.00e-01    -  9.90e-01 1.00e+00f  1
   2 -2.5997605e-01 0.00e+00 1.52e-06  -3.8 1.50e-02    -  1.00e+00 1.00e+00f  1
   3 -2.6269769e-01 0.00e+00 1.06e-08  -8.6 2.55e-03    -  9.93e-01 1.00e+00f  1
   4 -3.5116570e-01 0.00e+00 3.37e-09 -12.9 1.35e-01    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 4

                                   (scaled)                 (unscaled)
Objective...............:  -3.5116570400222862e-09   -3.5116570400222863e-01
Dual infeasibility......:   3.3680175321150538e-09    3.3680175321150535e-01
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5724384364771816e-13    2.5724384364771815e-05
Overall NLP error.......:   3.3680175321150538e-09    3.3680175321150535e-01


Number of objective function evaluations             = 5
Number of objective gradient evaluations             = 5
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 4
Total seconds in IPOPT                               = 2.535

EXIT: Optimal Solution Found.
0.26271017938293323

julia> main(; nlp_scaling_method = "none")
This is Ipopt version 3.14.17, running with linear solver MUMPS 5.7.3.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        1

Total number of variables............................:        1
                     variables with only lower bounds:        1
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0 -4.6051666e-02 0.00e+00 4.61e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1 -1.9902165e-01 0.00e+00 4.75e+00  -1.0 6.80e-02    -  1.00e+00 1.00e+00f  1
   2 -2.6593036e-01 0.00e+00 1.07e+00  -1.0 5.27e-02    -  7.77e-01 1.00e+00f  1
   3 -3.6776246e-01 0.00e+00 7.35e-01  -1.0 2.28e-01    -  1.00e+00 1.00e+00f  1
   4 -3.6779739e-01 0.00e+00 1.09e-03  -1.7 1.70e-02    -  1.00e+00 1.00e+00f  1
   5 -3.6787937e-01 0.00e+00 2.05e-04  -3.8 7.56e-03    -  1.00e+00 1.00e+00f  1
   6 -3.6787944e-01 0.00e+00 2.01e-07  -5.7 2.33e-04    -  1.00e+00 1.00e+00f  1
   7 -3.6787944e-01 0.00e+00 1.43e-11  -8.6 1.96e-06    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 7

                                   (scaled)                 (unscaled)
Objective...............:  -3.6787944117144233e-01   -3.6787944117144233e-01
Dual infeasibility......:   1.4279221332907585e-11    1.4279221332907585e-11
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5167856134015930e-09    2.5167856134015930e-09
Overall NLP error.......:   2.5167856134015930e-09    2.5167856134015930e-09


Number of objective function evaluations             = 8
Number of objective gradient evaluations             = 8
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 7
Total seconds in IPOPT                               = 0.002

EXIT: Optimal Solution Found.
0.36787944368297487

The underlying issue is that the gradient of the objective is not defined at the starting point. I think we can do something to warn users if this is the case: Tools to test and debug JuMP models · Issue #3664 · jump-dev/JuMP.jl · GitHub
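
A quick standalone check (no solver involved) makes the problem at the default start visible:

```julia
# At x = 0, both the objective and its gradient are ill-defined:
f(x) = x * log(x)
g(x) = log(x) + 1
f(0.0)  # NaN  (0 * -Inf)
g(0.0)  # -Inf
```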

julia> using JuMP, Ipopt

julia> function main(; start)
           model = Model(Ipopt.Optimizer)
           @variable(model, x >= 0, start = start)
           @objective(model, Min, x * log(x))
           optimize!(model)
           assert_is_solved_and_feasible(model)
           return value(x)
       end
main (generic function with 1 method)

julia> main(; start = 0)
This is Ipopt version 3.14.17, running with linear solver MUMPS 5.7.3.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        1

Total number of variables............................:        1
                     variables with only lower bounds:        1
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0 -4.6051666e-02 0.00e+00 1.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1 -2.4280022e-01 0.00e+00 1.00e-02  -1.0 1.00e-01    -  9.90e-01 1.00e+00f  1
   2 -2.5997605e-01 0.00e+00 1.52e-06  -3.8 1.50e-02    -  1.00e+00 1.00e+00f  1
   3 -2.6269769e-01 0.00e+00 1.06e-08  -8.6 2.55e-03    -  9.93e-01 1.00e+00f  1
   4 -3.5116570e-01 0.00e+00 3.37e-09 -12.9 1.35e-01    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 4

                                   (scaled)                 (unscaled)
Objective...............:  -3.5116570400222862e-09   -3.5116570400222863e-01
Dual infeasibility......:   3.3680175321150538e-09    3.3680175321150535e-01
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5724384364771816e-13    2.5724384364771815e-05
Overall NLP error.......:   3.3680175321150538e-09    3.3680175321150535e-01


Number of objective function evaluations             = 5
Number of objective gradient evaluations             = 5
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 4
Total seconds in IPOPT                               = 0.003

EXIT: Optimal Solution Found.
0.26271017938293323

julia> main(; start = 1)
This is Ipopt version 3.14.17, running with linear solver MUMPS 5.7.3.

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        1

Total number of variables............................:        1
                     variables with only lower bounds:        1
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  0.0000000e+00 0.00e+00 0.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1 -3.4340576e-01 0.00e+00 1.83e-01  -1.7 4.90e-01    -  1.00e+00 1.00e+00f  1
   2 -3.6522860e-01 0.00e+00 2.08e-02  -1.7 9.71e-02    -  1.00e+00 1.00e+00f  1
   3 -3.6783766e-01 0.00e+00 4.88e-03  -2.5 3.95e-02    -  1.00e+00 1.00e+00f  1
   4 -3.6787938e-01 0.00e+00 1.03e-04  -3.8 5.34e-03    -  1.00e+00 1.00e+00f  1
   5 -3.6787944e-01 0.00e+00 1.67e-07  -5.7 2.13e-04    -  1.00e+00 1.00e+00f  1
   6 -3.6787944e-01 0.00e+00 1.37e-11  -8.6 1.93e-06    -  1.00e+00 1.00e+00f  1

Number of Iterations....: 6

                                   (scaled)                 (unscaled)
Objective...............:  -3.6787944117144233e-01   -3.6787944117144233e-01
Dual infeasibility......:   1.3730039409520724e-11    1.3730039409520724e-11
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5163095253048878e-09    2.5163095253048878e-09
Overall NLP error.......:   2.5163095253048878e-09    2.5163095253048878e-09


Number of objective function evaluations             = 7
Number of objective gradient evaluations             = 7
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 6
Total seconds in IPOPT                               = 0.003

EXIT: Optimal Solution Found.
0.3678794436827008

We can also employ a global optimizer for the same task; the correct code should look like this:

import JuMP, MosekTools
CP = JuMP.Model(() -> MosekTools.Optimizer())
JuMP.@variable(CP, t)
JuMP.@variable(CP, x >= 1e-6)
JuMP.@constraint(CP, [-t, x, 1] in JuMP.MOI.ExponentialCone())
JuMP.@objective(CP, Min, t) # minimize `x -> x * log(x)` over positive reals
JuMP.optimize!(CP)
JuMP.assert_is_solved_and_feasible(CP; allow_local = false)
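
To see why the constraint `[-t, x, 1] in ExponentialCone()` is the right epigraph formulation, recall that MOI defines the cone as {(u, v, w) : v·exp(u/v) ≤ w, v > 0}. A small numeric check with a hypothetical helper:

```julia
# With (u, v, w) = (-t, x, 1), membership x * exp(-t/x) ≤ 1 rearranges to
#   -t/x ≤ -log(x)  ⟺  t ≥ x * log(x),
# so minimizing t recovers min x*log(x). (Hypothetical helper, for checking.)
in_exp_cone(u, v, w; tol = 1e-9) = v > 0 && v * exp(u / v) ≤ w + tol
x = exp(-1)
t = x * log(x)           # epigraph value at the optimum
in_exp_cone(-t, x, 1.0)  # true, and tight: x * exp(-t/x) = 1
```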

A direct extension of this is to solve a (nonconvex) problem with a “linear times convex” objective, for example this nonconvex problem (NC):
\min_{x_1 > 0, x_2} \{x_1 x_2 \log x_1 \; | \; 0 \le x_2 \le 2 \}
The code example of a convex relaxation (CR) that can solve (NC) to global optimality is as follows:

import JuMP, MosekTools
CR = JuMP.Model(() -> MosekTools.Optimizer()); JuMP.set_silent(CR);
JuMP.@variable(CR, t);
JuMP.@variable(CR, x[1:2]);
JuMP.set_lower_bound(x[1], 1e-6);
JuMP.set_lower_bound(x[2], 0); JuMP.set_upper_bound(x[2], 2);
JuMP.@variable(CR, X[eachindex(x), eachindex(x)], Symmetric);
# JuMP.set_lower_bound(X[1, 2], 0)
JuMP.@constraint(CR, [-t, X[1, 2], x[2]] in JuMP.MOI.ExponentialCone());
JuMP.@constraint(CR, X[1, 2] <= 2 * x[1]); # valid inequality: X[1,2] stands for x₁x₂, and x₂ ≤ 2 gives x₁x₂ ≤ 2x₁
JuMP.@objective(CR, Min, t);
JuMP.optimize!(CR)
JuMP.assert_is_solved_and_feasible(CR; allow_local = false)
x = JuMP.value.(x) # [0.3678829664527772, 2.0]
ub = x[2] * x[1] * log(x[1]) # -0.735758882309103
lb = JuMP.objective_bound(CR) # -0.735758881911808

In this example we again observe that although the lower bound may become tighter after adding the SDP cut, the primal feasible solution can nonetheless deteriorate. It is therefore advantageous to collect primal candidate solutions along the way.
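
A minimal sketch of that last remark, with hypothetical candidate points: evaluate the original nonconvex objective at each candidate and keep the incumbent.

```julia
# Incumbent tracking for the Min problem (NC); `candidates` is hypothetical.
f(x) = x[2] * x[1] * log(x[1])             # original objective of (NC)
candidates = [[exp(-1), 2.0], [0.5, 1.0]]  # primal candidates collected along the way
incumbent = argmin(f, candidates)          # candidate with the lowest objective
ub = f(incumbent)                          # a valid upper bound for (NC)
```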

A related caveat about the function x -> x * log(x) is here.