JuMP+Ipopt producing infeasible solutions for constraints with small upper bounds

using JuMP, Ipopt

function parametertest()
    m = Model(Ipopt.Optimizer)
    a = [0.15, 0.16, 0.72, -0.80, -12.0]
    @variable(m, x[1:5])
    @variable(m, abs_x[1:5])  # abs_x[i] is forced to be ≥ |x[i]| below
    for i in 1:5
        @constraint(m, abs_x[i] ≥ x[i])
        @constraint(m, abs_x[i] ≥ -x[i])
    end
    @NLparameter(m, gamma == 0.0)
    @NLconstraint(m, Lasso, sum(abs_x[i] for i in 1:5) ≤ gamma)
    @objective(m, Min, sum((a[i] - x[i])^2 for i in 1:5))
    gammas = [1e-10, 0.1, 1.0, 15.0, 200.0]
    for g in gammas
        set_value(gamma, g)
        optimize!(m)
        @show value(gamma)
        @show value(sum(value(abs_x[i]) for i in 1:5))
    end
end

parametertest()



value(gamma) = 1.0e-10
value(sum((value(abs_x[i]) for i = 1:5))) = 1.000154534191108e-8 # Infeasible!
value(gamma) = 0.1
value(sum((value(abs_x[i]) for i = 1:5))) = 0.10000000989464701
value(gamma) = 1.0
value(sum((value(abs_x[i]) for i = 1:5))) = 1.0000000098860817
value(gamma) = 15.0
value(sum((value(abs_x[i]) for i = 1:5))) = 14.642196754232643
value(gamma) = 200.0
value(sum((value(abs_x[i]) for i = 1:5))) = 143.0505423096824

Notice that in the first case, the absolute error is on the order of 1e-8, but the relative error is a factor of 100! Is there a way I can tell Ipopt not to accept a solution until the relative error is acceptably small?

In this particular case, better results can be obtained by multiplying both sides of the @NLconstraint by a large constant like 1e10, but it is not always obvious beforehand which constant is appropriate.
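To make the rescaling idea concrete, here is a sketch of what multiplying both sides by a constant looks like; the constant c = 1e10 is chosen by hand to suit gamma ≈ 1e-10 and would need to change for other magnitudes:

```julia
using JuMP, Ipopt

# Sketch of the manual rescaling: multiply both sides of the lasso
# constraint by a constant c so Ipopt's absolute tolerances act on
# O(1) quantities instead of O(1e-10) ones.
m = Model(Ipopt.Optimizer)
@variable(m, x[1:5])
@variable(m, abs_x[1:5])
for i in 1:5
    @constraint(m, abs_x[i] ≥ x[i])
    @constraint(m, abs_x[i] ≥ -x[i])
end
@NLparameter(m, gamma == 1e-10)
c = 1e10  # problem-specific scaling constant; not always obvious in advance
@NLconstraint(m, Lasso, c * sum(abs_x[i] for i in 1:5) ≤ c * gamma)
```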


Almost all solvers use absolute tolerances as their convergence criterion. You should reformulate your model to avoid such small values.

Although not written specifically for NLP, Gurobi has a good series of documentation on numerical issues: https://www.gurobi.com/documentation/9.0/refman/num_grb_guidelines_for_num.html

You should aim for the constants in your model to differ by at most ~6 orders of magnitude. Here, yours differ by 12 (1e-10 to 2e2).

Ipopt has a variety of tolerances you can experiment with (see the Ipopt Options documentation), but the underlying issue is poor scaling of your model. Fix that first, rather than playing with tolerances.
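If you do want to experiment with those tolerances, they can be passed from JuMP as optimizer attributes. The sketch below tightens two of the relevant Ipopt options; the chosen values are illustrative, not a recommendation:

```julia
using JuMP, Ipopt

m = Model(Ipopt.Optimizer)
# "tol" is Ipopt's overall (relative) convergence tolerance, default 1e-8.
set_optimizer_attribute(m, "tol", 1e-12)
# "constr_viol_tol" is the absolute constraint violation tolerance,
# default 1e-4.
set_optimizer_attribute(m, "constr_viol_tol", 1e-10)
```

Tightening these can make Ipopt work harder on a badly scaled model, but it does not fix the scaling itself.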

Edit: or use the absolute 1e-8 solution.


Thank you. I should probably have figured this out on my own. This is a challenging issue when working with a lasso-type constraint as in the example in the OP, because often what I want to do is try values of the constraint bound across many orders of magnitude in order to find the best tradeoff.
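For that kind of sweep, one option is to choose the scaling constant automatically as 1/gamma, i.e. divide the constraint through by gamma so the right-hand side is always 1 no matter how small gamma is. A minimal sketch, assuming gamma > 0 and reusing the names from the example above:

```julia
using JuMP, Ipopt

# Sketch: normalize the lasso constraint by gamma itself, so the
# right-hand side is 1 for every value in the sweep. This is the same
# trick as multiplying both sides by a constant, with c = 1/gamma
# picked automatically. Assumes g > 0.
function scaled_lasso(g)
    a = [0.15, 0.16, 0.72, -0.80, -12.0]
    m = Model(Ipopt.Optimizer)
    set_silent(m)
    @variable(m, x[1:5])
    @variable(m, abs_x[1:5])
    for i in 1:5
        @constraint(m, abs_x[i] ≥ x[i])
        @constraint(m, abs_x[i] ≥ -x[i])
    end
    # Normalized constraint: sum(abs_x)/g ≤ 1.
    @NLconstraint(m, Lasso, sum(abs_x[i] / g for i in 1:5) ≤ 1)
    @objective(m, Min, sum((a[i] - x[i])^2 for i in 1:5))
    optimize!(m)
    return sum(value(abs_x[i]) for i in 1:5)
end
```

Note the tradeoff: constraint violations are now measured relative to a right-hand side of 1, but for tiny g the left-hand-side coefficients 1/g become large, so this does not eliminate the 6-orders-of-magnitude guideline, it just moves the scaling to where Ipopt's tolerances are more meaningful for this constraint.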