JuMP nonlinear optimization constraints broken to O(1e-9)

I’m running a nonlinear optimization using the Ipopt optimizer. My objective/cost function relies on the constraints being adhered to strictly. However, on some iterations the constraints are violated.

Specifically, my constraints require the values to be between 0 and 1/0.6 = 1.6666666…

However, in some iterations the variables take values like -7.761358324221462e-9, slightly outside those bounds, crashing the program.

Is this a bug? Is it a problem with JuMP or Ipopt? Is there a workaround?


using JuMP
using Ipopt
using LinearAlgebra  # for dot

model = Model(Ipopt.Optimizer)
@variable(model, 0.0 <= ξ[1:6] <= 1.0/0.6)
set_start_value.(ξ, ξ₀)  # ξ₀, T, and MyNonLinearObjectiveFn are defined elsewhere

@constraint(model, dot(ξ, T) == 1)

register(model, :objtv, 6, MyNonLinearObjectiveFn; autodiff=true)
@NLobjective(model, Min, objtv(ξ...))
optimize!(model)

It’s a question of tolerances: Results NLP problem - #2 by blob

As for a workaround: Ipopt will always use finite tolerances, so small violations will always be possible. If things are crashing, then your downstream code is probably demanding stricter tolerances than Ipopt enforces. In most applications, -7.761358324221462e-9 should simply be treated as zero.
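One way to make downstream code robust to this, sketched here as a post-processing step in plain Julia (the sample values are illustrative; in a JuMP workflow the input would be `value.(ξ)`, and the bound 1/0.6 comes from the original model):

```julia
# Clamp solver output back into the feasible box before using it downstream.
# Tiny negative values like -7.76e-9 become exactly 0.0, and values a hair
# above 1/0.6 are capped at the upper bound.
raw = [-7.761358324221462e-9, 0.5, 1.6666666700000002]
ξ_safe = clamp.(raw, 0.0, 1/0.6)
```

This keeps the hack out of the model itself and confines it to the interface between the solver and the code that cannot tolerate out-of-bounds values.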

Maybe the performance will be better if you get rid of division:

@variable(model, 0.0<= 0.6*ξ[1:6] <= 1.0) 

Alternatively, if the constraints must be satisfied, you can tighten them:

@variable(model, 0.00001 <= 0.6*ξ[1:6] <= 1.0-0.00001) # you can pick a different positive value instead of 0.00001
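A sketch of the same tightening idea with syntax that `@variable` accepts, assuming the margin `ε = 1e-5` is applied directly to the bounds on `ξ` (the coefficient 0.6 is folded into the bound 1/0.6):

```julia
using JuMP

model = Model()
ε = 1e-5  # safety margin; pick a value appropriate for your application
# Shrink the feasible box by ε on each side so that solver iterates that
# overshoot the bounds by less than ε still satisfy the original 0 .. 1/0.6.
@variable(model, ε <= ξ[1:6] <= 1/0.6 - ε)
```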

Alternatively, you can use interval methods (IntervalOptimisation.jl/optimise.jl at master · JuliaIntervals/IntervalOptimisation.jl · GitHub), provided your problem is not too big (say a few dozen variables) and you can express your functions analytically.

Interval arithmetic was designed to bound roundoff errors. So it never goes out of bounds.
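A conceptual illustration of why, in plain Julia without any packages (a real library such as IntervalArithmetic.jl uses directed rounding; here each operation is simply widened outward by one ulp, which is a cruder over-approximation of the same idea):

```julia
# Represent a number as a (lo, hi) pair and widen every operation outward,
# so the true real-valued result is always contained in the interval.
add(a, b) = (prevfloat(a[1] + b[1]), nextfloat(a[2] + b[2]))

x = (0.1, 0.1)
y = (0.2, 0.2)
s = add(x, y)
# 0.1 + 0.2 != 0.3 in Float64, but the interval s still encloses 0.3,
# so a bound check against s can never wrongly report "out of bounds".
```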

Makes sense, but why isn’t the tolerance built into the JuMP API then? It seems very hacky to add and subtract a small number from your bounds in user code.

Constraint tightening is used in optimal control when you really cannot violate constraints, for example due to physical boundaries.

Maybe I am wrong here, but JuMP just provides the interface; the tolerances depend on what the user and the solver need. From Ipopt’s perspective, -7.761358324221462e-9 is zero at the default tolerances (I think 1e-8?), so this is the value it returned. Other parts of your code crash because for them -7.761358324221462e-9 is not zero and they cannot accept it. In general it is impossible to predict what the user wants or needs, unless they use specialised solutions like interval arithmetic.
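If tighter solver tolerances would help, they can be set through JuMP's `set_optimizer_attribute`; a sketch, assuming the documented Ipopt options `tol` and `constr_viol_tol` (1e-12 is an illustrative value, not a recommendation):

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
# Tighten Ipopt's overall convergence tolerance and the acceptable
# constraint violation; smaller values mean more iterations.
set_optimizer_attribute(model, "tol", 1e-12)
set_optimizer_attribute(model, "constr_viol_tol", 1e-12)
```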

Have a look here: Objective function "limit reached" status in JuMP - #6 by ivborissov


Yes, don’t add hacky epsilons to your code. Ipopt is working as its authors intended by allowing small violations of variable bounds. To get the behavior you want, adjust Ipopt’s bound_relax_factor parameter. See the Ipopt.jl readme for examples of setting parameters.
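A minimal sketch of that suggestion, reusing the variable declaration from the original post (setting `bound_relax_factor` to 0 tells Ipopt not to relax variable bounds at all; see the Ipopt options documentation):

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
# bound_relax_factor defaults to 1e-8; setting it to 0 forces Ipopt to
# keep iterates exactly inside the variable bounds (may slow convergence).
set_optimizer_attribute(model, "bound_relax_factor", 0.0)
@variable(model, 0.0 <= ξ[1:6] <= 1.0/0.6)
```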


Just to clarify, this syntax is invalid:

julia> @variable(model, 0.0<= 0.6*ξ[1:6] <= 1.0)
ERROR: LoadError: Expression 0.6 * ξ[1:6] cannot be used as a name.
 [1] error(s::String)
   @ Base ./error.jl:33
 [2] _get_name(c::Expr)
   @ JuMP.Containers ~/.julia/dev/JuMP/src/Containers/macro.jl:16
 [3] var"@variable"(__source__::LineNumberNode, __module__::Module, args::Vararg{Any, N} where N)
   @ JuMP ~/.julia/dev/JuMP/src/macros.jl:2045
in expression starting at REPL[4]:1

So yes, either make your objective function robust to numerical tolerances, or play with the solver settings.
