How to interpret constraint values for inequalities?

Hello,

I’m trying to understand the values returned when evaluating a constraint. In another thread I received a useful response on how to evaluate constraints. I’ll use a slightly modified example from there (pasted below) to illustrate my confusion:

Why does evaluating a constraint yield the left-hand side of the constraint? I expected that evaluating the constraint would give either A) a binary value indicating whether or not the constraint was satisfied or B) a continuous value indicating how far we are from satisfying the constraint.

Is there a way to extract values like A) or B) from the constraints without modifying the constraints themselves?

Concretely, for x = 0.3795 and the 2x <= 0.5 constraint, I would expect either A) 0 (or false), because the constraint isn’t satisfied, or B) 0.25901 (i.e. 2x - 0.5), which tells us how far we are from satisfying the constraint.

Concretely, for x = 0.3795 and the 0 <= sin(x) <= 0.5 constraint, I would expect either A) 1 (or true), because the constraint is satisfied, or B) 0, indicating that no change is needed to satisfy the constraint.

julia> using JuMP

julia> model = Model();

julia> @variable(model, x >= 0)
x

julia> @constraint(model, c, 2x <= 0.5)
c : 2 x ≤ 0.5

julia> @NLconstraint(model, nl_con, 0 <= sin(x) <= 0.5)
0 ≤ sin(x) ≤ 0.5

julia> variable_values = Dict(v => rand() for v in all_variables(model))
Dict{VariableRef, Float64} with 1 entry:
  x => 0.379509

julia> cons = all_constraints(model; include_variable_in_set_constraints = true)
3-element Vector{ConstraintRef}:
 c : 2 x ≤ 0.5
 x ≥ 0.0
 0 ≤ sin(x) ≤ 0.5

julia> sol = Dict(c => value(xi -> variable_values[xi], c) for c in cons)
Dict{ConstraintRef{Model, C, ScalarShape} where C, Float64} with 3 entries:
  0 ≤ sin(x) ≤ 0.5 => 0.370465
  x ≥ 0.0          => 0.379509
  c : 2 x ≤ 0.5    => 0.759019

julia> sol[c]
0.759018812310488

julia> sol[nl_con]
0.37046482764020316

julia> sol[LowerBoundRef(x)]
0.379509406155244

The point you are raising makes sense. It would be nice to know the slack each constraint needs in order to be satisfied (or the margin by which it is). From the question, it seems the value is the non-constant part of the constraint: if a constraint is stored as an Expression together with a (LowerBound, UpperBound) pair, then the constraint value is the value of that Expression.
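If it helps, you can inspect that (expression, bounds) split directly with constraint_object. A minimal sketch, reusing the same model and constraint c as in the question (the exact printed types may differ slightly between JuMP versions):

using JuMP

model = Model()
@variable(model, x >= 0)
@constraint(model, c, 2x <= 0.5)

obj = constraint_object(c)  # wraps the stored function and the set
obj.func                    # the affine expression 2 x, which is what value() evaluates
obj.set                     # MOI.LessThan{Float64}(0.5), i.e. the bound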


The answer is actually slightly complicated.

For @constraint, JuMP moves all variables to the left-hand side, and all constants to the right-hand side. The value of a constraint is the value of the expression on the left-hand side of the constraint.

For @NLconstraint, JuMP moves all terms to the left-hand side of the constraint. The value of the constraint is the value of the expression on the left-hand side of the constraint.

For interval constraints (with both @constraint and @NLconstraint), the left- and right-hand sides must be constants, and value is the value of the expression stored in the middle of the constraint. Note that @constraint still moves any constant in the middle expression into the bounds (see con_eq_in_le below), whereas @NLconstraint leaves the middle expression as written.

julia> using JuMP

julia> model = Model();

julia> @variable(model, x >= 0)
x

julia> @constraint(model, con_le_lhs, 2x + 1 <= 0.5)
con_le_lhs : 2 x ≤ -0.5

julia> @constraint(model, con_le_rhs, 0.5 <= 2x + 1)
con_le_rhs : -2 x ≤ 0.5

julia> @constraint(model, con_ge_lhs, 2x + 1 >= 0.5)
con_ge_lhs : 2 x ≥ -0.5

julia> @constraint(model, con_ge_rhs, 0.5 >= 2x + 1)
con_ge_rhs : -2 x ≥ 0.5

julia> @constraint(model, con_eq_lhs, 2x + 1 == 0.5)
con_eq_lhs : 2 x = -0.5

julia> @constraint(model, con_eq_rhs, 0.5 == 2x + 1)
con_eq_rhs : -2 x = 0.5

julia> @constraint(model, con_eq_in_le, 0.5 <= 2x + 1 <= 3)
con_eq_in_le : 2 x ∈ [-0.5, 2.0]

julia> @constraint(model, con_eq_in_ge, 4 >= 2x + 1 >= 0.5)
con_eq_in_ge : 2 x ∈ [-0.5, 3.0]

julia> @NLconstraint(model, nl_con_in, 0 <= x <= 0.5)
0 ≤ x ≤ 0.5

julia> @NLconstraint(model, nl_con_le, x <= 1)
x - 1.0 ≤ 0

julia> @NLconstraint(model, nl_con_ge, x >= 1)
x - 1.0 ≥ 0

julia> @NLconstraint(model, nl_con_eq, x == 1)
x - 1.0 = 0

julia> cons = all_constraints(model; include_variable_in_set_constraints = true)
13-element Vector{ConstraintRef}:
 con_eq_lhs : 2 x = -0.5
 con_eq_rhs : -2 x = 0.5
 con_ge_lhs : 2 x ≥ -0.5
 con_ge_rhs : -2 x ≥ 0.5
 con_le_lhs : 2 x ≤ -0.5
 con_le_rhs : -2 x ≤ 0.5
 con_eq_in_le : 2 x ∈ [-0.5, 2.0]
 con_eq_in_ge : 2 x ∈ [-0.5, 3.0]
 x ≥ 0.0
 0 ≤ x ≤ 0.5
 x - 1.0 ≤ 0
 x - 1.0 ≥ 0
 x - 1.0 = 0

julia> sol = Dict(c => value(xi -> 1.0, c) for c in cons)
Dict{ConstraintRef{Model, C, ScalarShape} where C, Float64} with 13 entries:
  con_ge_rhs : -2 x ≥ 0.5          => -2.0
  x - 1.0 ≤ 0                      => 0.0
  con_ge_lhs : 2 x ≥ -0.5          => 2.0
  con_eq_rhs : -2 x = 0.5          => -2.0
  x ≥ 0.0                          => 1.0
  x - 1.0 ≥ 0                      => 0.0
  con_eq_in_le : 2 x ∈ [-0.5, 2.0] => 2.0
  con_le_rhs : -2 x ≤ 0.5          => -2.0
  0 ≤ x ≤ 0.5                      => 1.0
  con_eq_in_ge : 2 x ∈ [-0.5, 3.0] => 2.0
  x - 1.0 = 0                      => 0.0
  con_eq_lhs : 2 x = -0.5          => 2.0
  con_le_lhs : 2 x ≤ -0.5          => 2.0

If you want the slacks to test feasibility, then you’re probably interested instead in primal_feasibility_report:
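A minimal sketch of what that report looks like, keeping only the linear constraint from the question: primal_feasibility_report maps each violated constraint to the distance between its value and the nearest point in its set, and omits constraints that are already satisfied (up to the atol keyword argument).

using JuMP

model = Model()
@variable(model, x >= 0)
@constraint(model, c, 2x <= 0.5)

# Check feasibility of the point x = 0.3795 without solving the model.
report = primal_feasibility_report(model, Dict(x => 0.3795))

# The constraint c is violated: 2 * 0.3795 = 0.759 exceeds 0.5 by 0.259,
# so report[c] is approximately 0.259. The satisfied bound x >= 0 does not appear.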


On top of the other answers: perhaps it’s a matter of terminology.
You’re right that a constraint is a logical relation between LHS and RHS. However, in modeling and numerical optimization, we work with constraint functions, that is, usually the evaluation of the LHS once all variables have been moved there (see @odow’s answer).
We have more latitude to recreate the constraint “g(x) <= 0” from the two pieces of information “g(x)” (the actual constraint function) and “[-inf, 0]” (the bounds) than vice versa. For example:

  • we can differentiate g;
  • we can reformulate the constraint, e.g. add a slack s: g(x) - s = 0 and s ≤ 0 (see the sketch after this list);
  • we can relax the constraint in the objective: f(x) + ρ (g(x) - s)^2;
  • and so on.
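To make the slack idea concrete, here is a minimal JuMP sketch of the reformulation, assuming a toy constraint function g(x) = x^2 - 1 (the names g_expr and slack_form are only illustrative):

using JuMP

model = Model()
@variable(model, x)

# Original constraint: g(x) <= 0 with g(x) = x^2 - 1.
# Equivalent reformulation: introduce a slack s <= 0 and require g(x) - s == 0.
@variable(model, s <= 0)
@expression(model, g_expr, x^2 - 1)
@constraint(model, slack_form, g_expr - s == 0)

Both forms describe the same feasible region in x; the reformulated version exposes the constraint function through an equality, which is what makes relaxations such as the penalty term above easy to write down.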