A note on how to identify redundant constraints using JuMP


(In this topic, whenever I write ==, it means Julia's == operator.)
I use the following primal LP model as a demonstration:
CodeBlock1:

import JuMP, Gurobi; GRB_ENV = Gurobi.Env()
primal = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV))
JuMP.@variable(primal, x)
JuMP.@variable(primal, y)
JuMP.@objective(primal, Min, 2 * y + 0.7 * x)
JuMP.@constraint(primal, c1, 3 * y >= x)
JuMP.@constraint(primal, c2, x >= 0)
JuMP.@constraint(primal, c3, y >= 0)
JuMP.optimize!(primal); JuMP.assert_is_solved_and_feasible(primal; dual = true, allow_local = false)
primal_objval = JuMP.objective_value(primal) # 0.0
c = JuMP.dual.([c1, c2, c3]) # c = [0.6667, 1.3667, 0.0]

First, we note that c[3] == 0, indicating that c3 is redundant.
We can run a test to verify this:

# reload CodeBlock1
JuMP.delete(primal, c3); JuMP.unregister(primal, :c3)
JuMP.optimize!(primal); JuMP.assert_is_solved_and_feasible(primal; dual = true, allow_local = false)
relaxed_objval = JuMP.objective_value(primal)
relaxed_objval >= primal_objval && print("c3 is indeed redundant") # removing a constraint can only lower a Min objective, so >= here implies ==

Okay, let’s return to the initial state.
Note that although c[1] != 0, we cannot conclude that c1 is not redundant.
Indeed, from a primal-side view, we have:

# reload CodeBlock1
JuMP.delete(primal, c1); JuMP.unregister(primal, :c1)
JuMP.optimize!(primal); JuMP.assert_is_solved_and_feasible(primal; dual = true, allow_local = false)
relaxed_objval = JuMP.objective_value(primal)
relaxed_objval >= primal_objval && print("c1 is indeed redundant")

Alternatively, from the dual-side view, with an ϵ-relaxation trick, we have:

# reload CodeBlock1
JuMP.delete(primal, c1); JuMP.unregister(primal, :c1)
ϵ = 1e-5; JuMP.@constraint(primal, c1, 3 * y + ϵ >= x) # re-add an ϵ-relaxed counterpart
JuMP.optimize!(primal); JuMP.assert_is_solved_and_feasible(primal; dual = true, allow_local = false)
JuMP.dual(c1) == 0 && print("c1 is indeed redundant")

Conclusion:

  1. After solving primal to OPTIMAL, we can drop all constraints whose dual() == 0 without altering the primal-side optimal ObjVal.
  2. There are two equivalent ways (one primal, one dual) to judge whether a constraint is redundant. From the primal side, remove the constraint and re-solve; if relaxed_objval == primal_objval, then that constraint is redundant. From the dual side, ϵ-relax the constraint under investigation; if dual(constr) == 0, then it is redundant. The equivalence can be proved from these facts: 0 is an optimal dual solution, min-max strong duality, and Lagrangian relaxation. (See the sketch after this list.)
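
Below is a minimal sketch of the primal-side test as a reusable function (my own illustration, assuming the model has already been solved to optimality; is_redundant is a hypothetical helper and the atol threshold is an assumption):

function is_redundant(model, con; atol = 1e-8)
    base = JuMP.objective_value(model)           # incumbent optimal value
    data = JuMP.constraint_object(con)           # remember the func-in-set data
    JuMP.delete(model, con)                      # drop the constraint
    JuMP.optimize!(model)                        # re-solve the relaxation
    JuMP.assert_is_solved_and_feasible(model)
    relaxed = JuMP.objective_value(model)
    JuMP.add_constraint(model, data)             # restore the constraint afterwards
    return isapprox(relaxed, base; atol = atol)  # redundant iff the ObjVal is unchanged
end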

if the dual(constr) == 0, then that constraint is redundant

You should be a bit careful here. If the model is dual degenerate (there are multiple optimal primal solutions), then removing this constraint could cause the solver to find a new solution with the same objective bound that violates the constraint you removed.

julia> using JuMP, Gurobi

julia> begin
           model = Model(Gurobi.Optimizer)
           set_silent(model)
           @variable(model, x[1:2])
           set_lower_bound(x[1], 0.0)
           @objective(model, Max, x[1] + x[2])
           @constraint(model, c1, x[1] + x[2] <= 1.5)
           @constraint(model, c2, x[2] <= 1)
           optimize!(model)
       end
Set parameter LicenseID to value 890341

julia> dual(c2)
0.0

julia> delete(model, c2)

julia> optimize!(model)

julia> value(x[2]) <= 1
false
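
A defensive pattern for this pitfall (my own sketch, restarting from the solved model above, before the deletion): capture the constraint's function before deleting it, then evaluate it at the new optimum.

f = constraint_object(c2).func  # capture the affine expression x[2] BEFORE deleting
delete(model, c2)
optimize!(model)
# evaluate the deleted constraint at the new solution; under dual degeneracy it can be violated
value(f) <= 1.0 || @warn "the new optimum violates the deleted constraint c2"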

I only intend to remove trivial constraints, mainly those artificial bounds added to ensure the compactness of the feasible region.
e.g., the primal problem is \min_{y, x} \; y \quad \text{s.t.} \quad y \ge \lVert x \rVert_1
My code for this might be

# assume the imports and GRB_ENV from CodeBlock1
primal = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV))
# the following 4 lines are the primal problem
JuMP.@variable(primal, y)
JuMP.@variable(primal, x[1:2])
JuMP.@constraint(primal, [y; x] in JuMP.MOI.NormOneCone(3))
JuMP.@objective(primal, Min, y)
# To make the feasible region compact, we artificially add these
JuMP.set_lower_bound(y, -3); JuMP.set_upper_bound(y, 3);
JuMP.set_lower_bound.(x, -4); JuMP.set_upper_bound.(x, 4);
# optimize WITH artificial bounds
JuMP.optimize!(primal); JuMP.assert_is_solved_and_feasible(primal; dual = true, allow_local = false)
S = [JuMP.dual(JuMP.LowerBoundRef(y))    JuMP.dual(JuMP.UpperBoundRef(y));
    JuMP.dual.(JuMP.LowerBoundRef.(x))  JuMP.dual.(JuMP.UpperBoundRef.(x))]
@assert all(S .== 0)
# then we can delete all artificial bounds and reoptimize to retain the same optimal objective_value
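
A sketch of that final step, continuing the model above (delete_lower_bound and delete_upper_bound are JuMP's bound-deletion functions; the atol is an assumption):

JuMP.delete_lower_bound(y); JuMP.delete_upper_bound(y)
JuMP.delete_lower_bound.(x); JuMP.delete_upper_bound.(x)
JuMP.optimize!(primal); JuMP.assert_is_solved_and_feasible(primal; dual = true, allow_local = false)
@assert isapprox(JuMP.objective_value(primal), 0.0; atol = 1e-8) # same optimal ObjVal (0 at x = 0, y = 0)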

If I understand the example correctly, you actually want to do presolve reductions on redundant constraints, in this case bound strengthening / domain propagation.
Dominated rows can be discarded, as can fixed variables (lb = ub).

This is what (mostly commercial) solvers are very good at.
What is your reason for reducing redundant constraints at the JuMP level?

No. In my experience, we cannot always lean on solvers (e.g., Gurobi's new nonconvex solver; I have made many posts on this issue in the Optimization category within the last 10 days, you can check some of them). Sometimes it is better to write algorithms ourselves rather than depending on a black-box solver. Of course, LP solvers are as robust as expected, so LP relaxation (the cutting-plane method) is very robust and should be applied generously.

A fun fact: the dual variable of an obj_cut can be converted to Bool (true or false) without incurring an InexactError.

using Gurobi, JuMP
model = Model(Gurobi.Optimizer)
@variables(model, begin x >= 0; y <= 1 end)
@expression(model, obj, x + y); @objective(model, Max, obj)
@constraint(model, c1, 1.5    >= obj)
@constraint(model, c2, 1.5001 >= obj)
@constraint(model, c3, 1.5002 >= obj)
# 1️⃣
optimize!(model); JuMP.assert_is_solved_and_feasible(model; dual = true, allow_local = false)
Bool(JuMP.dual(c1)) && println("c1 is indispensable"); # prints
Bool(JuMP.dual(c2)) && println("c2 is indispensable"); # silent
# 2️⃣
JuMP.delete(model, c1)
optimize!(model); JuMP.assert_is_solved_and_feasible(model; dual = true, allow_local = false)
Bool(JuMP.dual(c2)) && println("c2 is indispensable"); # prints
Bool(JuMP.dual(c3)) && println("c3 is indispensable"); # silent

You should not rely on this behavior in general.

The solver may return any solution within tolerance.

Relying on exact floating-point comparisons is always a bad idea (in any numerical problem, not just optimization solvers).
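
For instance, instead of dual(con) == 0, compare against a tolerance (a sketch; is_zero_dual is a hypothetical helper and the 1e-8 threshold is an assumption to be tuned to the solver's tolerances):

is_zero_dual(con; atol = 1e-8) = abs(dual(con)) <= atol  # hypothetical helper
is_zero_dual(c2) && println("c2 looks redundant, up to tolerance")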

About Gurobi, I did a test using a moderate LP case.
If I solve with the default settings, then it is fairly robust (this robustness is a façade; see below):

solve_to_normality(lpr);
c = JuMP.dual.(c_rlt) # c_rlt are all ≥ constraints
sum(c .< 1e-13) # 2979
sum(c .== 0) # 2979
sum(c .< -1e-9) # 0

But if I modify 2 settings (Method = 2 is barrier, Crossover = 0 disables crossover), then the duals become somewhat messy:

JuMP.set_attribute(lpr, "Method", 2)
JuMP.set_attribute(lpr, "Crossover", 0)
solve_to_normality(lpr);
c = JuMP.dual.(c_rlt) # c_rlt are all ≥ constraints
sum(c .< 1e-13) # 906
sum(c .== 0) # 0
sum(c .< -1e-9) # 1
sum(c .< -1e-8) # 0

But after all, the LP model lpr is still solved to normality: OPTIMAL termination status, with both primal and dual status FEASIBLE_POINT.
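
If you still want to apply the dual == 0 test to such output, one pragmatic workaround (my own sketch; the threshold is an assumption) is to round the tiny duals to zero first:

tol = 1e-9                                     # assumed threshold, tune to the solver's tolerances
c_clean = map(v -> abs(v) < tol ? 0.0 : v, c)  # zero out near-zero duals
sum(c_clean .== 0)                             # count duals that are zero up to tolerance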

Update: Gurobi returns spurious tiny dual values with the simplex algorithm as well:

julia> solve_to_normality(lpr)
Gurobi Optimizer version 12.0.1 build v12.0.1rc0 (win64 - Windows 11.0 (26100.2))

Optimize a model with 2379 rows, 219 columns and 11498 nonzeros
Coefficient statistics:
  Matrix range     [7e-01, 2e+02]
  Objective range  [6e+00, 4e+02]
  Bounds range     [1e+00, 2e+02]
  RHS range        [7e-01, 8e+02]

Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0    1.9359608e+05   1.715876e+04   0.000000e+00      0s
     331    1.8438598e+05   0.000000e+00   0.000000e+00      0s

Solved in 331 iterations and 0.03 seconds (0.02 work units)
Optimal objective  1.843859800e+05

julia> tup = get_dual();

julia> tup.g0_rx[9, 5, 9]
4.551914400963142e-14 # This is NOT 0

I have nothing to add, other than to re-affirm that this is expected behavior. Solvers may not find a “nice” rational solution, even if one trivially exists.