I added 1 million constraints to an LP unwittingly

I adapted your example into an MIQP problem:
Minimize (x + 0.54)^2 - 1 over the interval [-1, 1], with x being an integer.
I intend to use Gurobi's MIP callback to add lazy cuts.
But I have a novel idea: adding a Phase 1 before the conventional solve starts.
The main question is:

  • Do you think the 🍅 line is advantageous (i.e., does it have the potential to speed up the overall solving procedure)?

The secondary question concerns the last for-loop (🟠) near the end of the script.
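
For context, the cuts generated in both phases are the standard tangent (outer-approximation) cuts of the convex objective: at a trial point x̂ the cut reads y >= f(x̂) + fˈ(x̂) * (x - x̂), which never cuts off any point of the epigraph y >= f(x) because f is convex.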

import JuMP, Gurobi; GRB_ENV = Gurobi.Env()
import JuMP.value as v
COT = 1e-6  # cut-off tolerance
a = 0.54    # data of the problem
f(x)  = (x + a)^2 - 1
fˈ(x) = 2 * (x + a)
function set_user_cut(model, c) # via Gurobi's API
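    # setting Gurobi's "Lazy" constraint attribute to -1 marks the constraint as a user cut:
    # it is moved to the cut pool and only added back to the LP when Gurobi's cutting-plane
    # routines find it useful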
    JuMP.MOI.set(JuMP.backend(model), Gurobi.ConstraintAttribute("Lazy"), JuMP.index(c), -1)
end
function solve_to_optimality(model)
    JuMP.optimize!(model)
    JuMP.assert_is_solved_and_feasible(model; allow_local = false)
end
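# NB: the callback below reads the globals `x`, `y`, `model` and `from_MIP_cut_vec`;
# they are only defined further down in this script, before `optimize!` is called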
function my_callback_function(cb_data, cb_where::Cint)
    cb_where != Gurobi.GRB_CB_MIPSOL && return
    Gurobi.load_callback_variable_primal(cb_data, cb_where)
    xv = JuMP.callback_value(cb_data, x)
    yv = JuMP.callback_value(cb_data, y)
    @info "current trial at x = $xv, y = $yv"
    if yv < f(xv) - COT  # ✅ generate cuts on-the-fly
        con = JuMP.@build_constraint(y >= f(xv) + fˈ(xv) * (x - xv))
        @info "push & add lazy constr" func=con.func set=con.set
        push!(from_MIP_cut_vec, con)
        JuMP.MOI.submit(model, JuMP.MOI.LazyConstraint(cb_data), con)
    end
end

# 🟣 Phase 1: solve the continuous relaxation of the `model` (the `Int` restriction on x is only added in Phase 2)
# Phase 1 is supposed to be fast, due to its purely continuous (LP) nature
model = JuMP.direct_model(Gurobi.Optimizer(GRB_ENV))
JuMP.set_silent(model)
JuMP.@variable(model, -1.98 <= x <= 1.98)
JuMP.@variable(model, y >= -7) # the lower bound -7 is an artificial one to avoid initial unboundedness
# we intend to emulate the following quadratic constraint via linear cuts
# JuMP.@constraint(model, y >= f(x))
JuMP.@objective(model, Min, y)
solve_to_optimality(model)
from_LP_cut_vec = JuMP.ConstraintRef[] 
while true 
    if v(y) < f(v(x)) - COT # ✅ generate cuts on-the-fly
        push!(from_LP_cut_vec, JuMP.@constraint(model, y >= f(v(x)) + fˈ(v(x)) * (x - v(x))))
        solve_to_optimality(model)
    else
        @info "After adding $(length(from_LP_cut_vec)) cuts, (x, y) converges to ($(v(x)), $(v(y)))"
        break
    end
end
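# on exit, v(y) >= f(v(x)) - COT: the piecewise-linear under-estimator built from the cuts
# matches f (within COT) at the relaxation optimum, i.e. Phase 1 has converged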

# 🔵 Phase 2: this is the `model` with the `Int` constraint that we actually intend to solve
JuMP.unset_silent(model)
set_user_cut.(model, from_LP_cut_vec) # 🍅
JuMP.set_integer(x)
JuMP.set_attribute(model, "LazyConstraints", 1) # required so that Gurobi honors lazy constraints submitted from the callback
JuMP.MOI.set(model, Gurobi.CallbackFunction(), my_callback_function)
from_MIP_cut_vec = JuMP.ScalarConstraint[]
solve_to_optimality(model) # 🟡 from the log we observe that the user cuts are indeed adopted by Gurobi
JuMP.solution_summary(model; verbose = true) # This final result is correct
for i in from_MIP_cut_vec[1:2] # 🟠 [question] why are these two cuts the same?
    @info i
end
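
A quick sanity check of the final result (a tiny snippet separate from the script above, using only the data a = 0.54 and the bounds on x): the integer candidates are just -1, 0 and 1, so the optimum can be enumerated directly.

best_x = argmin(f, -1:1)                       # the only integer points inside the bounds [-1.98, 1.98]
@info "hand check" best_x best_f = f(best_x)   # expected: x = -1, f(x) = -0.7884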
  • Update: why does MOI have this behavior?
julia> a = from_MIP_cut_vec[1]
JuMP.ScalarConstraint{JuMP.AffExpr, MathOptInterface.GreaterThan{Float64}}(y + 0.9199999999999999 x, MathOptInterface.GreaterThan{Float64}(-1.7084))

julia> b = from_MIP_cut_vec[2]
JuMP.ScalarConstraint{JuMP.AffExpr, MathOptInterface.GreaterThan{Float64}}(y + 0.9199999999999999 x, MathOptInterface.GreaterThan{Float64}(-1.7084))

julia> a.func == b.func && a.set == b.set
true

julia> a == b
false

Therefore a related question is: how can I remove this sort of redundancy, so that each cut occurs only once in the collection from_MIP_cut_vec?
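
One workaround that comes to mind (just a sketch, built around a hypothetical helper push_if_new!; it compares the stored function/set pairs exactly, so it only filters literal duplicates):

function push_if_new!(vec, con)
    # compare the function/set pair (as in the REPL check above) instead of
    # relying on == for the ScalarConstraint objects themselves
    any(c -> c.func == con.func && c.set == con.set, vec) && return false
    push!(vec, con)
    return true
end
# then, inside the callback, `push!(from_MIP_cut_vec, con)` would become
# `push_if_new!(from_MIP_cut_vec, con)`

But is there a more idiomatic way to achieve this?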