I found an interesting example involving Gurobi's LP solver.

There are two linear constraints in my LP, named `fixed` and `cut`. The outcomes of the following two schemes differ:
- Add both constraints, then solve once.
- Add `fixed` and solve; then add `cut` and solve again.
In the first scheme, the solution is `y = -1.192e-6`. In the second scheme, the solution after the first solve is `y = -2.384e-6`. Theoretically, after adding `cut`, the updated solution `y` should be the same as in the first scheme (because the final models are identical). But `cut` fails to cut off the first solution `y = -2.384e-6`.
Here is runnable code to reproduce the results above:

```julia
import JuMP, Gurobi

function optimise(model)
    JuMP.optimize!(model)
    JuMP.assert_is_solved_and_feasible(model; allow_local = false, dual = true)
end

c = -4.76837158203125e-6

model = JuMP.Model(Gurobi.Optimizer)
JuMP.@variable(model, y)
JuMP.@objective(model, Min, 2 * y)
JuMP.@constraint(model, fixed, -9.5367431640625e-6 * y <= 2.2737367544323206e-11)
optimise(model);
yt = JuMP.value(y)  # -2.384e-6
JuMP.@constraint(model, cut, c * y <= 5.6843418860808015e-12)
optimise(model);
yt = JuMP.value(y)  # still -2.384e-6, not the -1.192e-6 of the first scheme
```
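For reference, the violation of `cut` at the first-solve point can be computed by hand (my own back-of-envelope check, using the rounded solution value printed above):

```julia
c = -4.76837158203125e-6
b = 5.6843418860808015e-12   # RHS of `cut`
yt = -2.384e-6               # first-solve solution (rounded, as printed)

# violation = lhs - rhs; a positive value means `cut` is violated at yt
violation = c * yt - b
println(violation)  # ≈ 5.7e-12
```

So the violation is only about `5.7e-12`, far below Gurobi's default `FeasibilityTol` of `1e-6` (per its documentation), which is consistent with the point not being cut off here.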
Then I proceeded to explore this can-cut-off behaviour. I modified the RHS constant of `cut` and found two values, one of which can cut off the first solution while the other cannot:

```julia
b = -1.1e-9  # cannot cut off y = -2.384e-6
b = -2.9e-9  # can cut off y = -2.384e-6

# replacing the earlier definition of `cut` with:
JuMP.@constraint(model, cut, c * y <= b)
optimise(model);
yt = JuMP.value(y)
```
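To see what violation each `b` corresponds to at the first-solve point (again my own arithmetic, using the rounded solution value):

```julia
c = -4.76837158203125e-6
yt = -2.384e-6  # first-solve solution (rounded, from the output above)

for b in (-1.1e-9, -2.9e-9)
    violation = c * yt - b
    println("b = $b  =>  violation ≈ $violation")
end
# violations are ≈ 1.11e-9 and ≈ 2.91e-9 respectively, so if a violation
# threshold governs the behaviour, it sits somewhere in between
```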
My guess was that Gurobi decides whether the current trial point `y = -2.384e-6` is cut off according to the violation of the new `cut`. But under this line of reasoning, the critical violation is a value that does not look meaningful: it lies somewhere between roughly `1.1e-9` and `2.9e-9`.

I don't quite understand this behaviour.