I’m running into the following issue while solving a set of LP problems using JuMP:
I want to create a model, optimize it, and then, in turn, minimize and maximize each of its variables while requiring the original optimum to still be achieved. For those in the know: yes, it’s FVA (flux variability analysis). The code looks something like this:
using JuMP, CPLEX

model = JuMP.Model(CPLEX.Optimizer)

# Random problem data
S = rand(5, 5)
lower = zeros(5)
upper = ones(5)
c = [1; ones(4)]'   # row vector of objective coefficients
b = zeros(5)

@variable(model, x[i = 1:5], lower_bound = lower[i], upper_bound = upper[i])
@objective(model, Max, c * x)
@constraint(model, S * x .== b)

JuMP.optimize!(model)
optimum = JuMP.objective_value(model)

# Require the original objective to stay at its optimum
@constraint(model, c * x ≥ optimum)

for i in 1:5
    @objective(model, MOI.MIN_SENSE, x[i])
    JuMP.optimize!(model)
    min_val = JuMP.objective_value(model)   # renamed to avoid shadowing Base.min

    JuMP.set_objective_sense(model, MOI.MAX_SENSE)
    JuMP.optimize!(model)
    max_val = JuMP.objective_value(model)   # renamed to avoid shadowing Base.max
end
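For completeness: the constraint above enforces the optimum exactly. A slightly relaxed version, guarding against solver round-off, would look like the sketch below (it reuses `model`, `c`, `x`, and `optimum` from the snippet above; the tolerance value is an arbitrary choice of mine, not from any documentation):

```julia
# Sketch: relax the fixed-optimum constraint by a small relative
# tolerance so round-off in `optimum` cannot cut off the optimal face.
tol = 1e-6   # arbitrary tolerance, my own choice
@constraint(model, c * x ≥ optimum - tol * abs(optimum))
```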
Now, by definition, if the original problem is feasible, all of the subproblems within the for-loop should be as well. However, it appears that the optimizer sometimes enters a “degenerate” state and returns infeasible for one of the subproblems. If a new optimizer instance is attached, it returns feasible, as it should. In other words:
JuMP.optimize!(model)
# termination_status(model) will return
# INFEASIBLE::TerminationStatusCode = 2
set_optimizer(model, CPLEX.Optimizer);
JuMP.optimize!(model)
# termination_status(model) will return
# OPTIMAL::TerminationStatusCode = 1
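The only workaround I have found so far is to automate exactly that: check the status and re-attach a fresh optimizer before retrying. A sketch (the retry logic is my own, not something from the JuMP docs):

```julia
# Sketch: retry with a fresh optimizer instance whenever a subproblem
# that should be feasible comes back INFEASIBLE.
JuMP.optimize!(model)
if termination_status(model) == MOI.INFEASIBLE
    set_optimizer(model, CPLEX.Optimizer)   # fresh solver instance
    JuMP.optimize!(model)                   # returns OPTIMAL in my tests
end
```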
I’m at a loss as to what is going on; I’m not even sure at what level the problem lies. I originally thought it was related to CPLEX, but it happens with Gurobi as well (though not necessarily with the exact same model).
I’m on
Julia Version 1.3.0
Commit 46ce4d7933 (2019-11-26 06:09 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
with
JuMP v0.21.3
CPLEX v0.6.6