Hi, everyone
Does JuMP support the Benders decomposition interface of CPLEX (IBM Documentation)? I find that CPLEX doesn't provide examples for Julia. I have a model in which the master problem is a MILP and the subproblem is an LP. The model is hard to solve when the master and subproblem are solved together, so I want to decompose it using JuMP and CPLEX. (I have tried writing the code using the classical Benders algorithm, but the convergence is not good.)
Thanks in advance!
No.
I have tried writing the code using the classical Benders algorithm, but the convergence is not good
How big is the problem? Benders has known convergence issues, but you can do okay with some tricks (cut selection, some regularization in the first stage).
The JuMP documentation has examples of a basic implementation of Benders.
https://jump.dev/JuMP.jl/stable/tutorials/Optimization%20concepts/benders_decomposition/
https://jump.dev/JuMP.jl/stable/tutorials/Optimization%20concepts/benders_lazy_constraints/
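The iterative scheme those tutorials describe can be sketched as below. This is a toy sketch, not the asker's model: the coupling constraint, the data, and the iteration limit are all invented for illustration, and the real tutorials above handle feasibility cuts and termination more carefully.

```julia
# Minimal sketch of the classical Benders loop for a toy problem
#   min x + 2y  s.t.  x + y >= 3,  x ∈ Z₊,  y >= 0.
using JuMP, CPLEX

master = Model(CPLEX.Optimizer)
@variable(master, x >= 0, Int)
@variable(master, θ >= -1e6)  # epigraph variable: lower bound on subproblem cost
@objective(master, Min, x + θ)

for iter in 1:100
    optimize!(master)
    lower_bound = objective_value(master)
    x_k = value(x)

    # Solve the LP subproblem with the first stage fixed at x_k.
    sub = Model(CPLEX.Optimizer)
    @variable(sub, y >= 0)
    @constraint(sub, con, y >= 3 - x_k)  # invented coupling constraint
    @objective(sub, Min, 2y)
    optimize!(sub)

    upper_bound = x_k + objective_value(sub)
    if upper_bound - lower_bound <= 1e-6 * abs(upper_bound)
        break
    end
    # Optimality cut from the subproblem dual λ:
    #   θ >= obj(x_k) + λ * (x_k - x).
    λ = dual(con)
    @constraint(master, θ >= objective_value(sub) + λ * (x_k - x))
end
```

The lower bound from the master is monotone, while the subproblem value at each candidate `x_k` need not be, which is relevant to the discussion below.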
I was going to recommend StochasticPrograms.jl, but it looks like you already found it: model of different stages · Issue #30 · martinbiel/StochasticPrograms.jl · GitHub
You should also check out GitHub - atoptima/Coluna.jl: Branch-and-Price-and-Cut in Julia
Thank you again, @odow. By the way, is the objective function value of the subproblem monotonically decreasing until the relative gap (UB − LB)/UB is reached? Why does my master problem objective value barely change during the iterations, while my subproblem objective value fluctuates wildly in the classical Benders algorithm I wrote?
is the objective function value of the subproblem monotonically decreasing
No. The objective value of the relaxed first-stage (master) problem is what is monotonic: it is a bound that can only tighten as cuts are added. The subproblem value at each candidate first-stage solution can jump around.
Why does my master problem objective value barely change during the iterations, while my subproblem objective value fluctuates wildly in the classical Benders algorithm I wrote?
This is probably the classic "bang-bang" behavior of cutting-plane methods. Google "regularized Benders". Add a quadratic penalty term that penalizes deviation from the previous first-stage solution.
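A minimal sketch of that regularization, assuming a master model with a first-stage variable `x` and epigraph variable `θ` as in the classical setup; `x_prev` and `ρ` are placeholder names for the previous iterate and a tuning parameter:

```julia
using JuMP, CPLEX

# Re-solve the master with a proximal (quadratic) penalty on moving
# away from the previous first-stage solution x_prev. ρ > 0 trades off
# fidelity to the original objective against stability of the iterates.
function solve_regularized_master!(master, x, θ, x_prev, ρ)
    @objective(master, Min, x + θ + ρ * (x - x_prev)^2)
    optimize!(master)
    return value(x)
end
```

The quadratic term makes the master an MIQP, which CPLEX can solve; some variants instead decrease ρ over the iterations so the method converges to the unregularized optimum.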
It seems like you're minimizing, but if you're maximizing, another common mistake is to use dual instead of shadow_price. JuMP's notion of duality for linear programs when maximizing is negated compared to some textbooks.
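A tiny example of the sign difference, using an invented one-variable LP (any LP solver works in place of CPLEX):

```julia
using JuMP, CPLEX

model = Model(CPLEX.Optimizer)
@variable(model, x)
@constraint(model, con, x <= 1)
@objective(model, Max, x)
optimize!(model)

# shadow_price(con) is the textbook sensitivity: relaxing x <= 1 by ε
# increases the maximal objective by ε, so it is +1.0 here.
shadow_price(con)
# dual(con) follows JuMP's conic-duality convention and is the negative
# of the shadow price for a maximization problem: -1.0 here.
dual(con)
```

Using `dual` where a Benders cut needs the textbook sensitivity flips the sign of the cut coefficients in a maximization model, which can silently break convergence.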
Okay, thank you. I will try the regularized Benders.