Hi,
I’m working on Benders decomposition of a MILP in Julia, using the JuMP package with the Gurobi solver. In the example here, the lazy-constraint callback is called whenever the master problem reaches an integer-feasible solution (within an iteration). However, as per my math, the function should ideally be called only at an optimal solution of the master problem (at that iteration).
When I run the standard Benders example code, the callback is called whenever the master problem reaches any integer-feasible solution, which causes trouble with convergence.
Could you suggest a method to call the function only at the optimal solution of each master-problem iteration? The code is structured as follows:
loading of the solver and JuMP
loading of the master problem
function addBendersCut(cb)
    subproblem loading and solving
    test for global optimality
    lazy constraints
end
addlazycallback(masterProblemModel, addBendersCut)
statusMP = solve(masterProblemModel)
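For concreteness, the outline above could be fleshed out roughly as follows in the old (pre-0.19) JuMP callback API; the two-variable master model and the `solveSubproblem` helper are placeholders invented for illustration, not the actual code:

```julia
using JuMP, Gurobi  # old JuMP (<= 0.18) syntax, matching this thread

# Placeholder master problem: theta underestimates the subproblem value.
masterProblemModel = Model(solver = GurobiSolver())
@variable(masterProblemModel, x[1:2] >= 0, Int)
@variable(masterProblemModel, theta >= 0)
@objective(masterProblemModel, Min, sum(x) + theta)

function addBendersCut(cb)
    xval = getvalue(x)                        # candidate integer solution
    # Solve the subproblem at xval (omitted); suppose the duals yield a
    # cut of the form  theta >= constant + coeffs' * x.
    constant, coeffs = solveSubproblem(xval)  # hypothetical helper
    if getvalue(theta) < constant + dot(coeffs, xval) - 1e-6
        @lazyconstraint(cb, theta >= constant + dot(coeffs, x))
    end
    # If no cut is violated, the candidate passes as truly feasible.
end

addlazycallback(masterProblemModel, addBendersCut)
statusMP = solve(masterProblemModel)
```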
Instead of using a callback, you could just use a loop where you first solve the master, then generate the cut, and repeat.
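A minimal sketch of that loop, in the same old JuMP syntax; `solveSubproblem` is a hypothetical helper returning the cut data for a candidate point:

```julia
using JuMP, Gurobi

# Placeholder master problem, as above.
masterProblemModel = Model(solver = GurobiSolver())
@variable(masterProblemModel, x[1:2] >= 0, Int)
@variable(masterProblemModel, theta >= 0)
@objective(masterProblemModel, Min, sum(x) + theta)

tol = 1e-6
while true
    solve(masterProblemModel)                 # re-solves from scratch each time
    xval = getvalue(x)
    constant, coeffs = solveSubproblem(xval)  # hypothetical helper
    # Stop once theta already matches the subproblem value at xval.
    if getvalue(theta) >= constant + dot(coeffs, xval) - tol
        break
    end
    @constraint(masterProblemModel, theta >= constant + dot(coeffs, x))
end
```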
Hi leethargo,
As per your suggestion, if I use a loop where I first solve the master, then generate the cut, and repeat, it leads to the master problem being solved again and again.
The main reason I’m using Julia and JuMP is to use callbacks: they are a quicker way of solving, because the solution tree is preserved (instead of the heavy master problem being solved from scratch every time).
Callbacks would help bring my solution times down drastically.
I was referring to the example on the JuliaOpt page “Optimization packages for the Julia language”, in the section “Modern approach: adding the Benders cuts as lazy constraints”.
Here, callbacks help massively in getting the times down.
In this, the following happens:
1. Master problem: integer-feasible solutions are found (a bunch of them, as per the solution tree).
2. It picks one of the integer-feasible solutions and pauses the master problem.
3. The callback function ‘addBendersCut’ is called; it solves the subproblem, updates the solution values, and adds lazy constraints.
4. The master problem resumes, now including the lazy constraint.
5. Steps 1-4 repeat until the stopping condition is met.
The way I want it to be updated is:
In step 2, pick the optimal solution of the master problem at that stage (instead of an arbitrary integer-feasible one).
Regards,
Loy
You are right that with my suggestion, the master problem would be re-solved from scratch (with a new tree). However, I thought that by the time you reach the optimal solution of the (current) master, the remaining tree would already be pruned a lot, and the open node containing the solution might be quite deep (possibly carrying a history of bad branching decisions), so that tree might not be worth much to you.
I did not read the example you referenced, but with lazy constraints, you face this problem:
Let’s say you add a lazy constraint callback, which is called for every integer-feasible solution found in the tree. You would like to skip all non-optimal solutions and only generate cuts for the optimal ones. However, at that point in time you cannot decide optimality (without solving all other open nodes first). Even if you store the solution and wait for the dual bound to improve, your callback might never be called again!
I have not used the “informational callbacks” yet, but maybe you could use one to track improvements of the bound (in addition to your lazy constraint callback). That way, you would know whether any stored candidate solution turns out to be optimal or not.
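Untested, but based on the JuMP documentation of that era, the combination might look like the following sketch; the `addinfocallback` signature with `when = :Intermediate` and the `MathProgBase.cbgetbestbound` call are assumptions to verify against your JuMP version:

```julia
using JuMP, Gurobi
import MathProgBase

m = Model(solver = GurobiSolver())
# ... build the master problem here ...

bestbound = Ref(-Inf)  # best dual (lower) bound seen so far (minimization)

function trackBound(cb)
    # Informational callback: record the current best bound, so stored
    # candidate solutions can later be tested for optimality against it.
    bestbound[] = MathProgBase.cbgetbestbound(cb)
end

addinfocallback(m, trackBound, when = :Intermediate)
```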
Thanks leethargo,
Your response helped me and my prof think more deeply about it and realize that we are solving only one master problem, and hence we can’t solve it to optimality before each iteration (it would be optimal only at the final stage). The unexplored branch-and-bound tree is saved when callbacks are called.
So, my problem is a minimization MILP (the master problem is a minimization). Please help me with the following:
When the callback function is called:
I am able to get the integer solution using getvalue(…); this is the point where the callback is called.
How do I get the value of the best relaxed-LP lower bound obtained so far in the master problem?
This would be the lowest objective value of a relaxed LP node in the branch-and-bound tree.
I have tried cbgetbestbound(cb), but it doesn’t work.
I would say that cbgetbestbound sounds right. I don’t know why it wouldn’t work. Maybe it is an issue of using a MathProgBase function on a JuMP object, and you have to get the “inner model” first.
When the MILP is solved, shouldn’t the objective value be the same as the best bound (within tolerance)? You just need to call JuMP.getobjectivevalue(model).
Hard to say much without knowing in what way you mean that it “doesn’t work”.
@yeesian: The problem is that we don’t know whether the MILP is solved at this point. Rather, we have a candidate feasible solution and want to decide its optimality using the bounds.
Ah, I see. Then there’s:

getobjbound(m::Model) returns the best known bound on the optimal objective value. This is used, for example, when a branch-and-bound method is stopped before finishing.

getobjgap(m::Model) returns the final relative optimality gap as optimization terminated. That is, it returns |b − f| / |f|, where b is the best bound and f is the best feasible objective value.
See the documentation for details…
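For completeness, a usage sketch of those two accessors after an early-terminated solve (old JuMP syntax; the TimeLimit Gurobi parameter is used here just to force early termination):

```julia
using JuMP, Gurobi

m = Model(solver = GurobiSolver(TimeLimit = 10.0))  # stop early on purpose
# ... build the MILP here ...
status = solve(m, suppress_warnings = true)         # may return :UserLimit

f = getobjectivevalue(m)  # best feasible (incumbent) objective value
b = getobjbound(m)        # best known bound on the optimal value
g = getobjgap(m)          # relative gap, |b - f| / |f|
println("incumbent = $f, bound = $b, gap = $g")
```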