Combinatorial Benders cuts

I have a big-M constraint in a stochastic programming problem that I want to replace with combinatorial Benders cuts. I have tried implementing combinatorial Benders decomposition, but I get an unbounded-problem error the first time it tries to solve the master problem. I think I am nearly there, but I am unclear on how to rectify it. I have included the code below for both the Benders and the big-M method. Maybe someone will be able to help out?
Here is my code for the big-M formulation:

using JuMP
using Cbc
using LinearAlgebra  # for dot
# Define the problem data
n = 10 # Number of optimization variables
S = 100 # Number of scenarios
b = rand(80:140, n) # Vector of constant values of X
α = 0.05 # Confidence level
M = 1000 # Big constant value
ω = rand(40:140,S,n) # Matrix of uncertain values of X. Each row represents a scenario.
P = ones(S) ./ S # Vector of probabilities for each scenario
model = Model(Cbc.Optimizer)
# Define variables
@variable(model, 0 <= x[1:n] <= 1)      # Optimization variables
@variable(model, γ)                     # Objective variable (free variable)
@variable(model, y[1:S], Bin)           # Binary variables for each scenario

# Objective function
@objective(model, Min, γ)

# Constraints
for s in 1:S
    @constraint(model, M*y[s] >= dot((b - ω[s,:]), x) - γ)
end

@constraint(model, sum(y[s] * P[s] for s in 1:S) <= α)  # probability of violated scenarios at most α
@constraint(model, sum(x[i] for i in 1:n) == 1)


# Solve the model
optimize!(model)

# Display the results
println("Objective Value: ", objective_value(model))
println("Optimal x: ", value.(x))

Here is my code for Benders:

using JuMP, Cbc
using LinearAlgebra  # for dot

# Define the problem data
n = 10
S = 100
b = rand(80:140, n)
α = 0.05
ω = rand(40:140, S, n)
P = ones(S) ./ S

# Master problem
master = Model(Cbc.Optimizer)
@variable(master, γ)
@variable(master, y[1:S], Bin)
@objective(master, Min, γ)
@constraint(master, sum(y[s]*P[s] for s=1:S) <= α)

# Subproblem
subproblem = Model(Cbc.Optimizer)
@variable(subproblem, 0<=x[1:n] <= 1)
@constraint(subproblem, sum(x[i] for i=1:n) == 1)

max_iterations = 100
for iter = 1:max_iterations
    optimize!(master)

    worst_γ = -Inf    # largest subproblem value found this iteration
    T = Vector{Int}() # scenarios that violate the current master solution
    for s in 1:S
        if value(y[s]) < 0.5  # binary values are returned as floats, so avoid == 0
            @objective(subproblem, Min, dot(b - ω[s, :], x))
            optimize!(subproblem)
            current_γ = objective_value(subproblem)
            if current_γ > value(γ)
                worst_γ = current_γ
                push!(T, s)
            end
        end
    end

    if isempty(T) || worst_γ <= value(γ)  # If there's no violating scenario
        break
    else
        @constraint(master, γ >= worst_γ * (sum([1 - y[s] for s in T]) - length(T) + 1))
    end
end

println("Objective Value: ", objective_value(master))
println("Optimal y: ", value.(y))


In the first iteration, your problem is unbounded because γ can be arbitrarily negative. The common solution is to add a valid lower bound:

@variable(master, γ >= -1_000)  # Or something
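If you want a less arbitrary bound, you can derive one from the data. Here is a minimal sketch, using the fact that your subproblem constrains x to the unit simplex (sum(x) == 1, 0 <= x <= 1), so the minimum of (b - ω[s,:])' * x over that set is the smallest coefficient:

γ_lb = minimum(b[i] - ω[s, i] for s in 1:S, i in 1:n)  # no scenario value can be lower than this
@variable(master, γ >= γ_lb)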

Thank you. I have tried your suggestion and added a lower bound for γ:
@variable(master, γ >= 0.0)
But I get very different results for the objective and the variables x than with the big-M formulation. In the Benders version, after all iterations, the objective value returned is the lower bound of γ, and the values of x are very different.
Could there be any additional issues in my code?

If they don’t produce equivalent solutions, then your Benders code is wrong. Are you sure the cut is the correct formulation?

However, why do you need a decomposition for this problem? The problem solves almost instantly. If your real problem is larger, you could try Gurobi, or an open-source solver like HiGHS.

Gurobi supports indicator constraints, so you could write:

using Gurobi

model = Model(Gurobi.Optimizer)
set_silent(model)
@variable(model, 0 <= x[1:n] <= 1)
@variable(model, γ)          
@variable(model, y[1:S], Bin)
@objective(model, Min, γ)
@constraint(model, [s = 1:S], !y[s] --> {γ >= (b - ω[s,:])' * x})
@constraint(model, sum(P[s] * y[s] for s in 1:S) <= α)
@constraint(model, sum(x) == 1)
optimize!(model)
println("Objective Value: ", objective_value(model))
println("Optimal x: ", value.(x))

See the Constraints section of the JuMP documentation for details.
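If you don’t have a Gurobi license, the big-M model runs unchanged with HiGHS; as far as I know, HiGHS does not support indicator constraints, so you would keep the big-M form there:

using HiGHS
model = Model(HiGHS.Optimizer)  # drop-in replacement for Cbc in the big-M model above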

I doubt that Gurobi will be able to solve a large instance of this problem. It may solve a small instance, but if you were to set n = 10000 and S = 10000 it would struggle badly. I suspect performance will be worse with indicator constraints than with big-M for large problems, though fast for small ones. This is the motivation for using combinatorial Benders for this problem.

I am trying to use the method in this paper (Codato & Fischetti, “Combinatorial Benders’ Cuts for Mixed-Integer Linear Programming”, Operations Research 54(4), 2006): https://ideas.repec.org/a/inm/oropre/v54y2006i4p756-766.html

There could possibly be something wrong in my Benders cuts. This is what I’m trying to do: if the value of the objective in the subproblem is greater than the objective of the candidate master solution, it adds a cut, and T keeps track of the scenarios in which this happens. This should effectively produce the same outcome as the big-M constraint. But obviously something is incorrect!

I think you need to take another look at your Benders algorithm. It seems like you’re trying to mix combinatorial Benders with regular Benders.

I am trying to apply the logic introduced by Matteo Fischetti for combinatorial Benders cuts to replace big-M constraints. The logic of the Benders cut here is quite basic:
If $\gamma_T$ from the subproblem is larger than the master problem’s $\gamma$, it means our candidate solution has underestimated the $\gamma$ needed for the scenarios in $T$. In this case, we add a Benders cut to the master problem:

$\gamma \geq \gamma_{T} \left( \sum_{s \in T} (1 - Y_{s}) - |T| + 1 \right)$

This inequality essentially ensures that, for the same or similar future $Y$ solutions, $\gamma$ will need to be at least as large as $\gamma_T$ to accommodate the worst-case scenarios in $T$.

Apologies, I don’t know how to correctly display latex on this forum.

You can use LaTeX by putting it between dollar signs, like $ \gamma $.

For example: $\gamma \geq \gamma_{T} \left( \sum_{s \in T} (1 - Y_{s}) - |T| + 1 \right)$

I don’t think this is the right cut for your problem. It will only work if $\gamma_{T} \ge 0$.

Otherwise you might add a constraint like $\gamma \geq -100 \left( \sum_{s \in T} (1 - Y_{s}) - |T| + 1 \right)$; if two of the $Y_s$ for $s \in T$ are 1, then the bracketed term equals $-1$, and the constraint enforces $\gamma \geq 100$.
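One standard fix (a sketch of the general idea, not necessarily the exact cut from the Codato-Fischetti paper) is to lift the cut with the finite lower bound you already gave the master variable. For each violated scenario $s \in T$ with subproblem value $\gamma_s$, add

$\gamma \geq \gamma_{lb} + (\gamma_s - \gamma_{lb}) (1 - Y_s)$

so that $Y_s = 0$ enforces $\gamma \geq \gamma_s$, while $Y_s = 1$ relaxes the cut to the already-implied $\gamma \geq \gamma_{lb}$, even when $\gamma_s < 0$. In code, assuming γ_lb is your master lower bound and γ_sub[s] stores objective_value(subproblem) for scenario s (illustrative names, not from your original code):

for s in T
    # y[s] == 0  =>  γ >= γ_sub[s]; y[s] == 1  =>  γ >= γ_lb (cut inactive)
    @constraint(master, γ >= γ_lb + (γ_sub[s] - γ_lb) * (1 - y[s]))
end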