# Benders decomposition in JuMP

Hello, I have been experimenting with JuMP and have coded a CVaR optimisation problem using the method introduced by Rockafellar and Uryasev. My implementation below uses the extensive form of the stochastic program. This works fine if the number of scenarios and the number of investments are small; however, I run into memory issues for large-scale problems.
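For reference, the linear reformulation I am trying to implement is (here \(S\) is the number of scenarios, \(p_j\) the scenario probabilities, \(L_{ji}\) the loss of investment \(i\) in scenario \(j\), and \(y_j\) linearises \(\max(\text{loss}_j - \gamma, 0)\); the demand constraints carry over unchanged):

```latex
\min_{x \ge 0,\; \gamma,\; y \ge 0} \quad \gamma + \frac{1}{\alpha} \sum_{j=1}^{S} p_j\, y_j
\quad \text{s.t.} \quad y_j \ge \sum_{i} L_{ji}\, x_i - \gamma \;\; \forall j, \qquad \sum_{i} x_i = 1
```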

This example uses 10,000 scenarios and 5 investments, and it solves within seconds. If the number of investments is increased to 10,000, I get memory-related errors (with 10,000 scenarios and 10,000 investments, the constraint matrix has on the order of 2 × 10⁸ nonzeros).

Does JuMP provide any routines or packages for Benders decomposition or other methods that I could use here? I am new to the exciting world of optimisation, so ideally I am looking for something that is easy to use.

I have looked through the JuMP examples and found this post on Benders decomposition using JuMP, but I am not sure if or how it can be applied to this problem: https://www.juliabloggers.com/a-take-on-benders-decomposition-in-jump/

```julia
using JuMP
using Clp

# Generate random initial values for 5 investments
InitialValue = float(rand(80:140, 1, 5))

# Generate 10,000 scenarios for future investment values and uncertain demand
FutureValue = float(rand(40:150, 10000, 5))
Demand = float(rand(40:50, 10000))

nscen = size(FutureValue, 1)
ninvestments = size(FutureValue, 2)

# Equal scenario probabilities
prob = ones(nscen) / nscen

# Loss is the drop in value relative to the initial value
Loss = InitialValue .- FutureValue

# Assume a 95% confidence level
α = 0.05

cvarRanmodel = Model(Clp.Optimizer)
@variable(cvarRanmodel, x[1:ninvestments] >= 0)  # portfolio weights
@variable(cvarRanmodel, y[1:nscen] >= 0)         # CVaR auxiliary variables
@variable(cvarRanmodel, γ)                       # VaR
@constraint(cvarRanmodel, budget, sum(x[i] for i in 1:ninvestments) == 1)
@constraint(cvarRanmodel, constraint2[j in 1:nscen],
    y[j] - sum(Loss[j, i] * x[i] for i in 1:ninvestments) + γ >= 0)
@constraint(cvarRanmodel, constraint3[j in 1:nscen],
    sum(FutureValue[j, i] * x[i] for i in 1:ninvestments) - Demand[j] >= 0)
@objective(cvarRanmodel, Min, γ + 1 / α * sum(y[j] * prob[j] for j in 1:nscen))
optimize!(cvarRanmodel)

println()
println("VaR ", value(γ))
println("CVaR ", objective_value(cvarRanmodel))
println("Investment allocation ", value.(x))
```
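In case it clarifies what I am after: below is my rough attempt at a cutting-plane (Benders/L-shaped style) loop for this model, loosely following the blog post above. The split between master and "subproblem" and the cut formula are my own guesses, not from any package, so please correct me if this is the wrong approach. Since the recourse here is just `max(loss − γ, 0)`, I compute the subproblem value and the optimality cut in closed form instead of solving a second-stage LP, and the `θ` variable under-approximates the expected-shortfall term.

```julia
using JuMP
using Clp

# Same random data as in the extensive form above
InitialValue = float(rand(80:140, 1, 5))
FutureValue = float(rand(40:150, 10000, 5))
Demand = float(rand(40:50, 10000))
nscen, ninv = size(FutureValue)
prob = ones(nscen) / nscen
Loss = InitialValue .- FutureValue
α = 0.05

# Master problem: first-stage x and γ, plus θ under-approximating
# (1/α) * Σ_j p_j * max(Lⱼ'x − γ, 0). The demand constraints involve
# only x, so I keep them in the master.
master = Model(Clp.Optimizer)
set_silent(master)
@variable(master, x[1:ninv] >= 0)
@variable(master, γ >= minimum(Loss))  # valid bound; keeps the master bounded
@variable(master, θ >= 0)
@constraint(master, sum(x[i] for i in 1:ninv) == 1)
@constraint(master, [j in 1:nscen],
    sum(FutureValue[j, i] * x[i] for i in 1:ninv) >= Demand[j])
@objective(master, Min, γ + θ)

for iter in 1:100
    optimize!(master)
    lb = objective_value(master)          # lower bound from the relaxed master
    xv, γv = value.(x), value(γ)
    # Closed-form "subproblem": positive parts of the scenario losses
    excess = Loss * xv .- γv
    active = excess .> 0
    Q = sum(prob[active] .* excess[active]) / α
    ub = γv + Q                           # upper bound: true objective at (xv, γv)
    println("iter $iter: lb = $lb, ub = $ub")
    ub - lb <= 1e-6 * abs(ub) && break
    # Optimality cut: θ ≥ (1/α) * Σ_{j active} pⱼ * (Lⱼ'x − γ)
    @constraint(master, θ >= sum(prob[j] / α *
        (sum(Loss[j, i] * x[i] for i in 1:ninv) - γ) for j in 1:nscen if active[j]))
end
```

This avoids building the `nscen`-sized `y` block all at once, but I am not sure it is how the packages below are meant to be used.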

I have also run through the StochasticPrograms.jl quick start section, but I see different outputs from those given in the documentation:

• `objective_value(sp)` gives 7000.0, whereas `objective_value(sp_lshaped)` gives -7470.40. These are meant to give the same result.

• `optimize!(sp_progressivehedging)` errors with:

```
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
******************************************************************************

ERROR: MethodError: no method matching shift_constant(::StochasticPrograms.SingleKnownSet{Float64}, ::Float64)
Closest candidates are:
shift_constant(::Union{MathOptInterface.EqualTo{T}, MathOptInterface.GreaterThan{T}, MathOptInterface.LessThan{T}}, ::T) where T at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Utilities\sets.jl:17
shift_constant(::MathOptInterface.Interval, ::Any) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Utilities\sets.jl:20
shift_constant(::MathOptInterface.Test.UnknownScalarSet, ::Any) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Test\modellike.jl:9
Stacktrace:
[1] normalize_constant(::MathOptInterface.ScalarAffineFunction{Float64}, ::StochasticPrograms.SingleKnownSet{Float64}; allow_modify_function::Bool) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Utilities\constraints.jl:40
[2] normalize_constant(::MathOptInterface.ScalarAffineFunction{Float64}, ::StochasticPrograms.SingleKnownSet{Float64}) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Utilities\constraints.jl:40
[3] bridged_constraint_function at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Bridges\bridge_optimizer.jl:1337 [inlined]
[5] bridge_constraint(::Type{MathOptInterface.Bridges.Constraint.ScalarFunctionizeBridge{Float64,StochasticPrograms.SingleKnownSet{Float64}}}, ::MathOptInterface.Bridges.LazyBridgeOptimizer{Ipopt.Optimizer}, ::MathOptInterface.SingleVariable, ::StochasticPrograms.SingleKnownSet{Float64})
at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Bridges\Constraint\functionize.jl:14
[6] add_bridged_constraint(::MathOptInterface.Bridges.LazyBridgeOptimizer{Ipopt.Optimizer}, ::Type, ::MathOptInterface.SingleVariable, ::StochasticPrograms.SingleKnownSet{Float64}) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Bridges\bridge_optimizer.jl:1006
[7] add_constraint(::MathOptInterface.Bridges.LazyBridgeOptimizer{Ipopt.Optimizer}, ::MathOptInterface.SingleVariable, ::StochasticPrograms.SingleKnownSet{Float64}) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Bridges\bridge_optimizer.jl:1055
[8] add_constraint(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::MathOptInterface.SingleVariable, ::StochasticPrograms.SingleKnownSet{Float64}) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Utilities\cachingoptimizer.jl:250
[12] finish_initilization! at C:\Julia\.julia\packages\StochasticPrograms\t8mlY\src\solvers\structured\progressivehedging\execution\serial.jl:37 [inlined]
[13] finish_initilization! at C:\Julia\.julia\packages\StochasticPrograms\t8mlY\src\solvers\structured\progressivehedging\execution\execution.jl:5 [inlined]
```