Benders decomposition in JuMP

Hello, I have been experimenting with JuMP and have coded a CVaR optimisation problem using the method introduced by Rockafellar and Uryasev. My implementation, given below, uses the extensive form of the stochastic program. This works fine if the number of scenarios and the number of investments are small; however, I run into memory issues for large-scale problems.

This example uses 10,000 scenarios and assumes 5 investments, and it runs within seconds. If the number of investments is increased to 10,000, I get memory-related errors.

Does JuMP provide any routines or packages for Benders decomposition, or other methods that I could use for this example? I am new to the exciting world of optimisation, so ideally I am looking for something that is easy to use.

I have looked through the JuMP examples and found this post on Benders decomposition using JuMP, but I am not sure if and how it can be applied to this problem: A take on Benders decomposition in JuMP | juliabloggers.com

using JuMP
using Clp
# Generate random data for initial value of 5 investments
InitialValue= float(rand(80:140,1,5))
# Generate 10,000 scenarios for future value of investments and uncertain demand
FutureValue = float(rand(40:150,10000,5))
Demand = float(rand(40:50,10000))
nscen = size(FutureValue,1)
ninvestments = size(FutureValue,2)
# Create a vector of probabilities for scenarios 
prob = 1/nscen*ones(nscen)
# Calculate loss amount based on changes in value of investments
Loss = InitialValue .- FutureValue
# Assume a 95% confidence level (α is the tail probability)
α = 0.05
cvarRanmodel = Model(Clp.Optimizer)
@variable(cvarRanmodel,x[b=1:ninvestments]>=0)
@variable(cvarRanmodel,y[b=1:nscen]>=0)
@variable(cvarRanmodel,γ)
@constraint(cvarRanmodel,budget,sum(x[i] for i =1:ninvestments) == 1)
@constraint(cvarRanmodel,constraint2[j in 1:nscen], y[j]-sum(Loss[j,i]*x[i] for i in 1:ninvestments) + γ >= 0)
@constraint(cvarRanmodel,constraint3[j in 1:nscen], sum(FutureValue[j,i]*x[i] for i in 1:ninvestments)-Demand[j] >= 0)
@objective(cvarRanmodel,Min,γ+1/(α)*sum(y[j]*prob[j] for j =1:nscen))
optimize!(cvarRanmodel)
println("")
println("VaR ", JuMP.value(γ))
println("CVaR ", objective_value(cvarRanmodel))
println("Investment allocation ", JuMP.value.(x))
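As a sanity check on the optimizer's output, VaR and CVaR can also be estimated directly from a loss sample, since the Rockafellar–Uryasev objective reproduces the empirical tail mean at the optimum. A minimal sketch on a hypothetical standard-normal loss sample (the data and function name here are illustrative, not part of the model above):

```julia
using Statistics, Random

# Empirical VaR/CVaR of a loss sample at tail probability α,
# following the Rockafellar–Uryasev definition: VaR is the
# (1-α)-quantile of losses, CVaR is the mean of the worst α tail.
function var_cvar(losses::Vector{Float64}, α::Float64)
    sorted = sort(losses)                    # ascending losses
    k = ceil(Int, (1 - α) * length(losses))  # index of the (1-α)-quantile
    var = sorted[k]                          # Value-at-Risk
    cvar = mean(sorted[k:end])               # mean of the tail beyond VaR
    return var, cvar
end

Random.seed!(1)
sample = randn(10_000)        # hypothetical standard-normal losses
v, cv = var_cvar(sample, 0.05)
println("VaR ≈ ", round(v, digits=2), ", CVaR ≈ ", round(cv, digits=2))
```

For the portfolio model, the per-scenario losses would be `Loss * value.(x)`, and the estimates should match the reported γ and objective value up to the LP tolerance.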

Check out Quick start · StochasticPrograms.jl

I have run through the quick start section, but I see outputs that differ from those given in the documentation:

  • objective_value(sp) gives 7000.0, whereas objective_value(sp_lshaped) gives -7470.40. These are meant to give the same result.

  • optimize!(sp_progressivehedging) gives the following error:


******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************

ERROR: MethodError: no method matching shift_constant(::StochasticPrograms.SingleKnownSet{Float64}, ::Float64)
Closest candidates are:
  shift_constant(::Union{MathOptInterface.EqualTo{T}, MathOptInterface.GreaterThan{T}, MathOptInterface.LessThan{T}}, ::T) where T at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Utilities\sets.jl:17
  shift_constant(::MathOptInterface.Interval, ::Any) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Utilities\sets.jl:20
  shift_constant(::MathOptInterface.Test.UnknownScalarSet, ::Any) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Test\modellike.jl:9   
Stacktrace:
 [1] normalize_constant(::MathOptInterface.ScalarAffineFunction{Float64}, ::StochasticPrograms.SingleKnownSet{Float64}; allow_modify_function::Bool) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Utilities\constraints.jl:40
 [2] normalize_constant(::MathOptInterface.ScalarAffineFunction{Float64}, ::StochasticPrograms.SingleKnownSet{Float64}) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Utilities\constraints.jl:40
 [3] bridged_constraint_function at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Bridges\bridge_optimizer.jl:1337 [inlined]
 [4] add_constraint at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Bridges\bridge_optimizer.jl:1045 [inlined]
 [5] bridge_constraint(::Type{MathOptInterface.Bridges.Constraint.ScalarFunctionizeBridge{Float64,StochasticPrograms.SingleKnownSet{Float64}}}, ::MathOptInterface.Bridges.LazyBridgeOptimizer{Ipopt.Optimizer}, ::MathOptInterface.SingleVariable, ::StochasticPrograms.SingleKnownSet{Float64}) 
at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Bridges\Constraint\functionize.jl:14
 [6] add_bridged_constraint(::MathOptInterface.Bridges.LazyBridgeOptimizer{Ipopt.Optimizer}, ::Type, ::MathOptInterface.SingleVariable, ::StochasticPrograms.SingleKnownSet{Float64}) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Bridges\bridge_optimizer.jl:1006
 [7] add_constraint(::MathOptInterface.Bridges.LazyBridgeOptimizer{Ipopt.Optimizer}, ::MathOptInterface.SingleVariable, ::StochasticPrograms.SingleKnownSet{Float64}) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Bridges\bridge_optimizer.jl:1055
 [8] add_constraint(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::MathOptInterface.SingleVariable, ::StochasticPrograms.SingleKnownSet{Float64}) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\Utilities\cachingoptimizer.jl:250
 [9] add_constrained_variable(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::StochasticPrograms.SingleKnownSet{Float64}) at C:\Julia\.julia\packages\MathOptInterface\RmalA\src\variables.jl:88
 [10] add_projection_targets!(::StochasticPrograms.ProgressiveHedging.SubProblem{Float64,Array{Float64,1},Quadratic}) at C:\Julia\.julia\packages\StochasticPrograms\t8mlY\src\solvers\structured\progressivehedging\types\subproblem.jl:99
 [11] initialize!(::StochasticPrograms.ProgressiveHedging.SubProblem{Float64,Array{Float64,1},Quadratic}, ::Float64) at C:\Julia\.julia\packages\StochasticPrograms\t8mlY\src\solvers\structured\progressivehedging\types\subproblem.jl:85
 [12] finish_initilization! at C:\Julia\.julia\packages\StochasticPrograms\t8mlY\src\solvers\structured\progressivehedging\execution\serial.jl:37 [inlined]
 [13] finish_initilization! at C:\Julia\.julia\packages\StochasticPrograms\t8mlY\src\solvers\structured\progressivehedging\execution\execution.jl:5 [inlined]
 [14] initialize!(::ProgressiveHedgingAlgorithm{Float64,Array{Float64,1},HorizontalStructure{2,1,Tuple{StochasticPrograms.ScenarioProblems{Scenario{NamedTuple{(:q₁, :q₂, :d₁, :d₂),NTuple{4,Float64}}}}}},StochasticPrograms.ProgressiveHedging.SerialExecution{Float64,Array{Float64,1},Quadratic},StochasticPrograms.ProgressiveHedging.FixedPenalization{Float64},Quadratic}, ::Quadratic) at C:\Julia\.julia\packages\StochasticPrograms\t8mlY\src\solvers\structured\progressivehedging\types\AbstractProgressiveHedging.jl:16
 [15] ProgressiveHedgingAlgorithm(::HorizontalStructure{2,1,Tuple{StochasticPrograms.ScenarioProblems{Scenario{NamedTuple{(:q₁, :q₂, :d₁, :d₂),NTuple{4,Float64}}}}}}, ::Array{Float64,1}, ::Serial, ::Fixed{Float64}, ::Quadratic; kw::Base.Iterators.Pairs{Symbol,Real,NTuple{6,Symbol},NamedTuple{(:log, :offset, :keep, :indent, :ϵ₂, :ϵ₁),Tuple{Bool,Int64,Bool,Int64,Float64,Float64}}}) at C:\Julia\.julia\packages\StochasticPrograms\t8mlY\src\solvers\structured\progressivehedging\solver.jl:91
 [16] load_structure!(::StochasticPrograms.ProgressiveHedging.Optimizer, ::HorizontalStructure{2,1,Tuple{StochasticPrograms.ScenarioProblems{Scenario{NamedTuple{(:q₁, :q₂, :d₁, :d₂),NTuple{4,Float64}}}}}}, ::Array{Float64,1}) at C:\Julia\.julia\packages\StochasticPrograms\t8mlY\src\solvers\structured\progressivehedging\MOI_wrapper.jl:90
 [17] optimize!(::HorizontalStructure{2,1,Tuple{StochasticPrograms.ScenarioProblems{Scenario{NamedTuple{(:q₁, :q₂, :d₁, :d₂),NTuple{4,Float64}}}}}}, ::StochasticPrograms.ProgressiveHedging.Optimizer, ::Array{Float64,1}) at C:\Julia\.julia\packages\StochasticPrograms\t8mlY\src\methods\horizontal\optimization.jl:5
 [18] optimize!(::StochasticProgram{2,Tuple{StochasticPrograms.Stage{NamedTuple{(),Tuple{}}},StochasticPrograms.Stage{NamedTuple{(),Tuple{}}}},HorizontalStructure{2,1,Tuple{StochasticPrograms.ScenarioProblems{Scenario{NamedTuple{(:q₁, :q₂, :d₁, :d₂),NTuple{4,Float64}}}}}}}; crash::StochasticPrograms.Crash.None, kw::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at C:\Julia\.julia\packages\StochasticPrograms\t8mlY\src\methods\api.jl:209
 [19] optimize!(::StochasticProgram{2,Tuple{StochasticPrograms.Stage{NamedTuple{(),Tuple{}}},StochasticPrograms.Stage{NamedTuple{(),Tuple{}}}},HorizontalStructure{2,1,Tuple{StochasticPrograms.ScenarioProblems{Scenario{NamedTuple{(:q₁, :q₂, :d₁, :d₂),NTuple{4,Float64}}}}}}}) at C:\Julia\.julia\packages\StochasticPrograms\t8mlY\src\methods\api.jl:201
 [20] top-level scope at none:0

Hmm. We might have to wait for @martinbiel to get back from vacation. He just rewrote a significant portion of it, so it looks like some of the documentation doesn’t line up with the new code.

Could StructJuMP be used for this type of problem? It seems to be aimed at problems with a special structure, but there is little documentation on how to use it.
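In the meantime, a hand-rolled cutting-plane loop (a single-cut, L-shaped/Benders-style scheme) for the CVaR term is not much code in plain JuMP, and it keeps the nscen recourse variables y out of the master entirely. This is a sketch under stated assumptions: the data below are small and hypothetical, the demand constraints are omitted for brevity (they involve only x and would sit unchanged in the master), and the subproblem is evaluated in closed form since it separates per scenario.

```julia
using JuMP, Clp, Random

Random.seed!(1234)

# Hypothetical data with the same structure as the extensive-form model
nscen, ninv = 1_000, 5
InitialValue = float(rand(80:140, 1, ninv))
FutureValue  = float(rand(40:150, nscen, ninv))
Loss = InitialValue .- FutureValue        # nscen × ninv loss matrix
prob = fill(1/nscen, nscen)
α = 0.05

# Master: allocation x, VaR variable γ, and θ under-approximating the
# CVaR tail term f(x, γ) = (1/α) Σ_j p_j max(0, Loss[j,:]⋅x − γ)
master = Model(Clp.Optimizer)
set_silent(master)
@variable(master, x[1:ninv] >= 0)
@variable(master, γ >= minimum(Loss))     # crude bound so the first master is bounded
@variable(master, θ >= 0)                 # the tail term is nonnegative
@constraint(master, sum(x) == 1)
@objective(master, Min, γ + θ)

for iter in 1:200
    optimize!(master)
    x̂, γ̂, θ̂ = value.(x), value(γ), value(θ)
    # "Subproblem": evaluate the tail term and one subgradient at (x̂, γ̂)
    excess = Loss * x̂ .- γ̂               # per-scenario loss in excess of γ̂
    active = excess .> 0
    f  = sum(prob[active] .* excess[active]) / α
    gx = (Loss' * (prob .* active)) ./ α  # subgradient w.r.t. x
    gγ = -sum(prob[active]) / α           # subgradient w.r.t. γ
    f <= θ̂ + 1e-6 && break               # tail term matches θ: converged
    @constraint(master, θ >= f + sum(gx[i] * (x[i] - x̂[i]) for i in 1:ninv) + gγ * (γ - γ̂))
end
println("CVaR ≈ ", objective_value(master))
```

Because each scenario subproblem is a trivial LP (y_j = max(0, loss_j − γ)), the subgradient cut above is exactly the aggregated dual (optimality) cut a single-cut L-shaped method would generate, so this converges to the same optimum as the extensive form while the master only ever holds x, γ, θ and the accumulated cuts.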