Hello,
This is a follow-up to my question on the thread about dynamic JuMP variable names: Using anonymous and or symbols to create dynamic varible names? - #3 by pierre-haessig.
I’m sharing here a JuMP code design study on how to implement single- and multi-stage variants of the same optimization problem in a modular way, i.e. sharing as much common model code as possible.
I’m interested in your thoughts on this code design.
Motivation
In my use case, modularity is welcome because each stage has more or less the same variables/constraints. This differs from many textbook two-stage programs, such as the farmer example implemented in InfiniteOpt.jl/Two-Stage Stochastic Program, where the first and second stage variables are different (and named accordingly: x vs y(\xi)).
Study notebook
It would be a bit long to paste the example, so here is the link to the Jupyter notebook in a Gist. Notes:
- The example is a toy energy system optimization problem, i.e. with optimal sizing and operation jointly solved. Since there is only one power source, the trivial solution is that Pgen should be the maximum of the requested load time series, i.e. the peak load (see the formulation sketch just after this list).
- The two-stage aspect comes from considering two subsequent periods, that is allowing a mid-project generator resize.
- The stochastic aspect comes from the second stage load being uncertain, so different resize values are to be considered.
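In compact form, the single-stage toy problem formulated in the code below reads:
\min_{P_{gen},\, gen} \; P_{gen} + \sum_{k=1}^{K} gen_k \quad \text{s.t.} \quad gen_k = load_k, \quad gen_k \le P_{gen}, \quad k = 1, \dots, K
The balance constraint fixes the operation term, so the optimum is indeed P_{gen} = \max_k load_k, the peak load.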
Also notice that I arrived at two different code designs (“A” and “B”, named chronologically) and in the end I prefer the simplicity of option “B”.
Summary of “Modular implementation B”
The key building block is the inner stage problem, wrapped in a function (m is the JuMP Model, dat is a Dict{String,Any} used to store the variables):
function optim_model_stage_B!(m, dat, load; stage="")
K = length(load)
# Variables
Pgen = add_stage_variable_B!(m, dat, "Pgen"; stage=stage)
gen = add_stage_variable_B!(m, dat, "gen"; K=K, stage=stage)
# Constraints:
dat["balance"] = @constraint(m, [k=1:K], gen[k] == load[k], base_name="balance$stage")
dat["Pgen_cons"] = @constraint(m, [k=1:K], gen[k] <= Pgen, base_name="Pgen_cons$stage")
# stage cost (one term of the global objective)
dat["cost"] = Pgen + sum(gen)
end
This uses an add_stage_variable_B! wrapper around the @variable macro to avoid name conflicts in the JuMP Model, in the spirit of Using anonymous and or symbols to create dynamic varible names? - #3 by pierre-haessig. Instead of being registered in the JuMP Model, the variables are stored in the Dict.
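For reference, a minimal sketch of such a wrapper (simplified; the notebook has the actual implementation):
using JuMP

function add_stage_variable_B!(m, dat, name; K=nothing, stage="")
    # Anonymous variable: nothing is registered in the model object, so the
    # same name can be reused for every stage without conflict. The stage
    # suffix only affects printing, through base_name.
    if K === nothing
        x = @variable(m, base_name="$name$stage")         # scalar variable
    else
        x = @variable(m, [1:K], base_name="$name$stage")  # vector of length K
    end
    dat[name] = x  # retrieval goes through the Dict, not through the model
    return x
end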
About not wrapping @constraint
Notice that in the above example, I use a wrapper for @variable, but not for @constraint.
I find that the code looks a bit unbalanced (asymmetrical) in that respect, but I was unable to write a constraint wrapper that keeps the large formulation flexibility permitted by @constraint. As an example, consider “Pgen_cons” above, which is a vector constraint mixing vector and scalar variables.
Single stage usage
This stage problem can be used as a single stage (i.e. a classical deterministic energy system sizing) after setting the objective. Notice that the @objective macro is not called inside optim_model_stage_B!; the function only defines dat["cost"], which is just a fragment of the global cost in a multi-stage context:
using JuMP, HiGHS
m = Model(HiGHS.Optimizer)
data = Dict{String,Any}()
load = [1., 1.5, 1.2, 0.9] # dummy problem data
optim_model_stage_B!(m, data, load)
@objective(m, Min, data["cost"])
optimize!(m)
Pgen_val = value(data["Pgen"])
println("Pgen=$(Pgen_val)") # Pgen=1.5
Two-stage stochastic program usage
Remember that the goal of this code design study is for the inner stage problem to be reusable in a larger two-stage problem. Indeed, 1+n_s stage instances are created: 1 for the first stage, and 1 for each of the n_s stochastic scenarios of the second stage.
# second stage data: 3 equally probable load changes
load_scenar = [0.5, 1.0, 2.0] # (decrease, constant, increase)
ns = length(load_scenar)
proba_scenar = [1., 1., 1.]/ns
m = Model(HiGHS.Optimizer)
data1 = Dict{String,Any}()  # first stage
optim_model_stage_B!(m, data1, load; stage=1)
data2 = [Dict{String,Any}() for s=1:ns]  # one Dict per second-stage scenario
for s=1:ns
load_s = load*load_scenar[s]
optim_model_stage_B!(m, data2[s], load_s; stage="2$s")
end
Then, some more work is needed to define the global problem by coupling the different stages, in the objective and/or in constraints. Here are examples of both:
- the total cost is the first stage cost plus the expected second stage cost
- a coupling constraint forbids the 2nd stage generator from being smaller than the 1st stage one
cost2 = [data2[s]["cost"] for s=1:ns]
cost2_exp = cost2' * proba_scenar  # expected second-stage cost
@objective(m, Min, data1["cost"] + cost2_exp)
@constraint(m, [s=1:ns], data1["Pgen"] <= data2[s]["Pgen"])
Optimization results are retrieved by digging into the Dict containers of each stage:
optimize!(m)
Pgen1_val = value(data1["Pgen"])
println("Pgen1=$(Pgen1_val)") # Pgen1=1.5
Pgen2_val = [value(data2[s]["Pgen"]) for s=1:ns]
println("Pgen2=$(Pgen2_val)")
# Pgen2=[1.5, 1.5, 3.0] and not [0.75, 1.5, 3.0]
# due to the stage coupling constraint
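The optimal expected total cost can also be queried from the model as usual:
println("expected cost = $(objective_value(m))")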
That’s it, any thoughts welcome!