Hello,
I’m using JuMP.jl and Juniper.jl to solve a MINLP problem. Now I would like to account for uncertainty in some of the constraints, so I have been reading the theory guide of SDDP.jl. However, I can’t understand how SDDP.jl handles the source of uncertainty, for example in this snippet from the documentation:
using SDDP, GLPK

model = SDDP.LinearPolicyGraph(
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = GLPK.Optimizer,
) do subproblem, t
    # Define the state variable.
    @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)
    # Define the control variables.
    @variables(subproblem, begin
        thermal_generation >= 0
        hydro_generation >= 0
        hydro_spill >= 0
        inflow
    end)
    # Define the constraints.
    @constraints(
        subproblem,
        begin
            volume.out == volume.in + inflow - hydro_generation - hydro_spill
            thermal_generation + hydro_generation == 150.0
        end
    )
    fuel_cost = [50.0, 100.0, 150.0]
    # Parameterize the subproblem.
    Ω = [
        (inflow = 0.0, fuel_multiplier = 1.5),
        (inflow = 50.0, fuel_multiplier = 1.0),
        (inflow = 100.0, fuel_multiplier = 0.75),
    ]
    SDDP.parameterize(subproblem, Ω, [1 / 3, 1 / 3, 1 / 3]) do ω
        JuMP.fix(inflow, ω.inflow)
        @stageobjective(
            subproblem,
            ω.fuel_multiplier * fuel_cost[t] * thermal_generation
        )
    end
end
Can someone explain what the do ω ... end block passed to SDDP.parameterize does, and when it is called?
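To check my understanding: does SDDP.parameterize register Ω, the probability vector, and the do-block as a callback, so that on each pass the solver samples one ω and then runs the block to fix inflow and rebuild the stage objective before solving? Here is a toy sketch of that mental model (my own code, not SDDP.jl internals; sample_ω is just a name I made up):

# Toy illustration of my mental model, not SDDP.jl's implementation.
Ω = [
    (inflow = 0.0, fuel_multiplier = 1.5),
    (inflow = 50.0, fuel_multiplier = 1.0),
    (inflow = 100.0, fuel_multiplier = 0.75),
]
P = [1 / 3, 1 / 3, 1 / 3]

# Draw one realization ω according to the probabilities P.
function sample_ω(Ω, P)
    r, cum = rand(), 0.0
    for (ω, p) in zip(Ω, P)
        cum += p
        r <= cum && return ω
    end
    return last(Ω)
end

ω = sample_ω(Ω, P)
println("sampled ω = ", ω)
# I imagine the callback is then applied to this single ω, so the
# subproblem is deterministic by the time it is solved.

Is that roughly right, or is the block evaluated for every ω in Ω?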
I am also wondering whether this mechanism is the reason SDDP.jl cannot work with a nonlinear optimizer. Is it possible to modify the source code of SDDP.jl to make it compatible with a nonlinear optimizer such as Juniper? If so, how would I do that?
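For context, this is roughly how I build the optimizer for my deterministic MINLP today (a simplified sketch of my setup; I use Ipopt as Juniper's NLP subsolver, and my real model is larger). I don't see how to pass something like this as the optimizer of SDDP.LinearPolicyGraph:

using JuMP, Juniper, Ipopt

# Simplified version of my current deterministic MINLP setup.
nl_solver = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
minlp_optimizer = optimizer_with_attributes(
    Juniper.Optimizer,
    "nl_solver" => nl_solver,
)

model = Model(minlp_optimizer)
@variable(model, x >= 0)
@variable(model, y, Bin)
@NLconstraint(model, x^2 + y <= 4)  # a nonlinear constraint, as in my problem
@objective(model, Min, x + y)
optimize!(model)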
Ru