# Why can SDDP.jl not work with a nonlinear optimizer like Juniper?

Hello,

I’m using JuMP.jl and Juniper.jl to solve an MINLP problem. Now I am attempting to model uncertainty in some of the constraints, following the theory guide of SDDP.jl. But I can’t understand how SDDP.jl deals with the source of uncertainty, as in this example:

```julia
using SDDP, GLPK

model = SDDP.LinearPolicyGraph(
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = GLPK.Optimizer,
) do subproblem, t
    # Define the state variable.
    @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)
    # Define the control variables.
    @variables(subproblem, begin
        thermal_generation >= 0
        hydro_generation >= 0
        hydro_spill >= 0
        inflow
    end)
    # Define the constraints.
    @constraints(
        subproblem,
        begin
            volume.out == volume.in + inflow - hydro_generation - hydro_spill
            thermal_generation + hydro_generation == 150.0
        end
    )
    fuel_cost = [50.0, 100.0, 150.0]
    # Parameterize the subproblem.
    Ω = [
        (inflow = 0.0, fuel_multiplier = 1.5),
        (inflow = 50.0, fuel_multiplier = 1.0),
        (inflow = 100.0, fuel_multiplier = 0.75),
    ]
    SDDP.parameterize(subproblem, Ω, [1 / 3, 1 / 3, 1 / 3]) do ω
        JuMP.fix(inflow, ω.inflow)
        @stageobjective(
            subproblem,
            ω.fuel_multiplier * fuel_cost[t] * thermal_generation
        )
    end
end
```

Can someone explain what the `do ω ... end` loop passed to `SDDP.parameterize` does?

And I am wondering whether this is why SDDP.jl cannot work with a nonlinear optimizer. Is it possible to modify the source code of SDDP.jl to make it compatible with a nonlinear optimizer? If so, how can I do that?

Ru

SDDP.jl is not intended for MINLPs. It requires convex problems.

Can you explain the effect of the loop over ω in `SDDP.parameterize`? Why does it simulate the uncertainty of the inflow? I don’t see where the probabilities `[1/3, 1/3, 1/3]` come into play.

`SDDP.parameterize` is used to model node-wise independent random variables. In this case, the sample space is `Ω`, with probabilities given by the vector `[1 / 3, 1 / 3, 1 / 3]`. During training, the algorithm samples a realization ω from `Ω` according to those probabilities, and then runs the body of the `do ... end` block for that ω, which modifies the subproblem (here, fixing `inflow` and setting the stage objective).
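To make the role of the probability vector concrete, here is a self-contained sketch in plain Julia. This is not SDDP.jl's actual code, and the names (`sample_realization`, `apply!`) are illustrative only; it just shows how a realization ω can be drawn from a finite sample space with the given probabilities and then handed to a callback that plays the role of the `do` block:

```julia
# Illustrative sketch only: draw a realization ω from a finite sample space
# with given probabilities, then pass it to a callback. This mimics what
# SDDP.parameterize arranges; it is not SDDP.jl's implementation.

Ω = [
    (inflow = 0.0, fuel_multiplier = 1.5),
    (inflow = 50.0, fuel_multiplier = 1.0),
    (inflow = 100.0, fuel_multiplier = 0.75),
]
P = [1 / 3, 1 / 3, 1 / 3]

# Inverse-CDF sampling over a finite sample space.
function sample_realization(Ω, P)
    r, cumulative = rand(), 0.0
    for (ω, p) in zip(Ω, P)
        cumulative += p
        r <= cumulative && return ω
    end
    return last(Ω)  # guard against floating-point round-off
end

# Stand-in for the `do ω ... end` block: instead of calling JuMP.fix and
# @stageobjective, just record the value it would have fixed.
fixed_inflow = Ref(NaN)
apply!(ω) = (fixed_inflow[] = ω.inflow)

ω = sample_realization(Ω, P)
apply!(ω)
@assert fixed_inflow[] == ω.inflow
```

During training, SDDP.jl repeatedly draws a realization like this at each node, so over many iterations each element of `Ω` is visited with a frequency close to its probability; that is where `[1/3, 1/3, 1/3]` enters.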

But again, you should note that if you have an MINLP, SDDP.jl is the wrong tool for the job.

Thanks for your kind hint. I know it’s better not to use SDDP.jl. What I am trying to do is understand how SDDP.jl deals with sources of uncertainty, and then implement the uncertainty in my own JuMP code, which solves an MINLP. It seems that I can achieve my goal by declaring a struct `Uncertainty` and looping over the sample space `omega`. Is that right?

> It seems that I can achieve my goal by declaring a struct `Uncertainty` and looping over the sample space `omega`

No. SDDP.jl doesn’t add uncertainty to a JuMP model in a generic way that you could easily replicate. (At a high level, it creates a struct that stores the probability information and a callback that modifies the JuMP model given a realization of the uncertainty. But the JuMP model is at all times still deterministic; the way the uncertainty is handled is part of the larger algorithm.)

If you want to add uncertainty to your own code, you’ll have to do something different, like sample average approximation. If it’s multistage and you’re solving the deterministic equivalent, you’ll additionally need non-anticipativity constraints etc.
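To illustrate the sample average approximation idea mentioned above: the expectation in the objective is replaced by an empirical average over sampled scenarios, which are all drawn up front. Here is a minimal sketch in plain Julia using a toy newsvendor problem solved by grid search (the problem data and the grid-search approach are made up for illustration; a real model would optimize over the scenarios with JuMP):

```julia
using Random, Statistics

Random.seed!(1234)

# Toy newsvendor: choose an order quantity x before random demand d is known.
# Units sell for 2.0, cost 1.0 to buy, and unsold units are worthless.
profit(x, d) = 2.0 * min(x, d) - 1.0 * x

# Sample average approximation: draw N demand scenarios up front, then
# maximize the empirical average profit over those fixed scenarios.
N = 10_000
demands = rand(50.0:150.0, N)  # sampled demand scenarios

saa_objective(x) = mean(profit(x, d) for d in demands)

# Grid search over candidate order quantities, purely for illustration.
candidates = 0.0:1.0:200.0
best_x = argmax(saa_objective, candidates)
```

With demand roughly uniform on [50, 150] and a critical ratio of 0.5, `best_x` lands near the median demand of 100. In a multistage setting, the deterministic equivalent would additionally need the non-anticipativity constraints mentioned above, which this single-stage toy does not show.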
