How to define multiple random variables in SDDP

Hi everyone,
I have a small stochastic hydropower problem. As you may know, water inflows are treated as random variables in most hydropower problems, and mine is no exception.
There is a well-structured example here:

https://odow.github.io/SDDP.jl/stable/tutorial/first_steps/#An-introduction-to-SDDP.jl

In this example, there is only one reservoir, so we have only one random inflow variable.
Now imagine we have two consecutive reservoirs that are connected. The output of the first reservoir is the input of the second reservoir.
As in the linked example, we have state variables for the reservoir water levels (now two of them, one per reservoir) and decision variables for hydro and thermal generation for each reservoir.

My question is how to define the random variables in this structure. Is the second random variable defined exactly like the first one? And what if the random variables are correlated (for example, a huge inflow into one reservoir implies a huge inflow into the other)?


See Add multi-dimensional noise terms · SDDP.jl

Here’s another example: Newsvendor · SDDP.jl
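
To make that concrete for your two-reservoir case, here is a minimal sketch (the stage count, bounds, demand, costs, and inflow numbers are all made up, and the variable names are just placeholders). The correlation between the inflows is modelled by the support itself: each outcome fixes both inflows at once, so a wet outcome is wet for both reservoirs:

using SDDP, HiGHS, JuMP

model = SDDP.LinearPolicyGraph(
    stages = 12, lower_bound = 0.0, optimizer = HiGHS.Optimizer
) do subproblem, t
    # One state variable per reservoir (bounds and initial values are illustrative).
    @variable(subproblem, 0 <= volume_1 <= 200, SDDP.State, initial_value = 100)
    @variable(subproblem, 0 <= volume_2 <= 200, SDDP.State, initial_value = 100)
    @variables(subproblem, begin
        inflow_1
        inflow_2
        hydro_1 >= 0
        hydro_2 >= 0
        spill_1 >= 0
        spill_2 >= 0
        thermal >= 0
    end)
    @constraints(subproblem, begin
        # Water balance of reservoir 1.
        volume_1.out == volume_1.in + inflow_1 - hydro_1 - spill_1
        # Reservoir 2 receives whatever reservoir 1 releases.
        volume_2.out == volume_2.in + inflow_2 + hydro_1 + spill_1 - hydro_2 - spill_2
        # Demand balance (150 is an arbitrary demand).
        hydro_1 + hydro_2 + thermal == 150
    end)
    @stageobjective(subproblem, 50 * thermal)
    # Each outcome fixes BOTH inflows, so wet and dry conditions hit the two
    # reservoirs together. That is where the correlation lives.
    Ω = [
        (inflow_1 = 0.0, inflow_2 = 0.0),     # dry in both
        (inflow_1 = 50.0, inflow_2 = 40.0),   # average in both
        (inflow_1 = 100.0, inflow_2 = 80.0),  # wet in both
    ]
    P = [0.3, 0.4, 0.3]
    SDDP.parameterize(subproblem, Ω, P) do ω
        JuMP.fix(inflow_1, ω.inflow_1)
        JuMP.fix(inflow_2, ω.inflow_2)
    end
end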

Thank you.
I managed to develop my model in Julia.
Now, I have another question.
How can I evaluate my decision rule when I have a two-dimensional noise term?
Also, I do not understand the theoretical difference between evaluation and simulation in SDDP.jl.
Do you have any web references that would help me understand?


The secret is that there is no multidimensional noise term. It’s just a single vector of outcomes with a corresponding vector of probabilities.

In this example:

julia> using SDDP, HiGHS, JuMP

julia> model = SDDP.LinearPolicyGraph(
               stages=3, lower_bound = 0, optimizer = HiGHS.Optimizer
               ) do subproblem, t
           @variable(subproblem, x, SDDP.State, initial_value = 0.0)
           # The "multi-dimensional" noise is a flat vector of named tuples,
           # with a matching vector of probabilities (built here assuming the
           # two components are independent).
           support = [(value = v, coefficient = c) for v in [1, 2] for c in [3, 4, 5]]
           probability = [v * c for v in [0.5, 0.5] for c in [0.3, 0.5, 0.2]]
           SDDP.parameterize(subproblem, support, probability) do ω
               JuMP.fix(x.out, ω.value)
               @stageobjective(subproblem, ω.coefficient * x.out)
               println("ω is: ", ω)
           end
       end;

you’d use something like

# Build the decision rule for the first-stage node:
rule = SDDP.DecisionRule(model; node = 1)
# Evaluate it at a given incoming state and noise realization:
solution = SDDP.evaluate(
    rule;
    incoming_state = Dict(:x => 0.0),
    noise = (value = 1, coefficient = 3),
)

I do not theoretically understand the difference between evaluation and simulation in SDDP.jl.

There’s no real difference. A simulation is just a sequence of decision-rule evaluations: evaluate the rule at one node, then pass its outgoing state to the next node and repeat.
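
For example, chaining two stages of the model above by hand looks something like this (assuming, as in the docs, that the returned solution exposes an outgoing_state field); SDDP.simulate automates exactly this and also samples the noise terms for you:

# Stage 1: evaluate the rule at the initial state and a chosen noise outcome.
rule_1 = SDDP.DecisionRule(model; node = 1)
sol_1 = SDDP.evaluate(
    rule_1;
    incoming_state = Dict(:x => 0.0),
    noise = (value = 1, coefficient = 3),
)

# Stage 2: the outgoing state of stage 1 becomes the incoming state of stage 2.
# A real simulation would sample the noise instead of hard-coding it.
rule_2 = SDDP.DecisionRule(model; node = 2)
sol_2 = SDDP.evaluate(
    rule_2;
    incoming_state = sol_1.outgoing_state,
    noise = (value = 2, coefficient = 4),
)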


Thank you.
And if we have multiple state variables (e.g., two, X1 and X2), how should we write the incoming_state in SDDP.evaluate?

incoming_state = Dict(:X1 => 0.0, :X2 => 0.0),

I usually find it more convenient to just call SDDP.simulate and then post-process the results, rather than trying to evaluate individual decision rules.
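
For example, with two state variables X1 and X2 (names taken from your post), the post-processing might look like:

SDDP.train(model; iteration_limit = 50)

# 100 Monte Carlo replications, recording both state variables.
simulations = SDDP.simulate(model, 100, [:X1, :X2])

# simulations[i][t] is a Dict for replication i, stage t; state variables
# are recorded with their incoming (.in) and outgoing (.out) values.
x1_path = [stage[:X1].out for stage in simulations[1]]
x2_path = [stage[:X2].out for stage in simulations[1]]

# Total cost of each replication.
total_costs = [sum(stage[:stage_objective] for stage in sim) for sim in simulations]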