Accessing the dual variables, SDDP.jl

Hi,
Assume that our problem is two-stage and that there is no uncertainty in stage one. To solve it, we intend to use Benders decomposition. The second stage of the problem is itself a multi-stage stochastic problem, which SDDP.jl is meant to solve. To add cuts in each Benders decomposition iteration, we need to access the dual variables of the multi-stage problem for each scenario. Does SDDP.jl give us these dual variables?

Instead of solving the problem as a two-stage Benders problem with an SDDP problem in stage 2, add the first stage of your Benders model as a node to the policy graph, and add the state variables from Benders as state variables in your SDDP model.

If you consider the example from Example: deterministic to stochastic · SDDP.jl, we could extend the problem to have a capacity expansion decision in the first stage with:

using SDDP, HiGHS

model = SDDP.LinearPolicyGraph(
    stages = T + 1,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, x_capacity >= 0, SDDP.State, initial_value = 0)
    @variable(sp, x_storage >= 0, SDDP.State, initial_value = reservoir_initial)
    if t == 1
        # Benders first-stage
        @constraint(sp, x_capacity.out <= reservoir_max)
        @constraint(sp, x_storage.out == x_storage.in)
        @stageobjective(sp, x_capacity.out)
    else
        # SDDP model
        @constraint(sp, x_storage.out <= x_capacity.in)
        @constraint(sp, x_capacity.out == x_capacity.in)
        @variable(sp, 0 <= u_flow <= flow_max)
        @variable(sp, 0 <= u_thermal)
        @variable(sp, 0 <= u_spill)
        @variable(sp, ω_inflow)
        Ω, P = [-2, 0, 5], [0.3, 0.4, 0.3]
        SDDP.parameterize(sp, Ω, P) do ω
            fix(ω_inflow, data[t-1, :inflow] + ω)
            return
        end
        @constraint(sp, x_storage.out == x_storage.in - u_flow - u_spill + ω_inflow)
        @constraint(sp, u_flow + u_thermal == data[t-1, :demand])
        @stageobjective(sp, data[t-1, :cost] * u_thermal)
    end
    return
end
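Once the policy graph above is built, the usual SDDP.jl workflow applies. A minimal sketch, assuming the `model` defined above (the iteration limit and number of replications are illustrative, not recommendations):

```julia
# Train the combined investment + operation policy.
SDDP.train(model; iteration_limit = 100)

# Simulate the trained policy, recording the capacity state.
simulations = SDDP.simulate(model, 10, [:x_capacity])

# Node 1 is deterministic, so every replication shares the same
# first-stage capacity decision; read it off the first replication.
capacity_decision = simulations[1][1][:x_capacity].out
```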

There’s no need to solve a separate Benders decomposition problem.

Thanks.
Sure, that will work. But I don’t think what you suggested is exactly the same as SDDP + Benders decomposition.
My plan is to model the expansion and operational problems twice, once with the SDDP + Benders decomposition algorithm and once with SDDP alone, as you suggested, and then compare the two.
As you know better than I do, if we solve both problems with SDDP, new cuts are added at the expansion nodes in every backward pass. If we use Benders and SDDP, on the other hand, we only get new cuts for Benders once SDDP meets its stopping rule (as in the photo I attached).
But, as I said, I don’t know how to extract the cuts from SDDP.jl and add them to the Benders part.

SDDP + Benders

What is your goal in comparing the two methods? What are the investment variables, and how do they appear in the SDDP subproblems?

The investment decisions are continuous state variables, which I called Cap_exp. They represent the upper bound of the generation capacity of hydropower plants.
In subproblems, I have:

Gen ≤ Cap_exp.in
Cap_exp.out == Cap_exp.in
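
In SDDP.jl syntax, a subproblem containing those two constraints might look like the following fragment (the variable bounds and initial value are illustrative assumptions, not from my model):

```julia
@variable(sp, Cap_exp >= 0, SDDP.State, initial_value = 0.0)
@variable(sp, Gen >= 0)
# Generation is limited by the capacity carried in from the expansion stage.
@constraint(sp, Gen <= Cap_exp.in)
# The capacity decision does not change in the operational stages.
@constraint(sp, Cap_exp.out == Cap_exp.in)
```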

The goal of comparing is to see the quality of the policies developed by the two methods of adding cuts.

The goal of comparing is to see the quality of the policies developed by the two methods of adding cuts.

The two methods will find the same optimal policy. The integrated SDDP version will be faster to solve because it can re-use cuts when evaluating different capacity decisions. I wouldn’t bother coding a separate Benders+SDDP model.
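
That said, if you do want to inspect or reuse the cuts that SDDP.jl computes (for example, to feed them into a hand-rolled Benders master problem), they can be serialized with `SDDP.write_cuts_to_file` and loaded with `SDDP.read_cuts_from_file`. A sketch, assuming a trained `model` (the filename is illustrative):

```julia
# Export the cuts from the trained model to a JSON file.
SDDP.write_cuts_to_file(model, "cuts.json")

# They can later be loaded into a model with the same state variables.
SDDP.read_cuts_from_file(model, "cuts.json")
```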