End-of-horizon effect, Infinite horizon policy graph, SDDP.jl

Hi everyone,

Let’s assume a simple co-optimized expansion and generation problem with 13 nodes: one expansion node and 12 generation nodes corresponding to the 12 months of a year. There is uncertainty during generation, e.g., in the monthly water inflows.
Initially, since I wanted to use the optimal generation policy found by SDDP.jl for future years as well, I added this constraint:
water level at the beginning of node 2 = water level at the end of node 13.
Then I noticed an end-of-horizon effect in my solution: the policy was optimal for the one-year problem, but it appeared sub-optimal over a longer period.
To resolve this end-of-horizon effect, I modeled the problem as an infinite-horizon problem by removing the constraint above and adding:

SDDP.add_edge(graph, 13 => 2, 0.9)

to the policy graph.
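
For reference, the full graph can be built like this (a minimal sketch; the 13-node layout and the 0.9 discount factor come from the description above):

graph = SDDP.LinearGraph(13)        # node 1 = expansion, nodes 2-13 = the 12 months
SDDP.add_edge(graph, 13 => 2, 0.9)  # cycle from month 12 back to month 1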

Now, how can I compare the generation policies derived from the infinite- and finite-horizon models in a two-year simulation (while the training covers one year)?
Note that in the second year there shouldn’t be an expansion node; so, for the finite-horizon approach, simply duplicating the one-year simulation would not work.

You want something like

simulations = SDDP.simulate(
    model,
    1,
    sampling_scheme = SDDP.InSampleMonteCarlo(
        max_depth = 1 + 2 * 12,  # 25 node visits: expansion node + 2 years × 12 months
        terminate_on_dummy_leaf = false,
    ),
)
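
Here terminate_on_dummy_leaf = false stops the simulation from ending early when the random walk hits the 10% probability of leaving the cycle, and max_depth then forces it to stop after 25 node visits.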

See Create a general policy graph · SDDP.jl


Thank you.
It works.
Can I use two different sets of historical data for the simulations of the first and second cyclic iterations (years), or should they be the same?

InSampleMonteCarlo uses the realizations from when you defined the model. The model doesn’t know whether it is in the “first” or “second” year; it just sees the index of the node (like node = 7).

If you create a Historical sampling scheme you can do whatever you like. Something like:

scenario = Tuple{Int,Any}[(1, nothing)]  # typed so (node, ω) pairs can be pushed
for t in 1:12
    push!(scenario, (1 + t, Ω[t] #= or something =#))
end
simulations = SDDP.simulate(
    model,
    1;
    sampling_scheme = SDDP.Historical([scenario]),
);

See Simulate using a different sampling scheme · SDDP.jl


Thank you.
Actually, I checked the API reference page:
https://sddp.dev/stable/apireference/#SDDP.Historical
and it seems that SDDP.Historical has no max_depth or terminate_on_dummy_leaf arguments. So, to simulate my cyclic policy graph, I used SDDP.OutOfSampleMonteCarlo with new realizations that I defined.
It works.

Here is my code for SDDP.OutOfSampleMonteCarlo:

sampling_scheme = SDDP.OutOfSampleMonteCarlo(
    main_model;
    use_insample_transition = true,
    max_depth = 1 + 2 * 12,
    terminate_on_dummy_leaf = false,
) do node
    if node == 1
        return [SDDP.Noise(nothing, 1.0)]
    else
        return [SDDP.Noise(OMEGA[node - 1], 1.0)]
    end
end;
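
The scheme is then passed to SDDP.simulate as before:

simulations = SDDP.simulate(
    main_model,
    1;
    sampling_scheme = sampling_scheme,
);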

OMEGA contains the deterministic realizations for both cycles, 24 values in total: the first 12 values correspond to the first cycle of node visits, and the second 12 to the second cycle.

However, in my code it does not matter whether it is the first time I visit a node (e.g., node 2) or the second time: the realization is the same, OMEGA[1] in that case. It matters to me, though, since I defined different realizations for the first and the second visits.
How can I achieve that?

I guess I wasn’t explicit enough before :smile: You can have multiple cycles with the Historical sampling scheme:

scenario = Tuple{Int,Any}[(1, nothing)]  # typed so (node, ω) pairs can be pushed
for year in 1:2
    for month in 1:12
        push!(scenario, (1 + month, Ω[year][month] #= or something =#))
    end
end
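
Here Ω could be, for example, a hypothetical vector with one 12-element vector of inflows per year, defined before the loop; the finished scenario then plugs into SDDP.simulate exactly as before:

Ω = [fill(50.0, 12), fill(75.0, 12)]  # hypothetical inflows: year 1, year 2
simulations = SDDP.simulate(
    model,
    1;
    sampling_scheme = SDDP.Historical([scenario]),
);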

You don’t need to use OutOfSampleMonteCarlo.

Could you please take a look at the code below:

graph = SDDP.LinearGraph(3)
SDDP.add_edge(graph, 3 => 2, 0.90) 

function subproblem_builder(subproblem::Model, node::Int)
    # State variables
    @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)
    # Control variables
    @variables(subproblem, begin
        thermal_generation >= 0
        hydro_generation >= 0
        hydro_spill >= 0
    end)
    # Random variables
    @variable(subproblem, inflow)
    Ω = [0.0, 50.0, 100.0]
    P = [1 / 3, 1 / 3, 1 / 3]
    SDDP.parameterize(subproblem, Ω, P) do ω
        JuMP.fix(inflow, ω)
    end
    # Transition function and constraints
    @constraints(
        subproblem,
        begin
            volume.out == volume.in - hydro_generation - hydro_spill + inflow
            demand_constraint, hydro_generation + thermal_generation == 150
        end
    )
    # Stage-objective
    fuel_cost = 50
    @stageobjective(subproblem, fuel_cost * thermal_generation)
    return subproblem
end

model = SDDP.PolicyGraph(
    subproblem_builder,
    graph;
    sense = :Min,
    lower_bound = 0.0,
    optimizer = Gurobi.Optimizer,
)

SDDP.train(model; iteration_limit = 10)


realizations = [(1, 10), (2, 50), (3, 100), (4, 15), (5, 20)]
# inflows for nodes 1, 2, 3 of cycle 1, and nodes 2, 3 of cycle 2; we visit 5 nodes in total.

simulations = SDDP.simulate(
    model,
    1;
    sampling_scheme = SDDP.Historical([realizations]),
);

The policy graph includes 3 nodes, and there is a cycle between nodes 2 and 3.
If you run it, you will get an error: key 4 not found.

What am I missing?

I just got it.
It should be:

realizations = [(1, 10), (2, 50), (3, 100), (2, 15), (3, 20)]

Am I right?


Yes. Your graph has three nodes. You asked to visit node 4. You want to visit 1, 2, 3, 2, 3.


The first part of (2, 15) is the index of the node. The second part is the ω that is passed to parameterize.
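
In the example above, visiting (2, 15) means that node 2’s SDDP.parameterize closure is called with ω = 15, i.e., JuMP.fix(inflow, 15).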


Sorry, one last question.
It seems that simulations[replication][2], for example, only gives us the cost of one visit to node 2. Can I retrieve the cost of each visit, and also the total cost over the cyclic graph?

simulations[replication][key] does not give the visits to node key.

simulations[replication] is a vector with one element for each node that you visited (in order). For your example, simulations[replication][4] should be a visit to node 2.

You can check the :node_index key to see which node was visited.
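
For example, a sketch that prints the cost of each visit and sums the total (the :stage_objective key records each stage’s objective value):

replication = 1
# Per-visit costs, in the order the nodes were visited.
for (t, data) in enumerate(simulations[replication])
    println("Visit ", t, ": node ", data[:node_index], ", cost ", data[:stage_objective])
end
# Total cost across all visits in this replication.
total_cost = sum(data[:stage_objective] for data in simulations[replication])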
