Hello everyone,
I am using Julia and the SDDP.jl library to solve a multi-stage stochastic optimization model. The water inflow is my random variable.
After building the model, I used the SDDP.train command to train it.
Then I used SDDP.simulate() to run 100 simulations.
But I don't understand the role of the train and simulate commands in SDDP. Is the model learning? Learning what?
Is the model learning?
Yes.
Learning what?
SDDP.train learns a policy that maps the incoming state variable and a realization of the random variable at each node to an optimal choice of the control variables.
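To make that concrete, here's a minimal sketch of the train/simulate workflow, loosely modeled on the hydro-thermal examples in the SDDP.jl documentation. The model structure, bounds, and inflow data below are illustrative, and HiGHS is just one choice of optimizer:

using SDDP, HiGHS

# Illustrative 12-stage reservoir model: `volume` is the state variable and
# `inflow` is the stagewise-independent random variable.
model = SDDP.LinearPolicyGraph(;
    stages = 12,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do subproblem, t
    @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 100)
    @variable(subproblem, 0 <= generation <= 150)  # control variable
    @variable(subproblem, inflow)                  # random variable
    @constraint(subproblem, volume.out == volume.in + inflow - generation)
    # Three equally likely inflow realizations in every stage.
    SDDP.parameterize(subproblem, [0.0, 50.0, 100.0]) do ω
        JuMP.fix(inflow, ω)
    end
    # Penalize unmet demand of 150 units (illustrative objective).
    @stageobjective(subproblem, 100.0 * (150 - generation))
end

# `train` runs SDDP iterations that add value-function cuts: this is the
# "learning" step that builds the policy.
SDDP.train(model; iteration_limit = 50)

# `simulate` evaluates the trained policy on sampled noise, recording the
# state and control at each node of each of the 100 replications.
simulations = SDDP.simulate(model, 100, [:volume, :generation])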
Here’s a tutorial which explains some of the background theory:
But a complete description might be a little long for a Discourse question.
I’d suggest you take a read of the papers linked here:
Great.
Then, assume that I have trained a model based on some realizations of the random variable.
Now I want to run 100 simulations, but with new realizations of the random variable. What should I do? I am asking because with SDDP.simulate we don't have any control over the realizations of the random variables, and the model will be simulated with the same realizations we used for training.
I hope my question is clear.
Thank you!
Thanks.
I have checked it.
Could you please tell me what changes I need to make to this code if I use a linear graph rather than a Markovian graph (in the case where I just want to redefine the stagewise-independent noise terms)?
The SDDP.Historical sampling scheme takes a vector of scenarios. Each scenario is itself a vector with one element for each node you want to simulate, and each element is a tuple of the node index and the realization of the noise.
If you had a linear graph with 52 nodes, and you wanted 100 historical scenarios, you’d use something like:
data = rand(52, 100)
simulations = SDDP.simulate(
    model,
    100;
    sampling_scheme = SDDP.Historical(
        [[(t, data[t, scenario]) for t in 1:52] for scenario in 1:100],
    ),
)
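If it helps, here's a sketch of how you might then inspect those out-of-sample simulations. It assumes the standard structure of SDDP.simulate's return value, where each replication is a vector of per-node dictionaries that include a :stage_objective key:

# Total the stage objectives within each historical scenario.
objectives = [
    sum(node[:stage_objective] for node in scenario) for scenario in simulations
]
# A simple point estimate of the policy's out-of-sample cost.
mean_objective = sum(objectives) / length(objectives)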