Is a burn-in period implemented in SDDP.jl, either for the training iterations or for the evaluation simulation after training?

I am dealing with a multi-stage mixed-integer stochastic global supply chain optimization problem.

Specifically, I want to determine whether a burn-in period (for the random sampling process) is already incorporated in SDDP.jl. I ask because I get inconsistent results when I repeat the evaluation simulation (with a statistically significant number of replications) several times after training.

SDDP.jl does not have the concept of a burn-in period.

Can you share a reproducible example of the problem you are experiencing? What do you mean by “inconsistent data”?

Thanks for your response. I realized that, given the large number of scenarios, I had to sample more replications during the evaluation phase. Increasing the sample size resolved the issue.
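
For anyone who hits the same thing, this is roughly what I ended up doing (a minimal sketch; `model` stands for my already-trained policy graph, and the replication count is problem-specific):

```julia
using SDDP, Statistics

# `model` is assumed to be an SDDP.PolicyGraph already trained with
# SDDP.train. Run a large number of Monte Carlo replications so the
# cost estimate stabilizes across repeated evaluation runs.
simulations = SDDP.simulate(model, 10_000)

# Total cost of each replication: the sum of its stage objectives.
objectives = [
    sum(stage[:stage_objective] for stage in replication)
    for replication in simulations
]

# Sample mean and a 95% confidence interval for the policy's cost.
μ = mean(objectives)
half_width = 1.96 * std(objectives) / sqrt(length(objectives))
println("Estimated cost: ", μ, " ± ", half_width)
```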

However, I still think that a burn-in period is an important consideration, and it should be incorporated in a future release of SDDP.jl.
If @odow agrees, should I open a GitHub issue for it?

SDDP.jl does not have the concept of a burn-in time. We measure the cost from the root node using a discounted expected cost.
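
Concretely (my paraphrase in standard SDDP notation, not taken from the docs), the simulation estimates

```latex
\mathbb{E}\left[ \sum_{t=1}^{T} \gamma^{t-1} \, C_t(x_t, u_t, \omega_t) \right]
```

where γ is the discount factor, C_t is the stage-t cost, and the expectation is over the sampled noise terms. Every stage from t = 1 onward contributes, so there is nothing to "burn in."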

If you want to drop some of the initial periods, that’s up to you, but I don’t plan to add it as a feature.
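
For example, an untested sketch of doing that yourself post hoc (`model` is your trained policy graph, and `burn_in` is however many initial stages you want to discard):

```julia
using SDDP, Statistics

burn_in = 5  # hypothetical number of initial stages to discard
simulations = SDDP.simulate(model, 1_000)

# Aggregate each replication's cost over stages burn_in + 1 ... T only,
# i.e., a manual burn-in applied after simulation.
objectives = [
    sum(stage[:stage_objective] for stage in replication[(burn_in + 1):end])
    for replication in simulations
]
println("Mean cost after burn-in: ", mean(objectives))
```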

Got it, thanks.