I’ll reply here, but perhaps we can consolidate on a single thread: https://discourse.julialang.org/t/policy-graph-in-sddp-jl-package
In general, I’d encourage you to read this tutorial: Introductory theory · SDDP.jl
> As we know in theory, SDDP.jl is based on dividing a problem into a master problem and subproblems and progressively adding cuts (like nested Benders decomposition).
I don’t find it helpful to view SDDP in the context of nested Benders. I prefer to view it as value function approximation using Kelley’s cutting plane. There are no master and subproblems. It’s just a bunch of nodes in a graph, and we’re constructing a decision rule at each node.
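A minimal sketch of that view (assuming SDDP.jl and the HiGHS solver are installed; the model here is a made-up toy, not your problem): a linear policy graph is just a chain of nodes, each with its own subproblem and shared state, and after training you can query the decision rule that SDDP.jl has constructed at any node.

```julia
using SDDP, HiGHS  # assumption: HiGHS is the installed solver

# Three nodes in a linear graph: no master problem, just one
# subproblem per node linked by the state variable `x`.
model = SDDP.LinearPolicyGraph(;
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, 0 <= x <= 10, SDDP.State, initial_value = 5)
    @variable(sp, u >= 0)
    @constraint(sp, x.out == x.in - u)
    @stageobjective(sp, (4 - t) * u)
end

SDDP.train(model; iteration_limit = 10, print_level = 0)

# The object we construct at each node is a decision rule:
rule = SDDP.DecisionRule(model; node = 2)
solution = SDDP.evaluate(rule; incoming_state = Dict(:x => 4.0))
```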
> Moreover, in theory, again, the condition for stopping the cut-adding procedure is the gap between the lower and upper bounds of the problem.
SDDP.jl doesn’t compute or maintain a lower bound for the problem. The default stopping rule is to terminate once the upper bound has stopped changing, and consecutive simulations of the policy return an identical distribution of objective values.
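That default behavior is implemented by `SDDP.SimulationStoppingRule`. You can also pass stopping rules explicitly to `SDDP.train`; a hedged sketch on a made-up toy model:

```julia
using SDDP, HiGHS  # assumption: HiGHS is the installed solver

model = SDDP.LinearPolicyGraph(;
    stages = 2,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, x >= 0, SDDP.State, initial_value = 1)
    @stageobjective(sp, x.out)
end

# Stop once the bound has stabilized and consecutive simulations of
# the policy return a statistically identical distribution of
# objective values; the time limit is a safety net.
SDDP.train(
    model;
    stopping_rules = [
        SDDP.SimulationStoppingRule(),
        SDDP.TimeLimit(60.0),
    ],
)
```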
> How can we obtain these values (lower and upper bounds) in each iteration of adding cuts in the SDDP.jl package?
The upper bound is printed in the “bound” column. The lower bound is not computed, and in general, we cannot compute a valid lower bound. (We could compute a statistical lower bound using Monte Carlo simulation to form a confidence interval, but that’s often not useful.)
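To make that concrete, a sketch on a made-up maximization toy (assuming HiGHS is installed): `SDDP.calculate_bound` returns the value printed in the “bound” column, and Monte Carlo simulation of the policy gives a statistical bound on the other side.

```julia
using SDDP, HiGHS  # assumption: HiGHS is the installed solver

model = SDDP.LinearPolicyGraph(;
    stages = 2,
    sense = :Max,
    upper_bound = 20.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, 0 <= x <= 5, SDDP.State, initial_value = 5)
    @variable(sp, u >= 0)
    @constraint(sp, x.out == x.in - u)
    SDDP.parameterize(sp, [1.0, 2.0, 3.0]) do ω
        @stageobjective(sp, ω * u)
    end
end

SDDP.train(model; iteration_limit = 20, print_level = 0)

# The deterministic bound printed in the "bound" column:
bound = SDDP.calculate_bound(model)

# A statistical bound from simulating the policy:
simulations = SDDP.simulate(model, 500)
objectives = [sum(s[:stage_objective] for s in sim) for sim in simulations]
μ, ci = SDDP.confidence_interval(objectives)
println("Bound = $bound; simulated objective = $μ ± $ci")
```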
> not the bound of the approximation of the original problem
We might be talking about different things, but if you have started with some “original” problem and then discretized the random variable to make it amenable to SDDP, then we have no bounds on the relation between our policy and an optimal policy for the “original” problem.
> I’ve already read the following link:
Those docs are old. Don’t use the docs from juliahub. Use the official documentation instead https://odow.github.io/SDDP.jl/stable/.