Convergence of SDDP with nonlinear objective and mixed-integer variables

You might want to read the related question "Does SDDP.jl implement the SDDiP algorithm?"

You can always find a valid upper bound (if minimizing) by simulating the policy.

Similarly, you can always find a valid lower bound by training the policy with SDDP.jl.
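As a minimal sketch of how these two bounds are obtained, assuming `model` is a policy graph that has already been built with `SDDP.LinearPolicyGraph` and that we are minimizing (the iteration and replication counts are arbitrary choices for illustration):

```julia
using SDDP

# Training improves a valid lower bound on the optimal cost:
SDDP.train(model; iteration_limit = 100)
lower_bound = SDDP.calculate_bound(model)

# Simulating the trained policy gives a statistical upper bound.
# Each simulation is a vector of stage dictionaries containing
# the key :stage_objective.
simulations = SDDP.simulate(model, 500)
objectives = [
    sum(stage[:stage_objective] for stage in simulation) for
    simulation in simulations
]
μ, ci = SDDP.confidence_interval(objectives)
println("Lower bound: $lower_bound; upper bound: $μ ± $ci")
```

The upper bound here is statistical: it is a confidence interval from Monte Carlo simulation, so it tightens as you increase the number of replications.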

The claim is that these bounds are guaranteed to converge to the same value only if you satisfy all of the assumptions of SDDP which, if your model contains integer variables, means having a pure binary state space and using `SDDP.LagrangianDuality`.

But if you have a mixed-integer state space and you train using the SDDP.jl defaults, you can still find valid lower and upper bounds; they just might never converge to the same value.