@odow what reference should I cite to claim that SDDP cannot be used when exponential terms in the objective occur together with a mixed-integer decision space?
What is meant by “reference” and “cite”? Are you writing a paper?
Gurobi can solve MINLPs with the exp function, but presumably only small-scale ones.
I am writing a paper. In this regard, are there any SDDP convergence guarantees for mixed-integer programming with exponential terms in the objective?
Do you mean the famous algorithm SDDP, or do you mean SDDP.jl?
If you mean the former, then, as the name suggests, it cannot. SDDP is for continuous multistage linear programs. There are standard references.
I meant the latter. What are its convergence guarantees for mixed-integer nonlinear (exponential terms in the objective) multistage programs?
Hi @Engr_Moiz_Ahmad, in future, please start a new topic if you have a new question, instead of posting on an old thread. (For this question, I’ve moved these posts to a new topic.)
You can use SDDP if you have a nonlinear objective and mixed-integer decisions, but you probably shouldn’t.
Convergence is guaranteed only if you have a pure binary state-space and you use SDDP.train(model; duality_handler = SDDP.LagrangianDuality()).
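The binary-state requirement is what makes Lagrangian duality tight. Here is a toy, pure-Julia illustration of why (hypothetical numbers; this is not SDDP.jl internals, just the single-stage idea behind the Lagrangian cuts):

```julia
# Stage subproblem:  Q(z) = min_{x ∈ {0,1}}  c*x   s.t.  x == z
# Lagrangian dual:   D(z) = max_λ  min_{x ∈ {0,1}} c*x + λ*(z - x)
# For binary z, D(z) == Q(z): there is no duality gap, so the Lagrangian
# cut recovers the exact value function at the binary state points.

c = 2.5

Q(z) = minimum(c * x for x in (0, 1) if x == z)         # primal value
L(λ, z) = minimum(c * x + λ * (z - x) for x in (0, 1))  # Lagrangian relaxation
D(z) = maximum(L(λ, z) for λ in -10:0.01:10)            # crude dual search

for z in (0, 1)
    println("z = $z: Q = $(Q(z)), D ≈ $(D(z))")
end
```

With a mixed-integer (non-binary) state, the dual can have a gap at some state values, which is why the guarantee needs the pure binary state-space.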
You may want to look at the following:
- Girardeau P, Leclère V, Philpott AB (2015) On the Convergence of Decomposition Methods for Multistage Stochastic Convex Programs. Mathematics of Operations Research 40(1):130–145.
- Zou J, Ahmed S, Sun XA (2019) Stochastic dual dynamic integer programming. Mathematical Programming. 175(1-2):461–502.
Edit: okay, one code issue for SDDP.jl is that Gurobi can’t provide duals for NLP. I’ve opened an issue: Which solver can solve a MINLP · Issue #844 · odow/SDDP.jl · GitHub
Got it @odow, I will start a new topic for posting a new question. The problem I had is now resolved, and I think I should either linearize or use SDDiP to find the upper bound (in case of minimization).
You might want to read Does SDDP.jl implement the SDDiP algorithm?
You can always find a valid upper bound (if minimizing) by simulating the policy.
Similarly, you can always find a valid lower bound by training using SDDP.jl.
The claim is that these are guaranteed to converge only if you satisfy all of the assumptions of SDDP, which, if you have integrality, means having a pure binary state space and using LagrangianDuality.
But if you have a mixed-integer state-space, and you train using the SDDP default, you can still find valid lower and upper bounds, they just might never converge.
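The upper bound from simulation can be sketched in a few lines of pure Julia (hypothetical cost samples; in practice these would come from simulating the trained policy with SDDP.jl). For a minimization problem, the sample mean of the simulated policy costs plus a confidence half-width is a valid statistical upper bound on the policy's expected cost:

```julia
using Statistics  # stdlib

# Hypothetical per-simulation total costs of the trained policy.
costs = [10.2, 11.5, 9.8, 10.9, 12.1, 10.4, 11.0, 9.9, 10.7, 11.3]

n = length(costs)
μ = mean(costs)
half_width = 1.96 * std(costs) / sqrt(n)  # ≈95% confidence half-width

upper_bound = μ + half_width
println("statistical upper bound ≈ $upper_bound")
```

More simulations shrink the half-width, but the bound remains statistical, not deterministic.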
is this a statistical upper bound?
you meant SDDiP?
maybe it can converge, but with a nontrivial gap. Maybe you want to say this?
is this a statistical upper bound?
Yes
you meant SDDiP?
I dislike calling SDDiP a separate algorithm. It’s the same algorithm. The paper is about a convergence result. See the link above.
maybe it can converge, but with a nontrivial gap. Maybe you want to say this?
Sure, the upper bound will converge to something, and the lower bound will converge to something, but they might not converge together with a zero gap.