I want to solve a large two-stage stochastic optimization problem (around 500 second-stage scenarios and 50-100 binary first-stage decisions). As such, I need to use some kind of decomposition method, such as the L-shaped method or Progressive Hedging (PH).
I tried StochasticPrograms.jl (which has implementations of both the L-shaped method and PH), and while it worked locally, I could not run it on my institution’s cluster with Gurobi.jl: StochasticPrograms.jl requires an old version of JuMP that seems to cause licensing issues with Gurobi. I posted a separate thread about that here: Licensing Issues with Gurobi.jl on Cluster
In the meantime, I’m wondering what other options I have. The JuMP documentation walks through a simple implementation of Benders decomposition, but notes that the implementation is not fully “performant.” I would also consider SDDP.jl, but I’m worried my use case isn’t appropriate, since I have just two nodes with a large set of discrete uncertainty realizations in the second node. Admittedly, I’m not too familiar with the SDDP method, so I could be missing something. Finally, ProgressiveHedging.jl seems unmaintained, so I’m hesitant to use it. Have any of you had success with these or other packages for large two-stage stochastic programs? For concreteness, the kind of single-cut L-shaped loop I’d otherwise have to hand-roll is sketched below.
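Here is roughly what I mean, following the fixed-variable/reduced-cost pattern from the JuMP Benders tutorial. Everything in it (c, p, and the per-scenario q, W, h, T, plus all dimensions) is made-up placeholder data rather than my actual model, and it assumes relatively complete recourse, so no feasibility cuts are generated:

```julia
using JuMP, HiGHS

# Hypothetical placeholder data: n first-stage binaries, m recourse variables,
# S scenarios, each with random recourse data (q, W, h, T).
n, m, S = 10, 20, 50
c = rand(n)
p = fill(1 / S, S)  # scenario probabilities
scenarios = [(q = rand(m), W = rand(5, m), h = rand(5), T = rand(5, n)) for _ in 1:S]

master = Model(HiGHS.Optimizer)
set_silent(master)
@variable(master, x[1:n], Bin)
@variable(master, θ >= 0)  # valid bound: recourse costs are nonnegative here
@objective(master, Min, c' * x + θ)

# Solve one scenario subproblem with the first stage fixed at x_bar; the
# reduced costs of the fixed copies give the subgradient for the cut.
function solve_subproblem(s, x_bar)
    sub = Model(HiGHS.Optimizer)
    set_silent(sub)
    @variable(sub, x_copy[i in 1:n] == x_bar[i])
    @variable(sub, y[1:m] >= 0)
    @constraint(sub, s.W * y .>= s.h - s.T * x_copy)
    @objective(sub, Min, s.q' * y)
    optimize!(sub)
    return (obj = objective_value(sub), λ = reduced_cost.(x_copy))
end

for k in 1:100
    optimize!(master)
    lb, x_bar = objective_value(master), value.(x)
    rets = [solve_subproblem(s, x_bar) for s in scenarios]
    Q = sum(p[s] * rets[s].obj for s in 1:S)  # expected recourse cost at x_bar
    ub = c' * x_bar + Q
    println("iteration $k: lb = $lb, ub = $ub")
    ub - lb <= 1e-6 * (1 + abs(ub)) && break
    # One aggregated (single-cut, L-shaped) optimality cut per iteration.
    @constraint(master, θ >= Q + sum(p[s] * rets[s].λ' * (x .- x_bar) for s in 1:S))
end
```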
How big are your second-stage subproblems? This doesn’t seem like a very big problem, and it shouldn’t require a cluster.
SDDP.jl can be used to solve this problem. If you can post a reproducible example of your model, I can point you in the right direction. Take a look at this tutorial: Example: two-stage newsvendor · SDDP.jl
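To show the shape, here is a minimal two-stage sketch in the style of that tutorial, with the binary first-stage decisions modeled as state variables. All of the dimensions, costs, and demand scenarios below are invented for illustration:

```julia
using SDDP, HiGHS

n = 3                   # hypothetical number of binary build decisions
Ω = 100 .* rand(500)    # hypothetical demand scenarios for the second stage

model = SDDP.LinearPolicyGraph(;
    stages = 2,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, stage
    # Binary first-stage decisions enter stage 2 as state variables.
    @variable(sp, x[1:n], Bin, SDDP.State, initial_value = 0)
    if stage == 1
        @stageobjective(sp, sum(100 * x[i].out for i in 1:n))  # build cost
    else
        @variable(sp, y[1:n] >= 0)    # generation (the recourse LP)
        @variable(sp, unmet >= 0)     # load shedding
        @constraint(sp, [i in 1:n], y[i] <= 50 * x[i].in)  # capacity if built
        @constraint(sp, demand, sum(y) + unmet == 0.0)     # RHS set per scenario
        SDDP.parameterize(sp, Ω) do ω
            set_normalized_rhs(demand, ω)  # sample a demand realization
        end
        @stageobjective(sp, sum(y) + 1_000 * unmet)
    end
end

SDDP.train(model; iteration_limit = 100)
println("lower bound: ", SDDP.calculate_bound(model))
```

With only two stages and an LP in the second node, the cuts SDDP.jl computes are ordinary Benders cuts, so the two-node structure is not a problem.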
Ah, that is an important detail I neglected to mention, my apologies. In the second stage, I’m solving a DC optimal power flow on the California Test System, which amounts to an LP with around 30k variables and constraints.
Let me have a closer look at SDDP.jl and get back to you if I need any help. The example you linked seems like a good start.
Have you considered solving the deterministic equivalent? It’s a big model but it might be solvable. Especially if you run it on a node with a lot of memory.
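The extensive form is just one JuMP model with a copy of the second stage per scenario, coupled through the shared first-stage binaries. A minimal sketch, using generic placeholder recourse data rather than your DCOPF:

```julia
using JuMP, HiGHS

# Placeholder dimensions and data for a generic two-stage program; the real
# model would have 50-100 binary x's and ~500 copies of a ~30k-variable LP.
n, m, S = 10, 20, 50
c = rand(n)
p = fill(1 / S, S)  # scenario probabilities
scenarios = [(q = rand(m), W = rand(5, m), h = rand(5), T = rand(5, n)) for _ in 1:S]

model = Model(HiGHS.Optimizer)
@variable(model, x[1:n], Bin)        # first-stage decisions, shared
@variable(model, y[1:m, 1:S] >= 0)   # one second-stage copy per scenario
for s in 1:S
    @constraint(model, scenarios[s].W * y[:, s] .>= scenarios[s].h - scenarios[s].T * x)
end
@objective(model, Min, c' * x + sum(p[s] * scenarios[s].q' * y[:, s] for s in 1:S))
optimize!(model)
```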
After I tried on my laptop and ran out of memory, I’d assumed the problem was too big for that; with ~500 scenarios of a ~30k-variable LP, the extensive form has on the order of 15 million variables and constraints. But my cluster does have some bigmem nodes, so let me give that a shot!