This isn’t possible.
Since you’ve asked a few related questions recently, it might be worth stepping back a bit.
- What are you trying to do and why?
It looks like you’re trying to compare the performance of different solvers on some multistage stochastic problems, and potentially compare to the deterministic equivalent. This is both useful and not useful at the same time, and there are a number of things to take into consideration.
When comparing against the deterministic equivalent:
- SDDP.jl cannot (always) find a provably optimal policy. Your state variables must be pure binary, and you must use `duality_handler = SDDP.LagrangianDuality()` (see the sketch below).
- Even if you can fairly compare runtimes against the deterministic equivalent, it is trivially possible to make SDDP.jl look arbitrarily better, merely by increasing the size of the sample space at each node.
- Once trained, SDDP.jl can evaluate out-of-sample realizations of the random variables, which the deterministic equivalent cannot.
I really see SDDP.jl and the deterministic equivalent as solving separate problems. They’re not directly comparable in a useful way.
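For concreteness, here’s a minimal sketch of the two points above, using a toy model (the data, the HiGHS optimizer, the iteration limit, and the out-of-sample noise values are all placeholders I made up): train with pure-binary states and `duality_handler = SDDP.LagrangianDuality()`, then evaluate the policy on out-of-sample noise with `SDDP.simulate`.

```julia
using SDDP, JuMP
import HiGHS

# Toy model: one pure-binary state variable, three stages, made-up data.
model = SDDP.LinearPolicyGraph(;
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, x, Bin, SDDP.State, initial_value = 0)
    @variable(sp, u >= 0)
    @variable(sp, ω_demand)
    @constraint(sp, u + x.out >= ω_demand)
    # In-sample (training) noise support.
    SDDP.parameterize(sp, [0.0, 1.0, 2.0]) do ω
        JuMP.fix(ω_demand, ω)
    end
    @stageobjective(sp, 2.0 * u + x.out)
end

# Pure-binary states + LagrangianDuality is what gives the convergence guarantee.
SDDP.train(model; iteration_limit = 50, duality_handler = SDDP.LagrangianDuality())

# Evaluate the trained policy on realizations that were *not* in the training
# sample space; the deterministic equivalent has no analogue of this step.
sampling_scheme = SDDP.OutOfSampleMonteCarlo(
    model;
    use_insample_transition = true,
) do node
    return [SDDP.Noise(0.5, 0.3), SDDP.Noise(1.5, 0.7)]
end
simulations = SDDP.simulate(model, 100; sampling_scheme = sampling_scheme)
objectives = [sum(stage[:stage_objective] for stage in sim) for sim in simulations]
```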
When comparing between two solvers:
- Even if you set the random seed before calling `SDDP.train`, the algorithm is not guaranteed to follow identical paths, because the two solvers may converge to different primal points. This is particularly the case with MIP solvers if you solve to some optimality gap. Thus, comparing the total number of branch-and-bound nodes isn’t a very useful comparison metric (see the sketch below).
- I have noticed that some solvers like CPLEX (and even GLPK) may outperform Gurobi on simple examples, but this is because they are less numerically robust. If you tweak the problem slightly they may error or fail to converge, while Gurobi normally has fewer problems.
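To make the seed point concrete, here’s a rough sketch. `build_model(optimizer)` is a hypothetical helper that constructs the same policy graph with whichever optimizer you pass in, and the seed and iteration limit are arbitrary:

```julia
using SDDP, Random
import GLPK, Gurobi

# `build_model(optimizer)` is a hypothetical helper that builds the same
# policy graph with the supplied optimizer.
for optimizer in (Gurobi.Optimizer, GLPK.Optimizer)
    model = build_model(optimizer)
    Random.seed!(1234)  # identical seed for every run
    SDDP.train(model; iteration_limit = 100)
end
# Even with identical seeds, the forward trajectories can diverge as soon as
# the two solvers return different (but equally optimal) primal points, so
# iteration-by-iteration statistics like branch-and-bound node counts don't
# line up in any meaningful way.
```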