I literally had a draft open replying to your original post to say that we added sparsity support in Optimization.jl, so you might want to give it a try.
We haven’t added documentation for it yet; the way to do it is to pass AutoModelingToolkit(true, true) (the two fields are booleans enabling sparsity for the objective and the constraints, respectively).
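For illustration, a minimal sketch of how this might be wired up, assuming the OptimizationMOI wrapper around Ipopt and a toy objective/constraint standing in for the OPF functions (exact keyword names may differ across versions):

```julia
using Optimization, OptimizationMOI, Ipopt

# Toy objective and constraint standing in for the OPF model functions.
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
cons(res, x, p) = (res .= [x[1]^2 + x[2]^2])

# AutoModelingToolkit(true, true): sparsity enabled for both the objective
# and the constraints via Symbolics tracing.
optf = OptimizationFunction(rosenbrock, Optimization.AutoModelingToolkit(true, true); cons = cons)
prob = OptimizationProblem(optf, zeros(2), [1.0, 100.0]; lcons = [-Inf], ucons = [1.0])
sol = solve(prob, Ipopt.Optimizer())
```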
I finished implementing sparsity and symbolic differentiation support in Nonconvex. Your rosetta-opf example works locally. I will make a release and update the docs shortly.
@mohamed82008, I tried with Nonconvex v1.0.3 but I am still getting this issue of Zygote crashing. Could you elaborate on how I should change these calls to use your symbolic diff approach?
```julia
model = Nonconvex.DictModel()
...
x0 = NonconvexCore.getinit(model)
r = Nonconvex.optimize(model, IpoptAlg(), x0)
```
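The snippet this answer originally opened with appears to be missing here; presumably it wrapped the model with Nonconvex's sparsify modifier before optimizing, roughly along these lines (a sketch; the exact signature may differ across versions):

```julia
# Wrap the model so gradients/Jacobians/Hessians exploit detected sparsity.
# The hessian kwarg is assumed from the Nonconvex docs.
sp_model = sparsify(model, hessian = true)
x0 = NonconvexCore.getinit(sp_model)
r = Nonconvex.optimize(sp_model, IpoptAlg(), x0)
```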
sparsify will use Symbolics.jl to get the sparsity pattern of the gradients/Jacobians/Hessians and then use SparseDiffTools + ForwardDiff for the differentiation. Alternatively, you can use:
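(The snippet is elided here; based on the description that follows, it presumably showed the symbolify modifier, sketched below with kwargs inferred from the surrounding text.)

```julia
# Symbolics-only differentiation; sparse = true asks Symbolics to return
# sparse matrices/vectors for the Jacobian/Hessian/gradient.
sym_model = symbolify(model, hessian = true, sparse = true)
r = Nonconvex.optimize(sym_model, IpoptAlg(), x0)
```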
which will use a Symbolics-only approach where the gradients/Jacobians/Hessians are obtained using Symbolics.jl. The sparse kwarg is used to tell Symbolics to return a sparse matrix/vector for the jacobian/Hessian/gradient.
Note that these are new features, so please report any issues in the repo.
As promised, below is a table of updated results using the latest versions of all packages. Note that GalacticOptim has been renamed Optimization.
The substantive changes follow the discussion above. In particular, Nonconvex and Optimization have been updated with new AD systems based on Symbolics and now support sparse Jacobian and Hessian computations. For Nonconvex, this allowed me to finish building the Nonconvex OPF model and confirm that the model is correct on small networks. For Optimization, the new sparse AD system based on ModelingToolkit enabled it to solve models with 73 network buses (800+ variables, 900+ constraints), which is a 2x scalability increase over what was possible with the previous dense ForwardDiff approach.
This Zygote issue has thus far blocked using Zygote for AD on these models.
These upstream issues [1][2] appear to be the root cause blocking scaling of the ModelingToolkit approach to larger problems. It looks like they will be resolved in future Julia versions (testing was done on Julia v1.7.2).
Are you including the compilation time here? Yeah, the RGF (RuntimeGeneratedFunctions) issues are one thing, and the other is that we need to release SymbolicIR.
MTK also doesn’t do structural_simplify on OptimizationProblem right now; that could be a major win because it would then solve a much smaller problem. @YingboMa was working on that.
Good question. In these runtime numbers I try to avoid capturing artifacts from one-time overheads, such as JIT time. In my experience, with sufficient effort, these can be avoided in production deployment settings.
In all of these implementations, before I start collecting data, solve_opf("$(@__DIR__)/data/pglib_opf_case5_pjm.m") is executed first to compile the relevant code execution paths. Then solve_opf(...) is run again on the file of interest, and that is where the runtime data is collected.
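Concretely, the harness would look something like the following (the second file name and the use of @timed are illustrative):

```julia
# Warm-up run: the first call compiles all code paths; its timing is discarded.
solve_opf("$(@__DIR__)/data/pglib_opf_case5_pjm.m")

# Measured run on the case of interest.
stats = @timed solve_opf("$(@__DIR__)/data/pglib_opf_case118_ieee.m")
println("runtime (seconds): ", stats.time)
```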
Thanks for this excellent source of test cases. For Nonconvex, I would be interested in the results using ReverseDiff, as well as SparseDiffTools instead of Symbolics.
Table updated with results from [01bcebdf] Nonconvex v2.0.3 and [bf347577] NonconvexIpopt v0.4.1, which resolved convergence issues on a number of cases.
Thanks for the heads up! I think I’ll wait a while longer to re-run in case there are updates from other packages as well, including Julia v1.8, which seems pretty close.
Given that it has been about a year since these benchmarks were last conducted and the ecosystem is continually improving, I thought it would be good to revisit this with Julia v1.9 and the latest versions of all packages (details at the bottom).
Some preliminary observations:
Runtimes are generally stable (maybe slightly improved on average).
Nonconvex now solves two cases that it did not previously: case89_pegase and case118_ieee.
I guess Carleton’s point is that this is what a user would experience if they installed each package today.
Carleton has been running these benchmarks over the years, and we’ll continue to run them, so we should be able to see the improvements when/if they land.