I have a PR to get it onto the SciMLBenchmarks system so that it can be auto-run on a standard machine through CI, making it easier for everyone to get involved:
I noticed Optimization.jl is using ForwardDiff.jl with no sparsity handling. These days it supports Enzyme with sparse Hessians. I presume that would make a major difference, so after this merges @Vaibhavdixit02 should look into adding that.
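For reference, switching backends is a one-line change in the `OptimizationFunction`. A minimal sketch, assuming a recent Optimization.jl/ADTypes stack where `AutoSparse`-wrapped backends are accepted (the Rosenbrock objective here is just a stand-in, not the benchmark's actual objective):

```julia
using Optimization, OptimizationOptimJL, ADTypes
using SparseConnectivityTracer, SparseMatrixColorings
import Enzyme  # the backend package must be loaded for AutoEnzyme to be usable

# Stand-in objective; the real benchmark objectives come from the suite
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2); p = [1.0, 100.0]

# Current setup in the benchmark: dense ForwardDiff
f_dense = OptimizationFunction(rosenbrock, AutoForwardDiff())

# Suggested alternative: Enzyme with sparsity detection and coloring,
# so the Hessian is detected and assembled as a sparse matrix
f_sparse = OptimizationFunction(rosenbrock,
    AutoSparse(AutoEnzyme();
        sparsity_detector = TracerSparsityDetector(),
        coloring_algorithm = GreedyColoringAlgorithm()))

prob = OptimizationProblem(f_sparse, x0, p)
sol = solve(prob, Newton())  # Newton exercises the (sparse) Hessian
```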
From memory, we tried the various sparsity and AD options and ran into memory issues. This was the fastest, but perhaps recent versions are better. Thanks for putting this in the benchmark suite so there's control over what options are used, etc.
I know the table formatting can be improved. If anyone knows how to make Documenter show the whole thing, I'd be happy to have help formatting it.
There is a parameter, `SIZE_LIMIT`, which sets the maximum number of variables used in the benchmark files. It's set to 100 right now just to play around with the formatting, but I'll bump it up and do a bigger run. I did a bigger run a while back, so I know it works; it's just annoying to wait a day for results while fixing table formatting, haha.
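In case it helps readers of the benchmark files, the cutoff is just a size filter over the problem set. A rough sketch with hypothetical names, not the exact script:

```julia
# Illustrative sketch of how a SIZE_LIMIT cutoff gates a benchmark loop
const SIZE_LIMIT = 100  # maximum number of variables to benchmark

for prob in test_problems            # hypothetical problem collection
    if num_variables(prob) > SIZE_LIMIT
        continue                     # skip models above the cutoff
    end
    run_and_time(prob)               # hypothetical benchmark driver
end
```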
The Optimization.jl code was horrendously allocating and type-unstable. A common issue in the code was indexing a dictionary with `d["$i"]` instead of just using an integer. The dictionaries are still there, so there's still some indirection that should be removed, and performance should improve further, but at least the code is type-stable and non-allocating now. I'll actually optimize it in the near future, but it's already a lot simpler than the original code and won't interfere with AD tooling.
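To illustrate the anti-pattern (this is a sketch, not the actual Optimization.jl source):

```julia
# Every d["$i"] lookup interpolates a brand-new String (an allocation),
# and a Dict{String,Any} makes each returned value type-unstable.
function bad_sum(d, n)
    s = 0.0
    for i in 1:n
        s += d["$i"]
    end
    return s
end

# The fix: store the data positionally and index with the integer itself.
function good_sum(v::Vector{Float64})
    s = 0.0
    for i in eachindex(v)
        s += v[i]
    end
    return s
end

d = Dict{String,Any}("$i" => float(i) for i in 1:10)
v = collect(1.0:10.0)
bad_sum(d, 10) == good_sum(v)  # same answer, very different cost
```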
Optimization.jl is now using a sparse reverse mode, but it's not optimized yet. This benchmark is one of the cases we're using to drive that optimization. See: