I have been using Python’s scipy.optimize.minimize(method="L-BFGS-B")
for my research, and I have been looking to speed up the code because it doesn’t scale.
So I tried to rewrite my code in Julia using Optim.jl, with Optim.LBFGS
as the method. Unfortunately, my situation is the opposite of the one in Optimize performance comparison - Optim.jl vs Scipy.
Here is my benchmark output:
- scipy.optimize: 0.448 ± 0.013 s
- Optim.jl: 6.744 s (30180 allocations: 1.07 MiB), measured with @btime

In other words, Optim.jl is ~15x slower.
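For completeness, the Julia timing was taken roughly along these lines (the objective f and starting point x0 below are placeholders, not the real research code); the $-interpolation keeps global lookups from inflating the measurement:

```julia
using Optim, BenchmarkTools

f(x) = sum(abs2, x)   # placeholder objective
x0 = zeros(10)        # placeholder starting point

# Interpolate globals with $ so @btime measures optimize() itself,
# not the cost of looking up non-const globals
@btime optimize($f, $x0, LBFGS())
```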
Here is the source code: GitHub - rht/climate_stress_test_benchmark (~300 LOC of Julia / Python).
I have already made sure that my objective function allocates as little as possible; see the julia --track-allocation output at climate_stress_test_benchmark/climate_stress_test_simplified.jl.274731.mem at main · rht/climate_stress_test_benchmark · GitHub.
I had received help optimizing my code on Zulip (e.g. declaring the globals const, sketched below). But at the time I did not yet have permission to publish a subset of the research code, so the helpers were shooting in the dark. Now I have that permission.
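For anyone unfamiliar with that tip, here is a minimal sketch of the const-global change (the variable names are made up, not from the actual code):

```julia
# Slow: a non-const global is type-unstable, so every function
# that reads it pays for dynamic dispatch
discount_rate = 0.03
npv_slow(c) = c / (1 + discount_rate)

# Fast: `const` fixes the binding's type, letting the compiler specialize
const DISCOUNT_RATE = 0.03
npv_fast(c) = c / (1 + DISCOUNT_RATE)
```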
According to Patrick Kofod Mogensen on Zulip, it could be that the parameters of the two L-BFGS runs aren’t exactly comparable. According to minimize(method=’L-BFGS-B’) — SciPy v1.9.1 Manual, the scipy version defaults to maxfun=15000 and maxiter=15000, but I could not find exactly equivalent parameters documented for Optim.jl, so I can’t compare the two directly.
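My best guess at matching the two setups, assuming Optim.Options’ iterations and f_calls_limit play the roles of SciPy’s maxiter and maxfun (which is exactly the equivalence I’m unsure about), would be something like:

```julia
using Optim

f(x) = sum(abs2, x)   # placeholder objective
x0 = zeros(10)        # placeholder starting point

# Guessed mapping to SciPy's L-BFGS-B defaults
# (maxcor=10, gtol=1e-5, maxfun=15000, maxiter=15000)
opts = Optim.Options(
    g_tol = 1e-5,            # ≈ SciPy's gtol (gradient tolerance)
    iterations = 15_000,     # ≈ SciPy's maxiter
    f_calls_limit = 15_000,  # ≈ SciPy's maxfun
)
result = optimize(f, x0, LBFGS(m = 10), opts)  # m ≈ SciPy's maxcor (history size)
```

One caveat: SciPy’s L-BFGS-B supports box constraints natively, while Optim’s plain LBFGS is unconstrained (Fminbox is the usual wrapper), so even with matched tolerances the two may not be doing identical work.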
Thank you for reading the post!