I’ve been comparing the performance of Optim.jl with Matlab’s `fminunc`, and I’ve noticed that Optim’s L-BFGS algorithm doesn’t always beat `fminunc`. I was wondering if anyone knows why this might be.
Here’s the setup:
- Use a single processor for each language (i.e. load Matlab with the `-singleCompThread` option; don’t use parallelization in Julia)
- Generate identical data sets [up to random noise] for a common maximum likelihood problem: the normal linear model
- Provide the analytical gradient of the objective function to each optimizer
- Estimate the maximum likelihood parameters using identical starting values [up to random noise]
- Compare the timing of the estimation under various scenarios: different optimization options or different starting values
- I’m using Julia 0.6.0 and Matlab R2015a on a RHEL Linux machine with an Intel Core i7 processor with 8 CPUs
The complete code for both examples is available as a gist here.
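For readers who don’t want to open the gist, here is a minimal sketch of this kind of problem in Julia. The sizes, seed-free simulated data, and parameter values are illustrative (not the ones from the gist), and the gradient signature `g!(G, θ)` follows the current Optim.jl API; older Optim versions ordered the arguments differently.

```julia
using Optim

# Simulate data for the normal linear model y = X*b + σ*ε
# (sizes and true values here are illustrative, not taken from the gist)
n, k = 1_000, 5
X = [ones(n) randn(n, k - 1)]
b_true = collect(1.0:k)
y = X * b_true + 0.5 * randn(n)

# Negative log-likelihood with θ = [b; log(σ)], so σ > 0 is automatic
function negloglik(θ)
    b, s = θ[1:k], exp(θ[end])
    r = y - X * b
    return n * log(s) + sum(abs2, r) / (2 * s^2)
end

# Analytical gradient, written in-place in the form Optim expects
function negloglik_grad!(g, θ)
    b, s = θ[1:k], exp(θ[end])
    r = y - X * b
    g[1:k] = -(X' * r) / s^2
    g[end] = n - sum(abs2, r) / s^2   # derivative w.r.t. log(σ)
    return g
end

θ0 = zeros(k + 1)   # a "good-ish" starting point near the origin
res = optimize(negloglik, negloglik_grad!, θ0, LBFGS())
```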
Some of my observations:
- `Optim` performs much better than `fminunc` when the starting values are “good” (i.e. close to the solution) but much worse when the starting values are “bad.” I’m surprised by this, given that this objective function is about as well-behaved as one could ask for.
- `Optim` goes much faster when using the `BackTracking` line search option (thanks, @Elrod!)
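For concreteness, here is how I’m switching the line search (a toy quadratic objective stands in for the likelihood; the `linesearch` keyword and `BackTracking` come from LineSearches.jl, assuming a recent Optim.jl/LineSearches.jl):

```julia
using Optim, LineSearches

# A smooth toy objective (illustrative), minimized at x = ones(5)
f(x) = sum(abs2, x .- 1)
g!(G, x) = (G .= 2 .* (x .- 1))

x0 = zeros(5)

# Default line search (HagerZhang) vs. BackTracking from LineSearches.jl
res_default = optimize(f, g!, x0, LBFGS())
res_bt      = optimize(f, g!, x0, LBFGS(linesearch = BackTracking()))
```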
Finally, some questions I have:
- Does anyone know why `Optim` seems to be a more efficient optimizer than `fminunc`, but less robust? What I mean is that `fminunc` can find the solution given pretty much any feasible set of starting values, whereas `Optim` takes much longer to find the solution, or sometimes doesn’t find it at all, given poor (but feasible) starting values.
- To clarify: is there any performance gain in `Optim` from specifying the gradient by hand rather than using `autodiff`? My reading of the docs suggests that there isn’t, but I wanted to make sure.
- Are there any other options or non-default settings I should know about that would improve performance? I didn’t see `BackTracking()` listed anywhere in the docs, so it seems I could be missing something major.
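To make the `autodiff` question concrete, here are the two ways of supplying the gradient side by side (a toy objective rather than the likelihood; `autodiff = :forward` is a keyword to `optimize` in recent Optim.jl versions):

```julia
using Optim

# Toy objective (illustrative), minimized at x = ones(5)
f(x) = sum(abs2, x .- 1)
x0 = zeros(5)

# Gradient via forward-mode automatic differentiation
res_ad = optimize(f, x0, LBFGS(); autodiff = :forward)

# Hand-written analytical gradient
g!(G, x) = (G .= 2 .* (x .- 1))
res_hand = optimize(f, g!, x0, LBFGS())
```

Both calls should reach the same minimizer; any timing difference would have to be measured (e.g. with BenchmarkTools) on the actual likelihood.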