Optim.jl v0.9.0 is out!

I’m very happy to announce that Optim.jl v0.9.0 is out, although only for Julia v0.6. So if you only use official releases, you’ll have to wait a bit to try it out.

Once again, thanks to Asbjørn Nilsen Riseth (@anriseth) and Christoph Ortner (@cortner) for their help and contributions. While I realize now that I failed to do so in the blog post, I also want to thank Tony Kelman (@tkelman) once again for his eagle eye that always spots my mistakes.

I’ve written a short description of the release over at http://www.pkofod.com/2017/06/02/optim-jl-v0-9-0/

Have fun, give stars, and tell your friends and family about Optim and Julia!


:heart_eyes:


Looks wonderful.

How does it compare to the MATLAB Optimization Toolbox, feature- and performance-wise?

Thank you.

Something we should test when constrained optimisation matures. For now, for unconstrained optimisation, I suspect Optim is far superior in both performance and robustness. But maybe the better question is how it will fare against Gurobi, Knitro, Ipopt, and the like.

Unless/until Optim supports sparse Jacobians and Hessians, I don’t think it’ll be comparable to established solvers on problems of any substantial size.

Do you mean for constrained optimization? And what do you mean by “support”?

I know you are simply answering the question, but let us also pay some attention to the different nature of the two types of projects. Optim is largely the result of people spending their free time on improving off-the-shelf, easy-to-use optimization routines in Julia, not the result of dedicated research effort over many years. Now, that does not make Optim any more efficient or capable, and it is no excuse for bugs or shortcomings, but it does play a role in what you can reasonably expect.

I believe we will get there in due time.

@pkofod,

From what I read, it is very impressive at any scale.

But I also think comparing it to MATLAB’s offering would be a great reference point.
I might try adding a few simple benchmarks to my MATLAB & Julia Benchmark.


Feel free :slight_smile:

Would you recommend 2-3 microbenchmarks (combinations of function, constraints, and solver) to compare?

Thank you.

I started to implement the Black-Box Optimization Benchmarking functions not long ago, but I haven’t gotten very far. A collection of the simple ones would make a nice benchmark.

BlackBoxOptim also has some of them:

https://github.com/robertfeldt/BlackBoxOptim.jl/blob/master/examples/benchmarking/compare_optimizers.jl#L148
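
Off the top of my head, a rough sketch of what timing a couple of the simple unconstrained ones could look like (just a sketch; the accessor names are from memory and the problem choices and timing methodology would need more care):

using Optim

# two classic smooth test problems
sphere(x) = sum(abs2, x)
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

for (name, f, x0) in (("sphere", sphere, fill(3.0, 10)),
                      ("rosenbrock", rosenbrock, zeros(2)))
    res = optimize(f, x0, BFGS())
    println(name, ": minimum ", Optim.minimum(res),
            " after ", Optim.iterations(res), " iterations")
end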

I feel like we are getting off topic, but please open a new topic in the optimization sub-forum if you want to discuss benchmarking optimization software in Julia. Though, do note that there is in fact a CUTEst.jl package that has a lot of problems suitable for Optim’s functionality.

Why did you remove autodiff from the options in v0.9.0?

Compare the master branch vs v0.7.x:
https://github.com/JuliaNLSolvers/Optim.jl/blob/master/src/types.jl

The docs haven’t been updated to reflect the loss of this functionality:
http://julianlsolvers.github.io/Optim.jl/latest/algo/autodiff/

A simple test produces the expected “MethodError: no method matching Optim.Options(; autodiff=true)” error:

using Optim

# Rosenbrock function
function f(x)
    return (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
end
initial_x = zeros(2)

# both calls throw the MethodError quoted above
Optim.minimizer(optimize(f, initial_x, BFGS(), Optim.Options(autodiff = true)))
Optim.minimizer(optimize(f, initial_x, Newton(), Optim.Options(autodiff = true)))

Should I write my own “g!” and “h!” functions, or has there been some major API change not discussed in the blog post?

I know that I changed the docs at some point, but this must have been reverted by accident. I can update the blog post (and docs) to reflect it, but for now you have to do

od = OnceDifferentiable(f, initial_x; autodiff = :finite)

or

od = OnceDifferentiable(f, initial_x; autodiff = :forward)
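
and then pass that object to optimize instead of the raw function. Roughly like this, using your Rosenbrock example (a sketch from memory; the Newton case would use a TwiceDifferentiable in a similar fashion):

using Optim

f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
initial_x = zeros(2)

# gradient computed with ForwardDiff via autodiff = :forward
od = OnceDifferentiable(f, initial_x; autodiff = :forward)

Optim.minimizer(optimize(od, initial_x, BFGS()))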

I’m on my way to bed, but will update tomorrow.


Even for unconstrained Newton, you’d want the ability to handle sparse Hessians. The last time I looked at the constrained Optim PR, the Jacobians and Hessians were using dense Arrays, IIRC, and the response to a comment on that was that it was a pre-existing limitation in Optim.

Can I please add another request for the docs to be fixed? I’m trying to learn how to use Optim for a problem, and right now I can’t figure out how to try out autodiff.

Sure… The solution is just above, and it’s not too hard to make changes to our docs, but I’ll get to it. Good to hear more people are trying it out.

I KNEW that I had changed the docs. I just got a question about something in the docs on Fminbox and was also certain that I had changed that. Went to check the docs: sure enough, it was the old version. Headed into docs/ and found https://github.com/JuliaNLSolvers/Optim.jl/blob/master/docs/src/algo/autodiff.md. As you can see, my memory was NOT failing me: I did actually change it. Apparently, Documenter.jl didn’t fire. Probably because I need to update a link somewhere that wasn’t changed when we changed organizations…

edit: the docs are now building again for Optim.jl
