What would you recommend for nonlinear optimization with optional gradient and Hessian?

Hi, I am solving a numerical box-constrained nonlinear optimization problem with hundreds of unknowns. I have currently implemented only the objective function, not the gradient or Hessian, but I expect to add them in the future.
I would like recommendations for a lightweight optimization tool that accepts optional gradient and Hessian information but can compute them numerically by itself when they are not provided. I have had good experience with sequential quadratic programming and would like that to be the underlying algorithm. I looked at several options, including Optim and NLopt, and neither seems to allow me to skip the gradient function.
So, what are my options? Do I have to implement gradients myself? Thanks a lot.

Hi @sunjin, the organization I’m part of, JuliaSmoothOptimizers, has an implementation of tron that handles box-constrained problems. You can define your problem and solve it with tron using the following code:

using NLPModels, JSOSolvers
nlp = ADNLPModel(objective, x0, lvar, uvar) # NLPModels 0.13; x0 is the initial guess, lvar/uvar are the bound vectors
output = tron(nlp)

This will use ForwardDiff to compute the derivatives internally. Alternatively, you can define your problem with some other tool, such as JuMP or AMPL, and solve it the same way.
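If it helps, here is a rough sketch of the JuMP route with a toy quadratic objective standing in for yours (it assumes NLPModelsJuMP's MathOptNLPModel wrapper; adjust the dimension, bounds, and objective to your problem):

using JuMP, NLPModelsJuMP, JSOSolvers

n = 100                                                    # number of unknowns (placeholder)
model = Model()
@variable(model, 0.0 <= x[i = 1:n] <= 2.0, start = 0.5)    # box constraints and starting point
@NLobjective(model, Min, sum((x[i] - 1)^2 for i = 1:n))    # toy objective, replace with yours
nlp = MathOptNLPModel(model)                               # wrap the JuMP model as an NLPModel
output = tron(nlp)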

Finally, you can define your functions manually. The code is longer, but I have a video here: JuliaSmoothOptimizers Tutorials - Defining your optimization model manually - part 1 - YouTube
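In case a written sketch helps alongside the video, the manual approach looks roughly like this. The struct name, objective, and derivatives below are placeholders, and the exact API varies a bit between NLPModels versions; tron also needs Hessian-vector products, so a hprod! method is included:

using NLPModels

mutable struct MyProblem <: AbstractNLPModel
  meta::NLPModelMeta
  counters::Counters
end

MyProblem(x0, lvar, uvar) =
  MyProblem(NLPModelMeta(length(x0), x0 = x0, lvar = lvar, uvar = uvar), Counters())

function NLPModels.obj(nlp::MyProblem, x)        # objective (placeholder quadratic)
  increment!(nlp, :neval_obj)
  return sum((x .- 1) .^ 2)
end

function NLPModels.grad!(nlp::MyProblem, x, g)   # hand-coded gradient
  increment!(nlp, :neval_grad)
  g .= 2 .* (x .- 1)
  return g
end

function NLPModels.hprod!(nlp::MyProblem, x, v, Hv; obj_weight = 1.0)  # Hessian-vector product
  increment!(nlp, :neval_hprod)
  Hv .= 2 .* obj_weight .* v
  return Hv
end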

Also, you can use ipopt instead of tron by adding NLPModelsIpopt and calling ipopt(nlp).
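For example, reusing the nlp object from above:

using NLPModelsIpopt
output = ipopt(nlp)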

Note: since the video came out, we’ve updated ADNLPModel, so the syntax shown there (NLPModels 0.12) is slightly different from the code above.


Optim doesn’t require a gradient. Just call optimize(f, u0).
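Since the problem above has box constraints, one way to write that with Optim (a sketch, assuming f, x0, lower, and upper are already defined on your side) is to wrap a gradient-based method in Fminbox; with no gradient supplied, Optim approximates it by finite differences:

using Optim

# Box-constrained minimization; the inner L-BFGS uses a finite-difference gradient by default.
result = optimize(f, lower, upper, x0, Fminbox(LBFGS()))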


Thank you. I will check it out.

You are right. Optim does seem to use finite differences if the gradient is not available.

Optim has support for ForwardDiff as well.
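For instance, passing autodiff = :forward asks Optim to build the gradient with ForwardDiff instead of finite differences (again assuming f and x0 are yours):

using Optim

# Gradient computed by forward-mode automatic differentiation.
result = optimize(f, x0, LBFGS(); autodiff = :forward)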
