How to choose a nonlinear optimizer (ecosystem)?

At work, we currently use JuMP + Gurobi to solve fairly large LPs in production. These LPs are linear approximations of much smaller nonlinear problems. For several reasons, including “Gurobi is expensive” and “this approach is sometimes slow,” I’m interested in trying out the smaller nonlinear formulation. However, I’m not sure where to start: JuMP + Ipopt? NLopt directly? Optim.jl? Something else?

I know the normal advice, especially for research, is “it depends on your problem; try out different solvers and formulations and see what works best.” However,

  • The exact problems we solve are likely to evolve over time as the business use case evolves. So I’m less interested in absolutely maximal performance on the current problems than I am in a setup which will be stable and [relatively] easy to work with as we update formulations over time.
  • Time is finite and I have to start somewhere, so I may as well start where success is most likely to be found!

A few details about my problem (nonlinear formulation):

  • There are typically 500-5000 variables.
  • The objective is a maximization. I’m ~95% sure (based on physical arguments) that the objective is concave, though I don’t have an analytical proof. Over the feasible region, the objective is definitely bounded.
  • Each variable has a min/max value. Other than that, the only constraints (in a minimal formulation with no helper variables) are a small handful (~10?) of linear constraints. In practice these constraints are rarely binding.
  • Evaluating the objective function takes about 0.01 seconds, give or take an order of magnitude with problem size.
  • I can provide exact gradients without automatic differentiation. I don’t think I can provide Hessians.

I greatly welcome any advice on which solvers / ecosystem seem most appropriate for this kind of problem!

2 Likes

Nonlinear solver or nonlinear optimization? Those are two very different things, and your discussion seemed very focused on nonlinear optimization, so I changed the title.

1 Like

JuMP + Ipopt is the way to go. Stay high level if you want to update the formulations over time.

2 Likes

IPOPT often performs very poorly without exact Hessians.

If your objective and constraints are implemented as Julia functions, it will be easy to model your problem using NLPModels.jl and the JuliaSmoothOptimizers ecosystem. Here’s a quick intro to NLPModels: https://juliasmoothoptimizers.github.io/NLPModels.jl/stable/tutorial/

We have solvers for unconstrained and bound-constrained problems. Our tools let you indicate that you want to use a limited-memory quasi-Newton Hessian approximation. For more general constraints, we’re finalizing an implementation of the augmented Lagrangian method. Our solvers are written in pure Julia, but if you wish, you can also pass NLPModels to IPOPT or KNITRO.
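A hedged sketch of those options (assuming I have the package names right: lbfgs and tron from JSOSolvers.jl, and LBFGSModel, the limited-memory quasi-Newton wrapper, from NLPModelsModifiers.jl; the tiny model below is just a placeholder):

```julia
using ADNLPModels, JSOSolvers, NLPModelsModifiers

# Placeholder model using the AD-based model type described below.
nlp = ADNLPModel(x -> sum(x .^ 2), ones(5))

stats = lbfgs(nlp)             # L-BFGS solver: first derivatives only
# or keep tron's trust region but swap in a quasi-Newton Hessian:
stats = tron(LBFGSModel(nlp))
```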

Even if your problem is modeled with JuMP, you can turn it into a MathOptNLPModel and use our solvers.
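A minimal sketch of that bridge (assuming NLPModelsJuMP.jl; the toy model is a stand-in, not your formulation):

```julia
using JuMP, NLPModelsJuMP, JSOSolvers

# Placeholder JuMP model (NLPModels solvers minimize, hence the negation).
jump_model = Model()
@variable(jump_model, 0 <= x[1:3] <= 10, start = 1.0)
@NLobjective(jump_model, Min, -sum(log(1 + x[i]) for i in 1:3))

nlp = MathOptNLPModel(jump_model)  # wrap the JuMP model as an NLPModel
stats = tron(nlp)                  # then any JSO solver applies
```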

We have a model type where derivatives are computed by way of ForwardDiff. A more sophisticated one with sparse derivatives computed via AD is coming up.
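A sketch of what that model type looks like for a problem shaped like yours (the objective is a made-up concave stand-in, negated because these solvers minimize):

```julia
using ADNLPModels, JSOSolvers

n = 1000
f(x) = -sum(log1p, x)                # stand-in concave objective, negated to minimize
x0   = fill(1.0, n)                  # starting point
lvar = zeros(n)                      # per-variable lower bounds
uvar = fill(10.0, n)                 # per-variable upper bounds

nlp = ADNLPModel(f, x0, lvar, uvar)  # derivatives via ForwardDiff
stats = tron(nlp)                    # bound-constrained solver
println(stats.status, ", objective = ", stats.objective)
```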

@abelsiqueira put together several tutorials and introduction videos on his YouTube channel: https://www.youtube.com/channel/UCrHWmb1a2JW50QovKgkcKCQ

Feel free to let us know more about your problems and requirements.

5 Likes

@ChrisRackauckas
Thanks. I admit I’m doubting my vocabulary memory a bit, but I’m relatively confident “solver” is a common term (at least in the operations research community) for an optimizer. But whatever labels the topic and helps people searching in the future is best, so thanks for the update :slight_smile:

@odow
Thanks. I’ve had great luck with JuMP before, so it’d be nice to stay in the ecosystem! I’m a little confused by the documentation around user-defined nonlinear functions. The register function takes an argument for a Hessian, but this argument must be omitted?

Also, it looks like register requires specifying separate functions for the objective and gradient. For my specific problem, it will be much more efficient to calculate these jointly. Is there an obvious trick I’m missing for joint calculations?

@dpo
Much appreciated; I will check out the English-language tutorials linked. Is this a good one to start with?

The best way is to write out the algebraic form of each equation and let JuMP handle the AD.
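A minimal sketch of that route for a problem shaped like yours (placeholder objective; box bounds plus a single representative linear constraint):

```julia
using JuMP, Ipopt

n = 1000
model = Model(Ipopt.Optimizer)
@variable(model, 0 <= x[1:n] <= 10, start = 1.0)  # per-variable min/max
@constraint(model, sum(x) <= 5n)                   # one of the handful of linear constraints
@NLobjective(model, Max, sum(log(1 + x[i]) for i in 1:n))
optimize!(model)
value.(x)  # optimal point
```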

If you have common subexpressions in the objective and in the constraints, you can help JuMP by defining them as @NLexpression objects, just as you would with @expression objects for linear problems.
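For instance, continuing the hypothetical sketch above (the subexpression itself is made up):

```julia
# A subexpression shared by the objective and a constraint, defined once
# so JuMP evaluates and differentiates it only once per point:
@NLexpression(model, ratio[i = 1:n], x[i] / (1 + x[i]))
@NLobjective(model, Max, sum(log(1 + ratio[i]) for i in 1:n))  # replaces the objective above
@NLconstraint(model, sum(ratio[i] for i in 1:n) <= n / 2)
```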

If you can’t write out the algebraic form, then you can register your own functions and optionally provide first derivatives, as sketched below. But JuMP doesn’t support Hessians for multivariate user-defined functions. If all, or the majority, of your code is in user-defined functions, you might be best off trying something else like NLPModels.
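A sketch of registering a multivariate function with an exact gradient (the function is hypothetical; note the absence of a Hessian argument):

```julia
using JuMP, Ipopt

my_f(x...) = -sum(xi^2 for xi in x)     # hypothetical multivariate function
function ∇my_f(g, x...)                 # gradient fills `g` in place
    for i in 1:length(x)
        g[i] = -2 * x[i]
    end
    return
end

model = Model(Ipopt.Optimizer)
@variable(model, -1 <= y[1:3] <= 1)
register(model, :my_f, 3, my_f, ∇my_f)  # name, dimension, f, ∇f
@NLobjective(model, Max, my_f(y[1], y[2], y[3]))
optimize!(model)
```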

2 Likes

@evanfields My colleague @abelsiqueira recommends this one. Feel free to reach out.

3 Likes

@evanfields, this Wednesday, I’ll stream an example of creating your own model manually with NLPModels.jl. I can answer your question there if you’re able to participate.

Link: Twitch at 1:30 PM UTC-3 (BRT)

3 Likes