At work, we currently use JuMP + Gurobi to solve fairly large LPs in production. These LPs are linear approximations of much smaller nonlinear problems. For several reasons including “Gurobi is expensive” and “this approach is sometimes slow,” I’m interested in trying out the smaller nonlinear formulation. However, I’m not sure where to start: JuMP + Ipopt? NLopt directly? Optim.jl? Etc.
I know the normal advice, especially for research, is “it depends on your problem; try out different solvers and formulations and see what works best.” However,
- The exact problems we solve are likely to evolve over time as the business use case evolves. So I’m less interested in absolutely maximal performance on the current problems than I am in a setup which will be stable and [relatively] easy to work with as we update formulations over time.
- Time is finite and I have to start somewhere, so I may as well start where success is most likely to be found!
A few details about my problem (nonlinear formulation):
- There are typically 500-5000 variables.
- The objective is a maximization. I’m ~95% sure (based on physical arguments) that the objective is concave, though I don’t have an analytical proof. Over the feasible region, the objective is definitely bounded.
- Each variable has a min/max value. Other than that, the only constraints (in a minimal formulation with no helper variables) are a small handful (~10?) of linear constraints. In practice these constraints are rarely binding.
- Evaluating the objective function takes about 0.01 seconds, give or take an order of magnitude depending on problem size.
- I can provide exact gradients without autodifferentiation. I don’t think I can provide Hessians.
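For concreteness, here’s a minimal sketch of what I imagine the JuMP + Ipopt version would look like, using a placeholder concave objective and made-up bounds/constraint data (the real objective, `lb`/`ub`, `A`, and `b` would come from our problem). The exact gradient is supplied via JuMP’s user-defined operator API, and Ipopt’s L-BFGS Hessian approximation stands in for the Hessians I can’t provide:

```julia
using JuMP, Ipopt

n = 500                                  # typical problem size

# Hypothetical stand-ins for the real objective and its exact gradient.
f(x...) = -sum((xi - 1.0)^2 for xi in x)             # placeholder concave objective
function ∇f!(g::AbstractVector, x...)                # exact gradient, filled in-place
    for i in eachindex(x)
        g[i] = -2.0 * (x[i] - 1.0)
    end
    return
end

# Placeholder data: per-variable bounds and a small handful of linear constraints.
lb = fill(-10.0, n)
ub = fill(10.0, n)
A = rand(10, n)                          # ~10 linear constraints, rarely binding
b = fill(1e4, 10)                        # loose right-hand side so they stay slack

model = Model(Ipopt.Optimizer)
set_silent(model)
# No Hessians available, so ask Ipopt for an L-BFGS approximation.
set_attribute(model, "hessian_approximation", "limited-memory")

@variable(model, lb[i] <= x[i = 1:n] <= ub[i])
@constraint(model, A * x .<= b)

# Register the objective with its exact gradient (no autodiff).
@operator(model, op_f, n, f, ∇f!)
@objective(model, Max, op_f(x...))

optimize!(model)
```

This is only a sketch of the shape of the model, not a tuned setup; in particular I don’t know whether splatting hundreds of variables into a single user-defined operator is the recommended pattern at this scale.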
I greatly welcome any advice on which solvers / ecosystem seem most appropriate for this kind of problem!