Global Constrained Nonlinear Optimization

I am looking for a global optimizer to solve a nonlinear problem with three variables and two nonlinear inequality constraints. So far, everything I’ve tried delivers only local solutions, i.e., the solution depends heavily on the initial values. Do you have any recommendations?

Starting with the obvious: this problem is non-convex, right? Otherwise, any local solver would have given you a global optimum already.

The ANSI group at Los Alamos develops and maintains the Alpine global solver, which can solve nonconvex problems to global optimality.
You can access it from JuMP.
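
For a problem of this size it could look roughly like the sketch below. The objective, constraints, bounds, and sub-solver choices are placeholders, not your actual problem; Alpine needs an NLP and a MIP sub-solver, here Ipopt and HiGHS:

```julia
using JuMP, Alpine, Ipopt, HiGHS

# Alpine needs sub-solvers: an NLP solver for local solves and a MIP solver
# for its relaxations (these particular choices are just one possibility).
ipopt = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
highs = optimizer_with_attributes(HiGHS.Optimizer, "output_flag" => false)

model = Model(optimizer_with_attributes(Alpine.Optimizer,
    "nlp_solver" => ipopt,
    "mip_solver" => highs,
))

# Finite variable bounds are important for the relaxations.
@variable(model, -5 <= x[1:3] <= 5)

# Placeholder objective and two placeholder nonlinear inequality constraints.
@NLobjective(model, Min, x[1]^4 - 3 * x[1] * x[2] + x[3]^2)
@NLconstraint(model, x[1]^2 + x[2]^2 - x[3] <= 1)
@NLconstraint(model, x[1] * x[2] * x[3] <= 2)

optimize!(model)
value.(x), objective_value(model)
```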

Given that your problem has relatively small dimension, interval arithmetic-based methods such as IntervalOptimisation might give you good results as well.
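
Note that IntervalOptimisation only handles box (bound) constraints directly, so the two nonlinear constraints would have to be handled separately, e.g. via penalization or interval constraint propagation. A minimal unconstrained sketch, assuming the classic IntervalArithmetic/IntervalOptimisation API and a placeholder objective:

```julia
using IntervalArithmetic, IntervalOptimisation

# Placeholder objective in three variables.
f(x) = x[1]^4 - 3 * x[1] * x[2] + x[3]^2

# Search box for the three variables.
X = IntervalBox(-5..5, 3)

# Returns a rigorous enclosure of the global minimum over X and the
# remaining candidate boxes that may contain minimizers.
global_min, minimisers = minimise(f, X)
```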

Finally, if your objective and constraints are quadratic functions, Gurobi supports solving those problems to global optimality. Note that any polynomial function can be reformulated into (possibly several) quadratic constraints, e.g., y = x^3 can be written as y = x z, z = x^2.
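
A rough JuMP + Gurobi sketch of that reformulation (bounds and objective are placeholders; the NonConvex parameter is what tells Gurobi to solve the nonconvex quadratic problem to global optimality):

```julia
using JuMP, Gurobi

model = Model(Gurobi.Optimizer)
# Allow nonconvex quadratic objectives/constraints.
set_optimizer_attribute(model, "NonConvex", 2)

@variable(model, -5 <= x <= 5)
@variable(model, z)   # auxiliary variable representing x^2
@variable(model, y)   # represents x^3 = x * z

@constraint(model, z == x^2)     # quadratic equality
@constraint(model, y == x * z)   # bilinear equality, so y == x^3

# Placeholder quadratic objective.
@objective(model, Min, y + x^2)

optimize!(model)
value(x), value(y)
```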

Maybe try Fresa.jl, which is in the General registry? It has been developed for such problems and can work well. Happy to discuss offline if you wish.

There is also the ISRES algorithm in NLopt.
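
A minimal NLopt.jl sketch with ISRES (objective, constraints, and bounds are placeholders; ISRES needs finite variable bounds, and constraints are written in the form c(x) <= 0):

```julia
using NLopt

opt = Opt(:GN_ISRES, 3)              # ISRES: global, derivative-free
opt.lower_bounds = fill(-5.0, 3)     # ISRES requires finite bounds
opt.upper_bounds = fill(5.0, 3)
opt.maxeval = 200_000

# NLopt always passes a gradient buffer; derivative-free algorithms ignore it.
opt.min_objective = (x, grad) -> x[1]^4 - 3 * x[1] * x[2] + x[3]^2   # placeholder

# Constraints expressed as c(x) <= 0.
inequality_constraint!(opt, (x, grad) -> x[1]^2 + x[2]^2 - x[3] - 1, 1e-8)
inequality_constraint!(opt, (x, grad) -> x[1] * x[2] * x[3] - 2, 1e-8)

minf, minx, ret = optimize(opt, zeros(3))
```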

You could use PRIMA or Evolutionary, both of which have derivative-free global optimizers that support constraints. If you want to avoid rewriting your problem, you might want to use the Optimization.jl wrappers for them.

You could also use many of the optimizers available through the MathOptInterface wrapper (MathOptInterface.jl · Optimization.jl), including ISRES from NLopt, as Steven mentioned above.

PRIMA only has local optimization algorithms, I believe. And Evolutionary doesn’t support nonlinear constraints.

Ah, you are right on PRIMA; COBYLA is local. Evolutionary does support nonlinear constraints: Constraints · Evolutionary.jl

Only approximately — it asks you to express the constraints as penalties, but it looks like it does so in a way that doesn’t guarantee that the constraints are actually satisfied.

If you add a finite penalty for violating a constraint, as Evolutionary.PenaltyConstraints and Evolutionary.WorstFitnessConstraints do, then the optimization is likely to converge to a solution that trades off a constraint violation for an improvement in the objective. You only converge to a feasible solution in the limit where you ramp up the penalty coefficient to infinity.
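
Schematically, for an inequality constraint g(x) <= 0 and a penalty coefficient μ, the penalized objective looks something like this (illustrative names only):

```julia
# Quadratic penalty for g(x) <= 0: for any finite μ the optimizer may accept
# a small constraint violation if it buys a larger decrease in f.
penalized(f, g, μ) = x -> f(x) + μ * max(0.0, g(x))^2
```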

To ensure that you actually satisfy the constraint, one option is a barrier function that diverges as you approach a constraint, but these have the tradeoff that they prevent you from reaching the boundary of the feasible region (i.e. you can’t reach exact equality for an inequality constraint). Another option is an augmented Lagrangian method, which adds Lagrange multipliers in addition to a penalty, so that you can converge to a feasible solution with a finite penalty.

The nice thing about the augmented Lagrangian method is that it allows you to apply any unconstrained optimization method (global or local) to a nonlinearly constrained problem. (NLopt includes this technique.)
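
For example, a sketch with NLopt.jl's AUGLAG wrapper (placeholder objective and constraints; the subsidiary algorithm chosen here is arbitrary and could just as well be a local one):

```julia
using NLopt

# Outer augmented-Lagrangian wrapper: it owns the nonlinear constraints and
# folds them into a sequence of penalized subproblems.
opt = Opt(:AUGLAG, 3)
opt.lower_bounds = fill(-5.0, 3)
opt.upper_bounds = fill(5.0, 3)
opt.xtol_rel = 1e-6
opt.min_objective = (x, grad) -> x[1]^4 - 3 * x[1] * x[2] + x[3]^2   # placeholder
inequality_constraint!(opt, (x, grad) -> x[1]^2 + x[2]^2 - x[3] - 1, 1e-8)
inequality_constraint!(opt, (x, grad) -> x[1] * x[2] * x[3] - 2, 1e-8)

# Subsidiary solver for the penalized subproblems; any NLopt algorithm
# (global or local) can go here.
sub = Opt(:GN_CRS2_LM, 3)
sub.maxeval = 50_000
opt.local_optimizer = sub

minf, minx, ret = optimize(opt, zeros(3))
```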

Yes, I plan to support creating the augmented Lagrangian in Optimization.jl natively. It should be pretty interesting to be able to use a much bigger set of optimizers with nonlinear constraints than is possible now.

Percival.jl is also a native Julia implementation of the augmented Lagrangian algorithm.
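
A minimal sketch of calling it through NLPModels, assuming the ADNLPModel constructor for problems with general constraints (objective and constraints are placeholders):

```julia
using ADNLPModels, Percival

f(x) = x[1]^4 - 3 * x[1] * x[2] + x[3]^2              # placeholder objective
c(x) = [x[1]^2 + x[2]^2 - x[3], x[1] * x[2] * x[3]]   # constraint functions

# Constraints expressed as lcon <= c(x) <= ucon; here c(x) <= [1, 2].
nlp = ADNLPModel(f, zeros(3), c, fill(-Inf, 2), [1.0, 2.0])

stats = percival(nlp)            # augmented Lagrangian solve
stats.solution, stats.objective
```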

For reference, Manopt also has ALM: Augmented Lagrangian Method · Manopt.jl
