# Global, non-convex, smooth and differentiable optimisation?

I have a loss function that:

• Is expressed as $\lVert \mathbf y - \mathbf A'\mathbf p(\mathbf x)\rVert_2^2$, where $\mathbf y$ is a simple vector, $\mathbf A$ is a simple (lower-triangular) matrix, and $p_1,\dots,p_m$ are nasty polynomials.
• The polynomials $p_1,\dots,p_m$ are computed through a recursive implementation, say a function `p(m, x)` that computes all of them at once and which should be considered a black box (the number of coefficients of the polynomials is in the millions if we try to express them explicitly…).
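In Julia terms the setup looks roughly like this (here `y`, `A`, and the stand-in `p` are placeholders for the real problem data, which is far nastier):

```julia
using LinearAlgebra

# Placeholder data: in the real problem, `y` and `A` come from the
# application and `p(m, x)` is the recursive black box returning
# [p_1(x), ..., p_m(x)].
m = 4
y = rand(m)
A = LowerTriangular(rand(m, m))

# Toy stand-in for the black-box recursion.
p(m, x) = [sum(x .^ k) for k in 1:m]

# The loss: squared 2-norm of the residual y - A' * p(x).
loss(x) = sum(abs2, y .- A' * p(m, x))
```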

Moreover:

• The loss is written in pure Julia and automatic differentiation can pass through it.
• But the very polynomial nature of the loss gives it a huge number of local minima, and any gradient descent (e.g. `Optim.LBFGS()`) gets stuck in the nearest local minimum.

For all these reasons, I am currently optimizing with `Optim.ParticleSwarm()`, a global algorithm for non-convex, non-differentiable functions. It works well, but it’s quite slow.
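For reference, a minimal `ParticleSwarm` call in Optim looks something like this (the objective and bounds are made up; `n_particles` is one of the knobs trading speed against search quality):

```julia
using Optim

# Toy multimodal objective standing in for the real loss.
f(x) = sum(abs2, x) + sum(sin.(10 .* x))

lower = fill(-2.0, 3)
upper = fill(2.0, 3)
x0 = zeros(3)

res = optimize(f, x0,
               ParticleSwarm(lower = lower, upper = upper, n_particles = 50),
               Optim.Options(iterations = 1000))
Optim.minimum(res)
```

Since `ParticleSwarm` is pure Julia, the same call also works in other precisions (e.g. a BigFloat `x0` and bounds).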

Is there a global optimization algorithm somewhere that is better adapted to problems where:

• The loss function is clearly non-convex with a lot of local minima.
• But it is smooth and automatic differentiation can pass through it, so local gradient descents are possible.

It is required that the optimization routines are implemented in pure Julia, as I use BigFloats. Furthermore, if linear equality and bound constraints are possible, that’s a bonus.

I tend to use MLSL for this sort of problem: it is a multi-start algorithm where you repeatedly run a local optimizer (which can be e.g. BFGS) from different starting points, with some techniques to prevent it from repeatedly searching the same local optima. For example, NLopt includes an MLSL implementation. It requires some experimentation to choose good termination tolerances for the local search.
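With NLopt.jl, an MLSL setup looks roughly like this (the objective, dimension, and tolerances are illustrative, and attaching the local search via `opt.local_optimizer` is my recollection of the interface — check the NLopt.jl docs). Note that NLopt only works in Float64:

```julia
using NLopt

# Toy multimodal objective; `grad` is filled in-place when the algorithm
# requests a gradient.
function f(x, grad)
    if length(grad) > 0
        grad .= 2 .* x .+ 10 .* cos.(10 .* x)
    end
    return sum(abs2, x) + sum(sin.(10 .* x))
end

n = 2
local_opt = Opt(:LD_LBFGS, n)   # gradient-based local search
local_opt.xtol_rel = 1e-8       # local tolerance: needs experimentation

opt = Opt(:G_MLSL_LDS, n)       # MLSL with low-discrepancy starting points
opt.lower_bounds = fill(-2.0, n)
opt.upper_bounds = fill(2.0, n)
opt.maxeval = 5000
opt.min_objective = f
opt.local_optimizer = local_opt

minf, minx, ret = optimize(opt, zeros(n))
```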

You didn’t say how many parameters you have, however. In high dimensions global optimization becomes pretty hard by any method!

> It is required that the optimization routines are implemented in pure Julia, as I use BigFloats.

NLopt calls an external C library and only handles Float64; you’d have to re-implement MLSL or some similar algorithm.
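A naive pure-Julia multistart, without MLSL’s clustering (so strictly weaker, but type-generic), could be sketched with Optim; this is only a sketch of the idea, not an MLSL implementation:

```julia
using Optim

# Toy multimodal objective; replace with the real loss.
f(x) = sum(abs2, x) + sum(sin.(10 .* x))

# Run LBFGS from random points in a box and keep the best result.
# MLSL additionally clusters starting points so it does not re-descend
# into basins it has already searched; this sketch does not.
function multistart(f, lower, upper; nstarts = 20)
    T = eltype(lower)
    best_x, best_f = copy(lower), T(Inf)
    for _ in 1:nstarts
        x0 = lower .+ T.(rand(length(lower))) .* (upper .- lower)
        res = optimize(f, x0, LBFGS(); autodiff = :forward)
        if Optim.minimum(res) < best_f
            best_f, best_x = Optim.minimum(res), Optim.minimizer(res)
        end
    end
    return best_x, best_f
end

# Everything is generic Julia, so BigFloat works:
xb, fb = multistart(f, fill(big"-2.0", 2), fill(big"2.0", 2))
```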

You should really try to see whether you can avoid BigFloats. For example, there are lots of ways of working with polynomials that are numerically unstable (e.g. using a monomial basis, Lagrange interpolation, etcetera) and might at first seem to require BigFloat, but which can be reformulated in a stable fashion (Chebyshev-polynomial bases, barycentric interpolation, etcetera) that works fine in double precision.
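As a small illustration of the point: the Vandermonde matrix of the monomial basis on equispaced points becomes catastrophically ill-conditioned as the degree grows, while the Chebyshev basis sampled at Chebyshev points stays almost perfectly conditioned:

```julia
using LinearAlgebra

n = 20

# Monomial basis on equispaced points: condition number blows up.
x = range(-1, 1, length = n)
V = [xi^j for xi in x, j in 0:n-1]
cond(V)   # huge (grows exponentially with n)

# Chebyshev basis at Chebyshev points: condition number stays O(1).
xc = [cos((2k + 1) * π / (2n)) for k in 0:n-1]
T = [cos(j * acos(xi)) for xi in xc, j in 0:n-1]
cond(T)   # ≈ √2
```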


If you are willing to write your problem as a JuMP model, give EAGO.jl (PSORLab/EAGO.jl, a development environment for robust and global optimization) or Alpine.jl (lanl-ansi/Alpine.jl, a JuMP-based global optimization solver for non-convex programs) a try.

Edit: I plan to add these to Nonconvex.jl, automatically extracting the functions’ expressions, once I get some time to integrate Nonconvex with ModelingToolkit.


Can a JuMP model use a standard Julia function as an objective function? In that case it might be easy to write my problem as a JuMP model, and I’ll try those two optimizers, for sure.

It’s not impossible, but only EAGO may work then. See Nonlinear Modeling · JuMP. If you don’t mind waiting, check Nonconvex again in a few weeks.
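For the record, with the JuMP nonlinear interface of that era, a plain Julia function is handed to the model by registering it as a user-defined function. A sketch with EAGO as the solver (the objective here is just a stand-in):

```julia
using JuMP, EAGO

# A plain Julia function used as the objective.
myloss(a, b) = (a - 1)^2 + 100 * (b - a^2)^2

model = Model(EAGO.Optimizer)
@variable(model, -5 <= x <= 5)
@variable(model, -5 <= y <= 5)

# Register the function so JuMP can call it (and AD it) in @NLobjective.
register(model, :myloss, 2, myloss; autodiff = true)
@NLobjective(model, Min, myloss(x, y))

optimize!(model)
value(x), value(y)
```

Whether a given global solver can exploit such a registered black-box function is solver-dependent, which is why only EAGO may work here.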


No, we already did a lot of investigation and found that multiple precision is indeed needed for this problem. The polynomials themselves are dense, in the sense that they have millions/billions of coefficients in the standard monomial basis (I cannot even compute them). The algorithm I have for the loss is currently stable with BigFloats, and that’s the reason I’m on Julia. MultiFloats.jl also works well.

If there is no Julia-native MLSL implementation, then this is not a solution for me.

I do mind waiting, but I’ll definitely check it. Is there already a repo with some details about the infrastructure you are implementing?

The work will be in Nonconvex.jl (mohamed82008/Nonconvex.jl, a toolbox for non-convex constrained optimization). There is no description other than that I want to run ModelingToolkit on the function, extract its expression, and pass it to MathOptInterface so that JuMP-based solvers have access to it.

That looks perfect for what I want to do: it lets me specify my objective function as a genuine Julia function. Definitely tell me when you have a first working version.

Isn’t this basically what GalacticOptim.jl is doing?


So many solvers and packages! Every day I find at least one new one.

So I’m trying EAGO through JuMP, but it seems BigFloats are not possible… I cannot find how to specify that I want my variables to be BigFloats.

JuMP only supports Float64 currently (ref: Generic numeric type in JuMP · Issue #2025 · jump-dev/JuMP.jl).


Thanks a lot, I hadn’t found that myself. So JuMP is not for me; I’m stuck with Optim.ParticleSwarm() for the moment, then.

You can use EAGO directly without JuMP.

I’m sorry, but I did not find an example without JuMP. As JuMP standards and syntax are quite new to me, I assumed that EAGO could not be used by itself. Do you have some kind of example? I found nothing in the documentation (at a quick glance).

Not that I know of.

There is a WIP PR at

which will hopefully be finished soon, under GSoC.


As I understand it, this relies on NLopt.jl for local searches, so no BigFloats? Is there a possibility to use native Julia local searches (like Optim.LBFGS, for example), to keep the whole thing type-agnostic?

Yes, you can use any local method you want; just use a closure. I will need to document it, but in the meantime, look at the source.
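Such a closure, wrapping Optim as the local method, might look like this (the exact shape the package expects — what it passes in and what it wants back — is a guess; check the source for the real interface):

```julia
using Optim

# Stand-in objective; replace with the real loss.
loss(x) = sum(abs2, x) + sum(sin.(10 .* x))

# A closure of the shape "starting point in, (minimizer, minimum) out",
# wrapping a pure-Julia, type-generic local search:
local_search = x0 -> begin
    res = optimize(loss, x0, LBFGS(); autodiff = :forward)
    (Optim.minimizer(res), Optim.minimum(res))
end

# Works in any precision, e.g. BigFloat:
xmin, fmin = local_search(big.([0.1, 0.2]))
```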