JuliaOpt parallel algorithms

Hi,
I am currently using NLopt.jl to estimate parameters of differential equation systems. I am interested in speeding up the calculations, so I am searching for ways to solve the optimization problem in parallel (using multiple workers on different nodes, threads, etc.). Are there ways to handle this with JuliaOpt?

An important point to add is that @ivborissov’s objective function cannot easily be parallelized over more than 3 points, meaning that the parallelism needs to come from elsewhere (usually the objective function is the place to parallelize!).

One trivial way to parallelize an optimization problem is to use local optimizers and parallelize over the search domain.

using NLopt  # `opt` is assumed to be an already-configured NLopt.Opt for a 2-parameter problem

all_minx = [Vector{Float64}[] for i in 1:Threads.nthreads()]
all_minf = [Float64[] for i in 1:Threads.nthreads()]
Threads.@threads for i in 1:N
  # each thread solves from its own random initial condition
  (minf, minx, ret) = optimize(opt, rand(2))
  push!(all_minx[Threads.threadid()], minx)
  push!(all_minf[Threads.threadid()], minf)
end

minxs = vcat(all_minx...)
minfs = vcat(all_minf...)

That’s an easy way to solve with N random initial conditions in parallel using threads, and collect the minx and minf values. You can then extend this to solve over a grid of initial conditions by using i to determine what the IC is, or you can change this to be an @parallel or pmap call (see the sketch below). This is useful because local optimizers converge to local minima, so if you have good enough coverage you’ll hit the global one: the “true minimum” can be taken as minimum(minfs), and the corresponding index in minxs gives the argmin.
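For the distributed variant, here is a minimal sketch using pmap over a grid of initial conditions. The Nelder-Mead choice, the worker count, and the quadratic objective are just stand-ins for whatever your real NLopt setup and ODE loss look like:

using Distributed
addprocs(4)                                   # arbitrary worker count

@everywhere using NLopt
@everywhere function solve_from(x0)
    opt = Opt(:LN_NELDERMEAD, 2)              # derivative-free local optimizer, 2 parameters
    opt.min_objective = (x, g) -> (x[1] - 1)^2 + (x[2] + 2)^2   # toy stand-in objective
    opt.xtol_rel = 1e-6
    return optimize(opt, x0)                  # returns (minf, minx, ret)
end

# grid of initial conditions over [0, 1]^2, built from the loop index
ics = [[x, y] for x in range(0, 1, length = 10) for y in range(0, 1, length = 10)]
results = pmap(solve_from, ics)

minfs = [r[1] for r in results]
minxs = [r[2] for r in results]
best  = argmin(minfs)                         # minxs[best] is the candidate global minimizer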

This is different from a parallel optimization algorithm though, and I am not aware of an existing implementation of one.

Chris, thanks! Yes, it is a good idea to solve the problem with a wide range of initial conditions. I would also like to know whether there are optimization algorithms that can be effectively parallelized.

Most of the standard methods are very sequential, so they cannot be. If the system is large enough, it can be parallelized fairly effectively by parallelizing the solution of the linear systems and having the user parallelize the objective function. Since many optimization algorithms do some form of linear solve as their main step, this works for sufficiently large systems.
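To illustrate what “having the user parallelize the objective function” can look like: if the loss decomposes over independent experimental conditions (or datasets), those can be simulated on separate threads inside the objective itself. A minimal sketch, where simulate_condition is a hypothetical stand-in for solving the ODE system under one condition and scoring it against data:

# hypothetical per-condition loss; in practice this would solve the ODE
# system for that condition and compare against the matching data
simulate_condition(p, cond) = sum(abs2, p .- cond)

function objective(p, conditions)
    losses = zeros(length(conditions))
    Threads.@threads for i in eachindex(conditions)
        losses[i] = simulate_condition(p, conditions[i])
    end
    return sum(losses)
end

conditions = [rand(2) for _ in 1:8]   # stand-in for 8 independent datasets
objective(rand(2), conditions)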

But from what I know about your system from the Gitter chats, it doesn’t fall into this range. In this case, evolutionary and particle-swarm-based methods are the ones I know of that can be parallelized. Essentially these work by having a ton of individual agents moving through parameter space, so you can parallelize across the agents quite effectively. I don’t think these algorithms are implemented in Julia, or at least I haven’t seen a package for them. But you can take a look at the SimulatedAnnealing implementation in Optim.jl (though be careful: Experience with SimulatedAnnealing? · Issue #173 · JuliaNLSolvers/Optim.jl · GitHub) or at Evolutionary.jl, and maybe build (/contribute?) a parallel algorithm from one of those.
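To make the “parallelize across the agents” point concrete, here is a toy (μ+λ)-style evolutionary loop where the expensive step, evaluating every agent’s fitness, runs on threads. The fitness function and mutation rule are placeholders, not any particular package’s algorithm:

fitness(x) = sum(abs2, x .- 1)                # placeholder objective

function evolve(pop::Vector{Vector{Float64}}; generations = 100, σ = 0.1)
    fits = zeros(length(pop))
    for gen in 1:generations
        # the costly part: evaluate all agents in parallel
        Threads.@threads for i in eachindex(pop)
            fits[i] = fitness(pop[i])
        end
        # keep the better half, refill by mutating the survivors
        order = sortperm(fits)
        survivors = pop[order[1:end ÷ 2]]
        pop = vcat(survivors, [s .+ σ .* randn(length(s)) for s in survivors])
    end
    return pop[argmin([fitness(x) for x in pop])]
end

evolve([randn(2) for _ in 1:64])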

P.S. There is this package but I’ve never used it.

Chris, thanks.
There are some really nice ideas on how to run evolutionary algorithms in parallel.


Some parallel evolutionary algorithms are available in the BlackBoxOptim.jl package: https://github.com/robertfeldt/BlackBoxOptim.jl/blob/master/README.md
Hopefully it’s simple to test and see if they work for you; the README links to a parallel example.
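For reference, basic (serial) usage looks roughly like this; for the parallel/multithreaded evaluation options, follow the example linked from the README, since the exact keywords depend on the package version. The Rosenbrock function here is just a stand-in for the real ODE loss:

using BlackBoxOptim

rosenbrock2d(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

res = bboptimize(rosenbrock2d;
                 SearchRange = (-5.0, 5.0),
                 NumDimensions = 2,
                 MaxSteps = 50_000)

best_candidate(res), best_fitness(res)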


Oh nice! Never knew BlackBoxOptim.jl had this!

Great, thanks! I will test whether it works for my problem.