[ANN]: CMAEvolutionStrategy.jl

From the README:

The CMA Evolution Strategy is a stochastic method for derivative-free optimization of potentially non-linear, non-convex, or noisy functions over continuous domains (Hansen 2016). A brief discussion of its performance in practice can be found on Wikipedia.

The default settings and implementation details closely follow Hansen 2016 and pycma.

For details on noise handling see Hansen 2009.

Example

julia> function rosenbrock(x)
           n = length(x)
           sum(100 * (x[2i-1]^2 - x[2i])^2 + (x[2i-1] - 1)^2 for i in 1:div(n, 2))
       end

julia> using CMAEvolutionStrategy

julia> result = minimize(rosenbrock, zeros(6), 1.)
(4_w,9)-aCMA-ES (mu_w=2.8,w_1=49%) in dimension 6 (seed=17743412058849885570, 2020-05-12T16:22:27.211)
  iter   fevals   function value      sigma   time[s]
     1        9   6.06282462e+02   8.36e-01     0.008
     2       18   6.00709117e+02   8.42e-01     0.009
     3       27   2.40853796e+02   7.84e-01     0.009
   100      900   8.25748973e-01   1.44e-01     0.021
   200     1800   2.21358637e-05   1.12e-02     0.040
   266     2394   5.58767672e-12   2.76e-05     0.051
Optimizer{Parameters,BasicLogger,Stop}
(4_w,9)-aCMA-ES (mu_w=2.8,w_1=49%) in dimension 6 (seed=17743412058849885570, 2020-05-12T16:22:27.254)
  termination reason: ftol = 1.0e-11 (2020-05-12T16:22:27.255)
  lowest observed function value: 1.076905008476142e-12 at [0.9999990479016964, 0.9999981609497738, 0.9999990365312236, 0.9999981369588251, 0.9999994689450983, 0.9999988356249463]
  population mean: [1.000000255106133, 1.0000004709845969, 1.0000006232562606, 1.0000012290059055, 0.9999998790530266, 0.9999997338544545]
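
The result can also be queried programmatically; a minimal sketch, assuming the accessors the README lists (xbest, fbest, population_mean):

julia> xbest(result)            # best point observed
julia> fbest(result)            # lowest observed function value
julia> population_mean(result)  # mean of the final population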

If you are interested, I’ll register it.

10 Likes

Could you register it? We would like to set this up in GalacticOptim.jl. Also, could you add a batched parallelism interface, i.e. something that hands the user N points to evaluate in parallel at the same time? It would be really nice to get 1000 parameter sets at once so I can send them to the GPU and pass the results back.
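
To make the request concrete, a batched objective might look like this (a hypothetical sketch of the interface, not an existing API; rosenbrock as defined above):

function rosenbrock_batch(xs::Vector{Vector{Float64}})
    # one candidate per element of xs; with 1000 candidates these would be
    # stacked into a matrix and evaluated in a single GPU kernel instead
    [rosenbrock(x) for x in xs]
end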

2 Likes

Thanks for the interest.

I see the following possibilities to switch between parallel and single evaluation (actually, by default I could just evaluate multi-threaded):

  1. A boolean keyword argument parallel_evaluation.
  2. Checking whether f has a method for batches, e.g. hasmethod(f, Tuple{Vector{Vector{Float64}}}) (sketched below).

Do you have any preference?
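
A minimal sketch of option 2 (not actual package code):

# pick the evaluation mode from the method table of f
evaluation_mode(f) =
    hasmethod(f, Tuple{Vector{Vector{Float64}}}) ? :parallel : :single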

A switch would be much more robust, because a check like hasmethod or applicable wouldn’t play well with duck-typed functions.
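
For example, a duck-typed objective leaves its argument unconstrained, so the check is vacuously true:

julia> f(x) = sum(abs2, x)       # an ordinary single-point objective

julia> hasmethod(f, Tuple{Vector{Vector{Float64}}})  # matches anyway
true

julia> f([zeros(2), zeros(2)])   # but batched evaluation fails at runtime
ERROR: MethodError: no method matching abs2(::Vector{Float64})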

1 Like

It’s registered now and has parallel_evaluation and multi_threading switches for the evaluation of the objective function.

1 Like

What does the multi_threading one do? Wouldn’t the user just multithread the objective themselves, as in jbrea/CMAEvolutionStrategy.jl’s runtests.jl on GitHub?

Yes, you can use parallel_evaluation = true and write your own multi-threaded evaluation of the objective function. Alternatively, you can pass your ordinary (single-evaluation) objective function and get a generic multi-threaded evaluation with multi_threading = true.
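
Concretely (a sketch, with the batched objective assumed to receive a vector of candidate vectors and return a vector of values, as discussed above):

# Option A: batched objective, parallelism under the user's control
function rosenbrock_batch(xs)
    ys = Vector{Float64}(undef, length(xs))
    Threads.@threads for i in eachindex(xs)
        ys[i] = rosenbrock(xs[i])
    end
    ys
end
minimize(rosenbrock_batch, zeros(6), 1., parallel_evaluation = true)

# Option B: single-point objective, generic multi-threaded evaluation
minimize(rosenbrock, zeros(6), 1., multi_threading = true)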

2 Likes