Latest recommendations for global optimization package

#21

I should have mentioned this earlier, but I wanted to try it myself and couldn’t really get it to work in the time I had available. Also, before looking at the actual code, I thought R-Optim.NelderMead in the plot was Nelder-Mead from R’s optim :slight_smile:


#22

You may be interested in this example of using simulated annealing to estimate a small DSGE model by GMM: https://github.com/mcreel/Econometrics/blob/master/Examples/DSGE/GMM/DoGMM.jl

Some discussion of this is found in the document econometrics.pdf, which is in the top level of the repo that contains this example. In my experience, Bayesian GMM-style methods are more reliable for this sort of model.
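
To give the flavour, here is a toy sketch of minimizing a GMM criterion with SAMIN (this is not the code from the linked repo; the data, moment conditions, bounds, and iteration budget are all made up for illustration):

using Optim, Statistics, Random

# Toy GMM illustration: estimate (μ, σ) of iid normal data by matching the first
# two moments, minimizing a quadratic criterion with SAMIN over a box.
Random.seed!(1)
y = 1.0 .+ 2.0 .* randn(1000)            # simulated data with true μ = 1, σ = 2

function gmm_obj(θ)
    g1 = mean(y) - θ[1]                  # first moment condition
    g2 = mean((y .- θ[1]).^2) - θ[2]^2   # second moment condition
    return g1^2 + g2^2                   # identity weighting for simplicity
end

lb, ub = [-10.0, 0.01], [10.0, 10.0]
res = Optim.optimize(gmm_obj, lb, ub, [0.0, 1.0], Optim.SAMIN(verbosity = 0),
                     Optim.Options(iterations = 10^6))
Optim.minimizer(res)                     # should be close to [1.0, 2.0]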


#23

If the function evaluation is expensive, you may also be interested in https://github.com/jbrea/BayesOpt.jl


#24

I’ve added SAMIN and made a couple of changes; now the best algorithms seem to be some of the BlackBoxOptim.jl ones. They don’t have very good final convergence, so I chained them into Nelder-Mead, which improved their performance in the benchmark.

I should run the benchmark with more repeats and at higher dimension, but the Python optimisers are so slow that it already takes quite a long time to complete. There may still be some issues with the options too.
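
For reference, a minimal sketch of that kind of chaining on a toy function (the search range, budget, and method settings below are placeholders, not the actual benchmark setup):

using BlackBoxOptim, Optim

# Toy objective: shifted sphere
f(x) = sum(abs2, x .- 1.5)

D = 3

# Global phase: a BlackBoxOptim method explores the box [-5, 5]^D
res = bboptimize(f; SearchRange = (-5.0, 5.0), NumDimensions = D,
                 MaxFuncEvals = 5_000, TraceMode = :silent)

# Local phase: polish the best candidate with Nelder-Mead for final convergence
polished = Optim.optimize(f, best_candidate(res), NelderMead())
Optim.minimizer(polished), Optim.minimum(polished)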


#25

Thanks for looking at this again! Did you run this on v1.0.x?

Edit: Just had a look; yeah, this seems much more in line with what I’d expect. SAMIN in action, @mcreel! It does quite well, considering the winners are designed to handle very tough landscapes.


#26

Depending on how irregular the test functions are, SAMIN’s performance could possibly be improved by setting the temperature reduction factor (rt), which is 0.9 by default, to a lower value, e.g., 0.5. This speeds it up, but increases the chance of missing the global optimum. The 0.9 value is pretty conservative, designed to deal with challenging functions.
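
For anyone trying this, rt is just a keyword of the SAMIN constructor, e.g. (toy function, arbitrary bounds and budget):

using Optim

# Toy multimodal objective (Rastrigin-style)
f(x) = 10 * length(x) + sum(x.^2 .- 10 .* cos.(2π .* x))

lb, ub = fill(-5.0, 3), fill(5.0, 3)
x0 = 10 .* rand(3) .- 5              # random start inside the box

# rt = 0.5 cools faster than the default 0.9: quicker, but riskier
res = Optim.optimize(f, lb, ub, x0, Optim.SAMIN(rt = 0.5, verbosity = 0),
                     Optim.Options(iterations = 10^6))
Optim.minimizer(res)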


#27

Some are quite tough, I’d say, although you cannot really put a number on toughness: http://coco.lri.fr/downloads/download15.01/bbobdocfunctions.pdf


#28

I agree that toughness is hard to quantify without experimentation. When I use SAMIN, I run it several times to verify that I always get the same final result. If I don’t, I set rt closer to 1, and start again. But doing that in a benchmarking framework would not be fun. I just wanted to point out that the default value is not necessarily a good choice for these test functions.
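
That procedure could be automated roughly like this (a sketch only; the number of repeats, tolerance, and rt update rule are arbitrary choices):

using Optim

# Re-run SAMIN from several random starts; if the minima don't agree,
# move rt closer to 1 (slower cooling) and try again.
function robust_samin(f, lb, ub; nruns = 5, rt = 0.9, tol = 1e-6, maxiter = 10^6)
    while true
        vals = map(1:nruns) do _
            x0 = lb .+ (ub .- lb) .* rand(length(lb))
            res = Optim.optimize(f, lb, ub, x0, Optim.SAMIN(rt = rt, verbosity = 0),
                                 Optim.Options(iterations = maxiter))
            Optim.minimum(res)
        end
        maximum(vals) - minimum(vals) < tol && return minimum(vals), rt
        rt += 0.5 * (1 - rt)         # e.g. 0.9 -> 0.95 -> 0.975 -> ...
    end
end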


#29

I ran it on v0.7. I’ve updated the benchmark script, but there may be some issues remaining on v1.0.

The BBOB functions are designed to test different aspects of the optimisers (it’s well explained in the docs), so one would check which functions SAMIN fails to solve and try to understand why. I’ve added an example script; here are the success rates it gives:

                                            Sphere => 1.0 
                              Ellipsoidal Function => 1.0 
                                   Discus Function => 0.0 
                               Bent Cigar Function => 0.15
                              Sharp Ridge Function => 0.0 
                         Different Powers Function => 0.65
                                Rastrigin Function => 0.1 
                              Weierstrass Function => 0.95
                             Schaffers F7 Function => 1.0 
 Schaffers F7 Function, moderately ill-conditioned => 0.25
       Composite Griewank-Rosenbrock Function F8F2 => 0.05
                                       Ellipsoidal => 1.0 
                                 Schwefel Function => 0.7 
                                         Rastrigin => 0.5 
                                   Buche-Rastrigin => 0.6 
                                      Linear Slope => 1.0 
                                 Attractive Sector => 1.0 
                         Step Ellipsoidal Function => 1.0 
                     Rosenbrock Function, original => 1.0 
                      Rosenbrock Function, rotated => 1.0 

#30

Thanks for the script; it’s interesting to play with. Below is a version that tries to get the right answer, even if it takes some time. The run length limit is set much higher, and SA is tuned to be more conservative. The results using Ntrials=10 are:

                                            Sphere => 1.0
                              Ellipsoidal Function => 1.0
                                   Discus Function => 1.0
                               Bent Cigar Function => 1.0
                              Sharp Ridge Function => 0.0
                         Different Powers Function => 1.0
                                Rastrigin Function => 1.0
                              Weierstrass Function => 0.9
                             Schaffers F7 Function => 1.0
 Schaffers F7 Function, moderately ill-conditioned => 1.0
       Composite Griewank-Rosenbrock Function F8F2 => 0.9
                                       Ellipsoidal => 1.0
                                 Schwefel Function => 1.0
                                         Rastrigin => 1.0
                                   Buche-Rastrigin => 1.0
                                      Linear Slope => 1.0
                                 Attractive Sector => 1.0
                         Step Ellipsoidal Function => 1.0
                     Rosenbrock Function, original => 1.0
                      Rosenbrock Function, rotated => 1.0

The Sharp Ridge function seems pretty tough! Is it possible that there’s an error in the coded true values? The thing I’m happy to see is that, in most cases, it is possible to get the solution if you’re willing to wait long enough.

The script:

using BlackBoxOptimizationBenchmarking, Optim
import BlackBoxOptimizationBenchmarking: minimizer, minimum, optimize
import Base.string

const BBOB = BlackBoxOptimizationBenchmarking

# search box and random starting point in D dimensions
box(D) = fill((-5.5, 5.5), D)
pinit(D) = 10 .* rand(D) .- 5

# add an optimize method for SAMIN: box-constrained search over [-5.5, 5.5]^D
optimize(opt::Optim.SAMIN, f, D, run_length) =
    Optim.optimize(f, fill(-5.5, D), fill(5.5, D), pinit(D), opt,
        Optim.Options(f_calls_limit=run_length, g_tol=1e-120, iterations=run_length))

run_length = 10_000_000
dimensions = 3
Ntrials = 10
Δf = 1e-6

#=
# test on a single function
f = BlackBoxOptimizationBenchmarking.F12 # 11 is discus, 12 bent cigar
success_rate, x_dist, f_dist, runtime = BBOB.benchmark(
    Optim.SAMIN(rt=0.95, nt=20, ns=20, f_tol=1e-12, x_tol=1e-6, verbosity=2),
    f, run_length, Ntrials, dimensions, Δf)
println(success_rate)
=#

# test on all functions: success rate for each BBOB function
perf = [f => BBOB.benchmark(
            Optim.SAMIN(rt=0.95, nt=30, ns=30, f_tol=1e-12, x_tol=1e-6, verbosity=0),
            f, run_length, Ntrials, dimensions, Δf)[1][1]
        for f in enumerate(BBOBFunction)]
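
To print perf in the aligned form shown above, something like this should do it (assuming string(f) returns the function name, as the script's import suggests; the padding width is arbitrary):

# print each function name right-aligned next to its success rate
for (f, rate) in perf
    println(lpad(string(f), 50), " => ", rate)
end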
    

#31

I’m testing that the minimum is correct when loading the package, so it should be fine. You can always check with:

    BBOB.F13(BBOB.F13.x_opt[1:3]) == BBOB.F13.f_opt

I think it’s a final convergence issue: if you look at the parameters that SAMIN returns, it’s pretty close to the minimum but not quite there. That’s why I chain it (and other optimizers) with NelderMead in the benchmark.


#32

Yes, with even more conservative tuning, SA also converges for that function, but it’s quite a difficult case; I’ve never encountered anything like that in my “real life” applications.