BlackBoxOptim -- SearchRange problem?

I have used BlackBoxOptim with success in the past. Now I am trying to solve an optimization problem L(p) with length(p) = 9 parameters. I then create a search range:

p_range = [(-1.e4,1e4),(-1.e4,1e4),(-1.e4,1e4),(1.5,5),(1.5,5),(1.5,5),(0.1,20),(0.1,20),(0.1,20)];

but get an error message:

julia> res_bb = bboptimize(L; SearchRange = p_range, TraceMode = :silent);
ArgumentError: Using Array{Tuple{Float64,Real},1} for SearchRange is not supported.

...

At the same time, the documentation says that SearchRange should be either a (min, max) tuple or a vector of such tuples.

Any suggestions? Is the documentation outdated, or did I do something horribly wrong?

I’ve run into this before; the problem is that SearchRange accepts either a tuple of two floats, or a vector of tuples of floats. But (1.5, 5), for example, is a tuple of a float and an integer. So just write 5.0 instead, and do the same for the other integers.
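That is, writing all bounds as floats makes every entry a Tuple{Float64,Float64}:

p_range = [(-1.0e4, 1.0e4), (-1.0e4, 1.0e4), (-1.0e4, 1.0e4),
           (1.5, 5.0), (1.5, 5.0), (1.5, 5.0),
           (0.1, 20.0), (0.1, 20.0), (0.1, 20.0)];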

Ah. I erroneously assumed that (e.g.) (0.5,5) is a tuple of floats just like [0.5,5] is an array of floats. But tuples are different… Thanks for reminding me!
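For reference, the difference shows up directly in the REPL:

julia> typeof((0.5, 5))
Tuple{Float64, Int64}

julia> typeof([0.5, 5])   # array construction promotes to a common element type
Vector{Float64}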

I’m a little bit surprised by BlackBoxOptim… I’m playing around with some least squares fitting, with simple data found on the internet:

  • The solid line shows the fit with 3 Gaussian-type basis functions where I have hand-tuned the location and spread (“mean”, “standard deviation”), and then solved the linear regression problem using LinearAlgebra.
  • The dotted line shows the fit when I have used BlackBoxOptim to find all 9 parameters: location, spread, and linear multiplying factor for all 3 basis functions.

To my eyes, the result using hand-tuning of location + spread looks better.

I have also compared the resulting Loss function for “optimal” parameters:

Here, I have fitted:

  • (1) a linear polynomial,
  • (2) a quadratic polynomial,
  • (3) a cubic polynomial,
  • (4) 3 Gaussian basis functions with hand-tuned location + spread,
  • (5) Gaussian basis functions via a looped, 2-stage procedure of (a) fixed location + spread, followed by linear regression to find the multiplying factors, and (b) fixed multiplying factors, followed by Newton minimization of location + spread (see the sketch after this list), and
  • (6) BlackBoxOptim for all parameters.
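A minimal sketch of the alternating 2-stage procedure in (5), assuming Optim.jl's Newton() for stage (b); this is my reconstruction with an arbitrary iteration count, not the original code:

using LinearAlgebra, Optim

X = [1.7, 1.5, 2.8, 5, 1.3, 2.2, 1.3]
Y = [368, 340, 665, 954, 331, 556, 375]
ϕ(x, M, Σ) = [exp(-0.5*(x-μ)^2/σ^2) for (μ,σ) in zip(M,Σ)]
design(M, Σ) = reduce(hcat, [ϕ(x, M, Σ) for x in X])    # 3×7 design matrix

function twostage(M, Σ; iters=20)
    β = zeros(1, length(M))
    for _ in 1:iters
        β = Y' / design(M, Σ)           # (a) linear regression for the multiplying factors
        loss(q) = norm(Y' - β*design(q[1:3], q[4:6]))
        q = Optim.minimizer(optimize(loss, vcat(M, Σ), Newton(); autodiff=:forward))
        M, Σ = q[1:3], q[4:6]           # (b) Newton update of location + spread
    end
    return β, M, Σ
end

β2, M2, Σ2 = twostage([1.5, 2.5, 5.0], [0.5, 3.0, 4.0])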

Question: Why does BlackBoxOptim perform comparatively poorly here? Are there any settings I should change?

(I’ve tried res_bb = bboptimize(L; SearchRange = p_range, TargetFitness = 5., TraceMode = :silent);, but it is not clear to me what TargetFitness does. In any case, introducing TargetFitness didn’t improve the result.)

Well, this is difficult to answer… What is the output of BBO, why does it terminate? Have you tried the different algorithms? Do you have an MWE or something similar so people can try to play with your problem?

MWE… packages:

using LinearAlgebra;
using BlackBoxOptim;

Data: taken from Simple Linear Regression Examples: Real Life Problems & Solutions

X = [1.7, 1.5, 2.8, 5, 1.3, 2.2, 1.3] |> x->reshape(x,(1,length(x)))    # 1×7 row matrix
Y = [368, 340, 665, 954, 331, 556, 375] |> x->reshape(x,(1,length(x)))  # 1×7 row matrix
#
x = range(minimum(X),maximum(X),length=50);   # grid for evaluating/plotting the fits

Case of linear regression with linear polynomial basis function:

ϕ(x) = [1, x];                        # linear polynomial basis
Φ = ϕ.(X)  |>  x -> reduce(hcat,x)    # design matrix: one column per data point
#
β_lin = Y/Φ;                          # right division = least-squares solution
L_lin = norm(Y-β_lin*Φ);              # residual norm (the loss)
#
f(x; β) = β*ϕ(x)                      # β must be passed explicitly
ylin = f.(x; β=β_lin) |> x-> reduce(vcat,x);
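(Side note, not in the original post: the right division Y/Φ above returns the least-squares solution, which you can verify against the normal equations.)

β_check = (Y*Φ')/(Φ*Φ')   # normal equations: β = YΦᵀ(ΦΦᵀ)⁻¹
β_check ≈ β_lin           # true up to floating-point error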

Higher order polynomials in the basis function… just add terms in ϕ(x).
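For example, a quadratic basis (my illustration, not in the original) is just:

ϕ(x) = [1, x, x^2];   # cubic: append x^3, and so on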

Hand-tuned location (mean M) and spread (standard deviation Σ) in gaussian basis functions:

M = [1.5,2.5,5]
Σ = [0.5,3,4]
ϕ(x) = [exp(-0.5*(x-μ)^2/σ^2) for (μ,σ) ∈ zip(M,Σ)]
Φ = ϕ.(X)  |>  x -> reduce(hcat,x)
#
β_gauss = Y/Φ
L_gauss = norm(Y-β_gauss*Φ)
#
f(x;β=β) = β*ϕ(x)
ygauss = f.(x;β=β_gauss) |> x->reduce(vcat,x);

BlackBoxOptim formulation:

# Range of parameters: beta_i, mu_i, sigma_i
p_range=[fill((-1.e4,1.e4),3)...,fill((1.5,5.),3)...,fill((0.1,20.),3)...];
#
# Loss function for BlackBoxOptim
#
L = p -> begin
    np = length(p) ÷ 3          # 3 parameters per basis function
    β = p[1:np]'                # multiplying factors (as a row vector)
    M = p[np+1:2np]             # locations
    Σ = p[2np+1:end]            # spreads
    ω = zip(M,Σ)
    Φ = ϕ.(X; Ω=ω) |> x->reduce(hcat,x)
    return norm(Y-β*Φ)
end
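(As a quick check of the loss wiring, not in the original post: evaluating L at the hand-tuned parameters should reproduce L_gauss.)

p_hand = [vec(β_gauss); M; Σ];   # hand-tuned parameters packed as [β; M; Σ]
L(p_hand) ≈ L_gauss              # should be true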
#
# Optimization using BBO
#
res_bb = bboptimize(L; SearchRange = p_range, TraceMode = :silent);
#
# Unpacking results
#
p_bb = best_candidate(res_bb)    # best parameter vector found
L_bb = best_fitness(res_bb);     # corresponding loss value
#
np = length(p_bb) ÷ 3
β_bb = p_bb[1:np]'
M_bb = p_bb[np+1:2np]
Σ_bb = p_bb[2np+1:end]
ω_bb = zip(M_bb,Σ_bb)
#
f(x; β) = β*ϕ(x; Ω=ω_bb)
ybbgauss = f.(x; β=β_bb) |> x->reduce(vcat,x);

I hope the code is readable.

… and here is the result of using Fminbox() in Optim:
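(The exact call isn't shown here; a minimal sketch of such a run, assuming Fminbox wrapping LBFGS and the same box constraints as p_range, would be:)

using Optim
lo = [fill(-1.0e4, 3); fill(1.5, 3); fill(0.1, 3)]    # lower bounds, as in p_range
hi = [fill( 1.0e4, 3); fill(5.0, 3); fill(20.0, 3)]   # upper bounds
p0 = (lo .+ hi) ./ 2                                  # start at the box center
res_opt = optimize(L, lo, hi, p0, Fminbox(LBFGS()))
p_opt = Optim.minimizer(res_opt)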

I have missed this question, so you have probably already moved on, but did you try just letting BBO run for a bit longer? The default is to run only 10K evaluations, which might not be enough for your problem. Just letting it run for 2-3 seconds seems to be enough here; there isn't much more fitness improvement after that (I also tried running for 30 seconds with a large population but got the same fitness of 31.11):

res_bb = bboptimize(L; SearchRange = p_range, MaxTime = 2.0);
Starting optimization with optimizer DiffEvoOpt{FitPopulation{Float64},RadiusLimitedSelector,BlackBoxOptim.AdaptiveDiffEvoRandBin{3},RandomBound{ContinuousRectSearchSpace}}
0.00 secs, 0 evals, 0 steps
0.50 secs, 125683 evals, 125618 steps, improv/step: 0.116 (last = 0.1156), fitness=31.112698372
1.00 secs, 254364 evals, 254355 steps, improv/step: 0.214 (last = 0.3103), fitness=31.112698372
1.50 secs, 378487 evals, 378537 steps, improv/step: 0.252 (last = 0.3284), fitness=31.112698372

Optimization stopped after 501991 steps and 2.00 seconds
Termination reason: Max time (2.0 s) reached
Steps per second = 250994.60
Function evals per second = 250943.60
Improvements/step = Inf
Total function evaluations = 501889


Best candidate found: [1384.49, 910.197, -1633.3, 2.71317, 2.60566, 2.48346, 10.2214, 0.723082, 1.5003]

Fitness: 31.112698372
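(For reference, a longer run with a larger population can be requested like this; PopulationSize is BBO's keyword for the population size, and 500 here is just an example value:)

res_bb = bboptimize(L; SearchRange = p_range, MaxTime = 30.0, PopulationSize = 500);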

In general, the BBO optimization methods take longer but can handle larger / more complex problems, so whether it is a good fit depends on the specific problem.

Hope this helps!

BTW, a fitness of 31.11 seemed competitive, since I got 52.59 from ygauss and a much worse value from the linear fit.