BlackBoxOptim - beginner's questions

I am new to solving optimization problems. So please excuse any ignorance in my questions.

I am using BlackBoxOptim.jl, and I have a few questions:

  1. Initial guess and search range.
    I see that there is an optional SearchRange argument. What happens when no range is specified? What is the initial guess? Is it random or deterministic? Is there a way to control the initial guess?

  2. Stopping condition.
    In case I do not know the minimum of the function and therefore cannot use the TargetFitness parameter, what is the stopping condition of the optimizer? I guess that it stops once it reaches MaxSteps iterations. But is there a way to provide other conditions, such as a desired tolerance on the fitness function?

  3. Keeping track of the results.
    Is there a way to log the values attempted by the optimizer? I am interested in seeing which regions of the parameter space were explored and what the value of the fitness function was.

  4. Choosing an optimizer.
    I do not know how to choose an optimizer. I read the guide in the documentation, so I am using the recommended adaptive_de_rand_1_bin_radiuslimited and the de_rand_1_bin DE variant, and I compare their results against random_search. Is there more advice on how to choose between / compare the optimizers? (My problem has a dimension of O(100).)


I’m in the same boat and have a lot of similar questions, but maybe I can help with some answers.

  1. It seems like the initial guess is randomly drawn from the initial range, which is (-1,1) if no range is provided.

  3. I just modified my objective function to print its arguments, which seems to work fine for logging.

Thanks for the input!

Sorry for missing these questions earlier. Anyway:

  1. The default SearchRange is (-1.0, 1.0). The default sampling is Latin hypercube sampling, with the goal of a better “spread” over the search space. There is currently no easy way to provide an initial guess, but this is one of the most requested features, so we are working on it.

  2. MaxSteps and MaxTime are the most commonly used stopping conditions, but you can also use a callback function and explicitly stop based on your own logic. There is an example of this in the test suite; see the bottom of: fixed shutdown methods and added test shutting down optimization from… · robertfeldt/BlackBoxOptim.jl@5ed8cd9 · GitHub

  3. There is logging and tracing, but they are not very flexible. If you want to log everything you can either do it in the fitness function or use a callback function (see the sketch after this list); see for example: BlackBoxOptim.jl/save_fitness_progress_via_callback_function.jl at master · robertfeldt/BlackBoxOptim.jl · GitHub

  4. The default choice (adaptive_de_rand_1_bin_radiuslimited) is very robust, in particular when your problem is large. You can also try dxnes (can be more effective on smaller problems) or generating_set_search (can be faster for “simpler” problems). The others rarely have an edge on any of these three in my experience, but YMMV. A new CMA-ES implementation is coming soon, for which I have high hopes.
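
To tie points 1-3 together, here is a minimal sketch. The Rosenbrock function and all parameter values are placeholders, and it assumes the CallbackFunction / CallbackInterval keywords and the best_fitness accessor used in the linked callback example:

using BlackBoxOptim

# Standard Rosenbrock test function; dimension 100 as in the question.
rosenbrock(x) = sum(100.0 * (x[i+1] - x[i]^2)^2 + (1.0 - x[i])^2 for i in 1:length(x)-1)

best_so_far = Float64[]  # fitness trace collected via the callback

res = bboptimize(rosenbrock;
    SearchRange = (-5.0, 5.0),  # explicit range instead of the (-1.0, 1.0) default
    NumDimensions = 100,
    MaxSteps = 100_000,         # stopping condition; MaxTime = 60.0 is an alternative
    CallbackFunction = oc -> push!(best_so_far, best_fitness(oc)),
    CallbackInterval = 0.0)     # 0.0 means call back as often as possible

println("best fitness: ", best_fitness(res))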

Hope this helps!


Hi, I have another beginner’s question.

I have one version of BlackBoxOptim where using Tuples for the SearchRange
(as in the introductory Rosenbrock example) works, and another version where the use of Tuples for SearchRange is rejected.

I could not find any documentation clarifying how to specify ranges.
[Update: I figured out that the range bounds cannot be Integers - you have to add the decimal points! Tuples are OK now; see the sketch below.]
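
For anyone else hitting this, a sketch of the difference (the bounds and the fitness function are arbitrary placeholders):

using BlackBoxOptim

f(x) = sum(abs2, x)

# Float bounds: accepted.
res = bboptimize(f; SearchRange = (-5.0, 5.0), NumDimensions = 2, MaxSteps = 1000)

# Integer bounds: rejected by some versions, so add the decimal points:
# res = bboptimize(f; SearchRange = (-5, 5), NumDimensions = 2, MaxSteps = 1000)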

I am also new to BlackBoxOptim.jl.

Where can I find references for the optimizers?

Warmest Regards

Sorry for the late reply; vacation here…

Anyway, the documentation is lacking, but if you want to see which methods are implemented you can check:

https://github.com/robertfeldt/BlackBoxOptim.jl/blob/master/src/optimization_methods.jl

Note, though, that the DE variants for single-objective problems and BORG for multi-objective problems have seen the most use and testing.

Hope this helps.

I have another question for @robertfeldt:
Is it possible to add an inequality constraint between two parameters, e.g., x[1] > x[2]?
Thanks!

These types of optimizers don’t support that out of the box, so the typical way to do it is to add a large penalty for violating the constraint, or to use a multi-objective optimizer and let the second fitness be the penalty for violating the constraint. Some code to give you an idea (this is for the first option; a sketch of the second follows the code):

const BasePenalty = 10000 # Select some large number that will dominate the "normal" fitness.

function penalty_constraint_x1_gt_x2(x)
    if x[1] < x[2]
        return BasePenalty + (x[2] - x[1]) # Smaller penalty as we get closer to non-violation of constraint
    else
        return 0.0
    end
end

my_new_fitness(x) = my_normal_fitness(x) + penalty_constraint_x1_gt_x2(x)

res = bboptimize(my_new_fitness; ...)
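
And for the second option, a sketch along the lines of the multi-objective (BORG) example in the package README, with the constraint penalty as a second objective (range and dimension are placeholders):

# Two objectives: the normal fitness and the constraint penalty.
constrained_fitness(x) = (my_normal_fitness(x), penalty_constraint_x1_gt_x2(x))

res = bboptimize(constrained_fitness;
    Method = :borg_moea,
    FitnessScheme = ParetoFitnessScheme{2}(is_minimizing = true),
    SearchRange = (-10.0, 10.0),
    NumDimensions = 2,
    MaxSteps = 50_000)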

Got it. Thanks!


I log the optimization progress (including fitness values and candidates) in my fitness function, but I am confused about what is actually recorded. For example, I choose my population size to be 1000 and perform 10000 evaluations. I expect the log file to contain 1000 * 10000 = 10^7 records, because each evaluation should feed all 1000 candidates to the fitness function. However, the log file contains only around 10000 fitness values, and the best fitness value in the log file seems to be worse than the best fitness value in the optimization output.
Could you explain this to me, please? Thank you!

I think I know why now. The evaluation count applies to each individual candidate, not to a whole population. So for the population to evolve a full generation, the number of evaluations should be an integer multiple of the population size. For example, if I select a population size of 50 and want the population to evolve 1000 times, I should set the number of evaluations to 50 * 1000 = 50,000 (see the sketch below).
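
A sketch of this setup; the sphere fitness is a placeholder, and MaxFuncEvals is the parameter I assume here for capping the number of individual fitness calls:

using BlackBoxOptim

sphere(x) = sum(abs2, x)  # placeholder fitness

# PopulationSize candidates per generation; MaxFuncEvals counts individual
# fitness calls, so 50 * 1000 = 50_000 allows roughly 1000 generations.
res = bboptimize(sphere;
    SearchRange = (-5.0, 5.0),
    NumDimensions = 10,
    PopulationSize = 50,
    MaxFuncEvals = 50_000)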

Hi,

Not sure what you want to achieve, since your example code is a bit confusing.

Currently you state that the number of dimensions is the length of p_init. But if p_init is a vector of initial guesses, the dimension is the length of one of those vectors. So it seems you want:

NumDimensions = length(first(p_init))

Please see the example code in the README of the package. It shows examples of giving a single initial guess (a vector of two floats, since the dimension there is 2) and of giving a vector of two such initial guesses; a sketch of both call forms is below.
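
A sketch of the two call forms (the fitness function and values are placeholders):

f(x) = sum(abs2, x)

x0 = [0.5, 0.5]  # a single initial guess; the dimension is 2
res = bboptimize(f, x0; SearchRange = (-1.0, 1.0), NumDimensions = 2)

x0s = [[0.5, 0.5], [-0.5, -0.5]]  # a vector of two initial guesses
res = bboptimize(f, x0s; SearchRange = (-1.0, 1.0), NumDimensions = 2)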

Hope this helps.

Thank you for your response. My main problem is how to set up bboptimize for vectorized problems. For example, suppose we want to estimate parameters of type Vector{Vector{Float64}}; how can we define bboptimize for this?
Note: The initial values for the parameters look like this:
p_init = [
    [0.2, 0.2, 0.2, 0.2, 0.2, 0.2],
    [0.4, 0.4, 0.4, 0.4, 0.4, 0.4],
    [0.3, 0.3, 0.3, 0.3, 0.3, 0.3]
]

BlackBoxOptim always optimises a flat vector of floats, but you can just reshape it before you apply your fitness function. So basically use vcat to turn the vector of vectors into a flat vector, and then reshape (or just view) it before applying your fitness function. Code to inspire you:

julia> p_init = [
       [0.2, 0.2, 0.2, 0.2, 0.2, 0.2],
       [0.4, 0.4, 0.4, 0.4, 0.4, 0.4],
       [0.3, 0.3, 0.3, 0.3, 0.3, 0.3]
       ]
3-element Vector{Vector{Float64}}:
 [0.2, 0.2, 0.2, 0.2, 0.2, 0.2]
 [0.4, 0.4, 0.4, 0.4, 0.4, 0.4]
 [0.3, 0.3, 0.3, 0.3, 0.3, 0.3]

julia> p_init_as_vector = vcat(p_init...)
18-element Vector{Float64}:
 0.2
 0.2
...
 0.3
 0.3

julia> function asvectors(x::Vector{Float64}, numvecs, lenvec)
           map(s -> view(x, s:(s+lenvec-1)), 1:lenvec:(numvecs*lenvec))
       end
asvectors (generic function with 1 method)

julia> asvectors(p_init_as_vector, 3, 6)
3-element Vector{SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}}:
 [0.2, 0.2, 0.2, 0.2, 0.2, 0.2]
 [0.4, 0.4, 0.4, 0.4, 0.4, 0.4]
 [0.3, 0.3, 0.3, 0.3, 0.3, 0.3]
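
To connect this back to bboptimize, a minimal sketch, where my_vector_fitness is a hypothetical fitness function expecting a vector of vectors:

# Hypothetical fitness that expects a Vector of parameter vectors.
my_vector_fitness(ps) = sum(sum(abs2, p) for p in ps)

# Wrapper so bboptimize sees a flat Vector{Float64}:
flat_fitness(x) = my_vector_fitness(asvectors(x, 3, 6))

res = bboptimize(flat_fitness;
    SearchRange = (0.0, 1.0),
    NumDimensions = 18)  # 3 vectors of length 6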