Initial guess and search range.
I see that there is an optional SearchRange argument. What happens when no range is specified? What is the initial guess? Is it random or deterministic? Is there a way to control the initial guess?

Stopping condition.
In case I do not know the minimum of the function, and therefore cannot use the TargetFitness parameter, what is the stopping condition of the optimizer? I guess it stops once it exceeds MaxSteps iterations, but is there a way to provide other conditions, e.g. a desired tolerance on the fitness function?

Keeping track of the results.
Is there a way to log the attempted values by the optimizer? I am interested to see which regions in the parameter space were explored, and what was the value of the fitness function.

Choosing an optimizer.
I do not know how to choose an optimizer. I read the guide in the documentation, so I am using the recommended adaptive_de_rand_1_bin_radiuslimited and the de_rand_1_bin DE, and I compare their results against random_search. Is there any further advice on how to choose between / compare the optimizers? (My problem has a dimension of O(100).)

Sorry for missing these questions earlier. Anyway:

Default SearchRange is (-1.0, 1.0). Default sampling is Latin hypercube sampling, with the goal of a better "spread" over the search space. There is currently no easy way to provide an initial guess, but this is one of the most requested features, so we are currently working on that.
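To make those defaults concrete, a minimal sketch (assuming the standard bboptimize keyword API; the MaxSteps value here is arbitrary):

```julia
using BlackBoxOptim

# Classic 2-D Rosenbrock, as in the package's introductory example.
rosenbrock2d(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

# Without SearchRange, every dimension defaults to (-1.0, 1.0) and the
# initial population is drawn by Latin hypercube sampling. Passing the
# range explicitly avoids relying on that default:
res = bboptimize(rosenbrock2d;
                 SearchRange = (-5.0, 5.0),  # shared by all dimensions
                 NumDimensions = 2,
                 MaxSteps = 10_000)

best_candidate(res), best_fitness(res)
```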

The default choice (adaptive_de_rand_1_bin_radiuslimited) is very robust, in particular when your problem is large. You can also try dxnes (can be more effective for smaller problems) or generating_set_search (can be faster for "simpler" problems). The other ones rarely have an edge on any of these three in my experience, but YMMV. A new CMA-ES is coming soon which I have some high hopes for.
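One way to compare those three methods is to give each the same evaluation budget and look at the best fitness reached. A hedged sketch (my_fitness is a placeholder for your own function; option names as in the BlackBoxOptim README):

```julia
using BlackBoxOptim

my_fitness(x) = sum(abs2, x)  # placeholder: replace with your own fitness

for m in (:adaptive_de_rand_1_bin_radiuslimited, :dxnes, :generating_set_search)
    res = bboptimize(my_fitness;
                     Method = m,
                     SearchRange = (-5.0, 5.0),
                     NumDimensions = 100,
                     MaxFuncEvals = 50_000,  # same budget for a fair comparison
                     TraceMode = :silent)
    println(m, " => ", best_fitness(res))
end
```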

I have one version of BlackBoxOptim where using Tuples for the SearchRange
(as in the introductory Rosenbrock example) works, and another version where the use of Tuples for SearchRange is rejected.

I could not find any documentation clarifying how to specify ranges.
[Update: I figured out that the ranges cannot be Integers; you have to add the decimal points. Tuples are OK now.]
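For illustration, the distinction the update describes (exact behavior may vary between BlackBoxOptim versions; my_fitness is a placeholder):

```julia
using BlackBoxOptim

my_fitness(x) = sum(abs2, x)  # placeholder fitness

# One (Float64, Float64) tuple per dimension works:
good_range = [(-5.0, 5.0), (0.0, 10.0)]

# Integer endpoints may be rejected in some versions:
# bad_range = [(-5, 5), (0, 10)]

res = bboptimize(my_fitness; SearchRange = good_range)
```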

These types of optimizers don't support constraints out of the box, so the typical approach is to add a large penalty for violating the constraint, or to use a multi-objective optimizer with the second fitness being the penalty for violating the constraint. Some code to give you an idea (this is the first option; it can be adapted for the second):

```julia
const BasePenalty = 10000 # Select some large number that will dominate the "normal" fitness.

function penalty_constraint_x1_gt_x2(x)
    if x[1] < x[2]
        return BasePenalty + (x[2] - x[1]) # Smaller penalty as we get closer to non-violation of constraint
    else
        return 0.0
    end
end

my_new_fitness(x) = my_normal_fitness(x) + penalty_constraint_x1_gt_x2(x)

res = bboptimize(my_new_fitness; ...)
```
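For the second option, a sketch using the package's multi-objective support (Borg MOEA); treat the exact FitnessScheme syntax as something to verify against your installed version:

```julia
# Second fitness = constraint-violation penalty; the Pareto front then shows
# the trade-off between the normal fitness and the violation.
two_objectives(x) = (my_normal_fitness(x), penalty_constraint_x1_gt_x2(x))

res = bboptimize(two_objectives;
                 Method = :borg_moea,
                 FitnessScheme = ParetoFitnessScheme{2}(is_minimizing = true),
                 SearchRange = (-10.0, 10.0),
                 NumDimensions = 2)
```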

I log the optimization progress (including fitness values and candidates) in my fitness function, but I am confused about what is actually recorded. For example, I choose my population size to be 1000 and perform 10000 evaluations. I expect the log file to contain 1000 * 10000 = 10^7 records, because each evaluation should feed 1000 candidates into the fitness function. However, the log file contains around 10000 fitness values, and the best fitness value in the log file seems worse than the best fitness value in the optimization output.
Could you explain this to me, please? Thank you!

I think I know why now. The number of evaluations counts each call of the fitness function on a single candidate, not a pass over the whole population. So for the population to evolve a full generation, the number of evaluations should be an integer multiple of the population size. For example, with a population size of 50 and 1000 desired generations, I should set the number of evaluations to 50 * 1000 = 50,000.
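In code, that bookkeeping looks like this (assuming MaxFuncEvals counts individual fitness evaluations, not generations; my_fitness is a placeholder):

```julia
using BlackBoxOptim

my_fitness(x) = sum(abs2, x)  # placeholder fitness

pop_size    = 50
generations = 1_000

res = bboptimize(my_fitness;
                 SearchRange = (-5.0, 5.0),
                 NumDimensions = 10,
                 PopulationSize = pop_size,
                 MaxFuncEvals = pop_size * generations)  # 50_000 evaluations total
```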