# Bayesian Optimization with Julia

Hi, does anyone know how to use Bayesian optimization to find the minimum value of a black-box function? I read the manual of BayesianOptimization.jl
https://github.com/jbrea/BayesianOptimization.jl
but it's not easy to understand. I don't want to optimize hyperparameters; I just have a function that outputs a value once a group of parameters is input, and I want to use Bayesian optimization to find the minimum value and the corresponding parameters. Can anyone give me an easy example? E.g. for `f = x^2 + (y - 2)^2`, we know the minimum is at `x = 0, y = 2`. In MATLAB I can use the `bayesopt` function to find these parameters, but I don't know how to do it in Julia.
Here is an example in MATLAB:

```matlab
xrange = optimizableVariable('v1', [-2, 2], 'Type', 'real');
yrange = optimizableVariable('v2', [-5, 5], 'Type', 'real');

vars = [xrange yrange];   % renamed from `var`, which shadows the built-in variance function

bayesObject = bayesopt(@(tbl) mdlfun(tbl), vars, ...
    'MaxObjectiveEvaluations', 50);   % number of iterations

function rel = mdlfun(tbl)
x = tbl.v1;
y = tbl.v2;
rel = f(x, y);
end

function output = f(x, y)
output = x^2 + (y - 2)^2;
end
```

Can anyone help me rewrite it in Julia?

We do not yet have good defaults in BayesianOptimization.jl, but it is planned to implement some as a GSoC project.

For now, you can copy-paste the example from the readme and replace `f(x) = sum((x .- 1).^2) + randn()` with the function you want to optimize, e.g. `f(x) = x[1]^2 + (x[2] - 2)^2`.
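Putting that together, here is a minimal sketch adapted from the BayesianOptimization.jl README (the names `ElasticGPE`, `MAPGPOptimizer`, `BOpt`, and `boptimize!` come from that README; exact keyword defaults may differ between versions):

```julia
using BayesianOptimization, GaussianProcesses

# The black-box objective from the question; minimum at x = [0, 2].
f(x) = x[1]^2 + (x[2] - 2)^2

# Gaussian process surrogate over 2 inputs.
model = ElasticGPE(2,
                   mean = MeanConst(0.),
                   kernel = SEArd([0., 0.], 5.),  # one lengthscale per input
                   logNoise = -2.,
                   capacity = 3000)

# Periodically re-fit the GP hyperparameters by MAP estimation.
modeloptimizer = MAPGPOptimizer(every = 10,
                                noisebounds = [-4., 3.],
                                # SEArd with 2 inputs has 3 hyperparameters
                                kernbounds = [[-3., -3., 0.], [4., 4., 10.]],
                                maxeval = 40)

opt = BOpt(f, model,
           UpperConfidenceBound(),   # acquisition function
           modeloptimizer,
           [-2., -5.], [2., 5.],     # box bounds, as in the MATLAB example
           maxiterations = 50,
           sense = Min,              # minimize rather than maximize
           repetitions = 1,          # f is deterministic, so one evaluation per point
           verbosity = Progress)

result = boptimize!(opt)
# result.observed_optimum and result.observed_optimizer hold the best value
# found and the corresponding parameters.
```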

You do not need to optimize the hyperparameters, but you should define them. Prior knowledge helps the optimization.

Note that Nonconvex.jl also offers Bayesian optimization (though I have not used it myself so far).

I have some additional questions about your example. If I have more than 2 parameters, do I need to change the positions `1` and `2` in the example accordingly? Why does it need to evaluate the function 5 times for each input? Finally, for all the other parts, I just keep the values as they are, right?

1. & 2.: Yes. But the kernel should also be adapted, e.g. `kernel = SEArd(zeros(7), 5.)` or `kernel = SEIso(0., 5.)`, and `kernbounds` should also be adapted (see `GaussianProcesses.get_param_names(model.kernel)`). Only for noisy functions does it make sense to have `repetitions > 1` (the default is 1).
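As a hedged sketch of how the pieces scale with dimension, here is a hypothetical setup for `d = 7` parameters (assuming some objective `f` taking a length-7 vector; every per-input quantity grows with `d`):

```julia
using BayesianOptimization, GaussianProcesses

d = 7
f(x) = sum((x .- 1).^2)   # placeholder 7-dimensional objective

model = ElasticGPE(d,
                   mean = MeanConst(0.),
                   kernel = SEArd(zeros(d), 5.),  # d ARD lengthscales + signal std
                   logNoise = -2.)

# SEArd has d + 1 hyperparameters, so each side of kernbounds needs d + 1
# entries; check the names with GaussianProcesses.get_param_names(model.kernel).
modeloptimizer = MAPGPOptimizer(every = 10,
                                noisebounds = [-4., 3.],
                                kernbounds = [fill(-3., d + 1), fill(10., d + 1)],
                                maxeval = 40)

opt = BOpt(f, model, UpperConfidenceBound(), modeloptimizer,
           fill(-5., d), fill(5., d),   # lower/upper bounds for all d inputs
           maxiterations = 100, sense = Min, repetitions = 1)
```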

It looks like Hyperopt.jl has a Bayesian optimization method for searching through model hyper-parameters, if that’s the kind of thing you’re looking for.

But note that it currently only supports optimizing continuous numeric (floating-point) values.
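For this problem it might look something like the following, based on the `@hyperopt` macro and `GPSampler` from the Hyperopt.jl README (with `GPSampler` the candidate values must be numeric ranges, hence the `LinRange`s):

```julia
using Hyperopt

# 50 evaluations, Bayesian optimization (GP surrogate), minimizing.
ho = @hyperopt for i = 50,
                   sampler = GPSampler(Min),
                   x = LinRange(-2, 2, 200),
                   y = LinRange(-5, 5, 200)
    x^2 + (y - 2)^2
end

ho.minimum, ho.minimizer   # best value found and the (x, y) that achieved it
```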