I am trying to get NLopt to work, but it stops immediately and returns :FORCED_STOP. I tried the following, but it looks like the algorithm isn't even going into _ff. Any thoughts? Thank you!
using NLopt

function ff(x)
    return sum(x.^2)
    # return sum(abs.(SSresid(mod_pars, x[1], x[2], x[3])))
end

function _ff(x)
    try
        ff(x)
    catch
        println("error")
    end
end
opt = Opt(:LN_SBPLX, 3);
opt.xtol_rel = 1e-1;
opt.min_objective = _ff;
(minf, minx, ret) = NLopt.optimize(opt, [1, 25, 0.1])
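NLopt returns :FORCED_STOP when the objective callback throws an exception. NLopt.jl requires the objective to take two arguments, (x, grad), where grad is a gradient vector that the function fills in place when a gradient-based algorithm asks for it. Because _ff takes only one argument, the call itself raises a MethodError before the body of _ff (and its try/catch) ever runs, which is why the algorithm never seems to enter _ff. A working version of the same problem: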
using ForwardDiff
using NLopt

opt = NLopt.Opt(:LN_SBPLX, 3)
opt.xtol_rel = 1e-4    # set at least one stopping criterion

f(x) = sum(x.^2)

# NLopt objectives take (x, grad); grad is only filled when a
# gradient-based algorithm supplies a non-empty vector.
function my_obj(x, g)
    if length(g) > 0
        ForwardDiff.gradient!(g, f, x)
    end
    return f(x)
end

opt.min_objective = my_obj
sol = NLopt.optimize(opt, [1.0, 25.0, 0.1])
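NLopt.optimize returns a tuple, so you can unpack it to check how the run ended:

(minf, minx, ret) = sol
ret    # expect a success code such as :XTOL_REACHED here, not :FORCED_STOP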
Adding some details that may be helpful to other newbies like me:

I had similar errors, and this page was one of the few that came up in a search, so it may be helpful to post my solution.

I was trying to use NLopt's LN_BOBYQA to do a gradient-free parameter search within box constraints.

I eventually figured it out, but it was very obscure. One easy-to-miss requirement is that any function passed to NLopt has to take 2 inputs: (1) the parameter vector, and (2) a gradient vector that the function modifies in place (in the mutating style of g! rather than g).
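In other words, NLopt expects a two-argument signature like the following (a minimal sketch with a toy quadratic objective; the names are placeholders):

function nlopt_objective(x::Vector, grad::Vector)
    if length(grad) > 0       # grad is only used by gradient-based algorithms
        grad .= 2 .* x        # fill the gradient in place
    end
    return sum(abs2, x)
end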
However, the gradient argument is pointless in a gradient-free search; it still has to be there for NLopt to accept the objective. Here is how I did it. YMMV, but this worked for me:
# (Note: I have a pre-existing function, func_to_optimize, that takes the
# input parameter vector x as well as a variety of constant inputs.
# It could be anything; it just has to return a value that depends on the
# parameters in x, e.g. a -1*log-likelihood, so that minimizing it
# conducts a Maximum Likelihood search.)
# Reduce func_to_optimize() to a single-input function that just takes x
func = x -> func_to_optimize(x, parnames, inputs, p_Ds_v5; returnval="bgb_lnL")
# Create func2, which adds a dummy gradient argument to make NLopt happy,
# but then just runs func()
function func2(pars, dummy_gradient!)
    return func(pars)
end # END function func2(pars, dummy_gradient!)
# Define starting guesses at the parameters; check that both func and func2 return log-likelihoods:
pars = [0.9, 0.9]
func(pars)
func2(pars, [])
# Limits on the parameter guesses
lower = [0.0, 0.0]
upper = [5.0, 5.0]
#######################################################
# Set up the optimization
#######################################################
using NLopt
opt = NLopt.Opt(:LN_BOBYQA, length(pars))
ndims(opt)                   # number of parameters
opt.algorithm                # the algorithm that was set
NLopt.algorithm_name(opt)    # human-readable algorithm name
opt.min_objective = func2
# Set the lower & upper bounds on the parameters
opt.lower_bounds = lower
opt.upper_bounds = upper
opt.lower_bounds
opt.upper_bounds
opt.ftol_abs = 0.00001 # tolerance on log-likelihood
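# Optionally, other stopping criteria can be combined with ftol_abs
# (the values below are just illustrative):
# opt.maxeval  = 10000     # cap the number of objective evaluations
# opt.xtol_rel = 1e-6      # relative tolerance on the parameter values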
# Run the optimization
(optf, optx, ret) = NLopt.optimize!(opt, pars)
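The returned tuple unpacks as the minimized objective value, the best-fit parameters, and a return code:

optf    # the minimized objective value (here, a -1*log-likelihood)
optx    # the parameter values at the optimum
ret     # e.g. :FTOL_REACHED on success; :FORCED_STOP means the objective errored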