How to use ForwardDiff with NLopt?

It is entirely possible I messed it up, but the rough idea is as follows. NLopt will create an array and pass it to nlfunc, which must write the gradient into it in place. It may not request the gradient at every step: on some steps grad arrives empty, so check length(grad) > 0 before writing to it. I think it also creates vec itself, rather than re-using the start_vec you pass to NLopt.optimize(nlname, start_vec). Here myfun is the function you are actually minimising. Perhaps grad .= ForwardDiff.gradient(myfun, vec) is simpler, though ForwardDiff.gradient!(grad, myfun, vec) fills grad in place without the intermediate allocation.
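
A minimal sketch of what such a callback might look like, assuming NLopt.jl's two-argument objective API; the Rosenbrock myfun and the :LD_LBFGS choice are illustrative stand-ins, not from the original post:

```julia
using NLopt, ForwardDiff

# Hypothetical objective standing in for myfun: a 2-D Rosenbrock function.
myfun(x) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2

# NLopt calls this with the current point `vec` and a gradient array `grad`
# that it allocates itself. When `grad` is non-empty the callback must fill
# it in place; on steps where no derivative is needed it may be empty,
# hence the length check.
function nlfunc(vec, grad)
    if length(grad) > 0
        ForwardDiff.gradient!(grad, myfun, vec)
    end
    return myfun(vec)
end

nlname = Opt(:LD_LBFGS, 2)   # a gradient-based algorithm, 2 variables
min_objective!(nlname, nlfunc)
start_vec = [0.0, 0.0]
minf, minx, ret = NLopt.optimize(nlname, start_vec)
```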

Optim handles more of this for you, though I'm not sure it has exactly the same algorithm:

```julia
using Optim

od = OnceDifferentiable(myfun, start_vec; autodiff = :forward)
Optim.optimize(od, start_vec, LBFGS())
```
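
Here OnceDifferentiable wraps myfun together with a ForwardDiff-generated in-place gradient, so Optim handles the gradient bookkeeping itself and you never write the callback by hand.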