I’m working on an optimization problem, and I’m trying to make use of the gradient-based algorithms in the NLopt library (specifically `LD_SLSQP`). The objective function I am trying to minimize does not have an analytic form (evaluating it involves computing the numerical solution of a system of ODEs), so the gradient must be computed numerically. I’d like to compute the gradient using automatic differentiation with the ForwardDiff package, but I’m not sure exactly how to do this. My question is: what is the correct way to supply a gradient computed via ForwardDiff to an NLopt algorithm?
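For concreteness, the objective has roughly this shape. The forward-Euler loop below is just a toy stand-in for my actual ODE solve, and `objective` is my own name for it:

```julia
# Toy stand-in for my real objective: evaluating it requires numerically
# integrating an ODE (here dy/dt = -x[1]*y via forward Euler) and then
# reducing the solution to a scalar.
function objective(x::AbstractVector)
    y  = one(eltype(x))   # keep the element type generic rather than hard-coding Float64
    dt = 0.01
    for _ in 1:100
        y += dt * (-x[1] * y)   # one Euler step
    end
    return y + x[2]^2           # scalar summary of the solution
end
```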
In the JuliaOpt/NLopt.jl tutorial, it says the gradient must be modified in place, e.g.:
```julia
using NLopt

function myfunc(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 0
        grad[2] = 0.5 / sqrt(x[2])
    end
    return sqrt(x[2])
end
```
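For context, the tutorial then attaches such a function to the optimizer roughly as follows. I’ve substituted `:LD_SLSQP` for the tutorial’s algorithm, and the bounds, tolerance, and starting point are just its example values:

```julia
using NLopt

opt = Opt(:LD_SLSQP, 2)          # algorithm and number of decision variables
lower_bounds!(opt, [-Inf, 0.0])  # keep x[2] >= 0 so sqrt(x[2]) is defined
xtol_rel!(opt, 1e-6)             # example stopping tolerance
min_objective!(opt, myfunc)      # the two-argument (x, grad) function above

(minf, minx, ret) = optimize(opt, [1.234, 5.678])   # example starting point
```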
But ForwardDiff needs to evaluate `myfunc` in order to calculate `grad`, so I don’t see how this would work with in-place modification. Also, the ForwardDiff documentation (ForwardDiff.jl/stable/user/limitations/) says that the target function must be unary (accept only one argument), while NLopt requires that `myfunc` take two: `x` and `grad`.
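The only approach I can think of is to keep a separate unary function for ForwardDiff and call it from inside the two-argument function that NLopt sees, writing the gradient into `grad` in place. A rough sketch (the names `f` and `myfunc_ad` are mine, and `f` is just the tutorial objective standing in for my real one):

```julia
using NLopt, ForwardDiff

# Unary objective for ForwardDiff; in my case this would be the ODE-based objective.
f(x) = sqrt(x[2])

# Two-argument function in the form NLopt expects; the gradient is written
# into `grad` in place using ForwardDiff.
function myfunc_ad(x::Vector, grad::Vector)
    if length(grad) > 0
        ForwardDiff.gradient!(grad, f, x)
    end
    return f(x)
end
```

but I’m not sure whether this is correct, or whether it wastes work by evaluating the objective again after the gradient call.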
Is there a way to make this work? Or do I need to use a different optimization algorithm or a different AD package?