Hello everyone,

I was playing around with optimizing a problem using code like the following:
```julia
using Optimization
using OptimizationOptimJL

function cost(u, p)
    # define the cost function here
end

prob = Optimization.OptimizationProblem(
    Optimization.OptimizationFunction(cost),
    [2e-5],           # u0
    (p1, p2, p3, p4), # p (defined elsewhere)
    lb = [1e-8], ub = [1e-4],
)

ρ = solve(prob, NelderMead(), maxiters = 50, maxtime = 10^5)
```
But when I try to run it, I get the following:
```
ERROR: ArgumentError: Fminbox(NelderMead{Optim.AffineSimplexer, Optim.AdaptiveParameters}(Optim.AffineSimplexer(0.025, 0.5), Optim.AdaptiveParameters(1.0, 1.0, 0.75, 1.0))) requires gradients, use `OptimizationFunction` either with a valid AD backend https://docs.sciml.ai/Optimization/stable/API/ad/ or a provided 'grad' function.
```
Not sure why it would ask for derivatives here, since Nelder-Mead itself is derivative-free. My guess from the error is that passing `lb`/`ub` makes Optimization.jl wrap the solver in `Fminbox`, and it is that wrapper which wants gradients; the unconstrained comparison I have in mind is sketched below.
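Something like this (untested, same placeholders as above) is what I mean by the unconstrained comparison:

```julia
# Sketch: the same problem without lb/ub. Without bounds there is,
# I believe, no Fminbox wrapping, so plain NelderMead should run
# without any gradient information.
prob_unc = Optimization.OptimizationProblem(
    Optimization.OptimizationFunction(cost),
    [2e-5],           # u0
    (p1, p2, p3, p4), # p (defined elsewhere)
)
sol = solve(prob_unc, NelderMead(), maxiters = 50, maxtime = 10^5)
```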
I tried playing around with `NelderMead(parameters = FixedParameters())`, but then I get an error that `FixedParameters` is not defined in `Main`; what I was attempting is sketched below.
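I'm guessing `FixedParameters` just isn't exported, so it needs qualifying. This is roughly what I was attempting (untested; I'm assuming `FixedParameters` lives in Optim.jl, which would then need to be installed directly):

```julia
import Optim  # assumption: Optim.jl is in the environment, since
              # FixedParameters is not re-exported by OptimizationOptimJL
sol = solve(prob, NelderMead(parameters = Optim.FixedParameters()),
            maxiters = 50, maxtime = 10^5)
```

Even if that name resolves, I'd expect the `Fminbox` gradient error to come back, since changing the simplex parameters shouldn't remove the gradient requirement.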
Am I missing something?
Derivatives in this case are impossible as far as I'm aware, since the cost function involves a big random-walk algorithm, so I can't just attach an AD backend. I don't necessarily need Nelder-Mead, so I'm open to any alternative suggestions; one derivative-free direction I was considering is sketched below.
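For reference, this is the kind of thing I had in mind (an untested sketch, assuming the OptimizationBBO wrapper around BlackBoxOptim.jl; the solver name is one of its exported algorithms):

```julia
# Sketch: reuse the same prob (which already carries lb/ub) with a
# natively box-constrained, derivative-free solver from OptimizationBBO.
using OptimizationBBO

sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(),
            maxiters = 50, maxtime = 10^5)
```

Would that be a reasonable direction, or is there a way to keep Nelder-Mead with bounds without providing gradients?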