Optim: What optimiser is best if your gradient computation is slow?

Well, no, the output of this Volterra process is used to model the volatility of the stock price, which in turn is used to compute the call price by averaging the payoff over all simulated paths. That gives the ‘model’ option price that I try to calibrate to market option prices. So in the end there are 3 parameters and one scalar objective function, hence 3 derivatives.
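To make the setup concrete, here is a minimal sketch of that structure: 3 parameters, one scalar least-squares objective, and hence a 3-component gradient. `model_price`, the parameter values, and the strike/price data are all hypothetical stand-ins for the actual Monte Carlo pricer (which would average discounted payoffs over the Volterra-driven volatility paths); the finite-difference gradient just illustrates why each gradient costs several extra objective evaluations.

```julia
# Hypothetical stand-in for the Monte Carlo pricer: in the real code this
# would simulate Volterra-vol paths and average the discounted payoff.
model_price(p, K) = p[1] + p[2] * exp(-p[3] * K)

# Toy market data (strikes and observed call prices) for illustration only.
strikes = [0.9, 1.0, 1.1]
market  = [0.18, 0.12, 0.08]

# One scalar objective: squared error between model and market prices.
objective(p) = sum(abs2, [model_price(p, K) for K in strikes] .- market)

# Central finite differences give the 3 derivatives; note each component
# costs two extra objective evaluations, i.e. 2n pricer calls per gradient.
function fd_grad(f, p; h = 1e-6)
    n = length(p)
    map(1:n) do i
        e = zeros(n); e[i] = h
        (f(p .+ e) - f(p .- e)) / (2h)
    end
end

g = fd_grad(objective, [0.1, 0.1, 1.0])   # 3-element gradient
```

With only 3 parameters, even plain finite differences cost just 6 objective evaluations per gradient, which is one reason a ~100x gradient overhead is suspicious.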

I thought the convolution part of my code was the bottleneck, but I should obviously have used `@btime` to check. Measured with `@btime`, the gradient call on my full code is indeed still ~100x slower than a single evaluation of the objective function. How should I use `@code_warntype` to investigate this?
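For what it's worth, here is a toy example (my own, not your Volterra code) of the kind of type instability `@code_warntype` is designed to reveal. A function that reads an untyped global cannot be inferred, so its body shows up as `Any` (printed in red in the REPL), while the argument-passing version infers a concrete type:

```julia
# Untyped global: the compiler cannot know its type at call time,
# so `unstable()` is inferred to return Any.
global_scale = 1
unstable() = global_scale * 2.0

# Passing the value as an argument lets inference see a concrete Float64.
stable(s) = s * 2.0

# In the REPL (InteractiveUtils is loaded there by default):
#   @code_warntype unstable()   # Body::Any       -- red flags, type unstable
#   @code_warntype stable(1.0)  # Body::Float64   -- all concrete, fine
```

A common pattern is that the objective itself is type stable but the gradient path (closures, caches, or buffers captured from outer scope) is not, which can account for a large slowdown on top of the intrinsic cost of differentiation. Running `@code_warntype` on the innermost function that the gradient call actually hits, and looking for `Any` or `Union{...}` in red, is the usual first step.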