So the problem I am having with speed actually relates to the compilation time. Once compiled, my function only takes a few thousandths of a second to run, but it takes well over a minute to compile. Unfortunately, the function is not one that gets run over and over again, so the first time it runs is the time that matters. I guess I’m willing to sacrifice some run time in order to get a shorter compilation time.
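(One way to see that split concretely is to time the same call twice in a fresh session; a minimal sketch, with the nested-ForwardDiff setup from the reply below standing in for the real function. The first @time includes compilation, the second does not:)
julia> using ForwardDiff
julia> test(x) = x[1]^3*log(x[2])*sqrt(x[3]);
julia> second_d(x) = ForwardDiff.hessian(test, x);
julia> third_d(x) = ForwardDiff.jacobian(second_d, x);
julia> @time third_d([1.5, 2.0, 3.0]);  # first call: dominated by compile time
julia> @time third_d([1.5, 2.0, 3.0]);  # second call: just the few-millisecond run time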
Depending on the amount of runtime performance you’re willing to lose, you might want to try interpreting.
julia> using JuliaInterpreter, BenchmarkTools
julia> @btime @interpret third_d($p);
47.894 ms (149692 allocations: 5.71 MiB)
The first @interpret call is still a bit slow, but not nearly as bad as your compilation time for nested ForwardDiff. Here’s a fresh julia session:
julia> begin
using JuliaInterpreter, ForwardDiff
test(x) = x[1]^3*log(x[2])*sqrt(x[3])
first_d(x) = ForwardDiff.gradient(test,x)
second_d(x) = ForwardDiff.hessian(test,x)
third_d(x) = ForwardDiff.jacobian(second_d,x)
p = [1.5, 2.0, 3.0]
end;
julia> @time @interpret third_d(p)
4.670857 seconds (10.34 M allocations: 509.138 MiB, 3.84% gc time)
julia> @time @interpret third_d(p);
0.049506 seconds (149.71 k allocations: 5.712 MiB)
The other nice thing about this approach is that the interpreter’s machinery is re-used from function call to function call, so you pay its compile price the first time you interpret something and not again afterwards.
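If it’s convenient, you can also hide the macro behind a small wrapper so the rest of your code doesn’t need to change at the call sites; a sketch continuing the session above (the wrapper name is just illustrative):
julia> third_d_interp(x) = @interpret third_d(x);
julia> @time third_d_interp(p);  # same interpreted path as above, roughly 50 ms once warm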