I’ve been trying, so far unsuccessfully, to figure out how to call ForwardDiff’s `jacobian!` so that it performs no allocations once compiled. I haven’t managed it based on the documentation or advice from others; I did find a number of examples, but they generally target older versions of ForwardDiff and don’t yield zero allocations for me.
I have a minimal working example below. I would really appreciate advice on where these last few allocations are coming from, and more generally on how to pinpoint where allocations like these originate.
```julia
using ForwardDiff

function f!(y, x)
    y[1] = x[1]^2
    y[2] = x[2] - x[3]
    y[3] = x[1] + x[2]
    y[4] = x[1] - x[3]^2
    nothing
end

function test_jacobian_allocs()
    x = ones(3)
    y = zeros(4)
    @time f!(y, x)
    @time f!(y, x)
    cfg = ForwardDiff.JacobianConfig(f!, y, x)
    result = Matrix{Float64}(undef, length(y), length(x))
    @time ForwardDiff.jacobian!(result, f!, y, x, cfg)
    @time ForwardDiff.jacobian!(result, f!, y, x, cfg)
    nothing
end
```
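For what it’s worth, here’s how I’ve been double-checking the count outside of `@time`, using only the built-in `@allocated` macro. This is just a sketch with the same toy `f!` as above; the `measure` wrapper is my own name, and it’s there so the call isn’t measured in global scope:

```julia
using ForwardDiff

# Same toy function as in the MWE above.
function f!(y, x)
    y[1] = x[1]^2
    y[2] = x[2] - x[3]
    y[3] = x[1] + x[2]
    y[4] = x[1] - x[3]^2
    nothing
end

function measure()
    x = ones(3)
    y = zeros(4)
    cfg = ForwardDiff.JacobianConfig(f!, y, x)
    result = Matrix{Float64}(undef, length(y), length(x))
    ForwardDiff.jacobian!(result, f!, y, x, cfg)  # warm up so compilation isn't counted
    # Bytes allocated by the warmed-up call itself.
    @allocated ForwardDiff.jacobian!(result, f!, y, x, cfg)
end
```

(Starting Julia with `--track-allocation=user` and inspecting the `*.mem` files it writes is another way to attribute allocations to specific source lines.)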
Running yields
```
julia> test_jacobian_allocs()
  0.000001 seconds
  0.000001 seconds
  2.564904 seconds (3.52 M allocations: 199.496 MiB, 4.63% gc time, 99.99% compilation time)
  0.000016 seconds (1 allocation: 96 bytes)
```
On another function whose plain (non-differentiated) evaluation is also allocation-free (not the `f!` I made up here), the exact same code reports 2 allocations instead of 1. I’m very confused about what’s going on!
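One variation I can think of, purely a sketch in case the default chunk-size heuristic is involved (I don’t know that it is), is pinning the chunk size explicitly when building the config; `ForwardDiff.Chunk{3}()` below just matches the input length. The `measure_fixed_chunk` wrapper name is mine:

```julia
using ForwardDiff

# Same toy function as in the MWE above.
function f!(y, x)
    y[1] = x[1]^2
    y[2] = x[2] - x[3]
    y[3] = x[1] + x[2]
    y[4] = x[1] - x[3]^2
    nothing
end

function measure_fixed_chunk()
    x = ones(3)
    y = zeros(4)
    # Pin the chunk size to the input length instead of the default heuristic.
    cfg = ForwardDiff.JacobianConfig(f!, y, x, ForwardDiff.Chunk{3}())
    result = Matrix{Float64}(undef, length(y), length(x))
    ForwardDiff.jacobian!(result, f!, y, x, cfg)  # warm up / compile first
    # Bytes allocated by the warmed-up call.
    @allocated ForwardDiff.jacobian!(result, f!, y, x, cfg)
end
```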
ForwardDiff version is v0.10.23.