Just sharing a small benchmark in case someone finds it useful in the future: it evaluates the Jacobian of a simple diffusion operator.
# Dirichlet boundary values
uin() = 0.0
uout() = 0.0

# 1-D Laplacian stencil: du[i] = u[i-1] + u[i+1] - 2u[i]
function Diffusion(u)
    du = zero(u)
    for i in eachindex(du, u)
        if i == 1
            ug = uin()
            ud = u[i+1]
        elseif i == length(u)
            ug = u[i-1]
            ud = uout()
        else
            ug = u[i-1]
            ud = u[i+1]
        end
        du[i] = ug + ud - 2 * u[i]
    end
    return du
end
Taking the Jacobian of this function, applied to u = rand(1000), with the backends listed here
bcks = [
    AutoEnzyme(mode=Enzyme.Reverse),
    AutoEnzyme(mode=Enzyme.Forward),
    AutoMooncake(config=nothing),
    AutoForwardDiff(),
    AutoSparse(
        AutoForwardDiff();
        sparsity_detector=TracerSparsityDetector(),
        coloring_algorithm=GreedyColoringAlgorithm(),
    ),
    AutoSparse(
        AutoEnzyme(mode=Enzyme.Forward);
        sparsity_detector=TracerSparsityDetector(),
        coloring_algorithm=GreedyColoringAlgorithm(),
    ),
]
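For context, here is roughly how such a Jacobian is computed; this is a minimal sketch with the dense ForwardDiff backend only, assuming DifferentiationInterface.jl and ForwardDiff.jl are installed (`prepare_jacobian`/`jacobian` are the DifferentiationInterface entry points).

```julia
# Sketch: Jacobian of the diffusion operator via DifferentiationInterface.jl.
# Assumes the DifferentiationInterface and ForwardDiff packages are available.
using DifferentiationInterface
import ForwardDiff

uin() = 0.0
uout() = 0.0

function Diffusion(u)
    du = zero(u)
    for i in eachindex(du, u)
        ug = i == 1 ? uin() : u[i-1]           # left neighbor (Dirichlet at the boundary)
        ud = i == length(u) ? uout() : u[i+1]  # right neighbor (Dirichlet at the boundary)
        du[i] = ug + ud - 2 * u[i]
    end
    return du
end

u = rand(1000)
backend = AutoForwardDiff()
prep = prepare_jacobian(Diffusion, backend, u)  # one-time preparation, excluded from timing
J = jacobian(Diffusion, prep, backend, u)       # 1000×1000 tridiagonal matrix
```

The preparation step is what the benchmark excludes from the reported times: it is paid once and then reused across evaluations.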
leads to:
Row │ backend                           time (s)
────┼──────────────────────────────────────────
  1 │ AutoSparse(dense_ad=AutoEnzyme(m…   7.6e-6
  2 │ AutoSparse(dense_ad=AutoForwardD…   1.09e-5
  3 │ AutoEnzyme(mode=ForwardMode{fals…   0.003748
  4 │ AutoForwardDiff()                   0.0040038
  5 │ AutoEnzyme(mode=ReverseMode{fals…   0.106355
  6 │ AutoMooncake{Nothing}(nothing)      1.20643
The sparsity detection is incredible; I'm excited to see what others will achieve with it! To be fair to the last two rows, they use reverse-mode differentiation, which isn't well suited to this case: reverse mode costs one pass per output, and here there are as many outputs as inputs.
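The reason the sparse backends win by orders of magnitude is that the Jacobian of this stencil is tridiagonal, so a greedy coloring groups the 1000 columns into just 3 color classes, each costing one forward pass. The detected pattern can be inspected directly; this is a sketch assuming SparseConnectivityTracer.jl is installed, with `jacobian_sparsity` being the ADTypes detection entry point.

```julia
# Sketch: inspecting the sparsity pattern found by TracerSparsityDetector.
# Assumes the SparseConnectivityTracer and ADTypes packages are available.
using SparseConnectivityTracer
using ADTypes: jacobian_sparsity
using SparseArrays: nnz

uin() = 0.0
uout() = 0.0

function Diffusion(u)
    du = zero(u)
    for i in eachindex(du, u)
        ug = i == 1 ? uin() : u[i-1]
        ud = i == length(u) ? uout() : u[i+1]
        du[i] = ug + ud - 2 * u[i]
    end
    return du
end

u = rand(1000)
pattern = jacobian_sparsity(Diffusion, u, TracerSparsityDetector())
# Tridiagonal: 3n - 2 = 2998 structural nonzeros out of 10^6 entries.
nnz(pattern)
```

With only 2998 nonzeros instead of a million dense entries, both the number of differentiation passes and the storage drop dramatically, which is exactly what rows 1 and 2 of the table show.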