Benchmark AutoDiff on simple 1D diffusion operator

Just sharing a small benchmark in case someone finds it useful in the future: it evaluates the Jacobian of a simple 1D diffusion operator with several AD backends.

uin() = 0.0   # left boundary value (homogeneous Dirichlet)
uout() = 0.0  # right boundary value
function Diffusion(u)
    du = zero(u)
    for i in eachindex(du, u)
        # ug/ud are the left/right neighbors, with boundary values at the ends
        if i == 1
            ug = uin()
            ud = u[i+1]
        elseif i == length(u)
            ug = u[i-1]
            ud = uout()
        else
            ug = u[i-1]
            ud = u[i+1]
        end
        du[i] = ug + ud - 2*u[i]  # second-difference stencil
    end
    return du
end
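As a quick sanity check of the stencil (not in the original post, and written as a compact one-liner so the snippet is self-contained), applying the operator to a constant vector should give zero in the interior and the boundary contributions at the ends:

```julia
# Compact redefinition of the same stencil as Diffusion above,
# so this snippet runs standalone.
lap(u) = [(i == 1 ? 0.0 : u[i-1]) + (i == length(u) ? 0.0 : u[i+1]) - 2u[i]
          for i in eachindex(u)]

lap(ones(5))  # -> [-1.0, 0.0, 0.0, 0.0, -1.0]
```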

Taking the Jacobian of this function applied to u = rand(1000), using the backends listed here,

bcks = [
    AutoEnzyme(mode=Enzyme.Reverse),
    AutoEnzyme(mode=Enzyme.Forward),
    AutoMooncake(config=nothing),
    AutoForwardDiff(),
    AutoSparse(
        AutoForwardDiff();
        sparsity_detector=TracerSparsityDetector(),
        coloring_algorithm=GreedyColoringAlgorithm(),
    ),
    AutoSparse(
        AutoEnzyme(mode=Enzyme.Forward);
        sparsity_detector=TracerSparsityDetector(),
        coloring_algorithm=GreedyColoringAlgorithm(),
    )
    ]
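Each entry is an ADTypes backend specifier; DifferentiationInterface then provides a uniform jacobian entry point on top of any of them. For a single backend the call looks like this (a sketch; the operator is redefined compactly here so the snippet is self-contained):

```julia
using DifferentiationInterface
import ForwardDiff

# Same stencil as the Diffusion function in this post, as a one-liner.
Diffusion(u) = [(i == 1 ? 0.0 : u[i-1]) + (i == length(u) ? 0.0 : u[i+1]) - 2u[i]
                for i in eachindex(u)]

u = rand(1000)
J = jacobian(Diffusion, AutoForwardDiff(), u)  # 1000×1000 matrix
```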

leads to (sorted by time, fastest first):

 Row │ backend                            time
─────┼─────────────────────────────────────────────
   1 │ AutoSparse(dense_ad=AutoEnzyme(m…  7.6e-6
   2 │ AutoSparse(dense_ad=AutoForwardD…  1.09e-5
   3 │ AutoEnzyme(mode=ForwardMode{fals…  0.003748
   4 │ AutoForwardDiff()                  0.0040038
   5 │ AutoEnzyme(mode=ReverseMode{fals…  0.106355
   6 │ AutoMooncake{Nothing}(nothing)     1.20643

The sparsity detection is incredible: I’m excited to see what others will achieve with it! Also, the last two backends use reverse-mode differentiation, which isn’t well suited to this case: with as many outputs as inputs, reverse mode offers no advantage over forward mode and carries more overhead.
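The reason the sparse backends win so clearly is that the detector recovers the tridiagonal pattern, so only a few colored forward sweeps are needed instead of 1000. Assuming SparseConnectivityTracer's jacobian_sparsity entry point, the detected pattern for a small input looks like this (the operator is redefined compactly so the snippet is self-contained):

```julia
using SparseConnectivityTracer: TracerSparsityDetector, jacobian_sparsity

# Same stencil as the Diffusion function in this post, as a one-liner.
Diffusion(u) = [(i == 1 ? 0.0 : u[i-1]) + (i == length(u) ? 0.0 : u[i+1]) - 2u[i]
                for i in eachindex(u)]

S = jacobian_sparsity(Diffusion, rand(10), TracerSparsityDetector())
# tridiagonal Bool pattern: 28 stored entries for n = 10
```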


Oh, and the full code:

using DifferentiationInterface
using DifferentiationInterfaceTest
using DataFrames
using LinearAlgebra
using SparseConnectivityTracer: TracerSparsityDetector
using SparseMatrixColorings
import Enzyme, ForwardDiff, Mooncake

bcks = [
    AutoEnzyme(mode=Enzyme.Reverse),
    AutoEnzyme(mode=Enzyme.Forward),
    AutoMooncake(config=nothing),
    AutoForwardDiff(),
    AutoSparse(
        AutoForwardDiff();
        sparsity_detector=TracerSparsityDetector(),
        coloring_algorithm=GreedyColoringAlgorithm(),
    ),
    AutoSparse(
        AutoEnzyme(mode=Enzyme.Forward);
        sparsity_detector=TracerSparsityDetector(),
        coloring_algorithm=GreedyColoringAlgorithm(),
    )
    ]

uin() = 0.0   # left boundary value (homogeneous Dirichlet)
uout() = 0.0  # right boundary value
function Diffusion(u)
    du = zero(u)
    for i in eachindex(du, u)
        # ug/ud are the left/right neighbors, with boundary values at the ends
        if i == 1
            ug = uin()
            ud = u[i+1]
        elseif i == length(u)
            ug = u[i-1]
            ud = uout()
        else
            ug = u[i-1]
            ud = u[i+1]
        end
        du[i] = ug + ud - 2*u[i]  # second-difference stencil
    end
    return du
end
function DDiffusion(u)
    # Analytic Jacobian of Diffusion: a tridiagonal matrix
    A = diagm(
        -1 => ones(length(u)-1),
        0 => -2 .* ones(length(u)),
        1 => ones(length(u)-1))
    return A
end

u = rand(1000)

scenarios = [
    Scenario{:jacobian,:out}(Diffusion, u; res1=DDiffusion(u))
]
df = benchmark_differentiation(bcks, scenarios)

sort(filter(:operator => ==(:jacobian), df), :time)[!, [:backend, :time]]
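One extra check worth doing (not part of the benchmark above): since the operator is linear, the AD Jacobian should match the analytic tridiagonal matrix from DDiffusion exactly. A sketch with compact redefinitions so it runs standalone:

```julia
using DifferentiationInterface, LinearAlgebra
import ForwardDiff

# Compact redefinitions of the operators from the script above.
Diffusion(u) = [(i == 1 ? 0.0 : u[i-1]) + (i == length(u) ? 0.0 : u[i+1]) - 2u[i]
                for i in eachindex(u)]
DDiffusion(u) = diagm(-1 => ones(length(u)-1),
                      0 => -2 .* ones(length(u)),
                      1 => ones(length(u)-1))

utest = rand(50)
jacobian(Diffusion, AutoForwardDiff(), utest) ≈ DDiffusion(utest)  # true: the operator is linear
```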

PR this to the SciMLBenchmarks? If you just add a .jmd here

it will automatically build the page and we will keep it updated. It is nice to have a simple case like this.

Done, thank you for the advice!
Even though I’ve used Julia for a while, this is my first PR ever; I hope I did everything right.