# Calculating Sparse Jacobians with Autodifferentiation

Is there a way, with any of the autodifferentiation tools, to specify a sparsity pattern or bandedness for the Jacobian? This would be incredibly useful for the differential equation solvers, but I don’t know if it exists anywhere in the package ecosystem.


Can you provide a couple of examples of such sparse Jacobians? In XDiff.jl, I actively experiment with the symbolic representation of sparse arrays using Einstein notation, e.g.:

```julia
using XDiff

dxs = rdiff(:(y = 2x); ctx=Dict(:outfmt => :ein), x=rand(3))
dxs[:x]
# ==>
# quote
#     tmp1 = 2
#     dy_dx[i, j] = tmp1 * (i == j)
# end
```

Here the derivative `dy/dx` is a sparse matrix whose elements equal `tmp1` (i.e. 2), but only where the first index `i` equals the second index `j`, e.g.:

```
2  ⋅  ⋅
⋅  2  ⋅
⋅  ⋅  2
```

If this is similar to what you need, we can work it out to support your concrete use case.
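To see the target concretely, here is a minimal sketch (using the stdlib `SparseArrays`; the names `tmp1` and `dy_dx` just echo the generated code above) of materializing that diagonal derivative as an actual sparse matrix:

```julia
using SparseArrays

# The generated rule dy_dx[i, j] = tmp1 * (i == j) is a scaled identity,
# so only the n diagonal entries need to be stored.
tmp1 = 2.0
n = 3
dy_dx = sparse(1:n, 1:n, fill(tmp1, n))  # nonzeros only where i == j
```

Storing it this way keeps the `n^2` structural zeros out of memory entirely.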

There is https://github.com/mauro3/MatrixColorings.jl, which would need updating. I think the code is reasonably clean, and it would probably be easier to use it as a starting point than to start from nothing. The dependency on Ragged.jl would need to be exchanged, probably for https://github.com/mbauman/RaggedArrays.jl. I could move the package to the DiffEq org, if that helps, or to JuliaDiff. (There are also these issues in ForwardDiff.jl: https://github.com/JuliaDiff/ForwardDiff.jl/issues/43, https://github.com/JuliaDiff/ForwardDiff.jl/issues/91)

+1000 to get that functionality into Julia.


This is exactly what ReverseDiffSparse does.


> Reverse-mode automatic differentiation for closed-form scalar algebraic expressions, producing gradients and Hessians.

Yes; Hessians are symmetric, so different and (I think) more specialized coloring algorithms are used.

This is definitely something that’s planned for ReverseDiff, though there are items that I think would be more useful to tackle first, like factoring out the execution tracing infrastructure.

What about ForwardDiff? Wouldn’t Jacobians do well there, or is that only the case for smaller Jacobians?

Yup. A ForwardDiff-only implementation should be a bit simpler as well, if anybody wants to pursue it. It should basically come down to implementing a function which takes in a sparsity pattern and spits out a seeded dual-number configuration that matches it.
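As a rough illustration of that idea (this is not a ForwardDiff internal; the function `f` and the hand-rolled coloring are invented for the example), a tridiagonal Jacobian only needs three directional derivatives, because columns more than two apart never share a nonzero row and can therefore be seeded together:

```julia
using ForwardDiff

# Hypothetical test function whose Jacobian is tridiagonal:
# f_i = x_{i-1}^2 + 3x_i + sin(x_{i+1})  (boundary terms dropped).
f(x) = [ (i == 1 ? 0.0 : x[i-1]^2) + 3x[i] +
         (i == length(x) ? 0.0 : sin(x[i+1]))   for i in 1:length(x) ]

n = 6
x = rand(n)

# Columns j and k of a tridiagonal matrix touch a common row only when
# |j - k| <= 2, so three "colors" suffice.
colors = [mod(j - 1, 3) + 1 for j in 1:n]

# One directional derivative per color recovers the whole Jacobian.
J = zeros(n, n)
for c in 1:3
    v = [colors[j] == c ? 1.0 : 0.0 for j in 1:n]       # seed for this color
    Jv = ForwardDiff.derivative(t -> f(x .+ t .* v), 0.0)  # computes J * v
    for j in findall(==(c), colors), i in max(1, j - 1):min(n, j + 1)
        J[i, j] = Jv[i]  # each row sees at most one column of this color
    end
end
```

So the full 6-column Jacobian costs three forward sweeps instead of six; a real implementation would seed the three colors into one chunked dual-number pass rather than looping.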

Eventually, I expect ReverseDiff will win the day in this arena because of the ability to write optimization passes over the tape. One such optimization is to elide whole execution passes by showing they are constant w.r.t. specific output directions; I had this implemented at one point but rolled it back for the sake of a refactor (before the package was released). ReverseDiff can also leverage mixed-mode AD (i.e. switch to forward-mode if it’s more efficient to do so), though making those kinds of decisions automatically requires some finesse.
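To make the mode-selection point concrete, here is a deliberately naive sketch (the cost model and names are invented for illustration): forward mode costs roughly one sweep per input direction and reverse mode one sweep per output adjoint, so the dimensions alone already suggest a choice:

```julia
# Invented first-order cost model: one sweep per seeded direction/adjoint.
forward_cost(n_in, n_out) = n_in    # one forward pass per input direction
reverse_cost(n_in, n_out) = n_out   # one reverse pass per output adjoint

# Pick the cheaper mode; a real system would also weigh memory, tape
# construction cost, and sparsity of the intermediate operations.
choose_mode(n_in, n_out) =
    forward_cost(n_in, n_out) <= reverse_cost(n_in, n_out) ? :forward : :reverse
```

The finesse mentioned above is that real programs mix regimes, so the decision has to be made per subgraph of the tape, not once for the whole function.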

My secret dream is to have a unified mixed-mode AD package that makes all of these decisions for the user…
