Using colorvec in the computation of a Jacobian

I get three different results for the Jacobian J:

FiniteDiff_color: [0.0 0.0; -9.486373847239932e-9 0.0]
ForwardDiff_color: [1.0 0.0; 6.123233995736766e-17 0.0]
FiniteDiff: [0.0 1.0; -9.486373847239932e-9 0.0]

The last one is correct. Where is my mistake?

using Symbolics, SparseArrays, SparseDiffTools, FiniteDiff

function dgl!(du,u,p,t)
    du[1] = u[2]
    du[2] = sin(u[1])
end

param = nothing; t0 = 0; input = [pi/2, 0]; output = zeros(2); J = zeros(2,2)
g!(y,x) = dgl!(y,x,param,t0)

sparsity = Symbolics.jacobian_sparsity(g!, output, input)
jac = Float64.(sparsity)
colors = matrix_colors(jac)

FiniteDiff.finite_difference_jacobian!(J, g!, input, colorvec=colors)
println("FiniteDiff_color: ",J)

forwarddiff_color_jacobian!(J, g!, input, colorvec=colors)
println("ForwardDiff_color: ",J)

FiniteDiff.finite_difference_jacobian!(J, g!, input)
println("FiniteDiff: ",J)

If you are writing the result into a dense J, then you need to make sure you pass the sparsity pattern so that the decompression works for that coloring. The following works fine:

using Symbolics, SparseArrays, SparseDiffTools, FiniteDiff

function dgl!(du,u,p,t)
    du[1] = u[2]
    du[2] = sin(u[1])
end

param = nothing; t0 = 0; input = [pi/2, 0]; output = zeros(2); J = zeros(2,2)
g!(y,x) = dgl!(y,x,param,t0)

sparsity = Symbolics.jacobian_sparsity(g!, output, input)
jac = Float64.(sparsity)
colors = matrix_colors(jac)

FiniteDiff.finite_difference_jacobian!(J, g!, input, colorvec=colors, sparsity = sparsity)
println("FiniteDiff_color: ",J)

forwarddiff_color_jacobian!(J, g!, input, colorvec=colors, sparsity = sparsity)
println("ForwardDiff_color: ",J)

FiniteDiff.finite_difference_jacobian!(J, g!, input)
println("FiniteDiff: ",J)

forwarddiff_color_jacobian!(J, g!, input)
println("ForwardDiff: ",J)

Now, the reason that isn't done in the tutorials is that in those examples J is itself a sparse matrix, which is probably the most common case, and so sparsity = J is the default. You can use color differentiation to write into a dense matrix like you do here, but then you need to override that default. It's a perfectly sensible use case, of course, when the matrix is small enough that a sparse matrix would be slow but you still need acceleration in the differentiation.
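For concreteness, here is a minimal sketch of the workflow the tutorials assume, reusing jac, g!, input, and colors from the code above. Since jac is a sparse matrix carrying the detected pattern, sparsity = jac is the default per the explanation above and no sparsity keyword is needed:

# Sketch reusing the objects defined earlier: jac is sparse, so writing into it
# uses its own structure for the decompression by default.
FiniteDiff.finite_difference_jacobian!(jac, g!, input, colorvec=colors)
println("FiniteDiff_sparse: ", jac)

forwarddiff_color_jacobian!(jac, g!, input, colorvec=colors)
println("ForwardDiff_sparse: ", jac)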

This is something that's hard to throw an error on. In theory we could error whenever the sparsity pattern does not allow for an exact decompression (i.e. check the column sparsity against the colors and error; here the column sparsity being dense would be a quick error), but it's actually fairly useful to allow incorrect or partial colorings. For example, with a periodic boundary condition the Jacobian is banded plus two entries in the corners, but if you color just the banded part you get a good enough approximation to be a great preconditioner for GMRES. So we allow colorings that don't match the sparsity, and one then needs to be careful to pass the sparsity that's required.
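To illustrate that periodic-boundary case, here is a hedged sketch; the operator periodic!, the size n, and the names banded, colors_banded, Japprox are all made up for illustration. The true Jacobian is tridiagonal plus two corner entries, but coloring only the banded pattern compresses to three colors, and the decompressed matrix is an approximation:

using SparseArrays, SparseDiffTools, FiniteDiff

# Hypothetical periodic stencil: each entry couples to its two neighbors,
# with wrap-around at the ends, so the true Jacobian has corner entries.
function periodic!(du, u)
    n = length(u)
    for i in 1:n
        du[i] = u[mod1(i-1, n)] - 2u[i] + u[mod1(i+1, n)]
    end
end

n = 8
# Deliberately color only the banded (tridiagonal) pattern, ignoring the corners.
banded = spdiagm(-1 => ones(n-1), 0 => ones(n), 1 => ones(n-1))
colors_banded = matrix_colors(banded)   # 3 colors instead of n

Japprox = copy(banded)
FiniteDiff.finite_difference_jacobian!(Japprox, periodic!, rand(n),
                                       colorvec=colors_banded, sparsity=banded)
# Japprox misses the (1,n) and (n,1) corner entries, and their contributions
# get folded into nearby banded entries, so it is only approximate; in practice
# that is often still a great preconditioner for GMRES.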

Thanks, Chris, for the explanation and the improvement. Best regards, Gerd.