# ReverseDiff.GradientTape fails for backslash operator when symmetric sparse matrix involved?

Hi!
I am using ReverseDiff to calculate the gradient of a scalar function with respect to its vector input.
A simplified piece of code is attached at the end. I observed that `ReverseDiff.GradientTape(loss_rd, x)` breaks when `K` in the example is symmetric, with an error message (truncated) saying “`ERROR: LoadError: MethodError: no method matching lu!(::SparseMatrixCSC{ReverseDiff.TrackedReal{Float64,Float64,Nothing},Int64}, ::Val{true}; check=true)`”, but it works when `K` is nonsymmetric. I am not sure if I did something wrong in the code. Could someone help me with this? Thanks.

```julia
using ReverseDiff, SparseArrays

function loss_rd(x::AbstractArray{T,1}) where {T}
    f = ones(T, 3)
    K = sparse(I, J, x)
    u = K \ f
    return sum(abs2, u)
end

I = [1, 1, 2, 3, 3] # case that does not work (K is symmetric)
J = [1, 3, 2, 3, 1]
x = [3.0, 1.0, 3.0, 3.0, 1.0]

#I = [1, 1, 2, 3] # case that works fine (K is nonsymmetric)
#J = [1, 3, 2, 3]
#x = [3.0, 1.0, 3.0, 3.0]

println("loss_rd=", loss_rd(x))

const f_tape = ReverseDiff.GradientTape(loss_rd, x) ### gets stuck at this line
const compiled_f_tape = ReverseDiff.compile(f_tape)
dldx = similar(x)
ReverseDiff.gradient!(dldx, compiled_f_tape, x)
```
I also tried Zygote on the simplified code above and got the error message `ERROR: LoadError: Need an adjoint for constructor SparseMatrixCSC{Float64,Int64}. Gradient is of type Array{Float64,2}`. I guess I need to define an adjoint for `sparse(I, J, x)`? After reading the custom adjoints section of the Zygote documentation, I have a general idea of how to do this for scalar functions, but not for a sparse matrix constructor. Could someone point me to some references? Thanks.
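For reference, here is my rough attempt at such an adjoint. It assumes the `(I, J)` index pairs are distinct (duplicates would need their cotangents summed), and I am not sure it is correct:

```julia
using Zygote, SparseArrays

# Tentative custom adjoint for sparse(I, J, V): the forward pass builds the
# sparse matrix, and the pullback reads the cotangent Δ at the stored
# positions (i, j) to recover the sensitivity of each entry of V.
# Assumes no duplicate (i, j) pairs.
Zygote.@adjoint function SparseArrays.sparse(I::AbstractVector{<:Integer},
                                             J::AbstractVector{<:Integer},
                                             V::AbstractVector)
    A = sparse(I, J, V)
    sparse_pullback(Δ) = (nothing, nothing, [Δ[I[k], J[k]] for k in eachindex(I)])
    return A, sparse_pullback
end
```

The `nothing`s are for the index arguments `I` and `J`, which I take to be non-differentiable; only the gradient with respect to `V` is returned.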