Best current approach for working with sparse tensors?

I would like to hear ideas about how to handle this kind of application in Julia right now. Some previous posts seem to be in line with what I’m looking for, but I’m not sure what the conclusion was, or what the current state of the art is.

As an example, suppose I’m simply computing the gradient of an image. We can represent the image img[j,k] as a flattened vector img[:], and each directional derivative as a sparse matrix, Dx and Dy. It’s useful to keep all of this data in a single array for further linear operations, so we can stack these matrices into D = [Dx; Dy]. But now, if I want to access my gradient components, I have to compute the indices myself or reshape my vector, as in the sketch below.
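A minimal sketch of that bookkeeping, using hypothetical circulant forward differences as stand-ins for the real operators:

using SparseArrays
jj, kk = 4, 4
N = jj * kk
img = rand(jj, kk)
Dx = sparse(1:N, 1:N, ones(N)) - sparse(1:N, mod1.((1:N) .- 1, N), ones(N))
Dy = sparse(1:N, 1:N, ones(N)) - sparse(1:N, mod1.((1:N) .- jj, N), ones(N))
D = [Dx; Dy]                      # 2N×N stacked sparse operator
g = D * vec(img)                  # all gradient data in one length-2N vector
gx = reshape(g[1:N], jj, kk)      # manual index arithmetic to get each
gy = reshape(g[N+1:end], jj, kk)  # component back in image shape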

Ideally, I would like to have my operation as a tensor, so that with Einstein summation I could just compute dimg[p,g] = D[p,q,g] * img[q]. This gets more exciting later if we also have separate color channels. Perhaps we might even preserve the original image shape as well?
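To show the notation I’m after with something that runs, here is a dense stand-in via Einsum (the sizes, names, and the color-channel index c are just for illustration):

using Einsum
N, nc = 16, 3
Dd = rand(N, N, 2)       # dense stand-in for the gradient tensor
imgc = rand(N, nc)       # flattened image with nc color channels
dimgc = zeros(N, 2, nc)
@einsum dimgc[p, g, c] = Dd[p, q, g] * imgc[q, c]   # implicit sum over q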

In my specific application I don’t have a huge number of dimensions, and I could handle the indexing myself if necessary, but I want to rely on Julia for the indexing as much as possible.

I’m also not overly worried about performance, apart from exploiting the matrix sparsity; hopefully Julia can optimize the matrix multiplications. In other words, I would like to avoid writing lots of for-loops, but I’m not expecting amazingly optimized code à la http://tensor-compiler.org/.

The issue I’ve run into so far is that I tried to represent this gradient operator as a reshaped SparseMatrixCSC, but both Einsum and TensorOperations seem to have problems with that (presumably because the 3-D reshape only produces a lazy wrapper around the sparse matrix). And I’m not sure ITensors supports the kind of sparsity I need.

I hope this doesn’t sound like I’m criticizing the current tools just because I have some very specific demand that isn’t fulfilled yet! I’m just thinking other developers might have dealt with similar problems in the past. What’s a good way to deal with this right now? For example, is there a different way of representing the sparsity that might help me out? Or is there a handy way to reshape arrays in my code whenever needed?
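To make that last question concrete, the kind of workaround I can imagine is keeping the sparse blocks separate and hiding the reshapes behind a small helper (grad here is a made-up name, not an existing API):

using SparseArrays
## apply each sparse block and restack the results along a third dimension
grad(Ds, img::AbstractMatrix) =
    cat((reshape(Dk * vec(img), size(img)) for Dk in Ds)...; dims = 3)
## usage: dimg = grad((Dx, Dy), img)   # jj×kk×2, one slice per direction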

Some code for illustrative purposes:

using SparseArrays
jj, kk = 4, 4          # image dimensions
N = jj * kk            # number of pixels
## circulant forward differences along the two image axes (shifts of 1 and jj in the flattened vector)
Dx = sparse(1:N, 1:N, ones(N)) + sparse(1:N, [N; 1:N-1], -1)
Dy = sparse(1:N, 1:N, ones(N)) + sparse(1:N, [N-jj+1:N; 1:N-jj], -1)
## note hcat, not vcat: the column-major reshape then gives D[:,:,1] == Dx and D[:,:,2] == Dy
D = reshape([Dx Dy], N, N, 2)
img = rand(N)          # flattened test image
dimg = zeros(N, 2)     # preallocated output for the macros below

## That works
dimg = hcat(D[:,:,1]*img, D[:,:,2]*img)

## This results in `ERROR: MethodError: no method matching Strided.UnsafeStridedView(::SparseMatrixCSC{Float64, Int64})`:
using TensorOperations
@tensor begin
    dimg[p,g] = D[p,q,g] * img[q]
end
## This seems to have performance issues for larger arrays (e.g. N = 128):
using Einsum
@einsum dimg[p,g] = D[p,q,g] * img[q]
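
## For completeness, the fallback I can always use: apply the sparse blocks
## one at a time, which also makes it easy to get back to the image shape.
dimg = reduce(hcat, (Dg * img for Dg in (Dx, Dy)))
dimg_img = reshape(dimg, jj, kk, 2)   # one slice per gradient direction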

I have seen ITensors.jl (https://github.com/ITensor/ITensors.jl) mentioned around here.