Custom array to simulate multidimensional indexing for SparseMatrixCSC: which functions to extend?

Tags: question, performance, indexing, array
#1

I am building a custom array type to emulate indexing in a special-case multidimensional sparse tensor.
My type is backed by a SparseMatrixCSC and exposes an Nsum-dimensional interface, where the dimensions fall into two groups, Npre and Npost:

struct SparseSynapses{Npre,Npost,Nsum} <: AbstractArray{Int8,Nsum}
  data::SparseMatrixCSC{Int8,Int}
  preDims::NTuple{Npre,Int}
  postDims::NTuple{Npost,Int}
  # Translate cartesian to linear indices
  preLinIdx::LinearIndices{Npre}
  postLinIdx::LinearIndices{Npost}

  function SparseSynapses{Npre,Npost,Nsum}(data,preDims,postDims) where {Npre,Npost,Nsum}
    Npre+Npost == Nsum || error("Nsum must be Npre+Npost")
    preLinIdx= LinearIndices(preDims)
    postLinIdx= LinearIndices(postDims)
    new{Npre,Npost,Nsum}(data,preDims,postDims,preLinIdx,postLinIdx);
  end
end

I would like to be able to index into this array like so:

synapses= SparseSynapses((4,5),(2,3))
synapses[2:3,1:3,:,2]

My first (working) solution is simply to overload scalar getindex with a cartesian-to-linear transform:

function Base.getindex(S::SparseSynapses{Npre,Npost}, I::Vararg{Int,N}) where {Npre,Npost,N}
  pre= S.preLinIdx[CartesianIndex(I[1:Npre])]
  post= S.postLinIdx[CartesianIndex(I[Npre+1:Npre+Npost])]
  S.data[pre,post]
end

However, this element-by-element use of the LinearIndices arrays seems like it should be inefficient. If I want to consolidate these accesses, which function should I extend:
Base.getindex(A, I...), Base.to_indices(A, I::Tuple), or something else?

PS. My access pattern looks a bit similar to how SubArrays work; maybe that can help me?


#2

Going from Cartesian to linear indices is actually quite cheap, and the Julia implementation is very nicely written, so I would expect it to be fast.

Going from linear to Cartesian is costly, but you are not doing that.


#3

Yeah, I’m not worried about how expensive individual cartesian-to-linear calculations are; rather, I assumed that performing them all in one step for the entire accessed range would be faster than doing them one by one.
On second thought that is probably wrong: I had assumed LinearIndices acts as an array from which I’m reading many values, in which case coalesced accesses would help. Now I see that LinearIndices only encapsulates a calculation, so I should see no benefit from coalescing?
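Right; a quick check confirms that a LinearIndices object stores only the axes and computes each linear index on demand:

```julia
li = LinearIndices((1000, 1000))

# The object is a couple of machine words, not a million stored integers,
# so there is no memory to access coalescedly.
sizeof(li)        # 16 bytes on a 64-bit system: just the two axis lengths

# Indexing performs the column-major arithmetic on the fly:
li[2, 3]          # (3 - 1)*1000 + 2 == 2002
```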


#4

I may be missing something, but can’t you just do that by defining getindex for vector/range arguments?

You could also design a conversion step using to_indices.
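A minimal sketch of that first route, resolving each group of indices in one shot and hitting the backing matrix once. The outer constructor, the size method, and the Union signature are my assumptions; the thread does not pin down an exact API, and the reshape of the result back to the full N-d shape is omitted for brevity:

```julia
using SparseArrays

struct SparseSynapses{Npre,Npost,Nsum} <: AbstractArray{Int8,Nsum}
  data::SparseMatrixCSC{Int8,Int}
  preDims::NTuple{Npre,Int}
  postDims::NTuple{Npost,Int}
  preLinIdx::LinearIndices{Npre}
  postLinIdx::LinearIndices{Npost}
end

# Assumed convenience constructor matching the usage in the question
function SparseSynapses(preDims::NTuple{Npre,Int}, postDims::NTuple{Npost,Int}) where {Npre,Npost}
  SparseSynapses{Npre,Npost,Npre+Npost}(
      spzeros(Int8, prod(preDims), prod(postDims)),
      preDims, postDims, LinearIndices(preDims), LinearIndices(postDims))
end

Base.size(S::SparseSynapses)= (S.preDims..., S.postDims...)

# Scalar indexing, as in the original post
function Base.getindex(S::SparseSynapses{Npre,Npost}, I::Vararg{Int,N}) where {Npre,Npost,N}
  pre= S.preLinIdx[CartesianIndex(I[1:Npre])]
  post= S.postLinIdx[CartesianIndex(I[Npre+1:Npre+Npost])]
  S.data[pre,post]
end

# Non-scalar indexing: resolve each dimension group to a vector of linear
# indices, then index the backing SparseMatrixCSC once. Returns the 2-D slice
# of `data`; reshaping to the N-d result shape is left out here.
function Base.getindex(S::SparseSynapses{Npre,Npost},
                       I::Vararg{Union{Int,AbstractVector{Int},Colon},N}) where {Npre,Npost,N}
  Ifull= to_indices(S, I)                       # resolves Colon against size(S)
  pre= vec(S.preLinIdx[Ifull[1:Npre]...])
  post= vec(S.postLinIdx[Ifull[Npre+1:end]...])
  S.data[pre,post]
end
```

With those definitions, synapses[2:3,1:3,:,2] performs one lookup into each LinearIndices group and a single access into the sparse matrix.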


#5

I guess I can; I was more wondering whether there is any established “wisdom” on how to do it, as this is my first such exercise and I was a bit lost among all the relevant methods. Thank you!


#6

I don’t think there is a standard way, but wrapping the indices in some trivial struct, e.g.

struct MultiIndex{T}
    index::T
end

and resolving it with to_indices(::SparseSynapses, ...) may provide an elegant solution. Search the docs and the code for to_indices.
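One possible shape for that design, as a sketch. The convention here (one MultiIndex per dimension group, each resolving to a vector of linear indices into the backing matrix) is my assumption, not an established API:

```julia
using SparseArrays

# Trivial wrapper around a group of per-dimension indices
struct MultiIndex{T}
    index::T
end

# Stand-in for the type from the question (constructor checks omitted)
struct SparseSynapses{Npre,Npost,Nsum} <: AbstractArray{Int8,Nsum}
    data::SparseMatrixCSC{Int8,Int}
    preDims::NTuple{Npre,Int}
    postDims::NTuple{Npost,Int}
    preLinIdx::LinearIndices{Npre}
    postLinIdx::LinearIndices{Npost}
end

Base.size(S::SparseSynapses) = (S.preDims..., S.postDims...)

# Resolve a pair of MultiIndexes, one per dimension group, into a pair of
# linear-index vectors for the backing matrix.
function Base.to_indices(S::SparseSynapses, I::Tuple{MultiIndex,MultiIndex})
    (vec(S.preLinIdx[I[1].index...]), vec(S.postLinIdx[I[2].index...]))
end

Base.getindex(S::SparseSynapses, pre::MultiIndex, post::MultiIndex) =
    S.data[to_indices(S, (pre, post))...]
```

With that, synapses[MultiIndex((2:3, 1:3)), MultiIndex((:, 2))] resolves each group in one shot and indexes the sparse matrix once.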
