Support CartesianIndices with Arrays and other iterables, not just ranges (alternative?)

performance
indexing
#1

I’m making a custom array (SparseSynapses{Npre,Npost}) that simulates multidimensional sparse tensors on top of a SparseMatrixCSC (for a specific use case). To transform multidimensional CartesianIndexes into linear indices into the underlying SparseMatrixCSC, I use CartesianIndex and LinearIndices. However, implementing just the scalar method Base.getindex(S::SparseSynapses{Npre,Npost}, I::Vararg{Int,N}) where {Npre,Npost,N} incurs a large performance cost.
(I’m not sure whether this is due to the scalar indexing of the LinearIndices or of the SparseMatrixCSC.)

I accordingly extended the general getindex method:
Base.getindex(S::SparseSynapses{Npre,Npost}, I...) where {Npre,Npost}
I relied on CartesianIndices for this (see code below), and it now works for scalar and range indices, but not for Array indices, because CartesianIndices can’t be constructed from Arrays (code) and its iteration (code) doesn’t delegate to an underlying iterator.
For the cases that work, this approach is indeed much faster (SparseMatrixCSC indexing becomes the bottleneck instead of LinearIndices construction).

I think my best option now is to make a custom type that imitates CartesianIndices for general iterables and delegates iteration to the encapsulated type. Can you suggest a solution that reuses some of the CartesianIndices machinery?

I guess my use case is out of scope for the design of CartesianIndices…

Appendix: code

My custom array:

struct SparseSynapses{Npre,Npost} <: AbstractSynapses{Npre,Npost}
  data::SparseMatrixCSC{Int8,Int}
  preDims::NTuple{Npre,Int}
  postDims::NTuple{Npost,Int}
  preLinIdx::LinearIndices{Npre}
  postLinIdx::LinearIndices{Npost}
  function SparseSynapses{Npre,Npost}(data,preDims,postDims) where {Npre,Npost}
    preLinIdx= LinearIndices(preDims)
    postLinIdx= LinearIndices(postDims)
    new{Npre,Npost}(data,preDims,postDims,preLinIdx,postLinIdx);
  end
end

Extending the general getindex:

Base.@propagate_inbounds function Base.getindex(S::SparseSynapses{Npre,Npost}, I...) where {Npre,Npost}
  cartesianIdx(idx::NTuple{N,Int}) where {N}= CartesianIndex(idx)
  cartesianIdx(idx::Tuple)= CartesianIndices(idx)
  linearIdx(cidx::CartesianIndex, linTransform)::Int= linTransform[cidx]
  linearIdx(cidx::CartesianIndices, linTransform)::Vector{Int}= vec(linTransform[cidx])
  idx= to_indices(S,I)
  S.data[linearIdx(cartesianIdx(idx[1:Npre]),            S.preLinIdx),
         linearIdx(cartesianIdx(idx[Npre+1:Npre+Npost]), S.postLinIdx)]
end

#2

I think you’re misattributing your performance cost. Scalar indexing into SparseMatrixCSC is expensive. Sparse matrices work largely due to the fact that they specialize nearly everything else to avoid repeated scalar indexing.

But I’d just define Base.getindex(S::SparseSynapses{Npre,Npost}, I::Int...) (note the ::Int annotation). Then the built-in generic fallbacks will handle the array cases for you. Again, though, it’ll likely be slower than you want, because scalar indexing into a sparse matrix is slow.
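To make the suggestion concrete, here is a self-contained sketch of that scalar method. The struct is a stripped-down stand-in for the one in the original post (only the fields used here, and without the AbstractArray supertype that the real AbstractSynapses presumably has, so only scalar indexing is demonstrated):

```julia
using SparseArrays

# Minimal stand-in for the struct from the original post.
abstract type AbstractSynapses{Npre,Npost} end
struct SparseSynapses{Npre,Npost} <: AbstractSynapses{Npre,Npost}
  data::SparseMatrixCSC{Int8,Int}
  preLinIdx::LinearIndices{Npre}
  postLinIdx::LinearIndices{Npost}
end

# The scalar method with the ::Int annotation: each scalar access maps
# the pre/post Cartesian coordinates to linear indices, then does one
# scalar read of the sparse matrix.
function Base.getindex(S::SparseSynapses{Npre,Npost},
                       I::Vararg{Int,N}) where {Npre,Npost,N}
  pre  = S.preLinIdx[I[1:Npre]...]
  post = S.postLinIdx[I[Npre+1:Npre+Npost]...]
  S.data[pre, post]
end

S = SparseSynapses{2,1}(sparse(Int8.(reshape(1:12, 4, 3))),
                        LinearIndices((2, 2)), LinearIndices((3,)))
S[1, 2, 3]  # == S.data[3, 3]
```

With an AbstractArray supertype and the matching size/axes definitions, the generic fallbacks would then cover range and array indices by repeated scalar calls, which is exactly where the sparse-matrix cost shows up.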


#3

I wasn’t sure, but I did suspect that was the cause. Thanks for clarifying!

Exactly, that’s what I want to avoid! The idea behind my solution is to transform, in user code, the multidimensional indices into arrays of linear indices and then index the sparse matrix with those. Should I maybe extend the sparse matrix’s getindex instead? I hadn’t thought of this approach.


#4

Oh, now I understand your original question a bit better; yes, it makes sense to define getindex for ::Any indices like you originally did.

I still haven’t spent the time to fully wrap my head around your code, so apologies if this is still off-base, but can’t your entire getindex method be simplified to:

  S.data[S.preLinIdx[idx[1:Npre]...], S.postLinIdx[idx[Npre+1:Npre+Npost]...]]

?
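For completeness, the whole simplified method can be sketched in self-contained form, again with a minimal stand-in for the struct from the original post (the axes definition is an addition here, so that to_indices can process range indices):

```julia
using SparseArrays

# Minimal stand-in for the struct from the original post.
abstract type AbstractSynapses{Npre,Npost} end
struct SparseSynapses{Npre,Npost} <: AbstractSynapses{Npre,Npost}
  data::SparseMatrixCSC{Int8,Int}
  preDims::NTuple{Npre,Int}
  postDims::NTuple{Npost,Int}
  preLinIdx::LinearIndices{Npre}
  postLinIdx::LinearIndices{Npost}
end
Base.axes(S::SparseSynapses) = map(Base.OneTo, (S.preDims..., S.postDims...))

# The simplified getindex: splat the index tuples straight into the
# LinearIndices objects; no CartesianIndices detour is needed.
function Base.getindex(S::SparseSynapses{Npre,Npost}, I...) where {Npre,Npost}
  idx = to_indices(S, I)
  S.data[S.preLinIdx[idx[1:Npre]...], S.postLinIdx[idx[Npre+1:Npre+Npost]...]]
end

S = SparseSynapses{2,1}(sparse(Int8.(reshape(1:12, 4, 3))), (2, 2), (3,),
                        LinearIndices((2, 2)), LinearIndices((3,)))
S[2, 1, 3]      # scalar: one element of S.data
S[1:2, 1, 1:3]  # ranges: a 2×3 block of S.data
```

Note that splatting ranges into a LinearIndices object yields vectors of linear indices, so the sparse matrix is then indexed with vectors rather than element by element, avoiding the slow scalar path.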


#5

@mbauman, I need to buy you a beer…
It very much can. I simply complicated things with the CartesianIndices and then I was stuck in a rut. Thank you!

Given that this isn’t a use case any longer, do you think the original question (supporting AbstractVector and/or other iterables for CartesianIndices) still holds any water?


#6

CartesianIndices is a highly specialized object specifically for ranges and use-cases like this. We also have Iterators.product which is a more general-purpose structure that does roughly the same thing (albeit returning a general tuple instead of a specialized CartesianIndex).
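For example, Iterators.product accepts arbitrary iterables, including Vectors, so it covers the Array-index case from the original question:

```julia
# Iterators.product works with any iterables, not just unit ranges;
# collect yields a 2×2 Matrix of tuples, first iterable varying fastest.
pts = collect(Iterators.product([1, 3], [2, 4]))
```

Each element is a plain tuple; wrapping one in CartesianIndex is cheap if a CartesianIndex is needed downstream.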


#7

Why exactly are CartesianIndices only meant for unit ranges?

I guess Iterators.filter could produce an iterator that is not over a unit range. This could be pretty useful, as in the nearest-neighbors example below.

julia> points = Iterators.product(1:3, 1:3)
julia> points |> collect
3×3 Array{Tuple{Int64,Int64},2}:
 (1, 1)  (1, 2)  (1, 3)
 (2, 1)  (2, 2)  (2, 3)
 (3, 1)  (3, 2)  (3, 3)

julia> Iterators.filter(x -> isodd(sum(x)), points) |> collect
4-element Array{Tuple{Int64,Int64},1}:
 (2, 1)
 (1, 2)
 (3, 2)
 (2, 3)

Is there currently a way to construct this with just CartesianIndices or a simple one-liner? I feel like that would be an elegant solution; perhaps something like CartesianIndices((1:2:3, 1:2:3), offset=1) could be the syntax.


#8

They’re just highly specialized structures, designed to represent all the indexable locations in an array or a contiguous rectangular subsection. That’s their purpose, and the advantage of this limitation is that they have a slew of optimizations that are only valid for unit ranges.

