Why is LinearIndices hardcoded to use Int?


The LinearIndices type is defined in Base at line 382 of indices.jl as

struct LinearIndices{N,R<:NTuple{N,AbstractUnitRange{Int}}} <: AbstractArray{Int,N}

I’m wondering why it’s defined to hold an NTuple{N,AbstractUnitRange{Int}} rather than being defined along the lines of:

struct MyLinearIndices{N,Tn,R<:NTuple{N,AbstractUnitRange{Tn}}} <: AbstractArray{Tn,N}
    indices::R
end

The issue with the current implementation is that you can’t convert a Cartesian index to a linear index if the resulting linear index is too big to be represented by an Int. For example (with Int = Int64 on my system, running the current master branch of Julia):

julia> l = Int128(10_000_000);

julia> sz = (l,l,l)
(10000000, 10000000, 10000000)

julia> LI = LinearIndices(sz);

julia> LI[l,l,l]

julia> l^3
1000000000000000000000

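For reference, the value I want can be computed without overflow by widening to Int128 before multiplying. A hand-rolled sketch (sub2ind128 is a made-up helper name, not a Base function):

```julia
# Hypothetical helper: column-major cartesian -> linear conversion,
# widened to Int128 so the intermediate products cannot overflow Int64.
function sub2ind128(sz::NTuple{3,<:Integer}, i, j, k)
    d1, d2 = Int128(sz[1]), Int128(sz[2])
    return Int128(i) + (Int128(j) - 1) * d1 + (Int128(k) - 1) * d1 * d2
end

sz = (10_000_000, 10_000_000, 10_000_000)
sub2ind128(sz, 10_000_000, 10_000_000, 10_000_000)  # == Int128(10)^21
```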
I do need to convert cartesian indices to such large linear indices in practice. It’s part of a sparse 3D array implementation. I was able to get what I was looking for by extending a few methods for the MyLinearIndices type defined above:

import Base: IndexStyle, axes, size, getindex
using Base: axes1, unsafe_length

MyLinearIndices(sz::NTuple{N,Tn}) where {N,Tn<:Integer} = MyLinearIndices(map(Base.OneTo, sz))

IndexStyle(::Type{<:MyLinearIndices}) = IndexLinear()
axes(iter::MyLinearIndices) = map(axes1, iter.indices)
size(iter::MyLinearIndices) = map(unsafe_length, iter.indices)
function getindex(iter::MyLinearIndices, i::Integer)
    @boundscheck checkbounds(iter, i)
    i
end
function getindex(iter::MyLinearIndices, i::AbstractRange{<:Integer})
    @boundscheck checkbounds(iter, i)
    @inbounds isa(iter, MyLinearIndices{1}) ? iter.indices[1][i] : (first(iter):last(iter))[i]
end

Using this new type gives me

julia> myLI = MyLinearIndices(sz);

julia> myLI[l,l,l]

I don’t know the array indexing code very well, so I don’t know whether there’s a good reason to force LinearIndices to use Int rather than working with arbitrary integer types. Is there a good reason, or was this just an oversight? If it was an oversight then I can prepare a PR with a fix based on the MyLinearIndices code in this post; I believe it would count as a bugfix rather than a breaking change. On the other hand, if the current behaviour is intended, I would be grateful if someone could explain the reasoning behind it a little.

Thanks very much, Patrick


Linear indices generally represent offsets into regions of memory, which cannot be larger than the system integer type. Using linear indices is not very efficient otherwise since the linear index must be decoded before being used (involving shockingly slow integer division operations). Are you sure you need linear indices here? Can’t you use cartesian indices? Especially for sparse matrices, linear indexing is really inefficient.
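To make the decoding cost concrete: recovering cartesian indices from a linear index costs one integer divrem per dimension, whereas the forward direction is just multiplies and adds. A hand-rolled 3D sketch (not Base's actual implementation):

```julia
# Sketch of linear -> cartesian decoding for a 3D array stored
# column-major. Each dimension costs an integer divrem, which is
# much slower than the multiply/add of the cartesian -> linear path.
function ind2sub3(dims::NTuple{3,Int}, l::Integer)
    l0 = l - 1                    # zero-based offset
    l0, i = divrem(l0, dims[1])   # peel off the fastest dimension
    k, j = divrem(l0, dims[2])    # then the middle one
    return (i + 1, j + 1, k + 1)
end
```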


Thanks for the reply Stefan. That explanation makes sense. Let me give a bit more context on the application. As I said, this relates to a three-dimensional sparse array implementation. I say array, but it’s not technically a subtype of AbstractArray. These 3D “arrays”, called SparseArray3D, are part of a private project I’m involved with. In building and manipulating SparseArray3Ds we work with Cartesian indices, but under the hood the non-zero entries are stored in a dictionary keyed by the linear index. So say we have a sparse 3D array N. We’ll get a value from the array using

val = N[i,j,k]

where i,j,k will only ever be single integers, not ranges or anything else like that. getindex for SparseArray3Ds converts the (i,j,k) Cartesian index to a linear index and grabs the value from the dictionary. So the LinearIndices object is only ever used to convert a single Cartesian index to a linear index, just as a drop-in replacement for sub2ind. Clearly we can do this manually; I was wondering, though, why it wasn’t possible by indexing into a LinearIndices object. I see from your reply that there’s a good reason for this. Thanks again.
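The real SparseArray3D is private, so purely as an illustration of the pattern described above, a minimal dict-backed sketch (all names and fields are made up) might look like:

```julia
# Illustrative sketch only: non-zeros live in a Dict keyed by an
# Int128 linear index, computed by widening before multiplying.
struct SketchSparseArray3D{T}
    dims::NTuple{3,Int}
    data::Dict{Int128,T}
end

SketchSparseArray3D{T}(dims::NTuple{3,Int}) where {T} =
    SketchSparseArray3D{T}(dims, Dict{Int128,T}())

# cartesian -> Int128 linear index (column-major)
linearindex(A::SketchSparseArray3D, i, j, k) =
    Int128(i) + (Int128(j) - 1) * A.dims[1] +
    (Int128(k) - 1) * A.dims[1] * A.dims[2]

# missing entries read as structural zeros
Base.getindex(A::SketchSparseArray3D{T}, i::Integer, j::Integer, k::Integer) where {T} =
    get(A.data, linearindex(A, i, j, k), zero(T))

Base.setindex!(A::SketchSparseArray3D, v, i::Integer, j::Integer, k::Integer) =
    (A.data[linearindex(A, i, j, k)] = v; A)
```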

Regarding performance, we’ve found that for our typical access patterns our implementation is faster in practice than using a dictionary keyed by an (i,j,k) tuple or using a 3D analogue of the compressed sparse column/row format used for sparse matrices.


Do you know why performance is worse with (i,j,k) keys? I thought that the hashing might be it, but no. Comparison is a bit slower but not enough to make up for the hashing:

julia> l = Int128(10_000_000); ll = Int(l); t = (ll,ll,ll);

julia> @btime hash($l);
  19.996 ns (0 allocations: 0 bytes)

julia> @btime hash($ll);
  2.965 ns (0 allocations: 0 bytes)

julia> @btime hash($t);
  11.252 ns (0 allocations: 0 bytes)

julia> @btime isequal($l,$l);
  0.018 ns (0 allocations: 0 bytes)

julia> @btime isequal($ll,$ll);
  0.018 ns (0 allocations: 0 bytes)

julia> @btime isequal($t,$t);
  1.968 ns (0 allocations: 0 bytes)


Heh, I’m not sure @mauro3. I’ll have to check with my colleague who did the benchmarking; I wasn’t actually involved in that. I had thought it was a matter of hashing performance, but that doesn’t jibe with what you show here.


FWIW, the 0.018 ns timings are a benchmarking artifact (the function call gets optimized away or constant folded). Also, benchmarking something like isequal between two Ints is almost useless, because the timing of such code will depend heavily on the context it is used in.
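For example, one common way to keep BenchmarkTools from constant-folding the call away is to interpolate a Ref and dereference it inside the benchmarked expression:

```julia
using BenchmarkTools

l = Int128(10_000_000)

# Interpolating Ref(l) and dereferencing with [] inside the expression
# hides the constant from the compiler, so the comparison actually runs.
@btime isequal($(Ref(l))[], $(Ref(l))[])
```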


Would it be possible to just skip the linear indexing operation and use a dictionary keyed by the (i,j,k) index tuple directly?

I guess that you tried that already and found that it was slow… which is weird because it seems like what you’re currently doing is equivalent to using a hash function that consists of linearizing first and then hashing, whereas I would think that the built-in hash function which just mixes the hashes of the indices would be faster.


Thanks for the replies all. It looks like we should dig a bit deeper on the benchmarking and see if some greater insight can be gained on what’s going on here.



One thought: instead of using linear indices, you could just use large random odd numbers as if they were strides, i.e. hash (i, j, k) as c₁*i + c₂*j + c₃*k. This should be faster than the default hashing but is still a pretty decent hash function. You could, for example, use these constants:

const c₁ = -5318449269026300911
const c₂ =  4566921428329651793
const c₃ = -6967723246364373735

In order to override the default hashing without monkey-patching, you’d need to wrap the (i, j, k) indices in a custom type like this:

struct Indices
    i::Int
    j::Int
    k::Int
end

Base.hash(x::Indices, h::UInt) = hash(c₁*x.i + c₂*x.j + c₃*x.k, h)

Untested but something like that.
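For completeness, a self-contained, runnable version of this suggestion (constants as above; the wrapper name and Dict usage are just illustrative):

```julia
# Self-contained sketch: a key type whose hash "linearizes" with large
# odd stride constants instead of mixing per-field hashes. Int64
# overflow in the products wraps around, which is fine for hashing.
const c1 = -5318449269026300911
const c2 =  4566921428329651793
const c3 = -6967723246364373735

struct StrideHashedIndex
    i::Int
    j::Int
    k::Int
end

Base.hash(x::StrideHashedIndex, h::UInt) = hash(c1*x.i + c2*x.j + c3*x.k, h)
Base.:(==)(a::StrideHashedIndex, b::StrideHashedIndex) =
    a.i == b.i && a.j == b.j && a.k == b.k

d = Dict{StrideHashedIndex,Float64}()
d[StrideHashedIndex(1, 2, 3)] = 1.5
d[StrideHashedIndex(1, 2, 3)]  # 1.5
```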