[Julia usage] How to get 2D indexes from 1D index when accessing a 2D array?

Hi! I have a 2D array x. I can access its elements by using “1D indexing” (e.g. x[3]) or “2D indexing” (e.g. x[3, 1]).
Question: given an integer i that could be used as a 1D index, how to retrieve the 2D indexes r, c that would access the same element of x (i.e. x[i] == x[r, c] is true)?

Here is a function that does what I am looking for using divrem, though I wonder if there exist built-in functions or more elegant alternatives.

function index1d_to_index2d(x::AbstractArray, index::Int64)::Tuple{Int64,Int64}
    n_rows = size(x, 1)
    q, r = divrem(index, n_rows)
    if r == 0
        return n_rows, q      # index lands exactly on the last row of column q
    else
        return r, q + 1       # otherwise it is row r of the next column
    end
end

Test code:

x = rand(7, 10)
for i in 1:length(x)
    r, c = index1d_to_index2d(x, i)
    @assert x[i] == x[r, c]
end
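
For comparison, here is a minimal sketch of the same conversion using Base's fldmod1 (which combines fld1 and mod1); this assumes 1-based, column-major linear indexing, as for a standard Array:

x = rand(7, 10)
for i in 1:length(x)
    c, r = fldmod1(i, size(x, 1))   # fldmod1 returns (fld1, mod1), i.e. (column, row)
    @assert x[i] == x[r, c]
end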

Thanks!

Take a look at CartesianIndices:

x = rand(7, 10)
CI = CartesianIndices((7, 10))
for i in 1:length(x)
    r = CI[i][1]
    c = CI[i][2]
    @assert x[i] == x[r, c]
end
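
For completeness, a small sketch (my addition): CartesianIndices can be built from the array itself, a CartesianIndex converts to a plain (row, column) tuple with Tuple, and LinearIndices gives the inverse 2D-to-1D mapping:

x = rand(7, 10)
CI = CartesianIndices(x)               # same as CartesianIndices((7, 10))
r, c = Tuple(CI[12])                   # CartesianIndex(5, 2) -> (5, 2)
@assert x[12] == x[r, c]
@assert LinearIndices(x)[r, c] == 12   # inverse mapping: 2D -> 1D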

Is there a way to use CartesianIndices while avoiding the allocation of the array of indices?

Constructing the CartesianIndices object is non-allocating (no array of indices is created):

using BenchmarkTools

@btime CI = CartesianIndices((7, 10))

0.018 ns (0 allocations: 0 bytes)
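
A sketch of why nothing is allocated (my addition): CartesianIndices is a lazy, immutable wrapper around the axis ranges, so it is an isbits value that can live on the stack; the individual Cartesian indices are computed on demand rather than stored.

CI = CartesianIndices((7, 10))
isbits(CI)    # true: a plain immutable value, no heap storage
sizeof(CI)    # 16 bytes on a 64-bit build: just the two axis lengths, not 70 indices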

Interesting. Seems miraculous to me :)

Just a more comprehensive test:

julia> function mutx!(x)
         CI = CartesianIndices(size(x))
         for i in eachindex(x)
           r = CI[i][1]
           c = CI[i][2]
           x[r,c] += 1 
         end
       end
mutx! (generic function with 1 method)

julia> @btime mutx!(x) setup=(x=zeros(10,10))
  332.971 ns (0 allocations: 0 bytes)

That is slower than running over the indexes directly, though:

julia> function mutx2!(x)
         for j in 1:size(x,2)
           for i in 1:size(x,1)
             x[i,j] += 1 
           end
         end
       end
mutx2! (generic function with 1 method)

julia> @btime mutx2!(x) setup=(x=zeros(10,10))
  51.810 ns (0 allocations: 0 bytes)

(That probably doesn’t matter for loops doing something more substantial at each iteration, but maybe someone has a comment on this.)

Interesting

julia> function mutx!(x)
                CI = CartesianIndices(size(x))
                for i in eachindex(x)
                  r = CI[i][1]
                  c = CI[i][2]
                  x[r,c] += 1
                end
              end
mutx! (generic function with 1 method)


julia> function mutx2!(x)
                for j in 1:size(x,2)
                  for i in 1:size(x,1)
                    x[i,j] += 1
                  end
                end
              end
mutx2! (generic function with 1 method)

julia> function mutx3!(x)
                for i in CartesianIndices(x)
                  x[i] += 1
                end
              end
mutx3! (generic function with 1 method)

julia> function mutx4!(x)
                CI = CartesianIndices(size(x))
                for i in CI
                  x[i] += 1
                end
              end
mutx4! (generic function with 1 method)

julia> @btime mutx!(x) setup=(x=zeros(10,10))
  276.409 ns (0 allocations: 0 bytes)


julia> @btime mutx2!(x) setup=(x=zeros(10,10))
  45.005 ns (0 allocations: 0 bytes)

julia> @btime mutx3!(x) setup=(x=zeros(10,10))
  42.740 ns (0 allocations: 0 bytes)

julia> @btime mutx4!(x) setup=(x=zeros(10,10))
  42.757 ns (0 allocations: 0 bytes)


one more for the list:

julia> function mutx!(x)
                for pair in CartesianIndices(x)
                  i = pair[1]
                  j = pair[2]
                  x[i,j] += 1
                end
              end
mutx! (generic function with 1 method)

julia> @btime mutx!(x) setup=(x=zeros(10,10))
  47.928 ns (0 allocations: 0 bytes)


The slowness comes from not iterating over the CartesianIndices directly.


Probably because indexing into a CartesianIndices object with a linear index requires a division operation, while iterating over one does not, right?
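
A rough sketch of that cost (my addition, not Base’s actual implementation): converting a linear index to a 2D Cartesian index needs essentially the divrem from the original post, whereas iterating a CartesianIndices object only advances counters.

# sketch: one integer division per linear -> Cartesian conversion
function linear_to_cartesian_2d(i, n_rows)
    c, r = fldmod1(i, n_rows)
    return CartesianIndex(r, c)
end

# iterating CartesianIndices instead just steps the counters:
# (r, c) -> (r + 1, c), rolling over to (1, c + 1) when r hits n_rows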


Probably. I guess that overhead will always exist when one iterates over linear indices that do not necessarily cover all the Cartesian indices, since each Cartesian index then has to be computed on the fly (and then the miracle I was referring to starts to make sense).

The fact that the overhead still exists when one iterates over eachindex(x) of a plain Array is something that could perhaps be removed by a compiler optimization, but currently it is not.
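
A related note (my addition, a sketch): eachindex already picks the cheapest index type for a given container, so for arrays without fast linear indexing it hands back Cartesian indices directly and the per-element division never happens.

A = zeros(10, 10)
eachindex(A)              # Base.OneTo(100): a dense Array has fast linear indexing

V = view(A, 1:2:9, :)     # a view without fast linear indexing
eachindex(V)              # CartesianIndices((5, 10)): iterate these, no division needed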