I’ve been playing around with CartesianIndices. It works beautifully for the most part, but I’ve noticed that it becomes slower than linear indexing when the dimensionality of the array is large. Here is a minimal example using Julia 1.2.0.
using BenchmarkTools
# input is a 5-dimensional array
dim_5 = rand(Float64, (27 * ones(Int64, 5))...)
# input is a 15-dimensional array
dim_15 = rand(Float64, (3 * ones(Int64, 15))...)
# the number of elements is 3^15 = 14,348,907 for both of them
function cartesiansum(array)
    sum = zero(eltype(array))
    for index in CartesianIndices(array)
        sum += array[index]
    end
    return sum
end
function linearsum(array)
    sum = zero(eltype(array))
    for index in eachindex(array)
        sum += array[index]
    end
    return sum
end
println("Cartesian sum with dimension = 5")
@btime cartesiansum(dim_5)
println("Linear sum with dimension = 5")
@btime linearsum(dim_5)
println("Cartesian sum with dimension = 15")
@btime cartesiansum(dim_15)
println("Linear sum with dimension = 15")
@btime linearsum(dim_15)
Note that I made the number of elements the same for dim_5 and dim_15. Here is the output.
Cartesian sum with dimension = 5
84.114 ms (1 allocation: 16 bytes)
Linear sum with dimension = 5
24.007 ms (1 allocation: 16 bytes)
Cartesian sum with dimension = 15
407.303 ms (1 allocation: 16 bytes)
Linear sum with dimension = 15
25.045 ms (1 allocation: 16 bytes)
As you can see, there is a significant slowdown when I use CartesianIndices for the 15-dimensional array.
Is there any workaround for this, or are we advised not to use CartesianIndices for high-dimensional arrays?
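For reference, one workaround I have been considering (just a sketch, I have not benchmarked whether it actually recovers the linear-indexing speed) is to keep the Cartesian loop for bookkeeping but do the actual array access through a linear index obtained from LinearIndices; the function name cartesianlinearsum here is just mine.
function cartesianlinearsum(array)
    sum = zero(eltype(array))
    # LinearIndices(array)[index] converts a CartesianIndex to a plain Int,
    # so the inner array access uses linear indexing
    linear = LinearIndices(array)
    for index in CartesianIndices(array)
        sum += array[linear[index]]
    end
    return sum
end
If this is not the intended way to mix the two indexing styles, I would appreciate a pointer to the idiomatic approach.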