Overflow with Iterators.product

I need to deal with values assigned to a d-dimensional cube, where the dimension d varies. Say L is the side length of a cube. To access a d-dimensional array with L^d entries I used the following.

idx = Iterators.product(repeat([1:L], d)...)
for n in idx
    # ... work with the d-tuple of indices n ...
end

While this works for smaller values, it starts to behave weirdly for larger (yet not too large) parameters. Here is a minimal example:

using Random
L = 300; d = 3;
data = randn((L, L, L))
idx = Iterators.product(repeat([1:L], d)...)
for n in idx
    data[n...]   # indexing data here triggers the error below
end

This sometimes works, sometimes does not. I ran this code 4 times, and it gave this error on my 4th run. (The number of runs before an error appears varies from try to try.)

ERROR: BoundsError: attempt to access 300×300×300 Array{Float64, 3} at index [139898804512912, 139898846725200, 140729172278576]

Since I have 96 GB of RAM, there is no way my system cannot handle 300^3 indices. Is there anything I am missing?

In addition, are there better ways to get d-dimensional indices than Iterators.product(repeat([1:L], d)...)?

I can’t reproduce your error. Your posted code works fine for me. I also tried putting your code into a function to make it a bit faster, and accumulated a sum of the data in the loop to make sure the compiler doesn’t optimize it away:

function f(data, L, d)
    s = zero(eltype(data))
    for n in Iterators.product(repeat([1:L], d)...)
        s += data[n...]
    end
    return s
end

and this also runs without error for me on your example inputs L = 300; d = 3; data = randn((L,L,L));.

Are you sure you didn’t accidentally change L or d to be inconsistent with data?
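As an illustration of that failure mode (a hypothetical mismatch, not the poster's actual code): if L is larger than the array's side length, the repeat/product loop throws exactly this kind of BoundsError:

```julia
# Hypothetical mismatch: L = 3 but data is only 2×2,
# so the loop indexes out of bounds at n = (3, 1).
data = randn(2, 2)
L, d = 3, 2
err = try
    for n in Iterators.product(repeat([1:L], d)...)
        data[n...]
    end
    nothing
catch e
    e
end
err isa BoundsError  # true
```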

Much, much better ways. First, for efficiency, you really want the number of loops (d) to be a compile-time constant here, and to use tuples rather than arrays so that the compiler knows the length. A simple modification of your code is to use

for n in Iterators.product(axes(data)...)

instead. However, it’s even cleaner to use

for n in CartesianIndices(data)

instead — that’s what CartesianIndices are for. Note that iterating over CartesianIndices returns a CartesianIndex, not a tuple, but you can efficiently convert it to a tuple via Tuple(index). In particular, you can do either:

for n in CartesianIndices(data)
    s += data[n]   # no splatting required
end

or:

for n in CartesianIndices(data)
    t = Tuple(n)
    s += data[t...]   # splatting a tuple of indices
end
Quantitatively, your original repeat method gives (via BenchmarkTools.jl):

julia> @btime f($data, $L, $d);
  6.186 s (162000023 allocations: 5.63 GiB)

due to the type instability of your runtime-dependent number of loops (d). Using Iterators.product(axes(data)...) gives me a 200× speedup:

julia> @btime f($data);
  31.091 ms (0 allocations: 0 bytes)

and using CartesianIndices(data) gives about the same performance, but more elegant code:

julia> @btime f($data);
  31.013 ms (0 allocations: 0 bytes)
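The one-argument f(data) used in the last two benchmarks is not shown in the post; here is a minimal sketch consistent with the two methods described (the names f_axes and f_cartesian are mine):

```julia
# Type-stable variants: the number of dimensions is encoded in the
# array's type, so the compiler knows the loop structure at compile time.
function f_axes(data)
    s = zero(eltype(data))
    for n in Iterators.product(axes(data)...)
        s += data[n...]
    end
    return s
end

function f_cartesian(data)
    s = zero(eltype(data))
    for n in CartesianIndices(data)
        s += data[n]
    end
    return s
end

A = ones(10, 10, 10)
f_axes(A)       # 1000.0
f_cartesian(A)  # 1000.0
```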

PS. Please quote your code in the future to make it easier to read and to copy-paste. (I edited your post to fix this.) See PSA: how to quote code with backticks


Thanks for a more efficient way!

Actually, I wonder if you can run the examples with L=1000. I got this error within a few seconds:

Please submit a bug report with steps to reproduce this fault, and any error messages that follow (in their entirety). Thanks.
Exception: EXCEPTION_ACCESS_VIOLATION at 0x7ff9021f2689 – gc_try_claim_and_push at C:/workdir/src\gc.c:1841 [inlined]

I think the same error occurred for the previous repeat method at much smaller L values. Actually, I also randomly got this error with your method at L = 500. It might be that my system is not stable.

julia> for n in CartesianIndices(data)
           s += data[n]   # no splatting required
ERROR: BoundsError: attempt to access 500×500×500 Array{Float64, 3} at index [225458506064, 1869857706224, 225458506000]
 [1] getindex
   @ .\essentials.jl:14 [inlined]
 [2] getindex(::Array{Float64, 3}, ::CartesianIndex{3})
   @ Base .\multidimensional.jl:696
 [3] top-level scope
   @ .\REPL[7]:2

Works fine for me, both with the CartesianIndices method, for which f(data) takes 1.4 seconds, and with your original slow repeat method, for which f(data) takes 204 seconds.

Are you running Julia in 32-bit or 64-bit mode? Make sure Int == Int64.
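A quick way to check (my suggestion, not from the original thread):

```julia
# On a 64-bit Julia build both of these should hold:
println(Sys.WORD_SIZE)   # 64
println(Int == Int64)    # true
```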

Then something must be broken on my system. I will try again after clean-installing Julia. It is in 64-bit mode. Thanks for your help!

Could you just use

for n in eachindex(data)
    x = data[n]
end

Or even skip the indexing completely:

for val in data
    s += val
end



Thanks for your suggestions! I agree there are alternatives, given that you can get indices from your multi-dimensional arrays. One reason I had not thought of it that way is that, for me, the indices came before any data.

In other words, say I want to manipulate two values data1 and data2 for each point in the cube, where the values are some complicated functions of indices. My thinking process was: using indices of data1 to manipulate unrelated data2 is not logical, so I should construct the index map first. (But I would not create a dummy L^d array just to index others.)
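For what it's worth, a single CartesianIndices object can drive both arrays without any dummy array; sum and prod below are stand-ins for the "complicated functions of indices":

```julia
L = 4
idx = CartesianIndices((L, L, L))        # no backing array needed
data1 = [sum(Tuple(n)) for n in idx]     # one function of the indices
data2 = [prod(Tuple(n)) for n in idx]    # an unrelated function of the same indices
size(data1) == size(data2) == (L, L, L)  # true
```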

It turns out that there is a huge performance penalty to this approach.

You can still construct a CartesianIndices object without constructing any other array, via CartesianIndices(axes), e.g. where axes = ntuple(i -> 1:100, 3). Just make sure that the dimension you pass to this is a compile-time constant, either by directly passing a tuple of axes, or by passing the dimension 3 via a Val argument.
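A sketch of that suggestion (the helper name make_indices is mine):

```julia
# Passing the dimension via Val makes d a compile-time constant,
# so ntuple produces a fixed-length tuple of ranges for CartesianIndices.
make_indices(L, ::Val{d}) where {d} = CartesianIndices(ntuple(i -> 1:L, Val(d)))

idx = make_indices(100, Val(3))
size(idx)  # (100, 100, 100)
```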