Note that this (and the equivalent `map` version) is allocating, because it materializes an array of tuples. Compare with a non-allocating alternative (`unpack2` below), which runs about 7x faster on my machine.

```julia
function unpack1(dims)
    s = 0
    for (i, j) in Tuple.(CartesianIndices(dims))
        #println(i, j)
        s += i + j
    end
    return s
end

function unpack2(dims)
    s = 0
    for ci in CartesianIndices(dims)
        i, j = Tuple(ci)
        #println(i, j)
        s += i + j
    end
    return s
end
```

```julia
using BenchmarkTools
const dims = (1000, 1000)
@btime unpack1($dims)
@btime unpack2($dims)
```

Apparently, iteration of a `CartesianIndex` is disabled to avoid a possible performance trap when splatting a `CartesianIndex`. See #23719.

If you want to live dangerously, you could define iteration on `CartesianIndex` yourself.

This version (`unpack3` below) is also non-allocating, and likewise ~7x faster than the "materializing" version.

```julia
# Warning: extending Base.iterate for a type you don't own (CartesianIndex)
# is type piracy -- this is the "living dangerously" part.
Base.iterate(ci::CartesianIndex) = iterate(Tuple(ci))
Base.iterate(ci::CartesianIndex, state) = iterate(Tuple(ci), state)

function unpack3(dims)
    s = 0
    for (i, j) in CartesianIndices(dims)
        #println(i, j)
        s += i + j
    end
    return s
end
```
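With these `iterate` methods in place, a `CartesianIndex` also destructures directly, not just inside a `for` loop. A small self-contained illustration (not from the original post):

```julia
# Same piracy methods as above, repeated so this snippet runs on its own.
Base.iterate(ci::CartesianIndex) = iterate(Tuple(ci))
Base.iterate(ci::CartesianIndex, state) = iterate(Tuple(ci), state)

ci = CartesianIndex(2, 3)
i, j = ci            # destructuring now goes through the new iterate methods
println((i, j))      # (2, 3)
```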