I do a lot of work with high-dimensional arrays, where the dimension of my array is often 20+. I noticed that type inference for `CartesianIndices` fails when the dimension is too large, causing later type inferences to fail as well. Here is a minimal example with Julia 1.2.0 where that behavior can be observed.

```julia
using InteractiveUtils  # provides @code_warntype outside the REPL

function f(array)
    a = CartesianIndices(array)
    return a
end

# input is a 15-dimensional array
small = rand(Float64, (2 * ones(Int64, 15))...)
# input is a 16-dimensional array
big = rand(Float64, (2 * ones(Int64, 16))...)

@code_warntype f(small)
@code_warntype f(big)
```

The output is given by the following.

```
Variables
  #self#::Core.Compiler.Const(f, false)
  array::Array{Float64,15}
  a::CartesianIndices{15,NTuple{15,Base.OneTo{Int64}}}

Body::CartesianIndices{15,NTuple{15,Base.OneTo{Int64}}}
1 ─     (a = Main.CartesianIndices(array))
└──     return a

Variables
  #self#::Core.Compiler.Const(f, false)
  array::Array{Float64,16}
  a::Any

Body::Any
1 ─     (a = Main.CartesianIndices(array))
└──     return a
```

As you can see, type inference fails for the bigger array: `a` is inferred as `Any` rather than a concrete `CartesianIndices` type.

Is there a fix for this, or do we just need to prevent the failure from propagating, e.g. with function barriers?
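For concreteness, here is a sketch of the function-barrier workaround I have in mind (`kernel` and `g` are hypothetical names, and the loop body is just a placeholder computation):

```julia
function kernel(inds)
    # Once we are inside this function, the argument has a concrete
    # type, so the loop below is compiled with full type information
    # even if `inds` was inferred as `Any` at the call site.
    s = 0
    for I in inds
        s += sum(Tuple(I))  # placeholder work on each index
    end
    return s
end

function g(array)
    a = CartesianIndices(array)  # inferred as Any for large ndims
    return kernel(a)             # dynamic dispatch here is the barrier
end

big = rand(Float64, (2 * ones(Int64, 16))...)
g(big)
```

The single dynamic dispatch at the barrier is paid once per call, rather than on every operation inside the loop, so the downstream code runs at full speed. I'd still like to know whether the inference failure itself can be avoided.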