Deferred-shape arrays

Does Julia have a deferred-shape array type like Fortran's? The feature can reduce clutter in the main function by moving variable declarations elsewhere when arrays have parametric dimensions.

I am not sure why you need this in Julia, which does not require that you declare variables.

Please provide an MWE with some context to clarify.

In general, if some code doesn't care about the shape of an array, it can simply not say anything about it. If you want to access the same array data with a different shape, you can use reshape. Between those two features, everything you might want to do seems covered.
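For instance (a minimal sketch, not from the thread; the variable names are made up), reshape gives a differently-shaped window onto the same data:

```julia
# reshape shares the underlying data, so shape-agnostic code plus
# reshaped access covers most "deferred shape" use cases.
A = collect(1.0:6.0)      # 6-element Vector{Float64}
B = reshape(A, (2, 3))    # 2×3 array over the same memory
B[1, 1] = 99.0            # mutate through the reshaped array...
@assert A[1] == 99.0      # ...and the change is visible in the original
@assert size(B) == (2, 3)
```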


reshape still causes a (small) allocation for the reshaped array, doesn't it? Since Julia 1.5, these kinds of small allocations for views have been eliminated (as long as the view doesn't "escape"), but in my tests, even on Julia 1.6-dev, a reshape still allocates. Is there hope that those allocations can be avoided in the future, like for views?
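One way to check this without BenchmarkTools, assuming Julia ≥ 1.5, is `@allocated` on warmed-up functions (a sketch; the function names are made up):

```julia
# A non-escaping view is allocation-free on Julia >= 1.5, while
# reshape of an Array allocated a small wrapper at the time of this thread.
sum_view(A)     = sum(view(A, 1:100))
sum_reshaped(A) = sum(reshape(A, 20, 10))

A = rand(200)
sum_view(A); sum_reshaped(A)          # warm up (compile) first
@assert @allocated(sum_view(A)) == 0  # the view is elided
@allocated(sum_reshaped(A))           # was 64 bytes when this was written
```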

For completeness:

using BenchmarkTools

function sum_as_mtx(A)
    B = reshape(A, (20, 10))
    return sum(B)
end

A = randn(200)
@btime sum_as_mtx($A)
# 52.413 ns (1 allocation: 64 bytes)

Variable declaration happens implicitly, as I understand it: the compiler infers each variable's type, whether primitive, composite, "Any", or "Core.Box".
I suppose I can obtain a deferred-shape array in Julia in the following way: create a constant global array A, which is exported and used from within any scope. The name "A" can then be reassigned to an array with the same element type and number of dimensions but new lengths along each dimension; a warning will be raised. I haven't tried it myself, but I would expect A to then have its new shape everywhere it was imported before the reassignment.

Array is a special case, as revealed by methods(reshape) (easiest to see if you have no other packages loaded, since some add their own methods). Most other array types go through Base.ReshapedArray, a wrapper that can be constructed with zero allocations:

julia> @btime reshape($A, (10, 20));
  34.681 ns (1 allocation: 64 bytes)

julia> @btime Base.ReshapedArray($A, (10, 20), ());
  6.600 ns (0 allocations: 0 bytes)
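The dispatch split can be seen concretely (a small sketch, not from the thread): reshape of an Array hits the special case and returns another Array, while reshape of most other AbstractArray types, such as a view, goes through the generic fallback.

```julia
# Array gets a specialized reshape method; a SubArray does not,
# so it falls back to the Base.ReshapedArray wrapper.
A = rand(6)
@assert reshape(A, 2, 3) isa Array
@assert reshape(view(A, 1:6), 2, 3) isa Base.ReshapedArray
```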

I wonder if it’s time to reconsider this choice.


Ironic (and great) that the language has gotten so good that the user-defined thing is better than the built-in thing now! Reminds me of back in 0.2 (IIRC), when immutable structs were added and they were immediately better than tuples because they allowed inline storage of immutable fields whereas tuples of the time always consisted of pointers to boxed values.


ReshapedArray seems quite a bit slower to read from, but is this a fair test?

julia> A = rand(2000); B = zeros(100,20);

julia> @btime $B .= reshape($A, (100, 20));
  276.312 ns (1 allocation: 64 bytes)

julia> @btime $B .= Base.ReshapedArray($A, (100, 20), ());
  529.658 ns (0 allocations: 0 bytes)

This is also related to the question of whether these allocations can be avoided entirely. I feel like they should be avoidable, but one would need to figure out how to avoid allocating pointers in the ccall.


Yeah, it’s a fair test, though the gap may be avoidable. The ::Array variant spends its time in unsafe_copyto!, whereas the ::ReshapedArray variant spends its time in copyto_unaliased!:

julia> using BenchmarkTools

julia> A = rand(2000); B = zeros(100,20);

julia> @btime $B .= reshape($A, (100, 20));
  234.554 ns (1 allocation: 64 bytes)

julia> @btime $B .= Base.ReshapedArray($A, (100, 20), ());
  991.833 ns (0 allocations: 0 bytes)

julia> Revise.track(Base)    # add @simd inside copyto_unaliased!

julia> @btime $B .= reshape($A, (100, 20));
  232.961 ns (1 allocation: 64 bytes)

julia> @btime $B .= Base.ReshapedArray($A, (100, 20), ());
  192.912 ns (0 allocations: 0 bytes)

Want to PR it?


OK, that’s simple, done, #38014.

Trying to test scalar access… any ideas here?

julia> @btime sum(x for x in view($(reshape(A, (100, 20))),20,:));
  11.289 ns (0 allocations: 0 bytes)

julia> @btime sum(x for x in view($(Base.ReshapedArray(A, (100, 20), ())),20,:));
  15.275 ns (0 allocations: 0 bytes)

Can’t look now, but what I do is put a @profile in front of that @btime and then check the result, ignoring all the gcscrub and inference frames that @btime triggers.
