Strange behavior of `reshape`

It seems that reshape won’t make a shallow copy of the input array when the input array has the same shape as the output:

julia> a = rand(4);

julia> b = reshape(a, :);

julia> pop!(b);

julia> length(a) == 3
true

I’m not sure if this is intended. I thought that no matter what shape the output array has compared to the input, the result would at least be a shallow copy?
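A quick identity check makes the first example less mysterious: reshaping a vector to its own shape returns the array itself, so `b` and `a` are literally the same object (a minimal sketch, using only Base functions):

```julia
# Reshaping a vector to its own 1D shape returns the array itself,
# not a new wrapper, so mutating one name mutates the other.
a = rand(4)
b = reshape(a, :)

println(b === a)        # true: both names refer to the same array object

pop!(b)                 # therefore this shrinks `a` as well
println(length(a))      # 3
```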

Another strange behavior: if I do two consecutive reshapes, I get an error from the pop!, even though the variable d should already be rebound to the latest reshape output:

julia> c = rand(4);

julia> d = reshape(c, 4, 1);

julia> d = reshape(c, :);

julia> pop!(d)
ERROR: cannot resize array with shared data

What made you think that?

help?> reshape
search: reshape promote_shape

  reshape(A, dims...) -> AbstractArray
  reshape(A, dims) -> AbstractArray

  Return an array with the same data as A, but with different dimension sizes or number of dimensions. The two arrays
  share the same underlying data, so that the result is mutable if and only if A is mutable, and setting elements of
  one alters the values of the other.

Because the documentation only says that they share the same underlying data. And since it’s called reshape, as in “changing the shape of the container of the data”, I assumed that even if the output array has the same shape, it would still construct a new container to hold the data, just like in the other cases.

A shallow copy of a `Float64` array would allocate an entirely new array with the same amount of memory. For plain data there is no difference between (shallow) `copy` and `deepcopy`.

So, no, `reshape` does not make a shallow copy; it just reuses the same memory with a different wrapper.
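The contrast can be demonstrated directly by mutating one array and checking the other (a minimal sketch, using only Base functions):

```julia
a = [1.0, 2.0, 3.0, 4.0]

c = copy(a)             # shallow copy: fresh buffer, same element values
c[1] = 99.0
println(a[1])           # 1.0 — `a` is unaffected

r = reshape(a, 2, 2)    # different wrapper over the *same* memory
r[1, 1] = 99.0          # column-major: r[1, 1] aliases a[1]
println(a[1])           # 99.0 — the write goes through to `a`
```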

I guess the real question is: when can you resize arrays with shared data? It seems like that is possible when the arrays are all 1D. Not sure about this, and that docstring seems a bit inaccurate.


Cool, thanks! I just wanted to make sure this is a design choice rather than a consistency issue. Do you have any idea about the second MWE?

A bit odd that the last example fails, when d is actually 1D, but a 2D version existed shortly before.

I don’t know for sure. Maybe you can do some experiments with sharing data between several 1D arrays, and then try 2D, and find exactly when it fails?
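One such experiment (behavior as on Julia 1.10 and earlier; the internals changed with the `Memory`-backed arrays in 1.11, so the error may not reproduce there): the 4×1 reshape shares the vector’s buffer, and on older versions it flags that buffer as shared, which is what makes the later `pop!` fail even though `d` is plain 1D.

```julia
c = rand(4)
d2 = reshape(c, 4, 1)   # 2D wrapper over c's buffer
d1 = reshape(c, :)      # same-shape reshape: d1 === c

d2[1, 1] = 99.0
println(c[1])           # 99.0 — all three names see the same memory

# On Julia 1.10 and earlier this throws
# "cannot resize array with shared data", because the 4×1 reshape
# flagged the buffer as shared and the flag outlives the 2D wrapper:
try
    pop!(d1)
catch err
    println(err)
end
```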


`reshape` always creates a view of the same data. You can think of data as synonymous with “memory” in cases where there is actual memory allocated for the data (which is not true for all arrays).
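The "no memory allocated" case can be seen with a range, which stores no elements at all; reshaping it yields a lazy `Base.ReshapedArray` that computes values on demand rather than an `Array` wrapper (a minimal sketch):

```julia
r = reshape(1:6, 2, 3)  # a range has no backing buffer to share

println(typeof(r))      # a Base.ReshapedArray over the UnitRange
println(r[2, 1])        # 2 — computed from the range, column-major
println(r[1, 2])        # 3
```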