Hi there! Is there any way to reduce the **memory usage** of the custom function **graded_reshape()** below (or, ideally, to match the performance of Julia's built-in **reshape**)?

Suppose we have an ordinary reshaping function that receives an array like **rand(10, 10, 10, 10)** and reshapes it as follows:

```julia
function usual_reshape(A)
    C = reshape(A, (10, 100, 10))
    return C
end
```
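
As far as I understand, plain `reshape` is cheap because it does not copy: the result shares the parent array's memory, which a quick aliasing check confirms (variable names here are just for illustration):

```julia
A = rand(10, 10, 10, 10)
C = reshape(A, (10, 100, 10))  # no copy: C wraps the same buffer as A

# Mutating C is visible through A, since both refer to one memory block
C[1, 1, 1] = -1.0
A[1, 1, 1, 1] == -1.0  # true
```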

But instead of the naive reshape above, we want to achieve the following **graded_reshape()** function:

```julia
function graded_reshape(A)
    B1 = A[:, 1:6, 1:6, :]
    B2 = A[:, 7:10, 7:10, :]
    B3 = A[:, 1:6, 7:10, :]
    B4 = A[:, 7:10, 1:6, :]
    C1 = reshape(B1, (10, 36, 10))
    C2 = reshape(B2, (10, 16, 10))
    C3 = reshape(B3, (10, 24, 10))
    C4 = reshape(B4, (10, 24, 10))
    C = cat(C1, C2, C3, C4; dims=2)
    return C
end
```

The latter function has many more allocations, from the four slices (each of which copies) and from the **cat** call:

```julia
A = rand(10, 10, 10, 10)
@btime usual_reshape($A)
```

25.148 ns (2 allocations: 96 bytes)

```julia
@btime graded_reshape($A)
```

23.406 μs (61 allocations: 158.50 KiB)

Note that the output size is the same for **reshape()** and **graded_reshape()**, i.e. (10, 100, 10); the latter merely reorders the elements within the reshaped dimension.
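
For what it's worth, one direction I have considered (not sure it is optimal) is preallocating the output and writing each block into it through views, so the intermediate slices `B1`–`B4` and the `cat` result are never materialized; the name `graded_reshape_views` is just my placeholder:

```julia
# Sketch: write each reshaped block directly into a preallocated C.
# @views makes the slices of A lazy, so no intermediate copies are made;
# the only allocation left is C itself.
function graded_reshape_views(A)
    C = similar(A, 10, 100, 10)
    @views begin
        C[:,  1:36,  :] .= reshape(A[:, 1:6,  1:6,  :], 10, 36, 10)
        C[:, 37:52,  :] .= reshape(A[:, 7:10, 7:10, :], 10, 16, 10)
        C[:, 53:76,  :] .= reshape(A[:, 1:6,  7:10, :], 10, 24, 10)
        C[:, 77:100, :] .= reshape(A[:, 7:10, 1:6,  :], 10, 24, 10)
    end
    return C
end
```

This still pays for one full copy into `C`, though, so it is not as cheap as a pure `reshape`, which copies nothing.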

Does anyone have a good idea? Thanks in advance!