# Convert matrix of 3D arrays to 3D array

I have N^2 3D arrays, all of size (nx, ny, nz), arranged in a square (N x N) matrix. I would like to expand the inner arrays by concatenating them along the first two dimensions but not the third, so the end result is an array of size (N*nx, N*ny, nz).

For clarity, you can imagine the following scenario: I collected videos using N^2 cameras arranged in a grid, and now I want to stitch them into a single video.

So, for a simple example, suppose I have these 4 arrays

a = rand(1, 2, 3)
b = rand(1, 2, 3)
c = rand(1, 2, 3)
d = rand(1, 2, 3)


and they are arranged as

M = [[a] [b]; [c] [d]]


What I want to do is transform M into a 3D array V whose dimensions are (2, 4, 3). It should then satisfy
V[1:1, 1:2, :] == M[1,1], V[1:1, 3:4, :] == M[1,2], V[2:2, 1:2, :] == M[2,1], V[2:2, 3:4, :] == M[2,2].

It didn’t feel too complicated at first, but after trying various combinations of hcat, vcat, cat… I still haven’t found a simple solution.
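For reference, the target layout can always be produced with an explicit double loop. Here is a baseline sketch (the name `stitch_blocks` is made up for illustration) that candidate one-liners can be checked against:

```julia
# Stitch an N1×N2 matrix of equally sized 3D blocks into one 3D array,
# concatenating along the first two dimensions only.
function stitch_blocks(M)
    nx, ny, nz = size(M[1, 1])
    N1, N2 = size(M)
    V = similar(M[1, 1], N1 * nx, N2 * ny, nz)
    for I in 1:N1, J in 1:N2
        V[(I-1)*nx+1:I*nx, (J-1)*ny+1:J*ny, :] = M[I, J]
    end
    return V
end

a, b, c, d = (rand(1, 2, 3) for _ in 1:4)
M = [[a] [b]; [c] [d]]      # 2×2 Matrix of 1×2×3 arrays
V = stitch_blocks(M)        # size (2, 4, 3)
```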

Thanks


Did you try stack?


I didn’t manage to find a stack solution, but opening your link led me to hvcat (which I had originally missed), and it seems to do exactly what I want as hvcat(size(M,2), permutedims(M)...) (this also works when M is not square).
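For completeness, a quick sketch checking the hvcat call on the toy example from the question:

```julia
a, b, c, d = (rand(1, 2, 3) for _ in 1:4)
M = [[a] [b]; [c] [d]]

# permutedims makes the splat run in row-major order, which is what hvcat
# expects; size(M, 2) is the number of blocks per row.
V = hvcat(size(M, 2), permutedims(M)...)

size(V)                    # (2, 4, 3)
V[2:2, 1:2, :] == M[2, 1]  # true
```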

If anyone has other approaches to propose I’d be happy to learn them still.

vcat((hcat(r...) for r in eachrow(M))...)

reshape(permutedims(stack(M), (1, 4, 2, 5, 3)), 2, 4, 3)

reshape(stack(M, dims=2), 2, 4, 3)


This one looks nice and compact, but it does not seem to deliver the result requested by the OP?


Yes, I think so too.
Maybe it can be useful to a future reader.
I propose a different one.

splat(hcat)(splat(vcat).(eachcol(M)))
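A quick check of this variant (sketch): eachcol(M) yields the two block columns, splat(vcat) stacks each column vertically, and splat(hcat) joins the results side by side (the exported splat requires Julia ≥ 1.9):

```julia
a, b, c, d = (rand(1, 2, 3) for _ in 1:4)
M = [[a] [b]; [c] [d]]

# vcat within each block column, then hcat the column results
V = splat(hcat)(splat(vcat).(eachcol(M)))

V == hvcat(size(M, 2), permutedims(M)...)  # true
```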


I confess that these transformations are, in most cases, not very intuitive.

TensorCast has a notation for that type of reshaping:

using TensorCast
@cast V[n ⊗ nx, m ⊗ ny, nz] := M[n, m][nx, ny, nz]


and using @macroexpand you can easily check what functions it uses under the hood. Further, "⊗" can be typed by \otimes<tab>.


The TensorCast solution seems to lead to the same result as
reshape(stack(M, dims=2), 2, 4, 3)
which seems to be a more “simple/direct” casting than the one requested by the OP.

Using @macroexpand you can see that the scheme adopted by TensorCast is similar to the one I used in a previous solution.

@macroexpand @cast V[n ⊗ nx, m ⊗ ny, nz] := M[n, m][nx, ny, nz]

# local var"##n...#298" = TensorCast.transmute(TensorCast.lazystack(M), Base.Val((4, 1, 5, 2, 3)))
# V = Base.reshape(var"##n....#298", (TensorCast.star(ax_n, ax_nx), TensorCast.star(ax_m, ax_ny), ax_nz))

reshape(permutedims(stack(M), (4, 1, 5, 2, 3)), 2, 4, 3)


The difference lies in the permutation used:

reshape(permutedims(stack(M), (1, 4, 2, 5, 3)), 2, 4, 3)
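A sketch comparing the two permutations on the toy example; stack(M) has size (nx, ny, nz, Nx, Ny) = (1, 2, 3, 2, 2), and only the permutation that keeps the inner indices fastest-changing reproduces the block layout:

```julia
a, b, c, d = (rand(1, 2, 3) for _ in 1:4)
M = [[a] [b]; [c] [d]]
target = hvcat(size(M, 2), permutedims(M)...)  # block layout [a b; c d]

S = stack(M)  # size (1, 2, 3, 2, 2): inner dims first, outer (row, col) last

V1 = reshape(permutedims(S, (1, 4, 2, 5, 3)), 2, 4, 3)  # inner index fastest
V2 = reshape(permutedims(S, (4, 1, 5, 2, 3)), 2, 4, 3)  # outer index fastest

V1 == target  # true
```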


For TensorCast I think the question wants the indices the other way around. The index from the inner array should be the fast-changing one, hence left-most in nx⊗Nx etc.

julia> using TensorCast

julia> @cast _[(i,I), (j,J), k] := M[I,J][i,j,k]
2×4×3 reshape(transmute(lazystack(::Matrix{Array{Float64, 3}}), (1, 4, 2, 5, 3)), 2, 4, 3) with eltype Float64:
[:, :, 1] =
0.076708  0.830297  0.657548   0.394117
0.954511  0.442791  0.0738904  0.103809
...

julia> @cast out[nx⊗Nx, ny⊗Ny, nz] |= M[Nx, Ny][nx, ny, nz]  # writing ⊗ to look like question...
2×4×3 Array{Float64, 3}:   # ... and |= is less lazy, collects an Array, with same numbers
[:, :, 1] =
0.076708  0.830297  0.657548   0.394117
0.954511  0.442791  0.0738904  0.103809
...

julia> out ≈ hvcat(size(M,2), permutedims(M)...)
true


Sorry, I had missed that detail of the question. The good thing is that TensorCast is very explicit and makes it easy to change the indexing (thanks to @mcabbott for fixing it).

@mcabbott, if I may say so, your ideas are quite impressive.

Thanks for all the suggestions! In particular the TensorCast solution looks fantastic; I didn’t know the package.
I guess I should ask questions more often, there’s always something new to learn around here.