How to "transpose" a vector of vectors?

I have a vector V1 of length m where each element is a vector of length n.
How do I “transpose” that into a vector V2 of length n where each element is a vector of length m and V2[i][j] == V1[j][i]?

As an example:

V1 = [rand(3) for x in 1:10]
V2 = [[x[1] for x in V1], [x[2] for x in V1], [x[3] for x in V1]]
julia> V1[7][2] == V2[2][7]
true

This is doable for small n, but there must be a more general solution.

Try TensorCast:

using TensorCast
@cast V2[i][j] := V1[j][i]

This also works if all the inner vectors have the same length:

V2 = [[x[i] for x in V1] for i in eachindex(V1[1])]


V2 = collect(eachrow(reduce(hcat, V1)))

using SplitApplyCombine

V2 = invert(V1)

This invert function is pretty general:

Take a nested container a and return a container where the nesting is reversed, such that invert(a)[i][j] === a[j][i].

Currently implemented for combinations of AbstractArray, Tuple and NamedTuple. It is planned to add AbstractDict in the future.
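To illustrate what "reversing the nesting" means beyond vectors of vectors, here is a hypothetical base-Julia sketch (not SplitApplyCombine's actual implementation) for the Vector-of-Tuples case, where the result is a Tuple of Vectors; the name reverse_nesting is made up for this example:

```julia
# Sketch only: reverse the nesting of a Vector of (same-length) Tuples,
# so that reverse_nesting(a)[i][j] == a[j][i], as invert promises.
reverse_nesting(a::Vector{<:Tuple}) = ntuple(i -> [t[i] for t in a], length(first(a)))

pairs_vec = [(1, 2.0), (3, 4.0), (5, 6.0)]
nested = reverse_nesting(pairs_vec)
# nested == ([1, 3, 5], [2.0, 4.0, 6.0])
```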


Thank you all. This is such a great community!
So much to choose from.
I’ll definitely look more into TensorCast.jl and SplitApplyCombine.jl.

Here’s a quick comparison, where TensorCast in particular is really impressive:

using TensorCast, SplitApplyCombine

function transpose_tensor(V1)
    @cast V2[i][j] := V1[j][i]
end

function transpose_comprehension(V1)
    [[x[i] for x in V1] for i in eachindex(V1[1])]
end

function transpose_funct(V1)
    collect(eachrow(reduce(hcat, V1)))
end

function transpose_invert(V1)
    invert(V1)
end

using BenchmarkTools

julia> @btime transpose_tensor(data) setup=(data=[rand(5) for x in 1:500]);
  545.250 ns (4 allocations: 344 bytes)

julia> @btime transpose_comprehension(data) setup=(data=[rand(5) for x in 1:500]);
  6.287 μs (6 allocations: 20.44 KiB)

julia> @btime transpose_funct(data) setup=(data=[rand(5) for x in 1:500]);
  5.119 μs (3 allocations: 19.92 KiB)

julia> @btime transpose_invert(data) setup=(data=[rand(5) for x in 1:500]);
  5.318 μs (6 allocations: 20.44 KiB)

Please note that the := syntax in TensorCast returns views, hence the greater speed observed. If |= were used to force a copy, the time would be comparable to the other solutions. @mcabbott can advise.
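The same view-versus-copy trade-off shows up in base Julia with the eachrow solution above: the elements of collect(eachrow(M)) are SubArray views into the underlying matrix, while explicitly copied rows are independent vectors. A small sketch:

```julia
# Elements of collect(eachrow(M)) are SubArray views into M,
# so mutating M is visible through them; explicit copies are not affected.
V1 = [rand(5) for _ in 1:4]
M = reduce(hcat, V1)                       # 5×4 Matrix
rows_view = collect(eachrow(M))            # views into M
rows_copy = [copy(r) for r in eachrow(M)]  # independent Vectors
M[1, 1] = -1.0
@assert rows_view[1][1] == -1.0            # the view sees the mutation
@assert rows_copy[1][1] != -1.0            # the copy does not (rand is in [0, 1))
```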


You can also use ArraysOfArrays.jl. It gives you a compact memory representation of nested arrays that you can operate on very efficiently:

julia> using ArraysOfArrays, BenchmarkTools

julia> V1 = [rand(3) for x in 1:10];

julia> V1aoa = @btime ArrayOfSimilarArrays(V1);
  257.127 ns (2 allocations: 320 bytes)

julia> V2aoa_view = @btime nestedview(flatview(V1aoa)');
  38.514 ns (2 allocations: 32 bytes)

julia> V2aoa_copy = @btime nestedview(copy(flatview(V1aoa)'));
  103.347 ns (3 allocations: 336 bytes)

(Disclaimer: it’s my package 🙂)
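For readers without the package, the flat-storage idea can be sketched in base Julia: keep one contiguous matrix and expose rows or columns as views. This is only an illustration of the concept, not how ArraysOfArrays is actually implemented:

```julia
# Conceptual sketch: one contiguous matrix, nested access via views.
V1 = [rand(3) for _ in 1:10]
flat = reduce(hcat, V1)                  # 3×10 Matrix; columns are the inner vectors
cols = [view(flat, :, j) for j in 1:10]  # same nesting as V1
rows = [view(flat, i, :) for i in 1:3]   # "transposed" nesting, no data copied
@assert rows[2][7] == V1[7][2]
```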