For a significantly faster approach, you can use `reinterpret` and `reshape` to do the conversion without even having to copy the data:
julia> cartesian_array = [(1,2) (2,2) (3,2)]
1×3 Array{Tuple{Int64,Int64},2}:
 (1, 2)  (2, 2)  (3, 2)

julia> reshape(reinterpret(Int, cartesian_array), (2, :))
2×3 reshape(reinterpret(Int64, ::Array{Tuple{Int64,Int64},2}), 2, 3) with eltype Int64:
 1  2  3
 2  2  2
`reinterpret` creates a lazy view into an existing array, interpreting it as a different data type. In this case, we are interpreting a list of tuples as a flat vector of integers. `reshape` then creates another view of that same data with a different shape, in this case a 2×N matrix.
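Since both wrappers are views rather than copies, writes go through to the underlying data. A minimal sketch demonstrating this (the variable names here are illustrative):

```julia
cartesian_array = [(1, 2), (2, 2), (3, 2)]  # a Vector of tuples
mat = reshape(reinterpret(Int, cartesian_array), (2, :))

# Mutating the matrix view modifies the original array of tuples,
# because no data was copied:
mat[1, 1] = 99
@assert cartesian_array[1] == (99, 2)
```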
Note that this produces a 2×N matrix instead of an N×2 matrix. That's almost always what you want in Julia, since Julia is column-major (the opposite of C, C++, and NumPy, which are row-major). If it's not what you want, you can call `transpose` to get the N×2 version instead (`transpose` is lazy too, so this doesn't copy any of the data).
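For instance, a small sketch showing that the transposed result has the N×2 shape and is just a lazy wrapper type:

```julia
using LinearAlgebra  # for the Transpose wrapper type

cartesian_array = [(1, 2), (2, 2), (3, 2)]
m = transpose(reshape(reinterpret(Int, cartesian_array), (2, :)))

@assert size(m) == (3, 2)        # N×2 instead of 2×N
@assert m isa Transpose          # a lazy wrapper; no data was copied
@assert m[1, 1] == 1 && m[1, 2] == 2
```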
Comparing performance:
julia> N = 100
100
julia> cartesian_array = [(rand(Int), rand(Int)) for _ in 1:N];
julia> function f1(c)
           [x[i] for x in [c...], i in 1:2]
       end
f1 (generic function with 1 method)
julia> function f2(c)
           transpose(reshape(reinterpret(Int, c), (2, :)))
       end
f2 (generic function with 1 method)
julia> using BenchmarkTools
julia> @btime f1($cartesian_array);
2.450 μs (107 allocations: 6.81 KiB)
julia> @btime f2($cartesian_array);
10.288 ns (0 allocations: 0 bytes)
Using `reshape` and `reinterpret` is about 240 times faster than the list comprehension approach for a 100-element array.
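One caveat: the fast version returns lazy `ReinterpretArray`/`Transpose` wrappers rather than a plain `Matrix`. If downstream code requires a concrete `Array` (or you want to pay the reinterpretation cost only once), you can materialize it with `collect`, at the price of a single copy. A sketch under those assumptions:

```julia
cartesian_array = [(rand(Int), rand(Int)) for _ in 1:100]
lazy = transpose(reshape(reinterpret(Int, cartesian_array), (2, :)))

# collect copies the lazy view into an ordinary dense matrix:
dense = collect(lazy)
@assert dense isa Matrix{Int}
@assert size(dense) == (100, 2)
@assert dense[1, 1] == cartesian_array[1][1]
```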