I need to convert an array of n×1 arrays to an n×n array. I have `Array{Array{Float64,2},1}`. Is this a reshape use case?
If `n` is relatively small, you can write `hcat(A...)`.
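For example, with a small illustrative `A` built the way the question describes (the `n = 4` here is just an assumption for the demo):

```julia
# Illustrative only: n and the random contents are assumptions, not from the question.
n = 4
A = [rand(n, 1) for _ in 1:n]   # Array{Array{Float64,2},1} of n-by-1 columns

B = hcat(A...)                   # splats the columns into one n-by-n Matrix{Float64}
size(B)                          # (4, 4)
```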
Though it seems like this is better and faster: `reduce(hcat, A)`.
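A rough way to see the difference is to benchmark both, e.g. with BenchmarkTools (the size here is arbitrary; exact numbers will vary by machine):

```julia
using BenchmarkTools

n = 1_000
A = [rand(n, 1) for _ in 1:n]

@btime hcat($A...)        # splats n arguments; cost grows with the number of columns
@btime reduce(hcat, $A)   # hits the specialized reduce(hcat, ...) method in Base
```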
Though `reduce` will work as expected, I think `foldl` gives the proper guarantees.
`foldl` seems significantly slower, though. What are the advantages/guarantees that `foldl` offers?
That’s interesting. I am getting a 9x slowdown with `foldl`. AFAIU, `foldl` is guaranteed to go through the elements from left to right, or perhaps more correctly from start to end, with the accumulated result always as the first argument of the function. On the other hand, from the `reduce` docs:

> The associativity of the reduction is implementation-dependent; if you need a particular associativity, e.g. left-to-right, you should write your own loop or consider using foldl or foldr. See documentation for reduce.
But the slowdown is surprising and perhaps buggy.
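The associativity guarantee only matters for operations where the grouping changes the answer; a tiny illustration (with `hcat` both groupings give the same matrix, which is what lets `reduce` regroup freely):

```julia
# foldl guarantees left-to-right grouping: ((1 - 2) - 3) - 4
foldl(-, [1, 2, 3, 4])     # always -8

# reduce leaves the grouping implementation-dependent, so it is only
# well-defined for associative operations such as + or hcat
reduce(+, [1, 2, 3, 4])    # 10 under any grouping
```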
I suspect `foldl`’s guaranteed associativity means that it has to allocate a new matrix for the result of `hcat` for every single column. That’s unnecessary and wasteful, but it’s required by the guarantee that `foldl` makes. Since `reduce` makes no such guarantee, it is free to take a faster path and avoid the extra copies.
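To make that concrete, here is roughly what `foldl(hcat, A)` is forced to do (a sketch, not the actual Base implementation). Each step discards the previous accumulator, so the total copying is on the order of n³ elements rather than the n² a single allocation needs:

```julia
# Sketch of foldl(hcat, A) under the left-to-right guarantee:
# every iteration allocates a fresh n-by-k matrix and re-copies all earlier columns.
function fold_hcat(A)
    acc = A[1]
    for k in 2:lastindex(A)
        acc = hcat(acc, A[k])   # new n-by-k allocation each step
    end
    return acc
end
```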
Edit: see Problem with cat() - #9 by nalimilan
Thanks for the pointer. But arguably, the same optimization can be applied to `foldl`: allocating the result once seems orthogonal to associativity, so I don’t see how the two are related.
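For instance, a hand-written version can fill strictly left to right and still allocate only once (a sketch; `hcat_columns` is just a made-up name, and it assumes every element of `A` is an n-by-1 `Matrix{Float64}`):

```julia
# Left-to-right fill with a single allocation: the one-allocation optimization
# does not conflict with processing the columns in order.
function hcat_columns(A::Vector{Matrix{Float64}})
    n = size(A[1], 1)
    out = Matrix{Float64}(undef, n, length(A))
    for (j, col) in enumerate(A)    # strictly left to right
        out[:, j] = vec(col)        # copy the j-th n-by-1 column into place
    end
    return out
end

# hcat_columns(A) == reduce(hcat, A)  # same result, one allocation, ordered traversal
```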