`reduce(vcat, itr)` and `reduce(hcat, itr)` have special implementations that are supposed to make them faster than `vcat(itr...)` and `hcat(itr...)`, at least for long iterators. But I am seeing the opposite.
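For context, a minimal sketch of the equivalence I am relying on (my own example, not from the docs): on a plain `Vector` of vectors, `reduce(vcat, xs)` should produce the same result as `vcat(xs...)`, and for the columns of a matrix both should reproduce `vec(A)`, since Julia stores matrices column-major:

```julia
A = rand(10, 4)

# Materialize the columns as a Vector{Vector{Float64}}.
cols = [A[:, j] for j in 1:size(A, 2)]

# Both forms concatenate the columns end to end; because Julia is
# column-major, that is exactly vec(A).
@assert reduce(vcat, cols) == vcat(cols...) == vec(A)
```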
It starts out ok:
```julia
julia> A = rand(10, 10);

julia> @btime vcat(eachcol($A)...);
  1.700 μs (34 allocations: 3.34 KiB)

julia> @btime reduce(vcat, eachcol($A));
  1.200 μs (34 allocations: 6.34 KiB)
```
But then:
```julia
julia> A = rand(10, 100);

julia> @btime vcat(eachcol($A)...);
  18.300 μs (314 allocations: 44.42 KiB)

julia> @btime reduce(vcat, eachcol($A));
  38.300 μs (394 allocations: 425.31 KiB)
```
Now it gets quite a bit worse:
```julia
julia> A = rand(10, 1000);

julia> @btime vcat(eachcol($A)...);
  181.700 μs (3512 allocations: 476.06 KiB)

julia> @btime reduce(vcat, eachcol($A));
  3.029 ms (4790 allocations: 38.44 MiB)
```
And now it explodes; the time complexity looks worse than quadratic:
```julia
julia> A = rand(10, 10000);

julia> @btime vcat(eachcol($A)...);
  1.771 ms (39516 allocations: 3.89 MiB)

julia> @btime reduce(vcat, eachcol($A));
  1.602 s (49790 allocations: 3.73 GiB)
```
The situation is similar for `hcat`.
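One hypothesis (mine, not verified): the fast path may only fire when the argument is a plain `AbstractVector` of arrays, and the object returned by `eachcol` may instead fall back to the generic pairwise `reduce`, which allocates a fresh array at every step. A hedged way to probe this is to `collect` the iterator into a `Vector` first and compare both results against each other and against `vec(A)`:

```julia
A = rand(10, 1000)

# Hypothesis: collecting eachcol into a plain Vector of column views
# may route reduce(vcat, ...) to the specialized method.
cols = collect(eachcol(A))

y_reduce = reduce(vcat, cols)
y_splat  = vcat(eachcol(A)...)

# Whatever the timings, the results must agree.
@assert y_reduce == y_splat == vec(A)
```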
Furthermore, why does the splatting version not slow down more? Splatting containers whose size is not known at compile time is supposed to be slow, but that isn't apparent here at all.
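To separate the cost of splatting itself from the cost of concatenation, one could splat the same iterator into a trivial varargs function (`count_args` is a throwaway name of my own, and `BenchmarkTools` is assumed to be installed, as in the benchmarks above):

```julia
using BenchmarkTools

# A throwaway varargs function: pays the splatting cost
# without doing any concatenation work.
count_args(xs...) = length(xs)

A = rand(10, 10000)
@btime count_args(eachcol($A)...)
```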