I compute SVDs of N matrices, each roughly 100x100, and notice a ~4x per-iteration slowdown when I do this in a loop for N > 100.
function f_svd(n::Int64, a::Array{Float64,2})
    for i in 1:n
        u, s, v = svd(a)
    end
end
(In my real code I compute SVDs of different matrices and store away the results.)
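One allocation-reducing variation worth trying (a sketch, not a confirmed fix; `f_svd_inplace` is a hypothetical name): reuse a single scratch buffer and call `svd!`, which overwrites its argument, so each iteration avoids allocating a fresh copy of the input.

```julia
using LinearAlgebra

# Sketch: same loop as f_svd, but svd! works in place on a
# preallocated buffer instead of allocating a new copy each time.
function f_svd_inplace(n::Int64, a::Array{Float64,2})
    buf = similar(a)                           # preallocated scratch matrix
    s = Vector{Float64}(undef, minimum(size(a)))
    for i in 1:n
        copyto!(buf, a)                        # svd! destroys its input
        F = svd!(buf)
        s .= F.S                               # keep the singular values
    end
    return s
end
```

This cuts the per-iteration copy of `a`, though the LAPACK workspace inside `svd!` is still allocated on each call, so it may not recover the full factor of 4.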
using BenchmarkTools
using LinearAlgebra
a = rand(100,100)
@btime f_svd(1, $a);
@btime f_svd(10, $a);
@btime f_svd(100, $a);
@btime f_svd(1000, $a);
3.101 ms (16 allocations: 482.02 KiB)
64.479 ms (160 allocations: 4.71 MiB)
1.317 s (1600 allocations: 47.07 MiB)
13.964 s (16000 allocations: 470.72 MiB)
Note that the time for n = 10 is roughly 2 x 10 x (the time for n = 1), and the time for n = 100 is roughly 4 x 100 x (the time for n = 1). Beyond n = 100 the per-iteration slowdown stops growing.
Is this effect purely due to the processor having a hard time moving data in and out of memory, or is there a Julia trick I can use to speed up my code (by roughly a factor of 4)?
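One diagnostic I have been trying (an assumption, not a confirmed cause): the slowdown could come from multi-threaded BLAS contention or CPU frequency throttling rather than memory traffic. Pinning LAPACK to one thread and comparing against the default isolates the threading effect.

```julia
using LinearAlgebra

# Sketch: time the same SVD loop with one BLAS thread vs. all threads.
# If the single-threaded run is not much slower (or is faster), BLAS
# threading overhead is a likely contributor to the superlinear scaling.
a = rand(100, 100)
BLAS.set_num_threads(1)
t1 = @elapsed for i in 1:100; svd(a); end
BLAS.set_num_threads(Sys.CPU_THREADS)
tN = @elapsed for i in 1:100; svd(a); end
println("1 BLAS thread: $t1 s, $(Sys.CPU_THREADS) BLAS threads: $tN s")
```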