Slow Cholesky factorization for sparse matrices?

I am trying to compare Julia's Cholesky factorization for sparse matrices against MATLAB's, and Julia comes out almost twice as slow.

Julia:

using SparseArrays, BenchmarkTools, LinearAlgebra
n = Int(5e3)
A = rand(n,n)
A = sparse(A*A')
@benchmark cholesky($A)

  memory estimate:  1.96 GiB
  allocs estimate:  42
  --------------
  minimum time:     4.458 s (0.59% GC)
  median time:      4.494 s (0.58% GC)
  mean time:        4.494 s (0.58% GC)
  maximum time:     4.530 s (0.57% GC)
  --------------
  samples:          2
  evals/sample:     1

And MATLAB:

n = 5e3;
A = rand(n,n);
A = sparse(A*A');
t = zeros(20,1);
for it = 1:20
    tic; chol(A); t(it) = toc;
end

mean(t) = 2.6595
min(t)  = 2.5605

Am I doing or measuring something wrong?

From the look of your code, A is not sparse.

julia> n=1000; A=rand(n,n); typeof(A)
Array{Float64,2}
julia> B=sprand(Float64, 1000, 1000, .01);
julia> typeof(B)
SparseMatrixCSC{Float64,Int64}

Try it with sprand. sparse(A) does not turn a fully populated dense matrix into an effectively sparse one; it just stores every entry in the sparse data structure, which may well make things worse. MATLAB would be no better here, but it may be internally smart enough to refuse to convert A to the sparse data structure and simply use the dense factorization.
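You can check this directly: the fraction of explicitly stored entries tells you whether a matrix is actually sparse. A quick sanity check along these lines (smaller n, just for illustration):

using SparseArrays

A = sparse(rand(1000, 1000) * rand(1000, 1000)')  # "sparse" wrapper around a fully dense product
nnz(A) / length(A)                                # fraction of stored entries: ≈ 1.0 here

B = sprand(1000, 1000, 0.01)                      # a genuinely sparse matrix, ~1% nonzeros
nnz(B) / length(B)                                # ≈ 0.01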

The code above is just a toy example; I'm actually trying to factorize a 259584×259584 symmetric matrix with 1814016 nonzeros, and Julia is considerably slower than MATLAB's chol().

Could you construct random sparse matrices with sprand in both MATLAB and Julia with a similar percentage of nonzeros and report the timings? The sparse solvers in MATLAB and Julia are exactly the same code, namely

https://people.engr.tamu.edu/davis/suitesparse.html

I notice you did not put “using SuiteSparse” in your Julia file. Try that.

So it’s hard to believe that any difference in timings is caused by the solver. Something else is going on.

julia> let
           N = 10000
           A = sprand(N, N, 0.001)
           A = (A + A')/2
           d = abs.(sum(A, dims = 2)) .+ 1;
           A = A + sparse(collect(1:N), collect(1:N), vec(d), N, N);
           b = rand(N)
           @time x = A \ b
           true
       end;
  2.997495 seconds (52 allocations: 623.688 MiB, 0.45% gc time)  
>> N = 10000;
A = sprand(N, N, 0.001);
A = (A+A')/2;
d = abs(sum(A)) + 1;
A = A + sparse(1:N, 1:N, d, N, N);
b = rand(N,1);
tic; x = A \ b; toc
Elapsed time is 3.059487 seconds.
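As an aside, A \ b on a sparse symmetric positive definite system factorizes and then solves in a single call, so the timings above include both steps. If you want to isolate the factorization cost, you can split the two; a minimal sketch using the same construction as above:

using SparseArrays, LinearAlgebra

N = 10000
A = sprand(N, N, 0.001)
A = (A + A')/2
A = A + sparse(1:N, 1:N, vec(abs.(sum(A, dims = 2)) .+ 1), N, N)  # diagonally dominant, hence positive definite
b = rand(N)

@time F = cholesky(A)  # the CHOLMOD factorization; this is where nearly all the time goes
@time x = F \ b        # triangular solves against the cached factor are comparatively cheap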
n = Int(2e3)
A = sprand(n,n,0.02)
A = (A*A')

@benchmark factorize($A)

  memory estimate:  173.12 MiB
  allocs estimate:  39
  --------------
  minimum time:     207.259 ms (0.00% GC)
  median time:      234.729 ms (0.00% GC)
  mean time:        246.013 ms (0.30% GC)
  maximum time:     363.838 ms (0.00% GC)
  --------------
  samples:          21
  evals/sample:     1

@benchmark cholesky($A)

  memory estimate:  189.89 MiB
  allocs estimate:  42
  --------------
  minimum time:     252.155 ms (0.00% GC)
  median time:      283.299 ms (0.00% GC)
  mean time:        390.594 ms (0.25% GC)
  maximum time:     909.470 ms (0.21% GC)
  --------------
  samples:          13
  evals/sample:     1

n = 2e3;
A = sprand(n,n,0.02);
A = A*A';
t = zeros(10,1);
for it = 1:10
    tic;
    chol(A);
    t(it)=toc;
end
mean(t) = 0.2894
min(t)  = 0.1849

cholesky() still seems a bit slower in Julia, but factorize() is more or less as fast as MATLAB's chol(). Both of them call CHOLMOD and produce the same result, yet factorize allocates less memory and is a bit faster…
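If you want to convince yourself that both paths end up in CHOLMOD, you can compare the factorization objects directly. A small sketch (the + I diagonal shift is my addition, just to guarantee positive definiteness):

using SparseArrays, LinearAlgebra

n = 2000
A = sprand(n, n, 0.02)
A = A * A' + I            # safely symmetric positive definite

F1 = factorize(A)         # detects the Hermitian structure and hands off to CHOLMOD
F2 = cholesky(A)          # calls CHOLMOD directly
typeof(F1) == typeof(F2)  # true: both are CHOLMOD Factor objects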

It might make for a better apples-to-apples comparison if you generated a random sparse matrix and saved it to a file, then used that same matrix in both Julia and MATLAB for the timing.
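One possible way to do that from the Julia side, assuming the MatrixMarket.jl package is available (the filename here is arbitrary; MATLAB can read the same file with the NIST mmread.m helper script, which is a separate download, not a built-in):

using SparseArrays, LinearAlgebra, MatrixMarket

n = 10000
A = sprand(n, n, 0.001)
A = A * A' + I                          # symmetric positive definite test matrix
MatrixMarket.mmwrite("testmat.mtx", A)  # plain-text Matrix Market format both languages can load

# reading it back in Julia; in MATLAB: A = mmread('testmat.mtx');
A2 = MatrixMarket.mmread("testmat.mtx")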
