 # Sparse Cholesky of Gram Matrix (SuiteSparse?)

I’m curious as well. I discarded this “expanded system” solution some time ago since the matrices are quite large, but I can definitely give it a try to see how quickly the problem is solved!

I actually haven’t tried HSL at all for this (I actually forgot it existed…), so I will do that at some point in the near future. Thanks for the heads up!

You’re right. And if the matrix is very non-square, then forming AA^T first is much faster:

```julia
using LinearAlgebra, SparseArrays, BenchmarkTools

# `cholgram` (defined earlier in the thread) factorizes A*A' via CHOLMOD
# without explicitly forming the product; `cholgram2` forms A*A' first.
A = sprand(1000, 10^6, 1e-6);
A += SparseMatrixCSC(I, size(A)...)
cholgram2(A) = cholesky(A*A')
@btime cholgram($A);
@btime cholgram2($A);
```

gives

```
21.030 ms (38 allocations: 23.15 MiB)
1.880 ms (47 allocations: 339.10 KiB)
```

whereas for a square matrix

```julia
A = sprand(10^6, 10^6, 1e-9);
A += SparseMatrixCSC(I, size(A)...)
@btime cholgram($A);
@btime cholgram2($A);
```

gives

```
207.157 ms (38 allocations: 213.74 MiB)
200.582 ms (52 allocations: 240.73 MiB)
```

I wonder what the point of this CHOLMOD feature is if it isn't faster. Maybe it is more accurate when A is badly conditioned, since it avoids explicitly forming the product (which squares the condition number, I guess)?
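As a sketch of the accuracy point (this example is not from the thread, it's the classic Läuchli construction): explicitly forming the Gram matrix can destroy positive definiteness in floating point, even though the original matrix has full row rank:

```julia
using LinearAlgebra

# Läuchli-style wide matrix with full row rank; cond(A) ≈ 1.4e8.
ϵ = 1e-8
A = [1.0 ϵ 0.0;
     1.0 0.0 ϵ]

# Mathematically A*A' = [1+ϵ² 1; 1 1+ϵ²] is SPD, but 1 + ϵ² rounds
# to 1.0 in Float64, so the computed Gram matrix is exactly singular:
G = A * A'
G == ones(2, 2)   # true: the information in ϵ is lost
isposdef(G)       # false: cholesky on the formed product fails
```

A factorization that works from A directly, without forming A*A', sidesteps this rounding loss.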

Is there no need for an interpolation mark (`$`) with `@btime`?

Interpolating global arguments to functions is always a good idea with `@btime`, so that there is no dynamic-dispatch overhead.
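A minimal illustration of the difference (the exact timings depend on the machine, so none are shown here):

```julia
using BenchmarkTools

x = rand(1000)   # non-constant global: accesses are type-unstable

@btime sum(x)    # also times the global lookup and dynamic dispatch
@btime sum($x)   # $x interpolates the value, so only sum() is timed
```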

That is why I wonder about the impact of the missing `$` marks in @guille's measurements.

These matrices are big enough that the overhead of dynamic dispatch is probably negligible, so `$` interpolation is just a good habit.
