# Surprisingly inaccurate sparse Cholesky factorization

I am surprised by how inaccurate `cholesky()` can be for sparse matrices; so much so that it makes me wonder whether I am using it correctly.

Define a finite difference matrix A:

```julia
using LinearAlgebra, SparseArrays

L = 2;
dvec = -2 * ones(2^L); restvec = ones(2^L - 1);
A = sparse(-SymTridiagonal(dvec, restvec) + diagm(ones(2^L)));
```

This gives me

```julia
A
4×4 SparseMatrixCSC{Float64, Int64} with 10 stored entries:
  3.0  -1.0    ⋅     ⋅
 -1.0   3.0  -1.0    ⋅
   ⋅   -1.0   3.0  -1.0
   ⋅     ⋅   -1.0   3.0
```

Now I obtain the Cholesky factor

```julia
C = cholesky(A);
cholL_sparse = sparse(C.L)
4×4 SparseMatrixCSC{Float64, Int64} with 7 stored entries:
  1.73205    ⋅          ⋅         ⋅
 -0.57735   1.63299     ⋅         ⋅
   ⋅          ⋅        1.73205    ⋅
   ⋅       -0.612372  -0.57735   1.51383
```

which differs wildly from the dense factor:

```julia
cholL_dense = cholesky(Matrix(A)).L
4×4 LowerTriangular{Float64, Matrix{Float64}}:
  1.73205    ⋅          ⋅          ⋅
 -0.57735   1.63299     ⋅          ⋅
  0.0      -0.612372   1.62019     ⋅
  0.0       0.0       -0.617213   1.61835
```

Indeed

```julia
norm(cholL_sparse * cholL_sparse' - A)
2.0
```

whereas

```julia
norm(cholL_dense * cholL_dense' - A)
4.710277376051325e-16
```

That said, we still get

```julia
cholL_sparse * cholL_sparse' ≈ A[C.p, C.p]
true
```

cf. the Linear Algebra section of the Julia manual (Linear Algebra · The Julia Language), so something seems to be correct, but I find this misleading at best…
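Indeed, undoing the permutation recovers A up to roundoff (a quick sanity check; `invperm` is from Base):

```julia
using LinearAlgebra, SparseArrays

# Same 4×4 matrix as above, built directly as a tridiagonal.
A = sparse(SymTridiagonal(3.0 * ones(4), -ones(3)))
C = cholesky(A)
L = sparse(C.L)
ip = invperm(C.p)        # inverse of the fill-reducing permutation
(L * L')[ip, ip] ≈ A     # true
```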

It’s not wrong, it’s a different factorization: the sparse Cholesky factorization is pivoted (i.e. it factorizes a permuted A), whereas the dense Cholesky factorization is not. The reason for this is that sparse Cholesky uses pivoting to reduce fill-in (i.e. to keep the Cholesky factor as sparse as possible), while in the dense case this is irrelevant.
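If you want the unpermuted factor from the sparse path, the CHOLMOD-backed `cholesky` accepts a user-supplied permutation via its `perm` keyword; passing the identity permutation disables the fill-reducing reordering (at the cost of possible fill-in in larger problems). A sketch:

```julia
using LinearAlgebra, SparseArrays

A = sparse(SymTridiagonal(3.0 * ones(4), -ones(3)))
C = cholesky(A; perm = collect(1:size(A, 1)))  # identity permutation
L = sparse(C.L)
norm(L * L' - A)  # now small, matching the dense factorization
```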


Thank you. I just read through