LDLt factorization for full matrices

I’m not on a computer where I can try Matlab, although maybe I can check it out later. Looking it over, it’s not clear to me where on that doc page it says that it will produce 2\times 2 blocks if the function is applied to a sparse indefinite matrix. Does Matlab do that when you try it? I thought Matlab used CHOLMOD, whose documentation pretty clearly says that A needs to have well-conditioned leading principal submatrices. That is a pretty strong hint that it isn’t pivoting for stability.
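
For what it’s worth, Julia’s sparse ldlt is the same CHOLMOD routine, so the same caveat applies; it is fine when the leading principal submatrices are well conditioned. A minimal sketch (the diagonal shift is only there to make the example safely well conditioned):

using LinearAlgebra, SparseArrays

n = 100
A = sprandn(n, n, 0.02)
A = A + A' + 20I # symmetric and (with overwhelming probability) strongly diagonally dominant
F = ldlt(A) # CHOLMOD's simplicial LDL': fill-reducing permutation only, no pivoting for stability
b = ones(n)
x = F \ b # solve A x = b using the factorization
@show norm(A * x - b)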

Update: I did try it in Matlab. It looks like they are using the MA57 algorithm from HSL. There appears to be an interface to HSL at HSL.jl. If it works as expected, it looks like you can do this and get back the separate factors to modify and enforce positive definiteness.
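
For dense symmetric indefinite matrices, by the way, Julia already has a pivoted block LDL^T built in: bunchkaufman wraps LAPACK’s sytrf, and its D is block diagonal with 1\times 1 and 2\times 2 blocks. A quick sketch:

using LinearAlgebra

A = randn(6, 6)
A = A + A' # dense symmetric, generically indefinite
F = bunchkaufman(Symmetric(A)) # pivoted block LDL' (LAPACK's sytrf)
@show F.D # block diagonal (tridiagonal storage) with 1x1 and 2x2 blocks
@show opnorm(F.U * F.D * F.U' - A[F.p, F.p]) # check the backward error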


Thank you very much @mstewart and others, you have helped me a lot.

What about a symmetric matrix with nonnegative eigenvalues (positive semidefinite / SPSD)?
Will that case have a stable \boldsymbol{L} \boldsymbol{D} \boldsymbol{L}^{T} decomposition?

Assuming you mean an LDL^T with pivoting, it does exist for symmetric positive semidefinite matrices. Somewhat better, there is a pivoted Cholesky that can be computed with

C = cholesky(A, RowMaximum())

There are keyword arguments for checking the decomposition and providing a tolerance for rank determination.
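
For example (the tolerance below is arbitrary, just to illustrate the keywords):

using LinearAlgebra

A = randn(8, 3)
A = A * A' # SPSD with rank 3
F = cholesky(Hermitian(A), RowMaximum(); tol = 1e-12, check = false)
@show F.rank # numerical rank determined by the pivoting
@show issuccess(F) # false, since A is rank deficient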


Any chance you could elaborate on that?
What if the LDL decomposition is specifically needed? How can one achieve it?

You just factor out the diagonal from L on both sides, and L becomes unit lower triangular. Here it is:

using LinearAlgebra

n = 10
r = 5
A = randn(n,r)
A = A * A'
tol = 1e-15

F = cholesky(Hermitian(A), RowMaximum(); tol = tol, check = false)
r = F.rank # numerical rank determined by the pivoting
L = F.L # lower triangular Cholesky factor
p = F.p # permutation of the rows/columns of A
d = diag(F.L) # diagonal of L
L[:, 1:r] = L[:, 1:r] / Diagonal(d[1:r]) # scale to make L unit lower triangular
L = UnitLowerTriangular(L)
d = vcat(d[1:r], zeros(n - r)) # last n-r diagonal elements should be zero (A has rank r)
D = Diagonal(d .^ 2) # the diagonal appears in both L and L', so its entries are squared
@show opnorm(A[p, p] - L * D * L') # check the backward error.

In most applications you could probably work with the Cholesky factor directly, so I’m not sure there’s much advantage in pulling out D from the triangular factors.

Just to align my thoughts, I’ll state the obvious.

Assume \boldsymbol{A} = \boldsymbol{C}^{T} \boldsymbol{C} for some matrix \boldsymbol{C}.
Mathematically, \boldsymbol{A} is guaranteed to be SPSD. Yet numerical issues might cause it to have some small negative eigenvalues.
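
To make that concrete, a small made-up illustration:

using LinearAlgebra

C = randn(3, 50)
A = C' * C # mathematically SPSD with rank 3, so 47 zero eigenvalues
@show minimum(eigvals(Symmetric(A))) # often comes out as a tiny negative number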

My idea is that using the LDL decomposition I will be able to clip the small negative values (I know the \boldsymbol{D} in LDL is not the \boldsymbol{D} in the eigendecomposition; I just borrowed the concept).
So, given what’s available in Julia, is there a way to get the LDL of an SPSD matrix?

This is handled by the code I posted. (Note I made an edit to the scaling of L; I had ignored that you would be dividing by a small or even zero element at L_{rr}.) You can stop pivoted Cholesky at the first pivot that is sufficiently small or zero. The built-in pivoted Cholesky does this for you.

It is perhaps worth noting that the block L[r+1:n, r+1:n] is essentially arbitrary. I returned what was stored in the Cholesky factor at the end of the factorization, but that isn’t significant, and you could just as easily make that block equal to the identity, as in the sketch below.
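
For instance, continuing with the variables from the snippet above:

L2 = Matrix(L) # densify the unit lower triangular factor
L2[r+1:n, r+1:n] = I(n - r) # overwrite the arbitrary trailing block with the identity
@show opnorm(A[p, p] - L2 * D * L2') # same backward error; D is zero in rows/columns r+1:n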