Thanks @mstewart for your detailed answer, and sorry for the late reply.
I need to modify the Hessian in a Newton optimization algorithm. I need the modified Hessian to be positive definite to guarantee that the search direction is a descent direction.
The first option was \overline{H}=V|\Lambda|V^T (with H=V\Lambda V^T the eigenvalue decomposition of the Hessian), but it wasn't performing well. So I thought about \widehat{H}=P L |D| L^T P^T, where D is strictly diagonal. (I think I have seen somewhere that \overline{H}=P^T\widehat{H}P, but I am not sure.) And then I saw that in Julia there is LDL^T support for sparse matrices, but not for full ones. That's why I opened this post.
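For concreteness, this is the kind of eigenvalue-based modification I mean (a minimal dense sketch; the stand-in matrix is just for illustration):

```julia
using LinearAlgebra

# Eigenvalue-based modification \overline{H} = V|Λ|V^T (dense case).
H = [4.0 1.0; 1.0 -2.0]             # stand-in indefinite Hessian
Λ, V = eigen(Symmetric(H))
Hbar = V * Diagonal(abs.(Λ)) * V'   # flip the signs of negative eigenvalues
@assert isposdef(Symmetric(Hbar))   # PD as long as no eigenvalue is exactly 0
```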
Nevertheless, computing \widehat{H} is not the best option to modify the Hessian for Newton optimization algorithms. After reading the post pointed to by @zdenek_hurak, "Shouldn't `bunchkaufman` be actually named `ldlt`?", and Section 3.4 in [Nocedal and Wright, Numerical Optimization, 2006], I realize that the LDL^T factorization does not seem to be stable, and that the Bunch–Kaufman factorization H=P L B L^T P^T is the appropriate one. Since B is block-diagonal with blocks of size \leq 2, it is easy to compute its eigenvalue decomposition and modify it so that the Hessian is positive definite.
So now my questions are:
-Is it true that the H=P L D L^T P^T factorization is unstable for full matrices? If not, why use the Bunch–Kaufman factorization instead? I also think there should be a function to compute the LDL^T factorization of full matrices if the latter is stable. (See the small example at the end of this post for the kind of instability I mean.)
-In the case of sparse matrices, what does `ldlt` compute? Is it the strict LDL^T factorization or the Bunch–Kaufman one? (The help just says it uses CHOLMOD.) If the former, isn't it unstable? If the latter, I'd find it confusing to give the same factorization a different name depending on whether the matrix is full or sparse.
-If I execute `F = ldlt(A)`, with A some sparse symmetric matrix, is there a way to display D and/or L? Because `F.D`, `Matrix(F.D)`, `sparse(F.D)`, etc., do not work (a minimal reproduction is at the end of this post).
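Regarding the first question, this tiny by-hand elimination shows the kind of element growth I am worried about with an unpivoted LDL^T on a symmetric indefinite matrix (ϵ is just for illustration):

```julia
# A is well scaled, yet the unpivoted LDL^T factors blow up as ϵ → 0.
ϵ = 1e-10
A = [ϵ 1.0; 1.0 0.0]
d1  = A[1, 1]                 # = ϵ
l21 = A[2, 1] / d1            # = 1/ϵ   (huge)
d2  = A[2, 2] - l21^2 * d1    # = -1/ϵ  (huge)
@show d1 l21 d2
```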
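And regarding the last question, here is a minimal reproduction of what I tried (A is just a stand-in sparse symmetric matrix):

```julia
using LinearAlgebra, SparseArrays

B = sprandn(10, 10, 0.3)
A = B + B' + 20I      # stand-in sparse symmetric matrix
F = ldlt(A)           # CHOLMOD factorization
F.D                   # these are the calls that fail for me,
Matrix(F.D)           # so I cannot inspect D (or L)
sparse(F.D)
```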