I have a function that I wish to minimize via a second-order Newton's method with line search. I have an explicit gradient and Hessian. Moreover, the Hessian is sparse, and I care about performance (e.g. if possible, re-use the symbolic factorization). How can I do this? Is there a way to get the negative eigenvalues out of a failed sparse Cholesky factorization? Or a package that implements a sparse modified Cholesky factorization algorithm, one that ensures the result is positive definite?
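For context, here is roughly the loop I have in mind (a minimal sketch; `f`, `grad`, and `hess` are placeholders for my own functions, and the `error` branch marks exactly the gap I'm asking about):

```julia
using SparseArrays, LinearAlgebra

# Sketch of a Newton iteration with backtracking line search.
# `hess(x)` is assumed to return a sparse symmetric Hessian.
function newton_linesearch(f, grad, hess, x; tol=1e-8, maxiter=100)
    for _ in 1:maxiter
        g = grad(x)
        norm(g) < tol && break
        H = hess(x)
        F = cholesky(Symmetric(H); check=false)   # plain sparse Cholesky (CHOLMOD)
        issuccess(F) || error("H is indefinite -- this is the case I need to handle")
        d = -(F \ g)                              # Newton direction
        # Backtracking (Armijo) line search
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * dot(g, d)
            t /= 2
        end
        x += t * d
    end
    return x
end
```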
2 options that I’ve looked into so far:
- Add a scalar multiple of the identity: for this, I’d need to know the eigenvalues. SparseArrays.jl does not have a “diagonalize sparse symmetric matrix [with pivoting]” routine. I could try a sparse Cholesky factorization of H + cI, increasing c each time it fails. I could even binary search for the minimal c that works, but all those failed factorizations will hurt performance. (See the sketch below this list.)
- Use PositiveFactorizations.jl: this is what Optim.jl does. However, I suspect this package is aimed at dense matrices, not sparse ones.
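Here is how I imagine the first option could work with CHOLMOD via SparseArrays: the `shift` keyword factors H + cI without forming it explicitly, and `cholesky!` reuses the symbolic factorization, so only the numeric factorization is repeated on each failed attempt. The function name and constants below are my own:

```julia
using SparseArrays, LinearAlgebra

# Sketch: find c ≥ 0 such that H + cI is positive definite, reusing the
# symbolic factorization across attempts. H is assumed to be a
# Symmetric-wrapped SparseMatrixCSC.
function shifted_cholesky(H; c0=1e-8, growth=10.0, maxtries=30)
    F = cholesky(H; check=false)               # symbolic analysis + first numeric try
    issuccess(F) && return F, 0.0
    c = c0
    for _ in 1:maxtries
        cholesky!(F, H; shift=c, check=false)  # numeric refactorization of H + cI
        issuccess(F) && return F, c
        c *= growth
    end
    error("no tried shift made H + cI positive definite")
end
```

Once the factorization succeeds, F \ g gives the (modified) Newton direction, and F can keep being reused with `cholesky!` on later iterations, since the sparsity pattern of H doesn't change. But the failed numeric factorizations still cost something, which is why I'm asking about a modified Cholesky instead.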
Aside: the Apple Accelerate libSparse library does have LDLT factorizations. One could write a Julia wrapper like the one I was working on here, and get this feature into AppleAccelerate.jl…but that’s a different project.