I have been trying my luck with the conjugate gradient method for solving a sparse symmetric positive definite system of equations. I found IterativeSolvers.jl, Krylov.jl and KrylovMethods.jl, of which the first seems to be the most active and best documented. IterativeSolvers.jl also supports supplying a preconditioner, assuming it respects the following interface:
These preconditioners should support the operations
- `A_ldiv_B!(y, P, x)` computes `P \ x` in-place of `y`;
- `A_ldiv_B!(P, x)` computes `P \ x` in-place of `x`;
- and `P \ x`
The docstring of `A_ldiv_B!` highlights that `P` must be a factorization object, not a matrix. All of this is great.
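To make the interface concrete, here is a toy Jacobi (diagonal) preconditioner that implements all three operations. `DiagonalPreconditioner` is a name I made up for illustration, and the syntax is pre-1.0 Julia to match the rest of this thread:

```julia
import Base.LinAlg: A_ldiv_B!, \

# Toy Jacobi preconditioner: P = diagm(diag(A)), so P \ x = x ./ diag(A).
struct DiagonalPreconditioner{T}
    dinv::Vector{T}   # precomputed inverse of the diagonal of A
end
DiagonalPreconditioner(A::AbstractMatrix) = DiagonalPreconditioner(1 ./ diag(A))

# P \ x in-place of y
A_ldiv_B!(y, P::DiagonalPreconditioner, x) = (y .= P.dinv .* x; y)
# P \ x in-place of x
A_ldiv_B!(P::DiagonalPreconditioner, x) = (x .*= P.dinv; x)
# out-of-place P \ x
(\)(P::DiagonalPreconditioner, x) = P.dinv .* x
```

Any type with these three methods should plug into `cg` as a preconditioner.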
Since incomplete Cholesky is (AFAIK) the most popular general-purpose preconditioner, I thought of trying an incomplete Cholesky factor. A simple search got me to the `cldlt` function of IncompleteSelectedInversion.jl (recommended in the IterativeSolvers docs) and the `lldl` function of LLDL.jl. These implement an incomplete LDL’ factorization instead, but I am not complaining! The former package returns a single sparse lower triangular matrix as a `SparseMatrixCSC{Float64,Int64}`, with the strict lower triangle hosting the strict lower triangle of L and the diagonal hosting the diagonal of D. The latter returns a tuple of:
- a sparse strictly lower triangular matrix as a `SparseMatrixCSC{Float64,Int64}` hosting the strict lower triangle of L,
- a vector of the diagonal elements of D, and
- the diagonal shift value used (not important for me).
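For completeness, applying such an LDL’ output as a preconditioner boils down to a forward solve, a diagonal scaling, and a back solve. A sketch under the tuple convention just described (`ldl_solve` is my own helper name, and a real implementation would avoid the `Lstrict + I` allocation):

```julia
# Solve (L * D * L') x = b, where Lstrict holds the strict lower
# triangle of L (unit diagonal implied) and d holds the diagonal of D.
function ldl_solve(Lstrict::SparseMatrixCSC, d::AbstractVector, b::AbstractVector)
    L = LowerTriangular(Lstrict + I)  # reattach the unit diagonal
    y = L \ b       # forward substitution: L * y = b
    y ./= d         # diagonal solve:       D * z = y
    return L' \ y   # back substitution:    L' * x = z
end
```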
Unfortunately, none of these outputs can just be plugged into the `A_ldiv_B!` function, because they are matrices rather than factorization objects, so I cannot use them as preconditioners in the `cg` function of IterativeSolvers.
Now, I can apply the preconditioner myself in some linear operator, pass that to `cg` instead, and post-process the output, but I think such an important procedure should be more streamlined than this. So I guess my question is: how do I create my own `Factorization` subtype instance from the information provided by the incomplete LDL’ factorization functions? Naively calling the constructor `Base.SparseArrays.CHOLMOD.Factor{Float64}` doesn’t just work, and reading a bit of cholmod.jl didn’t really help.
Edit: I could obviously subtype `Factorization` and implement the above functions myself in a PR or something. But I am asking whether such a thing already exists.
```julia
module IncompleteCholeskyPreconditioner

using IncompleteSelectedInversion
import Base.LinAlg: A_ldiv_B!, \

struct CholeskyPreconditioner{T, S <: AbstractSparseMatrix{T}}
    L::LowerTriangular{T, S}
end

function CholeskyPreconditioner(A, c)
    # cldlt stores the strict lower triangle of L below the diagonal and
    # D on the diagonal; convert in place to the Cholesky-like factor
    # L * sqrt(D) so that a triangular solve applies the preconditioner.
    L = cldlt(A, c)
    @inbounds for j in 1:size(L, 2)
        d = sqrt(L[j, j])
        L[j, j] = d
        # Skip the first stored entry of the column (the diagonal).
        for i in Base.Iterators.drop(nzrange(L, j), 1)
            L.nzval[i] *= d
        end
    end
    return CholeskyPreconditioner(LowerTriangular(L))
end

function A_ldiv_B!(y::AbstractVector{T}, C::CholeskyPreconditioner{T, S}, b::AbstractVector{T}) where {T, S}
    y .= C.L \ b
    return y
end

(\)(C::CholeskyPreconditioner{T, S}, b::AbstractVector{T}) where {T, S <: AbstractSparseMatrix{T}} = C.L \ b

export CholeskyPreconditioner

end
```
If you think there is a better package or way to do this, a suggestion is more than welcome.
Yes, I will submit it as a PR once I am convinced there is no other, obviously more efficient way of doing it. In particular, I am concerned about the line `y .= C.L \ b`: the function `\` allocates an output vector internally, which is then copied into `y`. I am not sure whether there is a version that takes `y` as input and fills it with the result straight away.
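There is a way to avoid that allocation: copy `b` into `y` once and run the triangular solve in place. A sketch (assuming the in-place method `A_ldiv_B!(::LowerTriangular, ::AbstractVector)` dispatches to the sparse triangular solve here):

```julia
function A_ldiv_B!(y::AbstractVector{T}, C::CholeskyPreconditioner{T, S}, b::AbstractVector{T}) where {T, S}
    copy!(y, b)        # reuse y as the right-hand-side buffer
    A_ldiv_B!(C.L, y)  # in-place triangular solve, no temporary vector
    return y
end
```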
Works beautifully, thanks! I guess the PR is ready. Although I think it does the same amount of work as before: instead of copying the result, I am copying `b` into `y`, but with fewer allocations.
This is my question as well. Since it only assumes a symmetric matrix, I wonder whether also assuming a positive definite matrix could lead to further optimizations.
MATLAB’s `ichol` is exclusively drop-tolerance-based. LimitedLDLFactorizations is first and foremost a limited-memory factorization, i.e., it lets you predefine how much fill-in you are willing to tolerate. In addition to that, I added a simple drop tolerance long ago. Please open issues (or better yet, pull requests) for feature requests.
I am not sure this is accurate. Have a look at my IncompleteCholeskyDecompositionThreshold project for an implementation.
From what I understand, there are 3 types of incomplete factorization (at least these are 3 of the options):

1. Pattern Based - IC( l ): MATLAB’s `ichol()` supports this with l = 0, namely allowing non-zeros only where non-zero elements already exist in the input. This also guarantees an upper bound on the memory.
2. Number of Non Zero Elements - IC( p ): MATLAB doesn’t support this. It allows exact control of the memory allocation.
3. Threshold Based - ICT( t ): elements below a drop tolerance are discarded. MATLAB’s `ichol()` supports this through its drop-tolerance mode.
So what I see is that MATLAB has 2 out of the 3, and its pattern-based mode gives it a way to bound the amount of memory.
It also has a mode for Modified Cholesky, which compensates the diagonal for the components dropped by the inaccurate decomposition.
If I understand correctly, LimitedLDLFactorizations.jl is doing something like (2), hence the memory allocated is a multiple of the number of non-zeros of the input matrix. Am I right?
I can see there is also support for thresholding, so if one allocates enough memory then this covers the threshold-based variant as well.
So we’re missing the pattern-based variant and the Modified ICT.
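For concreteness, the pattern-based IC(0) variant from the list above can be sketched as toy dense code (my own illustration, not taken from any of the packages discussed; a real implementation would work on the sparse structure directly):

```julia
# Toy dense IC(0): a Cholesky factor restricted to the sparsity
# pattern of A (entries where A is zero stay zero).
function ic0(A::Matrix{Float64})
    n = size(A, 1)
    L = zeros(n, n)
    for j in 1:n
        L[j, j] = sqrt(A[j, j] - sum(L[j, 1:j-1] .^ 2))
        for i in j+1:n
            A[i, j] == 0 && continue  # keep the pattern of A
            L[i, j] = (A[i, j] - sum(L[i, 1:j-1] .* L[j, 1:j-1])) / L[j, j]
        end
    end
    return L
end
```

When A has a full pattern, this reduces to the exact Cholesky factorization.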
OK, so if it controls the number of non-zero elements, why can’t I pre-allocate the memory and hand it to the function?
What about the diagonal compensation? I saw something like that in the code. Could you clarify it?
See the `lldl` docstring. You control the amount of memory allocated for the factors via the integer `p`. `p = 0` means that no fill-in is allowed in the factors, i.e., at most `nnz(A)` elements are reserved for them. `p > 0` means that `nnz(A) + p * n` elements are reserved.
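Spelling that rule out (a throwaway helper of my own, just restating the docstring’s arithmetic, not part of the package):

```julia
# Memory reserved for the factors, following the docstring's rule:
# p = 0 → at most nnz(A) elements; p > 0 → nnz(A) + p * n elements.
reserved(A, p) = nnz(A) + p * size(A, 1)
```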
There’s no option to pass in your own memory, but feel free to open an issue (or better yet, a pull request) if that’s a crucial feature for you.