Dear all,
I would like to solve a linear(ised) system of equations using IterativeSolvers.jl; however, I am not sure how to define the preconditioner.
The preconditioner is a symmetric positive-definite sparse matrix, and its action is defined in 2 steps: (1) compute the Cholesky factorisation of the preconditioner (done before calling the iterative solver), (2) apply the Cholesky factors during the iterative solve (backsubstitutions).
I understand that the action of the preconditioner is handled internally with in-place backsubstitutions, which is great.
But how should I pass the Cholesky factors (SuiteSparse.CHOLMOD.Factor{Float64}) to, say, gmres! or bicgstabl!? Can I simply do this?
PC_fact = cholesky(PC) # Factor preconditioner
gmres!(x, A, b; Pl=PC_fact)
So far I use a self-programmed gcr! translated from an earlier MATLAB code (where I actually hard-coded the action of the preconditioner), but I am confident that I'd be better off using solvers from IterativeSolvers.jl.
Yes, I think so. According to the docs, the preconditioner P is used by IterativeSolvers.jl via P \ x etcetera, and cholesky(A) in Julia returns a factorization object that supports \ and ldiv! (out-of-place and in-place left division), implemented via back/forward-substitution.
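For illustration, a minimal (untested) sketch of that interface, with a made-up SPD matrix P standing in for the preconditioner:
using LinearAlgebra, SparseArrays
n = 10
P = spdiagm(-1 => fill(-1.0, n-1), 0 => fill(2.0, n), 1 => fill(-1.0, n-1)) # made-up SPD matrix
F = cholesky(P)    # SuiteSparse.CHOLMOD.Factor{Float64}
r = randn(n)
z = F \ r          # out-of-place solve via back/forward substitution on the Cholesky factors
# the in-place counterpart would be ldiv!(z, F, r); see the CHOLMOD caveat further down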
Hi, this probably would work, but cholesky(A) is a full factorization. For a preconditioner, see e.g. IncompleteLU.jl or AlgebraicMultigrid.jl.
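For example, an ILU preconditioner plugs in roughly like this (untested sketch; the test matrix and the drop tolerance τ are made up):
using IncompleteLU, IterativeSolvers, LinearAlgebra, SparseArrays
n = 100
A = sprandn(n, n, 5/n) + 10I   # made-up nonsymmetric test matrix
b = randn(n)
LU = ilu(A, τ = 0.01)          # incomplete LU factorization used as preconditioner
x = zeros(n)
gmres!(x, A, b; Pl = LU)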
The point is that a preconditioner M for a matrix A is much simpler to factorize than the full matrix A. In practice one often doesn't represent M, but just the factorization.
Also, cholesky is for symmetric matrices; in that case you can use cg instead of gmres.
My understanding is that the original poster is factorizing some matrix P different from the original matrix A. It can be very sensible to use the full factorization of P in this case if P is much cheaper to factorize than A (e.g. P is banded).
@steveng: exactly, I'm solving for a Newton step but the Jacobian is not symmetric. Hence, I use the Cholesky factorisation of the system resulting from Picard linearisation (symmetric positive definite). To my knowledge, it's an efficient way to solve 2D non-linear problems (FDM/FEM). Not a candidate for 3D though…
@j-fu: thanks for the suggestion, I'll definitely try the available ILU and AMG once I'm back at work.
@Gravlax: ldiv!(y, F, x) is not implemented for F::SuiteSparse.CHOLMOD.Factor{Float64}.
You can use LDLFactorizations.jl, if you want to use ldiv! with the factorization of a symmetric (positive definite) matrix.
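Something along these lines, if I remember the API correctly (untested sketch, made-up SPD matrix):
using LDLFactorizations, LinearAlgebra, SparseArrays
n = 10
P = spdiagm(-1 => fill(-1.0, n-1), 0 => fill(2.0, n), 1 => fill(-1.0, n-1)) # made-up SPD matrix
F = ldl(P)         # LDLᵀ factorization (pure Julia)
r = randn(n)
y = similar(r)
ldiv!(y, F, r)     # in-place solve with the LDLᵀ factors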
Oh nice, that's indeed very useful. If I understand correctly, you manage to overcome the ldiv!/Cholesky-factor issue by "overloading", i.e. adding a method to ldiv!, with the following line, right?
LinearAlgebra.ldiv!(y::Vector{T}, F::SuiteSparse.CHOLMOD.Factor{T}, x::Vector{T}) where T = (y .= F \ x)
Yes, you're right.
Note that we allocate a new vector at each call of ldiv! for F \ x.
It's not a "real" ldiv! routine.
We can't do better because the in-place backward and forward sweeps with permutations are not implemented in CHOLMOD…
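For completeness, an end-to-end sketch of the allocating workaround (untested; the matrices are made up, the overload above is repeated here in a slightly widened form, and which ldiv! method a given solver calls may depend on the IterativeSolvers version, hence both arities are defined):
using LinearAlgebra, SparseArrays, SuiteSparse, IterativeSolvers
import LinearAlgebra: ldiv!

# allocating fallbacks: each application builds a temporary vector via F \ x
ldiv!(y::AbstractVector{T}, F::SuiteSparse.CHOLMOD.Factor{T}, x::AbstractVector{T}) where T = (y .= F \ Vector(x))
ldiv!(F::SuiteSparse.CHOLMOD.Factor{T}, x::AbstractVector{T}) where T = (x .= F \ Vector(x))

n = 100
A = sprandn(n, n, 5/n) + 10I        # made-up nonsymmetric system
b = randn(n)
P = spdiagm(0 => Vector(diag(A)))   # made-up SPD preconditioner (here just the diagonal of A)
PC_fact = cholesky(P)               # factorize the preconditioner once

x = zeros(n)
gmres!(x, A, b; Pl = PC_fact)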
Hi there, I've run into a similar issue recently, but even factorisations that do provide an in-place ldiv! method (such as UMFPACK's LU decomposition) turn out to allocate quite a lot. However, looking into UMFPACK's documentation, it seems to define a set of functions (umfpack_*_wsolve) that take a pointer to a preallocated buffer from the user (and therefore do not allocate themselves).
I'm curious how hard it would be to expose this feature in the Julia wrapper, much like what FastLapackInterface does for LAPACK (the only problem is that the matrix formats LAPACK supports are quite limited…).
Great, thanks a lot for this. I did not understand that the problem was inherent to CHOLMOD; I thought it was related to the way it was wrapped in Julia.
That said, it is true that when using the library in C, s = cholmod_solve(CHOLMOD_A, Lfact, f, c) allocates s internally. So it's probably a similar issue.