Recommendations for solving linear systems with a very large symmetric positive definite sparse matrix (~10^6 x 10^6)

Since there are many people in this forum who have wide knowledge not only of programming but also of numerical analysis, I would like to ask for a recommendation on which solvers to consider for solving standard linear systems Ax = b, where A is a very large sparse matrix (around 3 GB of RAM). This matrix is symmetric and positive definite.

I have seen that LinearSolve.jl has many methods, but I feel the documentation does not give advice on which solvers should be considered in the case I am interested in.

I must also mention that I do not need to solve the same linear system for different b, since the A matrix will change from one solve to the next.

Try sparse Cholesky cholesky(A) \ b first (the built-in one from CHOLMOD via SparseArrays.jl, or you could try Pardiso.jl or others). (For a sufficiently big problem, this may run out of memory, depending on the sparsity pattern.) Sparse-direct solvers like this have the advantage of being relatively bulletproof, with no adjustable parameters: they either work, or they run out of memory.
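
A minimal sketch of that first attempt (the tiny 1D-Laplacian test matrix here is just a stand-in for your real stiffness matrix):

```julia
using LinearAlgebra, SparseArrays

# Small SPD sparse test matrix standing in for the real stiffness matrix.
n = 1_000
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))
b = randn(n)

F = cholesky(A)   # sparse Cholesky factorization (CHOLMOD under the hood)
x = F \ b         # triangular solves reusing the factorization

# If later matrices share the same sparsity pattern, the symbolic analysis
# can be reused via cholesky!(F, A_new).
```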

Also try conjugate gradient (this is usually the best iterative method for SPD matrices). It may require some tuning of the tolerance/convergence parameters, depending on the matrix and how accurate you need the solution to be.
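
For example, with IterativeSolvers.jl (one of several CG implementations; the tolerance and iteration cap below are illustrative placeholders, not tuned values), reusing the A and b from the sketch above:

```julia
using IterativeSolvers

# Plain (unpreconditioned) CG; A only needs to support matrix-vector
# products, so the sparse matrix can be passed in directly.
x, history = cg(A, b; reltol = 1e-8, maxiter = 10_000, log = true)
history.isconverged || @warn "CG did not reach the requested tolerance"
```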

Ideally use preconditioned conjugate gradient, but that requires you to figure out a preconditioner, which is problem-dependent and requires some knowledge and experimentation. RandomizedPreconditioners.jl is one option that is easy to try.
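
To illustrate where a preconditioner plugs in, here is CG with a plain Jacobi (diagonal) preconditioner; this particular choice is usually too weak for FEM stiffness matrices and only stands in for whatever preconditioner (randomized, multigrid, ...) you end up choosing:

```julia
using IterativeSolvers, LinearAlgebra

# Jacobi (diagonal) preconditioner: cheap to build, applied through ldiv!.
Pl = Diagonal(Vector(diag(A)))
x = cg(A, b; Pl = Pl, reltol = 1e-8, maxiter = 10_000)
```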

What is the source of such a matrix?
Can you tell us about it?

The source of the matrix is basically a finite element stiffness matrix.

I will try the advice provided by @stevengj today (thank you very much, by the way) and experiment a little bit.

I endorse the recommendation by @stevengj to use preconditioned conjugate gradient. Computational efficiency then boils down to finding a good preconditioner. Given the finite element source of the problem, I expect multigrid to do well, and algebraic multigrid offers an easy gateway to multigrid (see the sketch below). I am not sure how versatile the implementations in JuliaLinearAlgebra/AlgebraicMultigrid.jl are. Alternatives are, e.g., JuliaGPU/AMGX.jl (on GPU) and JuliaParallel/PETSc.jl, the Julia wrappers for the PETSc library (look for AMG under preconditioners). Other implementations surely exist.
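
A sketch of that combination (algebraic multigrid as a preconditioner for CG), roughly along the lines of the AlgebraicMultigrid.jl README; the toy Poisson matrix stands in for the real stiffness matrix and the tolerances are not tuned:

```julia
using AlgebraicMultigrid, IterativeSolvers

A = AlgebraicMultigrid.poisson(100_000)  # sample SPD sparse matrix from the package
b = A * ones(100_000)

ml = ruge_stuben(A)            # build the AMG hierarchy (or smoothed_aggregation(A))
p  = aspreconditioner(ml)      # wrap it for use as a left preconditioner
x  = cg(A, b; Pl = p, reltol = 1e-8, maxiter = 100)
```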

Good luck. Please keep us posted on what you learn.

What is the space dimension?

The matrix dimension is six million by six million.

Thank you very much for the suggestions. I will report my findings.

As suggested by @stevengj, I first tried sparse Cholesky, but my real problem ran out of memory (I tried this on a smaller problem and it worked beautifully). The conjugate gradient methods were taking too much time, probably because I was not using a suitable preconditioner.

As per your suggestion, I tried the algebraic multigrid methods (which I did not know anything about, so thank you for pointing them out) and I had incredibly good results. Using the example here from AlgebraicMultigrid, I found the solution of my system extremely quickly (less than 5 seconds, even though, as I have said, the matrix is quite large and occupies 3 GB of memory in sparse form).

So I will spend a little time learning about these methods, but they are clearly what I need. I do not know if you have any additional suggestions, but thank you again for pointing me in this direction.

Very happy to be here :wink:

The literature on multigrid methods is quite vast.

A possible point of entry is

Other tutorials can be found online.

The WaterLily.jl developers wrote a multigrid method from scratch.

AMGX.jl saw an update recently.

Happy to provide further input.

I mean the space dimension of the domain discretized with the finite element method. If it is 2, sparse direct solvers and Cholesky factorization are still competitive. If it is 3, the fill-in (reduction of sparsity during the factorization) from these methods scales badly.
What you have revealed so far points to space dimension 3.

You may also test AMGCLWrap.jl for a preconditioner; it wraps an algebraic multigrid library written in C++.

The rule of thumb is that these work well on matrices coming from elliptic PDEs, which I’m guessing may be the case since you have an SPD matrix.

Correct.

The original approach by Klaus Stueben and John Ruge (https://epubs.siam.org/doi/abs/10.1137/1.9781611971057.ch4) required the matrix to be sparse, SPD, and an M-matrix. Later, the approach was generalized to the non-M-matrix case.

Oh, sorry, I did not understand. Yes, as you guessed, my space dimension is 3.

There’s also a recent implementation based on PartitionedArrays: Examples · PartitionedArrays.jl

The link on PartitionedArrays does not work, at least for me.

Possibly @fgerick intends Examples · PartitionedArrays.jl