Sparse solve with sparse rhs

Is there any news on this? It appears it is still not possible to solve problems with a sparse rhs. This is a problem when dealing with very large, very sparse matrices, where it is not possible to allocate memory for a full (dense) rhs, and solving for each column of the rhs separately would be too slow.

1 Like

You mean sparse-matrix rhs?

1 Like

(Moved to a new thread — please don’t resurrect ancient threads.)

In general, even if you have a sparse rhs and a sparse matrix, the solution will be dense, so if you don’t have enough memory to allocate a dense rhs then you will be in trouble storing the solution, no?
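
For example, a quick illustration of the fill-in (arbitrary made-up sizes and densities): even for a rhs with a single nonzero, the solution is the corresponding column of A^{-1}, which is generically dense:

```julia
using LinearAlgebra, SparseArrays

n = 1000
A = sprandn(n, n, 5 / n) + 10I     # sparse, well-conditioned example matrix
b = sparsevec([1], [1.0], n)       # rhs with a single nonzero entry

x = A \ Vector(b)                  # x is the first column of A⁻¹
count(!iszero, x) / n              # fraction of nonzeros: typically ≈ 1
```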

However, if you have a sparse rhs (or set of right-hand-sides) and you only want a sparse subset of the solution vectors, then there is a nice technique recently identified by a colleague of mine (Lin, Wang, & Hsu (2022)) that may give enormous speedups. (They applied it to scattering problems, but it’s quite a general linear-algebra trick.)

Suppose you are solving AX=B for some matrix B of right-hand sides, but you are only interested in a sparse “projection” Y = PX of the solutions involving some small subset of the outputs (P has many fewer rows than columns). Then you can form the following “augmented” sparse matrix K and do only a partial LU factorization, just enough to introduce zeros in the block corresponding to -P:

K = \begin{pmatrix} A & B \\ -P & 0\end{pmatrix} \rightsquigarrow \begin{pmatrix} U & F \\ 0 & Y\end{pmatrix}

at which point the block Y (the Schur complement) is exactly the desired solution Y = PA^{-1}B. They find that this can often be vastly faster than a traditional sparse solve, mainly because it doesn’t need to do a full LU factorization of A (the matrix U can be discarded, and does not even need to be upper triangular). It also fully exploits sparsity of B.
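
Here is a minimal Julia sketch (arbitrary made-up sizes and densities) that just checks the block identity numerically. Note that it forms the Schur complement naively via a full factorization of A, which is precisely the expensive step that a real partial factorization of K would avoid:

```julia
using LinearAlgebra, SparseArrays

n, m, k = 200, 4, 3                 # A is n×n, B is n×m, P is k×n
A = sprandn(n, n, 0.02) + 10I       # sparse, well-conditioned example matrix
B = sprandn(n, m, 0.05)             # sparse right-hand sides
P = sprandn(k, n, 0.05)             # sparse projection with k ≪ n rows

K = [A B; -P spzeros(k, m)]         # augmented sparse matrix

# Eliminating the first n pivots of K leaves the Schur complement
#   S = K22 - K21 K11⁻¹ K12 = 0 - (-P) A⁻¹ B = P A⁻¹ B
# in the lower-right block.  Formed naively here for verification:
K21, K12, K22 = K[n+1:end, 1:n], K[1:n, n+1:end], K[n+1:end, n+1:end]
S = Matrix(K22) - K21 * (lu(A) \ Matrix(K12))

@assert S ≈ P * (Matrix(A) \ Matrix(B))   # S is exactly the desired Y
```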

They write that MUMPS and PARDISO already support such partial factorizations, so you could presumably do this via the Julia MUMPS.jl or Pardiso.jl packages (possibly with a little digging to access the lower-level API). I haven’t checked yet whether SuiteSparse supports this kind of partial factorization.

7 Likes

Yes, the situation is of the type

\mathbf{A} \mathbf{X} = \mathbf{Y}

with \mathbf{A} and \mathbf{Y} sparse matrices, where \mathbf{Y} has fewer columns than \mathbf{A} but still too many to store as a full (dense) matrix. But, as @stevengj just pointed out, in general \mathbf{X} will not be sparse even if \mathbf{A} and \mathbf{Y} are.

I have a large FE problem that I would like to solve in chunks, but plain sectioning of the domain will not solve the memory problem; on the contrary, it seems it will make it worse.

Thanks, I will look into it. I realized just after posting the message that I was not going to be able to store the result anyway, even if I could solve the linear system.

Apparently @miles.lubin used a similar trick in a 2014 paper (equations 3.11–3.13), where they also commented that it’s surprisingly little known in the context of direct solves with sparse inputs/outputs.

2 Likes

Wade Hsu tells me that SuiteSparse probably cannot do this efficiently. His group currently uses MUMPS, which (because it is multifrontal) is optimized for this sort of thing—in fact, both MUMPS and PARDISO can return the desired Schur complement Y without even storing the other factors U and F. See the mumps_schur_complement function in MUMPS.jl.
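
For concreteness, a hypothetical sketch of what that might look like. I’m assuming mumps_schur_complement(K, inds) takes the assembled matrix and the list of Schur indices (check the MUMPS.jl docs for the exact signature), and note that MUMPS wants a square matrix, so pick k = m or pad with zero rows/columns:

```julia
using LinearAlgebra, SparseArrays, MPI, MUMPS

MPI.Init()

n, m = 200, 3                       # choose k = m so the augmented matrix is square
A = sprandn(n, n, 0.02) + 10I
B = sprandn(n, m, 0.05)
P = sprandn(m, n, 0.05)

K = [A B; -P spzeros(m, m)]         # square augmented matrix
schur_inds = collect(n+1:n+m)       # indices of the trailing block

# Ask MUMPS for the Schur complement of K on those indices, i.e. Y = P A⁻¹ B,
# without storing the other factors.  (Call signature assumed; adjust as needed.)
Y = mumps_schur_complement(K, schur_inds)

@assert Y ≈ P * (Matrix(A) \ Matrix(B))

MPI.Finalize()
```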

1 Like