(Moved to a new thread — please don’t resurrect ancient threads.)
In general, even if you have a sparse rhs and a sparse matrix, the solution will be dense, so if you don’t have enough memory to allocate the rhs as a dense array, then you will be in at least as much trouble with the solution, no?
However, if you have a sparse rhs (or set of right-hand sides) and you only want a sparse subset of the solution vectors, then there is a nice technique recently identified by a colleague of mine (Lin, Wang, & Hsu, 2022) that may give enormous speedups. (They applied it to scattering problems, but it’s quite a general linear-algebra trick.)
Suppose you are solving AX = B for some matrix B of right-hand sides, but you are only interested in a sparse “projection” Y = PX of the solutions involving some small subset of the outputs (P has far fewer rows than columns). Then you can form the following “augmented” sparse matrix K and do only a *partial* LU factorization, just enough to introduce zeros in the block corresponding to -P:

$$
K = \begin{pmatrix} A & B \\ -P & 0 \end{pmatrix} = L \begin{pmatrix} U & \tilde{B} \\ 0 & Y \end{pmatrix} ,
$$
at which point the block Y (the Schur complement) is exactly the desired solution $Y = P A^{-1} B$. They find that this can often be vastly faster than a traditional sparse solve, mainly because it doesn’t need to do a full LU factorization of A (the matrix U can be discarded, and does not even need to be upper triangular). It also fully exploits the sparsity of B.
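To make this concrete, here is a minimal toy sketch in Julia (my own construction, not the paper’s code): it forms K explicitly, runs a naive dense partial elimination with no pivoting that stops after the first n columns, and checks that the trailing block equals $P A^{-1} B$. The sizes, the selected rows, and the diagonally dominant A are arbitrary demo choices; in practice you’d use a sparse solver’s partial-factorization feature rather than an explicit loop.

```julia
using LinearAlgebra, SparseArrays

# Toy demo of the augmented-matrix trick (sizes and names are made up).
n, m, k = 50, 4, 3                           # A is n×n, B is n×m, P is k×n
A = sprand(n, n, 0.1) + 100I                 # diagonally dominant, so no pivoting needed below
B = sprand(n, m, 0.2)                        # sparse right-hand sides
P = sparse(1:k, [7, 19, 33], ones(k), k, n)  # selects 3 rows of the solution

K = [A B; -P spzeros(k, m)]                  # augmented sparse matrix

# Partial LU without pivoting: eliminate only the first n columns,
# zeroing out the block that started as -P.  (A real solver does this
# sparsely and with pivoting; a dense copy keeps the demo short.)
M = Matrix(K)
for j in 1:n, i in j+1:size(M, 1)
    ℓ = M[i, j] / M[j, j]
    @views M[i, j:end] .-= ℓ .* M[j, j:end]
end

Y = M[n+1:end, n+1:end]                      # the Schur-complement block
@assert Y ≈ P * (A \ Matrix(B))              # matches P A⁻¹ B computed the conventional way
```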
They write that MUMPS and PARDISO already support such partial factorizations, so you could presumably do this via the Julia MUMPS.jl or Pardiso.jl packages (possibly with a little digging to access the lower-level API). I haven’t checked yet whether SuiteSparse supports this kind of partial factorization.