It seems that LinearAlgebra.jl does not work with sparse matrices, and the libraries that do (e.g. Arpack.jl, ArnoldiMethod.jl, KrylovKit.jl) all use iterative methods, which are not efficient for finding *all* the eigenvalues. Short of using LinearAlgebra.jl with a regular matrix, do I have an alternative?

In general, there do not exist efficient algorithms for finding all the eigenvalues of a sparse matrix.

To elaborate a bit on @Oscar_Smith's answer: the main reason why there is in general no efficient algorithm is that the matrix of eigenvectors is generally not sparse even if your original matrix is. Thus if you compute all eigenvectors you are computing a dense matrix anyway, and there is no benefit to starting with a sparse one.
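To see this concretely, here is a minimal sketch (the 1-d Laplacian and the 50×50 size are just illustrative choices): the matrix is tridiagonal, yet its eigenvector matrix is almost completely dense.

```julia
using LinearAlgebra, SparseArrays

n = 50
# 1-d discrete Laplacian: sparse (tridiagonal), but its eigenvectors are
# discrete sine waves, i.e. essentially completely dense.
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))

F = eigen(Symmetric(Matrix(A)))   # densify and solve; F.vectors is n×n
density = count(x -> abs(x) > 1e-12, F.vectors) / length(F.vectors)
# density is near 1: storing all eigenvectors costs O(n²) no matter how sparse A was
```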

For a small sparse matrix (1000x1000 or so), convert to dense. Otherwise, ask yourself "why do I think I need ALL eigenvalues?".
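A minimal sketch of that convert-to-dense route, using only the standard-library LinearAlgebra and SparseArrays (the random matrix here is just a stand-in for yours):

```julia
using LinearAlgebra, SparseArrays

A = sprandn(1000, 1000, 0.01)      # stand-in for your sparse matrix
λ = eigvals(Matrix(A))             # densify, then LAPACK computes all eigenvalues

# For Hermitian problems, wrap in Symmetric/Hermitian to dispatch to the
# faster and more accurate symmetric eigensolver (real eigenvalues):
λsym = eigvals(Symmetric(Matrix(A + A')))
```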

Definitely. @PetrKryslUCSD, I could do without rhetorical questions about my thought process. There are problems in mathematics (kinetic equations, asymptotics, multi-scale problems among others) where knowing only *some* eigenvalues of a very large matrix is not good enough. Hence the very specific title to my very specific question!

I don't think it's necessary to get rancorous. As others have said, the general answer to your question is that it currently seems impossible (or at least impractical) to avoid dense methods for general sparse matrices if you want precise values for all the eigenvalues. (Unless your sparsity pattern is very special?)

For many problems in applied mathematics that at first seem to require the whole spectrum, people have worked hard to find sparse-compatible workarounds that compute a final result in a different way while avoiding computing the explicit spectrum as an intermediate step. Maybe you can explore this route? If you want to share more about your ultimate goal (what are you using the spectrum to accomplish?), then maybe someone here will have an idea.

> For many problems in applied mathematics that at first seem to require the whole spectrum, people have worked hard to find sparse-compatible workarounds that compute a final result in a different way while avoiding computing the explicit spectrum as an intermediate step.

This is a good point. For some problems, it is enough to approximate the *density* of the eigenvalue spectrum. Or, perhaps, to approximate the trace of some function of the given sparse matrix. For such problems, there are powerful stochastic approximation techniques that avoid explicit computation of the individual eigenvectors. From the physics perspective, see, for example, the kernel polynomial method and subsequent "probing" techniques to accelerate stochastic convergence.
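To give the flavor of these stochastic techniques, here is a minimal sketch of the plain Hutchinson trace estimator (not KPM itself); the matrix, the probe count, and the choice of tr(A²) as the target are all arbitrary illustration:

```julia
using LinearAlgebra, SparseArrays, Random

# Hutchinson estimator: tr(B) ≈ (1/m) Σ zᵀBz over random ±1 probe vectors z.
# Here B = A², but B is never formed; each probe needs only two sparse matvecs.
function hutchinson_tr_A2(A, m)
    n = size(A, 1)
    sum(1:m) do _
        z = rand((-1.0, 1.0), n)
        dot(z, A * (A * z))        # zᵀA²z via matvecs only
    end / m
end

Random.seed!(0)
S = sprandn(2000, 2000, 0.002)
A = (S + S') / 2                   # symmetrize

est   = hutchinson_tr_A2(A, 400)
exact = sum(abs2, A)               # tr(A²) = ‖A‖²_F for symmetric A, for checking
```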

In a grad-school class based on Trefethen's text I also came across a need to compute all eigenvalues/eigenvectors of sparse matrices. I ended up just using my own hand-written Arnoldi method, but in my literature review I came across a figure:

(Google Image search finds this presentation)

Which would suggest Kernel Polynomial Methods based (unsurprisingly) on Chebyshev polynomials. I never investigated this further as I only needed a one-off approach and my own naive code was good enough.

@kbarros – jinx!

If you run Arnoldi long enough to compute all of the eigenvalues (i.e. m steps without restarting for an m \times m matrix), isn't it essentially equivalent in cost (or worse) to just using a dense eigensolver from the beginning?

Or did you somehow partition your spectrum into chunks and run Arnoldi with shift-and-invert to separately compute eigenvalues a few at a time in each chunk?
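For what it's worth, here is a toy sketch of the shift-and-invert transform that such chunked approaches rely on. Plain power iteration stands in for Arnoldi, and the diagonal test matrix is only there so the answer can be checked; none of this is from the post above.

```julia
using LinearAlgebra, SparseArrays

# Shift-and-invert: eigenvalues μ of (A - σI)⁻¹ satisfy μ = 1/(λ - σ), so the
# largest-magnitude μ correspond to the λ *closest to the shift σ*, which is
# exactly where Krylov iterations converge fastest. One factorization per
# chunk lets you sweep the spectrum in pieces.
function nearest_eigenvalue(A, σ; iters = 50)
    F = lu(A - σ * I)              # factor once, reuse for every solve
    v = randn(size(A, 1))
    for _ in 1:iters               # power iteration on (A - σI)⁻¹
        v = F \ v
        v /= norm(v)
    end
    dot(v, A * v)                  # Rayleigh quotient: eigenvalue nearest σ
end

A = spdiagm(0 => collect(1.0:200))  # toy matrix with known spectrum 1, 2, …, 200
λ_est = nearest_eigenvalue(A, 42.3) # converges to 42, the eigenvalue nearest 42.3
```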

Don't those compute continuous approximations for the spectral density / DOS (as well as other functions of the spectrum), rather than individual eigenvalues *per se*?

Do tell: if some eigenvalues of a VERY large matrix (what is very large in your neck of the woods?) are not enough for some people, what do they do?

IIRC at the time there was some shaking going on in the Julia ecosystem regarding eigenvalue solvers? In any case, the class was numerical linear algebra and I'd just written an Arnoldi solver in Matlab for an assignment (which I then used to make this animation on the Arnoldi Iteration page in Wikipedia), so I thought it would be fun to rewrite it in Julia.

Admittedly I don't know much about the method(s), but I found the graphic at least interesting on the surface. I'd never before needed to calculate all eigenvectors of a (large) matrix, and didn't even know about FEAST (or PFEAST) until I saw the graphic.

FEAST and others build efficient solvers for large numbers of eigenpairs by separating the spectrum into pieces and distributing the pieces over compute nodes. Although they take more operations than the standard (also iterative!) dense-matrix methods, they would win out for the full spectrum of big enough problems because of the cleaner parallelism and memory use. I'm not aware of any implementations or wrappers of these in Julia as of now.

Some such approaches use Chebyshev series of matvec products or solves to emphasize segments of the real line (for Hermitian matrices), a close cousin of the shift-and-invert strategy. This is not the same as KPM, which "sets aside the requirement for a complete and exact knowledge of the spectrum" in favor of an approximate density (which is a preferable goal in some applications).
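For concreteness, a minimal sketch of the three-term Chebyshev recurrence that such filters (and KPM) are built on. It assumes the spectrum of A has already been mapped into [-1, 1], and the diagonal test matrix at the end is only a sanity check:

```julia
using LinearAlgebra, SparseArrays

# Apply T_k(A)·v using only matvecs, via the Chebyshev three-term recurrence
# T_{k+1}(x) = 2x·T_k(x) - T_{k-1}(x).
function chebyshev_apply(A, v, k)
    k == 0 && return copy(v)
    t_prev, t = copy(v), A * v          # T₀(A)v = v,  T₁(A)v = A·v
    for _ in 2:k
        t_prev, t = t, 2 .* (A * t) .- t_prev
    end
    t
end

# Sanity check on a diagonal matrix: T_k(A)v has entries T_k(aᵢ)·vᵢ,
# and T₂(x) = 2x² - 1, so the result below should be [-0.5, -1.0, -0.5].
A  = spdiagm(0 => [-0.5, 0.0, 0.5])
v  = ones(3)
t2 = chebyshev_apply(A, v, 2)
```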


I'm skeptical of this claim; do you have any references for it? It's usually possible to get very good parallel scaling for dense \Theta(m^3) methods on m \times m matrices, because doing \Theta(m^3) work on \Theta(m^2) data makes it possible to hide the communication latency behind computation. (This is the same reason why BLAS3 algorithms on a *single* CPU, properly implemented, can hide the memory latency and hit nearly the peak flop rate even for matrices that exceed the cache size.)

For example, here is a graph from Petschow et al. (2013) (arXiv) showing that Elemental's dense Hermitian eigensolver (accessible in Julia via Elemental.jl) attains nearly linear scaling up to 2048 cores on a fixed 20,000 \times 20,000 matrix (which is only 3GB in double precision). An even bigger matrix should scale even better (because the computational work scales faster than the communication).

See also Davidovic (2021) for a review of more parallel dense eigensolvers.

So, if using FEAST for *all* eigenvalues takes more computation (more arithmetic operations) than a dense eigensolver, which seems plausible(?), and both FEAST and the dense eigensolver get linear speedup from parallelism, then it seems like FEAST should still lose to a dense eigensolver for computing all eigenvalues.

The downside of good parallel scaling of \Theta(m^3) algorithms is that, even if you give me 1000x more processors, I can only solve a 10x larger matrix in the same amount of time. And if this comes from a 3d physical problem where m \sim \mathrm{diameter}^3, I can therefore only increase the diameter of my computational volume by \sqrt[3]{10} \approx 2.15 times with 1000x more CPUs. So dense-matrix parallelism doesn't accomplish what you usually *really* want, which is to enable you to attack much bigger problems. That's why it's still so important to find clever approaches that avoid computing the entire spectrum explicitly if you can.
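Spelling out that arithmetic as a two-liner:

```julia
# With Θ(m³) work and perfect parallel scaling, 1000× more processors buys
# only a 1000^(1/3) = 10× larger matrix in the same wall-clock time; and if
# m itself scales as diameter³ (a 3-d volume), the diameter grows by a
# further cube root, 10^(1/3) ≈ 2.15×.
procs_ratio    = 1000
m_ratio        = procs_ratio^(1 / 3)   # ≈ 10.0
diameter_ratio = m_ratio^(1 / 3)       # ≈ 2.15
```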

Another possible option is the Eigenvalues Slicing Library: [1802.05215] The Eigenvalues Slicing Library (EVSL): Algorithms, Implementation, and Software

(No wrapper in Julia that I am aware of.)

That sounds like a nice approach, but as they say in the concluding remarks, it is still apparently a research problem to make a scalable parallel implementation of this kind of algorithm ("A fully parallel version of EVSL using MPI is currently being developed."). (The examples in the paper where they computed *all* the eigenvalues appear to have been only for relatively small matrices, e.g. 2000x2000, where probably a dense method would be faster; in the larger examples of the paper they computed at most a few hundred eigenvalues.)

Unfortunately, if you look at the EVSL repository on GitHub, it doesn't seem to have seen major development since the paper came out (2019), and in particular there doesn't seem to have been any progress on a parallel version.

Petschow's paper concerns the tridiagonal problem, not the full dense case. The reduction to tridiagonal (or Hessenberg) form was traditionally dominated by level-2 BLAS and thus memory/communication bound, and so was the bottleneck that inspired my claim. TIL that the ELPA and SLATE teams have built blocking schemes which radically improve this for the Hermitian case, so you are correct there. I haven't found similar results for the nonsymmetric (Hessenberg) case, but the SLATE team seems to be working on it.

Petschow's paper explicitly looked at this for ScaLAPACK in section 2.1, and it contradicts your claim: they found that the reduction to tridiagonal form was less than 10% of the compute time, and that the clear parallelization bottleneck was the tridiagonal eigensolver. That's why they focused in the paper on improving this stage of the computation.