No, the FEAST reverse-communication API documentation is very confusing, but my understanding is that FEAST only requires you to provide a black-box matvec and shifted solve (A*x and (A-λI)\x). This is essentially the same cost as shift-and-invert. (The references in the manual to “factorizing” the matrix are basically just a step where the user can do re-usable initialization, typically factorization or preconditioning; the only actual communication with FEAST is via matvec and solve. The reason is that the underlying algorithm essentially integrates (A-λI)⁻¹X around a contour, for some subspace X whose size is set by a bound on the number of eigenvalues you want.)
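To make that concrete, here is a minimal sketch of what such a black-box interface amounts to, using scipy; the matrix, shift, and function names are mine for illustration, not FEAST’s actual RCI protocol:

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
A = sp.random(n, n, density=1e-3, format="csc")
A = (A + A.T) * 0.5          # toy symmetric matrix

sigma = 2.0                  # one shift (in general a complex contour point)

# "Re-usable initialization": factor (A - sigma*I) once up front...
lu = spla.splu((A - sigma * sp.identity(n, format="csc")).tocsc())

# ...after which the solver only ever needs these two black boxes:
def matvec(x):               # y = A*x
    return A @ x

def shifted_solve(b):        # y = (A - sigma*I) \ b
    return lu.solve(b)
```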
That’s right, and to do the shifted solve you pretty much need to factor your matrix. There’s a recent paper where they do the linear solves iteratively instead: https://arxiv.org/pdf/1706.00692.pdf, and they show that the total number of matvecs is much larger than ARPACK’s (although they do gain in a massively parallel setting by doing those matvecs in parallel).
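The rough idea there (my simplification, not the paper’s actual implementation) is to replace the factorization with an inner iterative solver, something like:

```python
import scipy.sparse.linalg as spla

def shifted_solve_iterative(A, sigma, b):
    """Solve (A - sigma*I) x = b with GMRES instead of factoring.
    Each outer (FEAST) iteration then costs many inner matvecs,
    which is where the large matvec counts come from."""
    n = A.shape[0]
    op = spla.LinearOperator((n, n), matvec=lambda v: A @ v - sigma * v)
    x, info = spla.gmres(op, b)
    if info != 0:
        raise RuntimeError(f"GMRES did not converge (info={info})")
    return x
```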
EDIT: sorry if I misunderstood your point: yes, the interface is still a black-box linear solve, as in shift-and-invert ARPACK; I did not mean to suggest it wasn’t.
My understanding is that the main advantage of FEAST comes when you know a specific region of the complex plane or real line that you are interested in, and don’t want eigenvalues outside of that contour.
In both of these cases, shift-and-invert is very suboptimal. FEAST also parallelizes very well.
Why is shift-and-invert very suboptimal here? It amplifies the eigenvalues you’re interested in (near the shift) and not the others, pretty much the same as what FEAST does. FEAST has a better amplification factor, but requires more linear solves to do it.
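Concretely, shift-and-invert maps each eigenvalue λ to 1/(λ−σ), so eigenvalues near the shift dominate the transformed spectrum (toy numbers, invented for illustration):

```python
import numpy as np

lams = np.array([0.9, 1.0, 1.1, 5.0, 10.0])   # made-up spectrum
sigma = 1.05                                   # shift near the target pair

# Eigenvalues of (A - sigma*I)^-1 are 1/(lambda - sigma):
print(1.0 / (lams - sigma))
# -> approx [-6.7, -20.0, 20.0, 0.25, 0.11]; the eigenvalues nearest
#    the shift dominate the rest by about two orders of magnitude.
```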
From what I understand, a good way to compare eigensolvers is to view them as having two orthogonal parts (a concrete pairing of the two is shown in the snippet after this list):
- transformation of the matrix (ideally to make the eigenvalues you’re interested in exterior and well-separated from the rest of the spectrum), e.g. shift-invert, polynomial filtering (Chebyshev), rational filters (FEAST & several others)
- extraction of eigenvalues from the transformed matrix, e.g. Arnoldi/Lanczos, subspace iteration, LOBPCG…
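For instance, scipy’s `eigsh` with a `sigma` argument pairs the two parts in one call: it does the shift-invert transformation internally (factoring A − σI) and runs ARPACK’s Lanczos on the transformed operator for the extraction:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
A = sp.diags(np.linspace(0.0, 100.0, n)).tocsc()  # toy spectrum: 0..100

# Transformation: shift-invert around sigma=30 (scipy factors A - 30*I).
# Extraction: ARPACK Lanczos on the transformed operator.
vals, vecs = spla.eigsh(A, k=4, sigma=30.0)
print(vals)  # the 4 eigenvalues nearest 30
```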
The more complex the transformation, the better the separation between eigenvalues and the faster the convergence, but the more matvecs/linear solves you need. The more complex the extraction part, the better the convergence, but the more work and storage you need.
Standard ARPACK uses no transformation and does Arnoldi. Shift-invert replaces A by (A−σI)⁻¹ and does Arnoldi on top of that. FEAST (and similar methods based on rational filtering) replaces A by a discretization of a contour integral (a sum of many (A−σᵢI)⁻¹ terms) and does subspace iteration on top of that. The relevant criterion for comparing these methods is the relative cost of matvecs/factorizations versus orthogonalization/subspace diagonalization/storage. FEAST has an expensive transformation but a cheap extraction part, so if you can do your linear solves efficiently and in parallel it works great.
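Here’s a toy sketch of that structure (my own simplification, not the actual FEAST algorithm: the quadrature rule, weights, and function name are crude assumptions, and real FEAST adds residual checks, generalized problems, etc.):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def rational_filter_subspace(A, emin, emax, m0, n_quad=8, n_iter=5):
    """Toy FEAST-like loop for real symmetric A: apply a discretized
    contour integral of (A - zI)^-1 to a subspace, then Rayleigh-Ritz."""
    n = A.shape[0]
    Y = np.random.randn(n, m0)                     # initial subspace
    c, r = (emax + emin) / 2, (emax - emin) / 2    # circle through [emin, emax]
    theta = np.pi * (np.arange(n_quad) + 0.5) / n_quad
    z = c + r * np.exp(1j * theta)                 # nodes on the upper half circle
    w = r * np.exp(1j * theta) / n_quad            # weights (overall scale is
                                                   # irrelevant after the QR below)
    # "Re-usable initialization": one factorization per contour point.
    lus = [spla.splu((A - zi * sp.identity(n, dtype=complex, format="csc")).tocsc())
           for zi in z]
    for _ in range(n_iter):
        # Transformation: Q ~ sum_i w_i (A - z_i I)^-1 Y, i.e. the rational filter.
        Q = np.zeros((n, m0))
        for wi, lu in zip(w, lus):
            Q -= 2.0 * np.real(wi * lu.solve(Y.astype(complex)))
        Q, _ = np.linalg.qr(Q)
        # Extraction: subspace iteration with Rayleigh-Ritz on the filtered basis.
        vals, S = np.linalg.eigh(Q.T @ (A @ Q))
        Y = Q @ S
    inside = (vals >= emin) & (vals <= emax)
    return vals[inside], Y[:, inside]

# Toy usage: eigenvalues of a diagonal matrix inside [20, 30].
n = 400
A = sp.diags(np.linspace(0.0, 100.0, n)).tocsc()
vals, V = rational_filter_subspace(A, 20.0, 30.0, m0=60)
print(vals)
```

Note how the expensive part (one factorization and one multi-RHS solve per contour point, all independent) parallelizes trivially, while the extraction is just QR plus a small dense eigenproblem.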