No, the FEAST reverse-communication API documentation is very confusing, but my understanding is that FEAST only requires that you provide a black-box matvec and shifted solve (A*x and (A - λI)\x). This is essentially the same cost as shift-and-invert. (The references in the manual to “factorizing” the matrix are basically just a step where the user can do reusable initialization, typically factorization or preconditioning, but all actual communication with FEAST is done via matvecs and solves. The reason for this is that the underlying algorithm essentially integrates (A - λI)⁻¹X around a contour, for some subspace X whose size is given by a bound on the number of eigenvalues you want.)
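To make that last point concrete, here is a minimal sketch (my own illustration, not FEAST's actual quadrature scheme or interface) of applying the contour-integral filter to a block of vectors X. Each quadrature node costs exactly one shifted solve, which is the only access to A beyond matvecs:

```python
import numpy as np

def apply_contour_filter(A, X, center, radius, n_nodes=8):
    """Approximate P @ X, where P = (1/(2*pi*i)) * the contour integral of
    (z*I - A)^(-1) dz, i.e. the spectral projector onto the eigenvalues of A
    enclosed by the circle |z - center| = radius."""
    n = A.shape[0]
    Y = np.zeros_like(X, dtype=complex)
    for j in range(n_nodes):
        # Midpoint quadrature nodes on the circle (an illustrative choice).
        theta = 2 * np.pi * (j + 0.5) / n_nodes
        z = center + radius * np.exp(1j * theta)
        w = radius * np.exp(1j * theta) / n_nodes  # absorbs dz/(2*pi*i)
        # One shifted solve per node -- the same black-box operation
        # that shift-and-invert needs.
        Y += w * np.linalg.solve(z * np.eye(n) - A, X)
    # Nodes come in conjugate pairs, so Y is real for real symmetric A.
    return Y.real
```

Eigencomponents inside the contour pass through with weight ≈ 1 while those outside are strongly damped, which is why subspace iteration on the filtered vectors converges quickly.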
That’s right, and to do the shifted solve you pretty much need to factor your matrix. There’s a recent paper where they do the linear solves iteratively instead: https://arxiv.org/pdf/1706.00692.pdf, and they show that the total number of matvecs is much larger than ARPACK’s (although they do gain in a massively parallel setting by doing those matvecs in parallel).
EDIT: sorry if I misunderstood your point: yes, the interface is still a black-box linear solve, as in shift-and-invert ARPACK; I did not mean to suggest it wasn’t.
My understanding is that the main advantage of FEAST comes when you know a specific region of the complex plane or real line that you are interested in, and don’t want eigenvalues outside of that contour.
In both of these cases, shift-and-invert is very suboptimal. FEAST also parallelizes very well.
Why is shift-and-invert very suboptimal here? It amplifies the eigenvalues you’re interested in (near the shift) and not the others, pretty much the same as what FEAST does. FEAST has a better amplification factor, but requires more linear solves to do it.
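To put rough numbers on “better amplification factor”: shift-and-invert weights an eigenvalue λ by 1/|λ − σ|, while an n-node circular rational filter (with midpoint quadrature on the circle, as an illustrative choice) weights it by roughly 1/|1 + ((λ − c)/r)^n|. A small comparison of how well each separates an eigenvalue just inside the contour from one just outside:

```python
import numpy as np

sigma = 0.0                  # shift, placed at the contour's center for a fair comparison
c, r, nodes = 0.0, 1.0, 8    # circle of radius 1, 8 quadrature nodes
lam_in, lam_out = 0.9, 1.5   # just inside vs. just outside the contour

def shift_invert_weight(lam):
    # Amplification applied by one solve with (A - sigma*I)^(-1).
    return 1.0 / abs(lam - sigma)

def rational_filter_weight(lam):
    # Amplification applied by one pass of the n-node circular filter
    # (costs `nodes` shifted solves per application).
    return abs(1.0 / (1.0 + ((lam - c) / r) ** nodes))

si_ratio = shift_invert_weight(lam_in) / shift_invert_weight(lam_out)  # ~1.7
rf_ratio = rational_filter_weight(lam_in) / rational_filter_weight(lam_out)  # ~19
```

The filter separates inside from outside about an order of magnitude more sharply here, but each application costs `nodes` shifted solves versus one for shift-and-invert, which is exactly the trade-off being discussed.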
From what I understand, a good way to compare eigensolvers is to see them as having two orthogonal parts:

transformation of the matrix (ideally to make the eigenvalues you’re interested in exterior and well-separated from the rest of the spectrum), e.g. shift-invert, polynomial filtering (Chebyshev), rational filters (FEAST & several others)

extraction of eigenvalues from that matrix, e.g. Arnoldi/Lanczos, subspace iteration, LOBPCG…
The more complex the transformation, the better the separation between eigenvalues and the faster the convergence, but you need to do more matvecs/linear solves. The more complex the extraction part, the better the convergence, but you need to do more work/have more storage.
Standard ARPACK uses no transformation, and does Arnoldi. Shift-invert replaces A by (A - sigma)^-1 and does Arnoldi on top of that. FEAST (and similar methods based on rational filtering) replaces A by a discretization of a contour integral (a sum of many (A - sigma_i)^-1 terms) and does subspace iteration on top of that. The relevant criterion for comparing these methods is the relative cost of matvecs/factorizations vs. orthogonalization/subspace diagonalization/storage. FEAST has an expensive transformation but a cheap extraction part, so if you can do your linear solves efficiently and in parallel it works great.