# Eigensolver for interior eigenvalues of sparse (banded) Hermitian matrices?

I am trying to compute many interior eigenvalues of large sparse banded complex Hermitian matrices.

By the way, the (absolute value of the) matrix looks like this for the ~1500 by ~1500 case:

Here, ~ 98.8% of the elements are zero.

So far the fastest thing is just to convert to a dense representation and use the LAPACK routine underlying the `eigvals` function. (What is more, it seems to be faster to compute all the eigenvalues than only ~20% of them; I didn't expect that.)
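For reference, this is roughly what the dense fallback looks like (with a made-up sparse Hermitian matrix standing in for my actual operator; the names here are illustrative, not my real code):

```julia
using LinearAlgebra, SparseArrays

# Illustrative sparse Hermitian matrix (stand-in for the real operator).
n = 1500
A = sprand(ComplexF64, n, n, 0.01)
H = A + A'                      # Hermitian by construction

# Dense fallback: Matrix() densifies, and wrapping in Hermitian lets
# eigvals dispatch to LAPACK's Hermitian eigensolver.
vals = eigvals(Hermitian(Matrix(H)))
```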

I have tried Arpack.jl (with shift-and-invert) and BandedMatrices.jl coupled with PyCall + `scipy.linalg.eig_banded`, but these attempts are at least 10 times slower. I briefly looked at some packages like KrylovKit.jl, but all the sparse eigensolvers in Julia that I have come across besides Arpack.jl appear to be good only for extremal eigenvalues.
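The shift-and-invert attempt was along these lines (sketch with a made-up matrix and target shift; `eigs` with `sigma` finds the eigenvalues of `H` nearest the shift):

```julia
using Arpack, LinearAlgebra, SparseArrays

# Illustrative sparse Hermitian matrix.
n = 1000
A = sprand(ComplexF64, n, n, 0.01)
H = A + A'

# Shift-and-invert around an interior target σ: Arpack factorises
# (H - σI) and iterates with its inverse, so the eigenvalues closest
# to σ converge first.
σ = 0.5
vals, vecs = eigs(H; nev = 20, sigma = σ)
```

Each call requires a sparse factorisation of `H - σI`, which is where most of the time went in my case.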

I saw the FILTLAN package, which appears to be made for (something close to?) the kind of thing I am trying to do, but it has no documentation and is written in C++, so it seems like it would be difficult to interface with my other code.

So the question: any advice on where to look for the best user-friendly eigensolver for this purpose? For now I am going with just converting to a dense representation, but a) if there is something faster for my particular kind of matrices that takes better advantage of their structure that would be cool and b) I am not sure this will be an option for larger sizes in the future.

For now I am just computing eigenvalues. Also, in case it matters, the matrices are not only banded but they are constructed from sums of Kronecker products of Hermitian tridiagonal matrices.
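To make the structure concrete, here is an illustrative construction of that form (not my actual operators): a sum of Kronecker products of Hermitian tridiagonal factors, which comes out banded and Hermitian.

```julia
using LinearAlgebra, SparseArrays

# Illustrative factors: real symmetric tridiagonal matrices.
m = 40
T1 = SymTridiagonal(rand(m), rand(m - 1))
T2 = SymTridiagonal(rand(m), rand(m - 1))

# kron(T1, I) + kron(I, T2) is Hermitian with bandwidth m,
# so the result is sparse and banded.
H = kron(sparse(T1), sparse(I, m, m)) + kron(sparse(I, m, m), sparse(T2))
ishermitian(H)  # expected: true
```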

I think BandedMatrices.jl only supports symmetric at the moment, but adding support for Hermitian should be easy. So I'm not sure what you were trying that was 10x slower; it was probably just hitting a generic fallback.

Note that for symmetric matrices it tridiagonalises and calls the `SymTridiagonal` eigenvalue routine, which is O(n^2). A dense eigenvalue solver is O(n^3), so there's no way the dense route is faster for "large" matrices, though I suspect your matrices are not very large, in which case dense solvers may very well be faster.
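To illustrate, the specialised `SymTridiagonal` path looks like this (made-up matrix):

```julia
using LinearAlgebra

# eigvals on SymTridiagonal dispatches to a specialised LAPACK
# tridiagonal routine rather than the dense O(n^3) solver.
n = 2000
T = SymTridiagonal(randn(n), randn(n - 1))
vals = eigvals(T)   # n eigenvalues, returned in ascending order
```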

@dlfivefifty Thanks for the quick reply!

My lazy code was just

```julia
function LinearAlgebra.eigvals(A::Hermitian{T,<:BandedMatrix{T}}) where T <: Complex
    return pyla.eig_banded(A.data.data, eigvals_only = true)
end
```

By the way, `pyla = pyimport("scipy.linalg")`. The SciPy function `eig_banded` calls LAPACK’s `hbevd` function which is specific for Hermitian banded eigenvalue problems. The sizes I was using for benchmarking were about 3500 by 3500 so I could get a quick sense. The times were about 20 seconds for the dense LAPACK approach and 200 seconds for the banded LAPACK approach (SciPy thing above). How big do you think I would have to go before the banded LAPACK function outperforms the dense approach?

That's not a good routine, as it also forms the eigenvectors and is therefore O(n^3); see the discussion at https://github.com/JuliaMatrices/BandedMatrices.jl/issues/179

3500 is large enough that a banded solver should be much faster.