# Special method for Eigenvectors of Sparse+Hermitian+Complex matrix?

After searching Google, the docs, and Discourse, I did not find any reference to a method that finds eigenvectors and eigenvalues of sparse, Hermitian, complex matrices. I only found a method specialized for real symmetric matrices.

Right now I use:
`eigs(Hmat, maxiter = 1000)` (takes about 22 seconds) with my Hmat being something like:

```
30364×30364 SparseMatrixCSC{Complex{Float64},Int64} with 90580 stored entries:
[15183,     1]  =  -2.8-0.0im
[15185,     1]  =  -2.8-0.0im
[15183,     2]  =  -2.8-0.0im
[15186,     2]  =  -2.8-0.0im
[15184,     3]  =  -2.8-0.0im
[15188,     3]  =  -2.8-0.0im
[15184,     4]  =  -2.8-0.0im
⋮
[15180, 30361]  =  -2.8+0.0im
[15181, 30361]  =  -2.8+0.0im
[15177, 30362]  =  -2.8+0.0im
[15181, 30362]  =  -2.8+0.0im
[15179, 30363]  =  -2.8+0.0im
[15182, 30363]  =  -2.8+0.0im
[15180, 30364]  =  -2.8+0.0im
[15182, 30364]  =  -2.8+0.0im
```

These matrices are the bread and butter of almost all areas of theoretical solid state physics, quantum transport, and many more areas of physics connected with quantum mechanics, which is why I am a bit surprised.

Does anyone know of such a method in Julia?

See http://www.caam.rice.edu/software/ARPACK/UG/node43.html#SECTION00790000000000000000 for a discussion of the Hermitian case in ARPACK. They argue that it is not worth a specialized routine. Lehoucq was probably pretty tired of writing Fortran at this point.

You might also want to take a look at https://github.com/primme/primme to see if there is anything useful there for your problem. I’ve started wrapping the solver in https://github.com/andreasnoack/Primme.jl but the wrappers are absolutely minimal at this point. I’d be happy to help you to contribute to the wrappers.


I was not aware of PRIMME. However, they seem not to say anything about sparse matrices? Or does it not make a difference for them (do they treat them all the same way)? It is clearly relevant to me, since it handles Hermitian complex matrices, which is what I use.

I would gladly use your Primme.jl wrapper, and try to contribute, however I did not see a callable (and easy to understand) function for the eigenvectors. Did you not wrap it?

If I can use your package I will contribute documentation, but I cannot contribute anything else, as I don’t know C99 or how to wrap C code.

> However, they seem to not say anything about sparse matrices?

Iterative methods don’t care about how the matrix is actually stored; it just has to support matrix-vector multiplication.
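
For instance, `eigs` only ever applies the operator to vectors, so the sparse storage format is irrelevant to it. A minimal sketch, assuming Julia 1.x with the Arpack.jl package, and a small hypothetical tight-binding chain standing in for the actual Hamiltonian:

```julia
using SparseArrays, LinearAlgebra, Arpack

# Small complex Hermitian tridiagonal "Hamiltonian" (a made-up stand-in
# for the 30364×30364 matrix in the question).
n = 200
t = -2.8 + 0.0im
H = spdiagm(-1 => fill(t, n - 1), 1 => fill(conj(t), n - 1))

# eigs only needs H * x products; no Hermitian-specialized storage is required.
vals, vecs = eigs(H; nev = 6, which = :LM, maxiter = 1000)

# Sanity check: eigenvalues of a Hermitian matrix are real up to round-off.
maximum(abs.(imag.(vals)))
```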

> I would gladly use your Primme.jl wrapper, and try to contribute, however I did not see a callable (and easy to understand) function for the eigenvectors. Did you not wrap it?

So far, I’ve only wrapped the svd function but I think the Hermitian eigensolver would be very similar so wrapping it would probably be a matter of mimicking the svd wrapper while reading the header files for the Hermitian eigensolver. Unfortunately, I don’t think I’ll have time to do this anytime soon. Feel free to try and I can try to help.

However, be aware that it might not be much faster than `eigs`. It depends on the spectrum. Do you know anything about it? If there are values with higher multiplicities then Primme might be much faster since it supports blocking, but if the values are well separated then `eigs` might be just as fast.


The eigenvalues do not have high multiplicities, but they are extremely close. As an example,

```
Complex{Float64}[30]
8.50… + 1.9364884933104093e-19im
8.493836357250023 - 3.39e-18…im
8.493836357248593 - 4.33e-18…im
8.488929499171801 - 2.99e-18…im
8.488929499170174 - 2.19e-17…im
8.487210806434998 - 9.55e-19…im
8.482965081872845 - 1.37e-17…im
8.482869000861937 - 9.56e-18…im
8.47934805496017 + 7.77e-18…im
8.47934805495935 - 7.57e-18…im
...
8.458545742532564 - 7.58e-18…im
8.456605417266761 - 1.73e-18…im
8.45660541726676 - 2.78e-18…im
8.448697466431314 + 6.76e-18…im
8.448697466431208 + 1.40e-17…im
8.4484637029396 + 1.68e-18…im
8.448463702939568 + 2.06e-17…im
8.443407934680252 + 7.89e-18…im
8.443407934680204 - 5.89e-18…im
8.441731933795301 + 2.64e-17…im
```

are the first 30 eigenvalues of a typical matrix. Notice that they are actually real (the imaginary parts are at round-off level).
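
Since the matrix is Hermitian, these imaginary parts are pure round-off from the general-complex solver and can be checked and discarded, e.g. (with made-up stand-in values for the `eigs` output):

```julia
# Stand-in for eigenvalues returned by eigs on a Hermitian matrix.
vals = [8.493836357250023 - 3.39e-18im, 8.488929499171801 - 2.99e-18im]

# The imaginary parts should be pure round-off noise; verify, then drop them.
@assert maximum(abs.(imag.(vals))) < 1e-12
real_vals = real.(vals)
```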

Ok. Then you might actually benefit a lot from using Primme.

Sounds good, but I won’t be able to contribute code

I don’t have the skills to do it in a reasonable amount of time, and if I really got into it, it would take too much time.

I do promise however that if I can use your package in the future I will contribute documentation, which I know how to do and thus it will only take a reasonable amount of time.

@andreasnoack Do you have a reference for the fact that block eigensolvers are better for clusters? I don’t have a lot of experience with ARPACK but I didn’t know that was a big limitation.

I would not expect PRIMME to be a big win here: even with blocks, your spectrum looks pretty packed, and isolating a few eigenvectors in it is going to be tricky (unless every eigenvalue of the matrix is around 8.4, of course). The deciding factor in the convergence of a block eigensolver is the ratio of the gap separating the cluster you’re trying to compute from the rest of the spectrum to the total width of the spectrum.

A big advantage of PRIMME and friends is the support for preconditioning. If your matrix has a specific structure (typically, if your Hamiltonian is a perturbation of a simpler Hamiltonian), then you can benefit from that.

@antoine-levitt I’m not an expert in this area so I might be wrong. My statement is based on things like http://www.caam.rice.edu/software/ARPACK/UG/node51.html and my attempt to replicate Figure 4 in arXiv:1412.3510, “An implementation of a randomized algorithm for principal component analysis” (not that I recommend the paper).

@Datseris How hard is it to construct your matrix? Can I do it?

@andreasnoack it would be very, very hard to recreate it. You need to call a Python package called Kwant through PyCall, construct a tight-binding system, call one of its solvers to get the Hamiltonian, and then convert the Python sparse matrix into a Julia one. I don’t think it is worth the effort for you to do this.

However, I would be glad to send it to you. I saved the matrix in a `.jld2` file (I stopped using `.jld` long ago). I cannot upload it here (it is 1.2 MB), but I can send it via email if you want.

Feel free to send it by email to andreasnoackjensen@gmail.com

Since I only have PRIMME’s svd solver wrapped, I just tried comparing Julia’s `svds` to the wrapper of PRIMME’s svds-like function to see if it makes a difference, and PRIMME seems to be slower than Julia’s ARPACK-based `svds`. Maybe it is time to write a blocked implicitly restarted Lanczos in Julia to compare with ARPACK.


Yes, I would not expect PRIMME to beat ARPACK in general; it looks tailored towards interior/preconditioned eigenproblems. The SIAM News article “Exploring the Potential of the PRIMME Eigensolver” has a partial overview, and in particular points out that block solvers should be slower in terms of matvecs. In fact, without restarts Lanczos is optimal: if you want to find the largest eigenvalue, for instance, it maximizes the Rayleigh quotient over the n-th Krylov subspace, so it gives you the best approximation for a given budget of n matvecs. I guess the trouble starts when you deflate/restart.

Thanks all, good to know!

That may be true for Arnoldi vs. Lanczos (with implicit restarting). However, there are other methods for Hermitian matrices that are worth considering.

One of my favorites for extremal eigenvalues is LOBPCG, which only works for Hermitian eigenproblems; I don’t know if there is a good way to access this algorithm from Julia right now other than via SciPy. It’s pretty simple to implement, IIRC.

Another interesting example is the FEAST eigensolver (http://www.feast-solver.org/), which is designed to find eigenvalues on an interval of the real line (for Hermitian problems) or inside a contour in the complex plane (for non-Hermitian).


> One of my favorites for extremal eigenvalues is LOBPCG, which only works for Hermitian eigenproblems; I don’t know if there is a good way to access this algorithm from Julia right now other than via SciPy? It’s pretty simple to implement IIRC.

As seems to be the rule with this kind of algorithm, it’s easy to implement but tricky to implement well. It’s implemented in PRIMME (as a special case of a generalized Davidson solver).
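
For a flavor of the “easy to implement” part, here is a deliberately minimal single-vector LOBPCG sketch for the smallest eigenpair of a Hermitian matrix: no preconditioner, no blocking, and none of the careful basis handling a robust implementation needs. The function name is made up for this example:

```julia
using LinearAlgebra

# Minimal single-vector LOBPCG for the smallest eigenpair of Hermitian A.
# Robust versions add blocking, preconditioning, and careful orthogonalization.
function lobpcg_smallest(A; tol = 1e-8, maxiter = 500)
    n = size(A, 1)
    x = normalize!(randn(ComplexF64, n))
    p = zeros(ComplexF64, n)
    λ = real(dot(x, A * x))                    # Rayleigh quotient
    for _ in 1:maxiter
        r = A * x - λ * x                      # residual
        norm(r) < tol && break
        # Rayleigh–Ritz on the 3-dimensional subspace span{x, r, p}
        S = Matrix(qr(hcat(x, r, p)).Q)        # orthonormal basis (n × 3)
        F = eigen(Hermitian(S' * A * S))
        y = S * F.vectors[:, 1]                # best vector in the subspace
        λ = F.values[1]
        p = y - x * dot(x, y)                  # "conjugate" direction
        x = normalize!(y)
    end
    return λ, x
end
```

Even this toy version converges on small well-behaved matrices; the tricky part is keeping the `{x, r, p}` basis numerically independent at scale, which is exactly what PRIMME handles for you.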

FEAST requires factoring the matrix. I would imagine it’s only better than shift-and-invert ARPACK on massively parallel computers or in the regime where the orthogonalization costs are large compared to the linear solves.


No, the FEAST reverse-communication API documentation is very confusing, but my understanding is that FEAST only requires that you provide black-box matvec and shifted solve (`A*x` and `(A-λI)\x`). This is essentially the same cost as shift-and-invert. (The references in the manual to “factorizing” the matrix are basically just a step where the user can do re-usable initialization, typically factorization or preconditioning, but the only actual communication with FEAST is done via matvec and solve. The reason for this is that the underlying algorithm is based on essentially integrating (A-λI)⁻¹X around a contour for some subspace X given by a bound on the number of eigenvalues you want.)
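
The underlying idea can be sketched in a few lines: approximate the spectral projector P = (1/2πi) ∮ (zI - A)⁻¹ dz by a quadrature rule on a circle and apply it to a block of vectors. This is a toy illustration of the rational-filter idea, not FEAST’s actual algorithm, and all names here are made up:

```julia
using LinearAlgebra

# Toy rational filter: approximate the spectral projector onto eigenvalues
# inside the circle |z - c| < r by trapezoidal quadrature of (zI - A)^{-1} X
# along the contour. Real FEAST adds Rayleigh-Ritz and iteration on top.
function contour_filter(A, X; c = 0.0, r = 1.0, npts = 8)
    Y = zeros(ComplexF64, size(X))
    for k in 1:npts
        θ = 2π * (k - 0.5) / npts          # midpoint quadrature nodes
        z = c + r * cis(θ)                 # point on the contour
        # Weight r*cis(θ)/npts comes from dz = i*r*cis(θ) dθ and the 1/(2πi).
        Y .+= (r * cis(θ) / npts) .* ((z * I - A) \ X)
    end
    return Y
end
```

For a diagonal `A = Diagonal([0.5, 2.0])` and `X = I`, the filter maps the eigenvector at 0.5 (inside the unit circle) almost to itself and nearly annihilates the one at 2.0, which is exactly the amplification behavior described above.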

My understanding is that the main advantage of FEAST comes when you know a specific region of the complex plane or real line that you are interested in, and don’t want eigenvalues outside of that contour. For example:

• If you are solving for resonant modes in a scattering problem, you are usually only interested in eigenvalues in a strip close to the real axis.

• In solid-state problems where you are looking for “localized” eigenvalues inside a spectral gap, you are only looking for λ in a segment of the real line — outside of that segment, there is a (discretized) “dense” spectrum of zillions of uninteresting eigenvalues.

In both of these cases, shift-and-invert is very suboptimal. FEAST also parallelizes very well.


> No, the FEAST reverse-communication API documentation is very confusing, but my understanding is that FEAST only requires that you provide black-box matvec and shifted solve (A*x and (A-λI)\x). This is essentially the same cost as shift-and-invert. (The references in the manual to “factorizing” the matrix are basically just a step where the user can do re-usable initialization, typically factorization or preconditioning, but the only actual communication with FEAST is done via matvec and solve. The reason for this is that the underlying algorithm is based on essentially integrating (A-λI)⁻¹X around a contour for some subspace X given by a bound on the number of eigenvalues you want.)

That’s right, and to do the shifted solve you pretty much need to factor your matrix. There’s a recent paper where they do the linear solves iteratively: https://arxiv.org/pdf/1706.00692.pdf, and they show the total number of matvecs is very much larger than that of ARPACK (although they do gain in a massively parallel setting by doing those matvecs in parallel).

EDIT: sorry if I misunderstood your point: yes, the interface is still black-box linear solve, as in shift-and-invert ARPACK, I did not mean to suggest it wasn’t.

> My understanding is that the main advantage of FEAST comes when you know a specific region of the complex plane or real line that you are interested, and don’t want eigenvalues outside of that contour.
> In both of these cases, shift-and-invert is very suboptimal. FEAST also parallelizes very well.

Why is shift-and-invert very suboptimal here? It amplifies the eigenvalues you’re interested in (near the shift) and not the others, pretty much the same as what FEAST does. FEAST has a better amplification factor, but requires more linear solves to do it.

From what I understand, a good way to compare eigensolvers is that they have two orthogonal parts:

1. transformation of the matrix (ideally to make the eigenvalues you’re interested in exterior and well separated from the rest of the spectrum), e.g. shift-invert, polynomial filtering (Chebyshev), rational filters (FEAST and several others)

2. extraction of eigenvalues from that matrix, e.g. arnoldi/lanczos, subspace iteration, LOBPCG…

The more complex the transformation, the better the separation between eigenvalues and the faster the convergence, but you need to do more matvecs/linear solves. The more complex the extraction part, the better the convergence, but you need to do more work/have more storage.

Standard ARPACK uses no transformation and does Arnoldi. Shift-invert replaces A by (A - σI)⁻¹ and does Arnoldi on top of that. FEAST (and similar methods based on rational filtering) replaces A by a discretization of a contour integral (a sum of many terms (A - σᵢI)⁻¹) and does subspace iteration on top of that. The relevant criterion for comparing these methods is the relative cost of matvecs/factorizations versus orthogonalization/subspace-diagonalization/storage costs. FEAST has an expensive transformation but a cheap extraction part, so if you can do your linear solves efficiently and in parallel, it works great.
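
As a concrete example of the shift-invert transformation, Arpack’s `eigs` accepts a `sigma` keyword and then runs Arnoldi on (A - σI)⁻¹ internally, returning the eigenvalues of A nearest the shift. A sketch, assuming the Arpack.jl package and a 1-D Laplacian as a made-up test matrix:

```julia
using SparseArrays, Arpack

# 1-D discrete Laplacian as a stand-in test matrix (eigenvalues lie in (0, 4)).
n = 200
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))

# Interior eigenvalues: Arnoldi runs on (A - sigma*I)^{-1} internally, so
# eigs returns the eigenvalues of A closest to sigma (one solve per matvec).
vals, vecs = eigs(A; nev = 4, sigma = 2.0)
```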

Both methods find eigenvalues in the interior of the spectrum, it’s true, but shift-and-invert finds eigenvalues near a given shift μ, whereas FEAST finds them only inside a contour. That makes a big difference if there are a lot of eigenvalues you are not interested in just outside the contour: ordinary shift-and-invert Arnoldi will pick up a lot of those (and hence require a larger subspace). It is also a big difference if the contour is very non-circular (e.g. a narrow strip in the complex plane): basically, FEAST picks an optimized set of shifts for you. As you say, FEAST essentially optimizes an “eigenvalue filter” for you.

Caveat: My experience with contour-integral eigensolver methods is mainly for nonlinear eigenproblems, where the competitors’ flaws are much worse. My knowledge of FEAST is only theoretical and from talking with the FEAST authors and looking at the FEAST papers.

I suspect the hard cases for shift-invert are going to be the hard cases for FEAST as well: if there are a lot of eigenvalues just outside the contour, it is going to require a large number of poles in the rational filter to achieve good enough separation. I would imagine a narrow strip is similarly hard. From what I understand, the two have the same easy and hard cases, and different strengths in different regimes (for instance, I would expect FEAST to lose to ARPACK in the sequential regime, which admittedly is not that relevant for the kind of eigenproblems one uses these methods for, and to win in the highly parallel case). But I also don’t have a lot of experience with this, just papers and discussions with the authors.

The one comparison I’m aware of between contour-based eigensolvers and shift-invert is http://www.sciencedirect.com/science/article/pii/S0167819114000325.