Computing left-eigenvectors with high precision

I need to compute the left eigenvectors of a general complex matrix H. In particular, I want to handle a large, ill-conditioned one.
I know two ways using only LinearAlgebra (sketched in code after the list):

  1. computing the right eigenvectors of H'.
  2. computing the inverse of the matrix (v_1, v_2, ..., v_n), where {v_i} are the right eigenvectors of H.
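
For concreteness, here is a minimal sketch of the two approaches with plain LinearAlgebra (the small random H is just a hypothetical placeholder):

using LinearAlgebra

H = rand(ComplexF64, 4, 4)   # hypothetical small example

# Way 1: right eigenvectors of H'. If H' * w = μ * w, then w' * H = conj(μ) * w',
# so w is a left eigenvector of H for the eigenvalue conj(μ). The column ordering
# of eigvecs(copy(H')) need not match that of eigvecs(H).
WL1 = eigvecs(copy(H'))

# Way 2: invert the right-eigenvector matrix. From H * V = V * Diagonal(λ) we get
# inv(V) * H = Diagonal(λ) * inv(V), so the rows of inv(V) are left eigenvectors,
# in the same order as λ.
λ, V = eigen(H)
WL2 = inv(V)'                # columns of WL2 are left eigenvectors matching λ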

However, there are some problems:

  1. Because the first approach diagonalizes H and H' separately, I don't know the correspondence between the left and right eigenvectors. Even if I did, there is no reason they would be computed to the same precision, because H and H' are different matrices in general. In addition, to my understanding, if H is large or ill-conditioned, neither diagonalization works well in the first place.
  2. The second approach diagonalizes only once, but computing the inverse requires high precision.

Is there any way to calculate the left eigenvectors of a large, ill-conditioned matrix with high precision?


If you don’t mind piracy:

using GenericSchur
S = schur(H)                 # H should have Complex{BigFloat} entries for extended precision
VL = eigvecs(S, left=true)   # columns are the left eigenvectors
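
If you also need the matching right eigenvectors, both come from the same factorization with a shared ordering; a minimal sketch, assuming H has Complex{BigFloat} entries:

using GenericSchur, LinearAlgebra
S  = schur(H)
λ  = S.values                # eigenvalues
VR = eigvecs(S)              # right eigenvectors, columns ordered as S.values
VL = eigvecs(S, left=true)   # left eigenvectors, same ordering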

or do it yourself with LAPACK.trevc! in LinearAlgebra:

S = schur(H)
# side='L' requests left eigenvectors; howmny='B' back-transforms all of them using
# the Schur vectors supplied in the VL slot (here S.Z, which is overwritten in place).
VL = LAPACK.trevc!('L', 'B', Int[], S.T, S.Z, S.Z)

Thanks!
Since I don’t understand how to use LAPACK.trevc! yet, I have a few questions about the former code.

  1. Do eigvecs and LAPACK.trevc! give the same results? Are there any differences?
  2. Does GenericSchur compute only eigenvectors? eigen(S) and eigvals(S) don't seem to work. I want not only a left eigenvector but also the corresponding right eigenvector, and the simplest way to pair them is to check the eigenvalue.
    answer: S.values gives the eigenvalues.
  3. Most important question:
    Is BigFloat the highest precision available in GenericSchur, or can I tune it more finely? I'm concerned about cases where I can't get correct results even with BigFloat (though I don't have a concrete example…).
    I have heard that, due to the nature of diagonalization algorithms, there are cases where the eigenvalues are computed correctly but the eigenvectors are not, for example
    (1,0,0,...,0) : analytical result
    (0,...,0,0,1) : numerical result (an extreme case).
    So I think sufficient precision is needed (though I don't know whether precision alone will avoid this problem…).

You can increase the precision of BigFloat beyond the default 256 bits, if you want.

julia> precision(BigFloat)
256

julia> BigFloat(pi)
3.141592653589793238462643383279502884197169399375105820974944592307816406286198

julia> setprecision(512)
512

julia> BigFloat(pi)
3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848115
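
If you prefer not to change the global default, setprecision also accepts a do-block; a sketch combining it with GenericSchur, assuming H is already defined:

using GenericSchur, LinearAlgebra

VL = setprecision(BigFloat, 512) do
    Hbig = Complex{BigFloat}.(H)   # promote entries to 512-bit precision
    S = schur(Hbig)                # generic Schur factorization
    eigvecs(S, left=true)
end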

The relevant question for eigenproblems is not whether the original matrix is ill-conditioned (nearly singular), but whether its eigenvector matrix is ill-conditioned (nearly singular, i.e. nearly parallel eigenvectors, i.e. the matrix is nearly defective). In this case the whole concept of an eigenvector is problematic — the eigenvectors tend not to be a very useful basis.
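
A quick way to check this distinction numerically (H here is your hypothetical matrix):

using LinearAlgebra
λ, V = eigen(H)
cond(H)   # conditioning of the matrix itself (near-singularity)
cond(V)   # conditioning of the eigenvector matrix; huge ⇒ nearly defective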

Can you give a concrete example of what you are trying to do? What do you need the eigenvectors for?


Thanks! I’ll try it.

My purpose is to diagonalize a (non-Hermitian) Hamiltonian and obtain its energies and wave functions, so the eigenvectors are needed. In particular, if the Hamiltonian has a certain type of time-reversal symmetry, the left eigenvectors are related to the Kramers pairs, so I want those as well.
As you said, the conditioning of the eigenvector matrix is what matters in the eigenproblem, and I am dealing with such matrices. In that case, the computation numerically returns pseudospectra at low precision or large matrix sizes, so I think high precision is needed as well.
I have organized my questions as follows (they drift further and further from the original question…).

  1. In such an ill-conditioned case, how can I get the eigenvectors with high precision?
  2. When the eigenvalues are true spectra, not pseudospectra, under sufficiently high precision, are the eigenvectors also true eigenvectors (not "pseudo-eigenvectors", if such a word exists)? If not, how can I avoid this?
    Namely, I'm concerned about the case I mentioned in my reply to Ralph_Smith.

So, the reason you want the eigenvectors and eigenvalues is to get the eigenvectors (wave functions) and eigenvalues (energies) … this reasoning is more than a little circular. The question is, why do you think the eigenvectors are meaningful if the matrix is nearly defective? (If it's non-Hermitian, you aren't talking about directly observable quantities in quantum mechanics.)

In such an ill-condition case, How to get the eigenvectors with high precision?

If the eigenproblem is badly conditioned, then you may need additional floating-point precision (depending on how badly conditioned it is and how accurately you need the eigenvectors).

(Of course, if you have any error in computing/defining the matrix, that error will propagate to the eigenvectors, regardless of the precision you use in between.)


trevc! is just for the element types handled in the standard LAPACK libraries (Float32/Float64 and their complex counterparts).

The eigvecs(S::Schur) method in GenericSchur is a Julia translation of LAPACK code (nothing fancy, just linear solves), so results are usually similar. The vectors are in the order of the eigenvalues in S.values. (We should probably have an eigvals(::Schur) method for completeness, but don't.) As Steven points out, some eigenvectors may not be numerically well-defined (beyond the obvious ambiguities of phase and degeneracy), so two different implementations may give visibly different results that are both numerically correct in the sense that VL' * H - Diagonal(S.values) * VL' is tiny. You can examine the sensitivity/accuracy of eigenvectors with the subspacesep function in GenericSchur.
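
For example, a minimal residual check along these lines, assuming H has Complex{BigFloat} entries:

using GenericSchur, LinearAlgebra
S = schur(H)
VL = eigvecs(S, left=true)
# residual of the left-eigenvector relation w' * H = λ * w'; small ⇒ numerically correct
norm(VL' * H - Diagonal(S.values) * VL')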

By the way, you may find the Pseudospectra.jl package useful in the analysis of your matrices. As Steven implies, resolvents are more informative than eigenvectors for some applications in non-Hermitian QM.
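
A one-liner to get started with it, assuming a Plots backend is loaded:

using Pseudospectra, Plots
spectralportrait(H)   # contour portrait of the ε-pseudospectra of H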

Do you need more concrete physical reasons? Umm… In non-Hermitian systems, almost all eigenvectors can be localized at the edges under open boundary conditions. This is called the non-Hermitian skin effect. It is difficult to explain in more detail here, but anyway, I'm interested in this phenomenon, and it is the reason I need the eigenvectors, not only the eigenvalues. (I want to plot the eigenvectors to visualize the localization.)
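
For what it's worth, a toy sketch of that visualization (a hypothetical Hatano-Nelson-type chain with asymmetric hopping, not my actual Hamiltonian):

using LinearAlgebra, Plots
n = 50
H = diagm(1 => fill(0.5, n-1), -1 => fill(1.5, n-1))   # asymmetric hopping, open boundaries
λ, V = eigen(H)
plot(abs2.(V); legend=false, xlabel="site", ylabel="|ψ|²")   # most weight piles up at one edge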

Is the additional floating-point precision obtained with setprecision(), as in John_Gibson's comment? Or is there another way?

This function may be helpful. I’ll try later. Thanks.

Yes. I know pseudospectra carry a lot of information, but as I wrote above, the eigenvectors also carry useful information from the physics point of view.

Trying to chase instabilities by increasing numerical accuracy rarely ends well. If your problem is so ill-conditioned that you can't get a reasonable answer with 16 digits of accuracy, it's doubtful you can get a reasonable answer with 32, or 100; and if you can, probably the next thing you will want to do is increase a parameter (e.g. here your system size) and it will get exponentially worse. So I would first try a small-ish system (where presumably things stay relatively well-conditioned), understand what's going on there, and then try to find a relevant quantity that is more stable to compute before going to bigger things.


Yes, if you are using BigFloat; there are also other packages defining alternative extended-precision and arbitrary-precision types.

Yes, I’m familiar with non-Hermitian physics, but I’m also familiar with physicists being led astray by focusing on misleading eigenvectors in near-defective systems. (e.g. false attribution of physical meaning to diverging Petermann factors, as we analyzed here for spontaneous emission.) Usually, the physically observable quantities are related more closely to the Green’s function / scattering matrix / resolvent (all aspects of the same thing) of the system, and I get the impression that a lot of work on the NHSE is focusing on something less meaningful (ill-conditioned, fragile, not-directly-observable eigenfunctions).


I totally agree with you.
Setting high precision was one step toward understanding what's going on.

Could you tell me other package names?

Umm… I don't intend to discuss what is physically meaningful or meaningless here, but thank you for pointing me to the paper.

Arb has several routines to compute eigenvalues/eigenvectors (both left and right), using arbitrary precision, accepting initial guesses, and with mathematical guarantees™. We wrap (most of) the functionality in
https://github.com/kalmarek/Arblib.jl/blob/master/src/eigen.jl

However, outside of some very specialized situations, you should take the advice given above and rethink the problem instead of doubling the requested precision :wink:

Get in touch if you need any help with using Arblib.jl.
