Eigenvector precision issues

I am using eigfact to extract the eigenvectors corresponding to the 1000 largest eigenvalues from a symmetric matrix.

V = eigfact(Symmetric(M))[:vectors][:, end-999:end] # eigenvalues come back in ascending order, so the last 1000 columns go with the 1000 largest
V = V[:, end:-1:1] # reverse so that V[:,1] is the eigenvector corresponding to the largest eigenvalue

For some reason, after the 5th eigenvector (the one corresponding to the 5th largest eigenvalue), the precision of the values in the eigenvectors drops dramatically: e.g., 0.2 (what I get in R using eigen(M, symmetric=TRUE)) vs 0.02 (what I get in Julia). The precision in eigenvectors 1 through 5, however, is excellent.

How do you measure your precision? Beware that eigenvectors are only defined up to sign (or up to phase in the Hermitian case), and that it only makes sense to compare them if the eigenvalues are well separated: in case of multiplicity, any choice of orthogonal basis for the invariant subspace is valid.
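
For instance, a minimal sketch of a sign-insensitive comparison (V_julia and V_R are hypothetical matrices holding the two sets of eigenvectors as columns, in the same order):

for j in 1:size(V_julia, 2)
    u, v = V_julia[:, j], V_R[:, j]
    s = sign(dot(u, v))  # flip the sign if the two vectors point in opposite directions
    println(j, ": ", norm(u - s*v))  # small only where the eigenvalue is well separated
end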

It’s hard to say what the problem might be without access to your matrix M. What is its size, and how was it formed?

Also, it would be good to make a direct check of how precisely the eigenvalue and eigenvector satisfy Av = lambda v, say by computing norm(A v - lambda v)/norm(lambda v).
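
A minimal sketch of that check, assuming A is the M from the original post:

F = eigfact(Symmetric(M))
for j in size(M, 1):-1:size(M, 1)-9  # the 10 largest eigenpairs (values are ascending)
    v, lam = F[:vectors][:, j], F[:values][j]
    # relative residual; values near machine epsilon (~1e-16) are as good as it gets
    println(lam, "  ", norm(M*v - lam*v) / norm(lam*v))
end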

I’m comparing it to the results I get in R, where a downstream algorithm depends on the values of the eigenvectors. There are no errors downstream of the eigenvector step in the Julia code, because if I manually set the parameters in Julia to the values from R, I get the same results in Julia. Maybe precision was the wrong term, since both Julia and R use the same LAPACK library. I understand that signs can differ, and they do for some of eigenvectors 1 through 5. I’ll try to create a reproducible example, but it will require IO between Julia and R.

Maybe I’m misunderstanding you, but I’d caution against thinking of the eigenvalues from R as right and differences from R as errors. Numerical algorithms in finite precision are sensitive to the condition number and non-normality of matrices. Your matrix might be so non-normal that the fifth eigenvalue cannot be computed to any digits of accuracy, in which case R and Julia give equally good answers. That’s why I suggested doing a direct check of norm(A v - lambda v)/norm(lambda v). If that comes out as 1e-16 for R but closer to 1 for Julia, then Julia has a problem for sure.

Julia and R both call LAPACK, but are they calling the same LAPACK function here?
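
One way to check from the Julia side is to call the LAPACK wrapper directly and compare with eigfact's output. A sketch, assuming a pre-0.7 Julia (module path Base.LinAlg; it is LinearAlgebra.LAPACK on later versions) and relying on R's documentation that eigen(M, symmetric=TRUE) uses dsyevr:

# Call dsyevr directly; eigfact(Symmetric(M)) is expected to hit the same driver.
W, Z = Base.LinAlg.LAPACK.syevr!('V', 'A', 'U', copy(M), 0.0, 0.0, 0, 0, -1.0)
# W: eigenvalues in ascending order; Z: the corresponding eigenvectors as columns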

If it is difficult to provide a reproducible example, you could start by providing the 10 largest eigenvalues as well as the diff of them (the gaps between consecutive eigenvalues, which reveal any near-multiplicity).
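
For example, a quick sketch:

vals = eigfact(Symmetric(M))[:values]  # ascending order
top10 = vals[end:-1:end-9]             # the 10 largest, in descending order
println(top10)
println(-diff(top10))                  # the gaps; tiny gaps hint at near-multiplicity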

For the record, the norm(A v - lambda v)/norm(lambda v) results show that Julia actually has much better precision. My mistake was in fact downstream, where I didn’t perform elementwise division (/ vs ./) right after the eigenvector step. The step after that was an optimization step, so I had just taken the optimal value from R to check the Julia code; when I went back for a more thorough debugging pass, the differing eigenvectors caught my attention before I reached the optimization step. My apologies.
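
(For anyone who lands here with the same bug: between two vectors, / is not elementwise division in Julia; it is a least-squares right-division that returns a matrix. A tiny illustration with made-up values:)

u = [1.0, 2.0]; v = [4.0, 2.0]
u ./ v  # elementwise division: [0.25, 1.0]
u / v   # right-division: a 2x2 rank-1 matrix, silently not elementwise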
