(Plotted above are the eigenvalues of a matrix A(k), with k being the momentum on the x axis.)
So I am trying to get the eigenvalues and eigenvectors of a Hermitian matrix that depends continuously on a parameter k, which is a central problem in condensed matter physics. Unfortunately, these eigenvalues can get very close or even become degenerate at some points, and I am observing noise in the projection of the eigenstates onto a constant basis vector when the eigenvalues get close (shown by the color).
I am curious whether there is a way to increase the precision of the eigen() call, or perhaps another library I could use. I am fine with writing my own solver with tunable convergence if need be, but I would prefer not to if one already exists.
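For reference, a minimal sketch of the kind of computation involved; the 2×2 A(k) here is a toy stand-in with an avoided crossing, not my actual Hamiltonian:

```julia
using LinearAlgebra

# Toy stand-in for the real A(k): a 2x2 Hermitian matrix with an
# avoided crossing near k = 0 (the actual Hamiltonian is problem-specific).
A(k) = Hermitian([k 0.1; 0.1 -k])

ks = range(-1, 1; length = 201)
e0 = [1.0, 0.0]  # constant basis vector to project onto

bands = Matrix{Float64}(undef, length(ks), 2)     # eigenvalues (the bands)
overlaps = Matrix{Float64}(undef, length(ks), 2)  # |<e0|psi_n(k)>|^2, the "color"
for (i, k) in enumerate(ks)
    F = eigen(A(k))  # for Hermitian input the values come back sorted ascending
    bands[i, :] = F.values
    overlaps[i, :] = [abs2(dot(e0, F.vectors[:, n])) for n in 1:2]
end
```

Since the eigenvectors form an orthonormal basis, the overlaps for each k sum to 1; it is the split between the two bands that becomes noisy near the crossing.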
eigen should just call LAPACK. It’s possible that Julia does that imperfectly, and also possible (but unlikely) that LAPACK has a bug. Perhaps there’s an artifact introduced by your analysis, or your problem is ill-conditioned? I’d suggest comparing against something expected to be equivalent, e.g. NumPy or MATLAB. If the problem is unique to Julia, can you provide an MWE?
By the way, from your plot I wouldn’t call that noise. It seems that when the eigenvalues cross, they both end up on different (but consistent) loci. Whatever is happening seems quite deterministic.
The relevant Julia code is probably in symmetriceigen.jl. Glancing through, it mostly seems to use LAPACK directly. There’s a bit of computation for some similarity transforms, but nothing fancy and unlikely to introduce numerical instability. It’s also unclear whether your case actually hits those transforms. You could trace your example to find out exactly which branches it hits.
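One low-effort way to start that trace is to ask for the dispatch target directly (InteractiveUtils is in the stdlib):

```julia
using LinearAlgebra, InteractiveUtils

H = Hermitian(rand(ComplexF64, 4, 4))
# Show which method (and source file/line) eigen dispatches to for this input;
# for a Hermitian wrapper it should point into the symmetric/Hermitian code path.
m = @which eigen(H)
println(m.file, ":", m.line)
```

From there, @edit eigen(H) opens the method in your editor so you can follow the branches by hand.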
There are extensions of eigen(::Hermitian) which handle more precise types (e.g. Float128 from the Quadmath package, or BigFloat) in the GenericSchur package. If you only need a few eigenpairs, I have a translation of the LAPACK algorithm for subsets (which also works with arbitrary precision) in an unregistered package, GenericMRRR. If your matrices are large, there are packages with generic Krylov schemes too. You can also try iterative refinement.
What exactly is wrong in the picture? I don’t understand what the colour represents. Are you sure the result is not correct? This looks like repulsion between eigenvalues.
EDIT: Welcome to the Julia Discourse site!
The direct eigenvalue solvers are basically as robust as they can be. I think your issue is that small errors in the computation of the matrix have strong effects on the behavior of the eigenvalues. The basic example is the avoided-crossing model [k eps; eps -k], with eigenvalues \pm sqrt(k^2+eps^2), where you see a transition region of size eps. So I’d check whether the size of the transition region is compatible with possible inaccuracies in other parts of your code.
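For concreteness, here is that toy model checked against its closed-form eigenvalues (eps_ = 1e-3 is an arbitrary choice):

```julia
using LinearAlgebra

# Avoided-crossing toy model: H(k) = [k eps; eps -k] has exact eigenvalues
# ±sqrt(k^2 + eps^2), so a perturbation of size eps smears the crossing at
# k = 0 into a transition region of width ~eps.
eps_ = 1e-3
H(k) = Symmetric([k eps_; eps_ -k])

for k in (-0.01, 0.0, 0.01)
    vals = eigvals(H(k))                              # sorted ascending
    exact = [-sqrt(k^2 + eps_^2), sqrt(k^2 + eps_^2)]
    @show k vals ≈ exact
end
```

The point is that the gap at k = 0 is 2*eps_, so if some other part of your pipeline perturbs the matrix at the 1e-3 level, you get a transition region of roughly that width even though the solver itself is fine.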
This is all assuming your eigensolves are specific to Hermitian matrices, either by making sure your matrices are exactly Hermitian or by wrapping them in the Hermitian type; if not, make sure of that first.
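A sketch of what I mean (the matrix here is random, just to illustrate the wrapping; note that Hermitian(A) only reads one triangle, so it also cleans up a matrix that is Hermitian only up to roundoff):

```julia
using LinearAlgebra

# Building A(k) from products or finite differences often leaves it Hermitian
# only up to floating-point roundoff; wrap it so eigen dispatches to the
# symmetric/Hermitian solver and the eigenvalues come back exactly real.
A0 = rand(ComplexF64, 4, 4)
A0 = A0 + A0'        # exactly Hermitian by construction
H = Hermitian(A0)    # uses the upper triangle; ignores any roundoff asymmetry

F = eigen(H)
@assert eltype(F.values) <: Real   # Hermitian path returns real eigenvalues
```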
This is also assuming that the crossing is indeed a proper crossing, and not an avoided crossing; generically, the von Neumann–Wigner theorem says eigenvalues of Hermitian matrices cross on a manifold of codimension 3, not 1, so there’s absolutely no hope of seeing a proper crossing on a line plot unless you have two symmetries. If you expect to see a crossing because of symmetries, I would make sure those symmetries are reflected in the actual thing you’re computing (e.g. that the discretization respects the symmetry).
Do you see any difference in the plot if you compute the eigenvalues with eigvals instead of eigen? LAPACK will use a different algorithm if only the values are requested, and it might be more robust.
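Something along these lines would show whether the values themselves differ between the two code paths (as far as I know both go through the same LAPACK driver family with and without the eigenvector job, but treat that as an assumption):

```julia
using LinearAlgebra

# eigvals requests only the eigenvalues; eigen requests the vectors as well.
# Comparing the two isolates whether the values differ between the code paths.
A = rand(ComplexF64, 50, 50)
H = Hermitian(A + A')

vals_only = eigvals(H)          # values-only path
vals_full = eigen(H).values     # values + vectors path; both sorted ascending
println(maximum(abs.(vals_only .- vals_full)))
```

If that difference is at roundoff level but your plot changes, the discrepancy is in the eigenvectors (and hence the projections), not the eigenvalues.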