# How to get eigenvalues to take the same order as in Matlab

So I am rewriting some code from Matlab into Julia. I want the final plots to be the same; however, Julia sorts the eigenvalues in ascending order, which makes the plot confusing for my use case.

The following plot is produced by the Matlab code:
Matlab Plot

Matlab Code

mu = 57.88e-3;  % not defined in the original snippet; values taken from the Julia call below
B = 0.5;
t = 0.06;
g1 = 1.0;
g2 = 4.0;
eps = sym('eps');
A=[(-eps+(g1+g2)*mu*B)/2 0 0 0 0;
0 -eps/2 0 (g1-g2)*mu*B/2 0;
0 0 (-eps-(g1+g2)*mu*B)/2 0 0;
0 (g1-g2)*mu*B/2 0 -eps/2 t;
0 0 0 t eps/2];

E=eig(A); %energy levels (eigenvalues)
figure
subplot(2,2,1)
fplot([real(E(1)),real(E(2)),real(E(3)),real(E(4)),real(E(5))],[-0.260,0.260])
title('Energy spectrum')
xlabel('\epsilon (meV)')
ylabel('E (meV)')


The points of intersection are important and unfortunately, I lose them in the Julia code because of the ascending order.

Julia Plot

Julia Code

using PyPlot
using LinearAlgebra

function model_g1g2(g1, g2, B, t)
    μ = 57.88e-3
    ϵ = collect(range(-0.26, stop=0.26, length=100))
    E = Array{Float64,2}(undef, 5, 100)

    for i = 1:100
        A = [(-ϵ[i]+(g1+g2)*μ*B)/2 0 0 0 0;
             0 -ϵ[i]/2 0 (g1-g2)*μ*B/2 0;
             0 0 (-ϵ[i]-(g1+g2)*μ*B)/2 0 0;
             0 (g1-g2)*μ*B/2 0 -ϵ[i]/2 t;
             0 0 0 t ϵ[i]/2]
        E[:, i] = eigvals(A)  # eigvals returns the eigenvalues in ascending order
    end

    fig, ax1 = subplots(1, 1)
    for j = 1:5
        ax1.plot(ϵ, E[j, :])
    end
    return fig
end
model_g1g2(1, 4, 0.5, 0.06)


So essentially I want to replicate the ordering Matlab’s eig function uses without switching the eigenvalues after the fact. I will be tweaking the parameters, so ideally I don’t want to go back in and adjust the code every time the points of intersection change.

Thanks!

Strictly speaking, this is not so much about the ordering of eigenvalues in Matlab vs. the ordering of eigenvalues in Julia. It is rather that in Matlab you solve the problem quite differently than in Julia. The strict Matlab equivalent of your Julia code is

mu = 57.88e-3;
B = 0.5;
t = 0.06;
g1=1.0;
g2=4.0;

eps=linspace(-0.26,0.26,100);

E = zeros(5,100);

for i=1:100
A=[(-eps(i)+(g1+g2)*mu*B)/2 0 0 0 0;
0 -eps(i)/2 0 (g1-g2)*mu*B/2 0;
0 0 (-eps(i)-(g1+g2)*mu*B)/2 0 0;
0 (g1-g2)*mu*B/2 0 -eps(i)/2 t;
0 0 0 t eps(i)/2];
E(:,i)=eig(A); %energy levels (eigenvalues)
end

figure(1)
plot(eps,E)

title('Energy spectrum')
xlabel('\epsilon (meV)')
ylabel('E (meV)')


which gives essentially the same figure as your Julia code.

Therefore, rather than investigating the issue of possibly different ordering of eigenvalues in Julia compared to Matlab, you may have to think about how to compute the eigenvalues of a matrix parameterized by a single (symbolic) parameter eps in Julia, which is what you did in Matlab. I may have a look at it, but unfortunately I am not quite fluent with symbolics in Julia.

3 Likes

Since AFAIK Matlab doesn’t document what ordering they use, and in general the ordering of eigenvalues can be arbitrary in any case, it doesn’t seem possible to come up with an algorithm that is guaranteed to reproduce it. (For Hermitian matrices, most numerical eigensolvers default to ascending order since that’s what LAPACK does.)
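For instance (a quick check of my own, not from the original post), Julia’s `eigvals` on a real symmetric matrix goes through LAPACK’s symmetric eigensolver and comes back sorted in ascending order:

```julia
using LinearAlgebra

# A symmetric 2x2 matrix with eigenvalues -1 and 4 (roots of λ^2 - 3λ - 4 = 0).
A = [0.0 2.0; 2.0 3.0]

vals = eigvals(Symmetric(A))
println(vals)  # ascending order: [-1.0, 4.0]
```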

If you are trying to detect eigenvalue crossings, you should probably use a more reliable algorithm than “give it to Matlab and hope it does what I want”.

Normally, eigenvalue crossings arise when there is some symmetry commuting with your matrix. If you can identify the symmetry group of your problem, you can block-diagonalize your matrix and separate the eigenvalues robustly into eigenvalues within each block (equivalently, label the eigenvectors according to the irreps of the symmetry group that they fall into).
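As a concrete illustration for the matrix in the original question (a sketch of mine, using the parameter values from the call `model_g1g2(1, 4, 0.5, 0.06)`): that 5×5 matrix is already block-diagonal up to a permutation, since rows/columns 1 and 3 couple to nothing, while {2, 4, 5} form a coupled 3×3 block. Computing the eigenvalues block by block labels each branch by its block, so the two decoupled linear branches can never be shuffled with the 3×3 ones at a crossing:

```julia
using LinearAlgebra

# Parameter values from the original post's call model_g1g2(1, 4, 0.5, 0.06).
g1, g2, B, t = 1.0, 4.0, 0.5, 0.06
μ = 57.88e-3

# Eigenvalues of the 5x5 matrix, grouped by invariant block, at one value of ϵ.
# Indices {1} and {3} are uncoupled 1x1 blocks; {2, 4, 5} form a 3x3 block.
function block_eigvals(ϵ)
    λ1 = (-ϵ + (g1 + g2) * μ * B) / 2            # 1x1 block, index 1
    λ3 = (-ϵ - (g1 + g2) * μ * B) / 2            # 1x1 block, index 3
    A245 = [-ϵ/2          (g1-g2)*μ*B/2  0;
            (g1-g2)*μ*B/2 -ϵ/2           t;
            0              t             ϵ/2]     # coupled 3x3 block {2,4,5}
    return λ1, λ3, eigvals(Symmetric(A245))
end
```

Within the 3×3 block the eigenvalues still come back in ascending order, so any remaining crossings inside that block would need a finer symmetry argument (or the eigenvector-matching heuristic discussed later in the thread).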

5 Likes

Ah, thank you very much. I am not very fluent in Matlab or symbolics and was not aware that this method solves for the eigenvalues in a different manner. I will take a look at symbolics in Julia and see if I can get something similar.

That does sound like a much more rigorous approach. I am fairly new to programming, so thanks for pointing this out!

Well, the difference I am talking/writing about is nothing very deep and complicated. It is just that in Julia you are computing eigenvalues for a sequence/array of matrices, and you obtain a sequence of vectors of eigenvalues, while in Matlab you are computing eigenvalues for a matrix that is a function of a single parameter, at which, lucky you, the Symbolic Math Toolbox succeeded, and you get a vector of five functions.

Just have a look at what your E stands for in Matlab:

>> E

E =

- eps/2 - 1447/20000
1447/20000 - eps/2
((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432 + (((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432)^2 - (eps^2/9 + 18281427/10000000000)^3)^(1/2))^(1/3) - eps/6 + (eps^2/9 + 18281427/10000000000)/((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432 + (((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432)^2 - (eps^2/9 + 18281427/10000000000)^3)^(1/2))^(1/3)
- eps/6 - (3^(1/2)*(((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432 + (((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432)^2 - (eps^2/9 + 18281427/10000000000)^3)^(1/2))^(1/3) - (eps^2/9 + 18281427/10000000000)/((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432 + (((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432)^2 - (eps^2/9 + 18281427/10000000000)^3)^(1/2))^(1/3))*1i)/2 - ((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432 + (((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432)^2 - (eps^2/9 + 18281427/10000000000)^3)^(1/2))^(1/3)/2 - (eps^2/9 + 18281427/10000000000)/(2*((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432 + (((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432)^2 - (eps^2/9 + 18281427/10000000000)^3)^(1/2))^(1/3))
- eps/6 + (3^(1/2)*(((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432 + (((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432)^2 - (eps^2/9 + 18281427/10000000000)^3)^(1/2))^(1/3) - (eps^2/9 + 18281427/10000000000)/((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432 + (((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432)^2 - (eps^2/9 + 18281427/10000000000)^3)^(1/2))^(1/3))*1i)/2 - ((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432 + (((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432)^2 - (eps^2/9 + 18281427/10000000000)^3)^(1/2))^(1/3)/2 - (eps^2/9 + 18281427/10000000000)/(2*((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432 + (((17155719*eps)/40000000000 - (eps*(eps^2/4 + 54844281/10000000000))/12 + (25*eps^3)/432)^2 - (eps^2/9 + 18281427/10000000000)^3)^(1/2))^(1/3))

1 Like

Yup, the symbolic diagonalization is picking up on the ordering of the eigenvalues that is analytic in the parameter, but, as @stevengj says, this is highly non-generic, and almost any small perturbation of the problem will result in a different ordering. If you really want to get the analytic ordering numerically, one thing you can do is match the ordering from point to point based on the similarity of eigenvectors (as measured by the absolute value of the dot product, i.e., the cosine of the angle between them).
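A minimal sketch of that matching (my own illustration, not code from this thread): at each parameter step, take the full eigendecomposition and greedily permute the new eigenpairs so that each eigenvector maximizes |⟨v_prev, v_new⟩| against its predecessor. A proper assignment solver (e.g. the Hungarian algorithm) would be more robust than the greedy loop.

```julia
using LinearAlgebra

# Greedy eigenpair tracking along a parameter sweep (illustrative sketch).
# `Hs` is a vector of real symmetric matrices sampled along the parameter.
function tracked_eigvals(Hs::Vector{<:AbstractMatrix})
    n = size(first(Hs), 1)
    E = Matrix{Float64}(undef, n, length(Hs))
    Vprev = Matrix{Float64}(I, n, n)     # reference eigenvectors
    for (k, H) in enumerate(Hs)
        F = eigen(Symmetric(H))
        perm = zeros(Int, n)
        taken = falses(n)
        for j in 1:n
            # pick the not-yet-taken eigenvector with the largest overlap
            # |<v_prev_j, v_m>| (cosine of the angle between them)
            best = argmax([taken[m] ? -1.0 : abs(dot(Vprev[:, j], F.vectors[:, m]))
                           for m in 1:n])
            perm[j] = best
            taken[best] = true
        end
        E[:, k] = F.values[perm]
        Vprev = F.vectors[:, perm]
    end
    return E
end
```

For example, for the family H(s) = diag(s, −s), plain `eigvals` swaps the two branches at s = 0, while the tracked version follows the analytic branches s and −s through the crossing.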

4 Likes

This is a useful heuristic, but it can be fooled if there is a weakly avoided crossing and you take too large a step in the parameters. That’s why it’s better to understand why there can be a crossing in your problem (usually from symmetry).

(There may also be more reliable purely numerical techniques, e.g. some kind of adaptive algorithm similar to bifurcation analysis in numerical-continuation methods. However, I haven’t seen anything like this in the literature. A symbolic-algebra technique to detect such “level crossings” was described in this paper.)

1 Like

Yes, it will skip over avoided crossings smaller than the parameter increment, which could actually be useful when you expect exact crossings by symmetry (e.g., a band structure on a high-symmetry line) but may not have numerically exact crossings (e.g., because your discretization does not respect the symmetry exactly, or because of numerical noise). Anyway, this kind of connect-the-dots approach is probably best avoided entirely: it is intuitively appealing at first, but it is too brittle, and a non-starter in higher dimensions.

I was wondering if you could recommend any material so I can learn about identifying symmetry groups and the other techniques you mentioned. I am still an undergrad and not too familiar with these techniques, but this sounds very interesting and I’d like to explore it further.

1 Like

I like this book: Group Theory and Its Applications in Physics by Teturo Inui (Springer). Tinkham’s book is also popular.

2 Likes