Eigenvalue calculation differs between Julia and MATLAB

I am determining the eigenvectors of some matrices, but the results come out differently in Julia and MATLAB. I’m porting some code to Julia and want the exact output I would see in MATLAB/Octave. Any suggestions on what I’m doing wrong or what I’m misunderstanding?

Julia:

[screenshot of Julia’s eigen output]

MATLAB:

[screenshot of MATLAB’s eig output]

Those values and vectors are the same, just in a different order.


@sukera is correct. By default Matlab sorts the eigenvalues (and the corresponding eigenvectors) so that their modulus is in decreasing order.

You can achieve a similar result in Julia. For instance, you could use the sortby argument of the eigen function (Linear Algebra · The Julia Language). Alternatively, you could do it ex-post.
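
For the ex-post route, a minimal sketch (the matrix is just an arbitrary example) could look like this; it only relies on sortperm, so it should also work on Julia versions that predate the sortby keyword:

using LinearAlgebra

A = [1 2 3; 4 5 6; 3 2 2]
F = eigen(A)
p = sortperm(F.values, by = abs, rev = true)  # permutation giving decreasing modulus
vals = F.values[p]                            # reordered eigenvalues
vecs = F.vectors[:, p]                        # matching eigenvector columns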


EDIT

I have just noticed you are using the eig function in Matlab, rather than eigs. The function you are using does not sort the eigenvalues by default. eigs does.

The Matlab documentation on eig says:

By default eig does not always return the eigenvalues and eigenvectors in sorted order. Use the sort function to put the eigenvalues in ascending order and reorder the corresponding eigenvectors.

If you want to keep using eig and enforce an order for the eigenvalues, you can take a look at the examples at Eigenvalues and eigenvectors - MATLAB eig.


Oh, I see. Thank you. But I can’t find in the docs exactly what I can pass to the sortby keyword, though :confounded:

Try this,

julia> using LinearAlgebra

julia> a = [1 2 3; 4 5 6; 3 2 2]
3×3 Array{Int64,2}:
 1  2  3
 4  5  6
 3  2  2

julia> eigen(a, sortby = x -> -abs(x))
Eigen{Float64,Float64,Array{Float64,2},Array{Float64,1}}
eigenvalues:
3-element Array{Float64,1}:
  9.260803553539533  
 -1.4797264327289552 
  0.21892287918940798
eigenvectors:
3×3 Array{Float64,2}:
 -0.345532  -0.6557     0.266735
 -0.858376  -0.253703  -0.832515
 -0.379207   0.711121   0.485563

Oh cool, so sortby takes the anonymous function and sorts the eigenvalues (and their eigenvectors) by the negative of the absolute values? (sorry, my math is rusty/non-existent)

That function computes -1 times the modulus of each eigenvalue. @GunnarFarneback is using it because sortby sorts in increasing order by default (and you want a decreasing order).
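
To see the trick in isolation, here is plain sort on a made-up vector; negating the sort key reverses the order:

sort([9.26, -1.48, 0.22], by = abs)           # gives [0.22, -1.48, 9.26]: increasing modulus
sort([9.26, -1.48, 0.22], by = x -> -abs(x))  # gives [9.26, -1.48, 0.22]: decreasing modulus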

Please note that sortby is not available in Julia LTS; I am not sure when it was introduced. In Julia 1.0.5 (I just tried) you get the Matlab order by default (in this example).


I’m on the JuliaPro version of Juno. Question: does the order change in the case of a bigger matrix? The example I used in the OP was a small test case; now that I’m implementing it in my analysis with a 240×240 matrix, I am not getting the same output.

The type of the matrix is Array{Float64,2}, but the same operation (with the sorting mentioned above) on this matrix yields different results than in MATLAB. I am guessing it’s still a sorting issue.

Do you really need the same exact order? What are you trying to achieve with the result that requires this specific ordering?


Yes, I do need the same order. I am porting some MATLAB code, and a lot of downstream computations require this order. I am not well versed enough to change all those computations, so I’m hoping for a quick fix, i.e. getting the same order.

Can you please post a working code to generate the matrix you are actually using?

Also, take a look at my previous comment (I just edited it). You should use eigs in Matlab to easily set the order for the eigenvalues (not eig).

The code that generates the matrix is quite cumbersome; I have spent the past 45 minutes trying to build a similar minimal example but could not manage. My apologies if this doesn’t cut it; I can post the actual code too.

However, when I was going line by line I noticed an error in one of my variables which was throwing the values off completely! Now when I compare the outputs (MATLAB vs Julia) via isapprox I get this:

[screenshot of the isapprox comparison]

Please post the code. Otherwise, it is impossible to say why it is not working.


It was added in Julia 1.2 (https://github.com/JuliaLang/julia/pull/21598). Prior to that, the eigenvalue ordering for non-Hermitian matrices was essentially random (technically deterministic, but arbitrary, unpredictable, and dependent on internal details of LAPACK).

My understanding is that Matlab also doesn’t sort non-Hermitian eigenvalues, so if your Matlab code relies upon the ordering then there may be something wrong. (Or it could be that the code is completely correct, but that your testing procedure is wrong. e.g. changing the eigenvalue order changes the roundoff errors slightly, or introduces an inconsequential permutation in the results.)
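
One ordering-independent sanity check, in case it is useful: rather than comparing raw eigenvector entries across systems, verify the defining equation A V = V Λ in each system. A minimal sketch in Julia:

using LinearAlgebra

A = randn(5, 5)          # any test matrix
λ, V = eigen(A)          # eigenvalues and eigenvectors, in whatever order they come
A * V ≈ V * Diagonal(λ)  # true no matter how the pairs are ordered or signed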


Hmm, thanks for adding that to the discussion. I removed the sorting part, and now the absolute values of the eigenvalues are the same. Does that mean anything?

Another simpler case showing eigenvector outputs:

Julia

julia> a = [1 2 3 4; 2 2 0 0;0 0  1 0;0 0 0 3]
4×4 Array{Int64,2}:
 1  2  3  4
 2  2  0  0
 0  0  1  0
 0  0  0  3

julia> a = (a+a')/2
4×4 Array{Float64,2}:
 1.0  2.0  1.5  2.0
 2.0  2.0  0.0  0.0
 1.5  0.0  1.0  0.0
 2.0  0.0  0.0  3.0

julia> eigvecs(a)
4×4 Array{Float64,2}:
 -0.752694  -0.159387  -0.171499  -0.615334
  0.402842   0.440777  -0.685994  -0.415748
  0.412522  -0.863765  -0.171499  -0.233073
  0.317799   0.184989   0.685994  -0.627849

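As an aside (worth double-checking on your versions): wrapping the matrix in Symmetric makes Julia dispatch to the symmetric eigensolver, which returns real eigenvalues in ascending order, a stable and predictable ordering:

using LinearAlgebra

a = [1.0 2.0 1.5 2.0;
     2.0 2.0 0.0 0.0;
     1.5 0.0 1.0 0.0;
     2.0 0.0 0.0 3.0]
F = eigen(Symmetric(a))  # symmetric solver: real eigenvalues, ascending order
F.values                 # sorted smallest to largest
F.vectors                # columns in the matching order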

MATLAB

>> a = [1 2 3 4; 2 2 0 0;0 0  1 0;0 0 0 3]

a =

     1     2     3     4
     2     2     0     0
     0     0     1     0
     0     0     0     3
 
>> a = (a+a')/2

a =

    1.0000    2.0000    1.5000    2.0000
    2.0000    2.0000         0         0
    1.5000         0    1.0000         0
    2.0000         0         0    3.0000

>> [z b] = eig(a)

z =

    0.7527   -0.1594   -0.1715    0.6153
   -0.4028    0.4408   -0.6860    0.4157
   -0.4125   -0.8638   -0.1715    0.2331
   -0.3178    0.1850    0.6860    0.6278

I’m afraid there is no way to get around the fact that the eigenvalue decomposition is not unique, and you need a reasonable understanding of the theory to analyze algorithms that make use of the eigenvectors.

This specific case illustrates the fact that if x is an eigenvector, then so is -x and there is no easy way to decide which sign to choose. If the downstream algorithm is sensitive to which sign happens to come out of the eigenvalue decomposition, something is not quite sound and you are likely in for a lot of trouble.
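
If you nonetheless need the outputs to line up across systems, one ad-hoc convention (just a sketch, not a standard) is to normalize each eigenvector’s sign, for example by forcing the largest-magnitude component of every column to be positive:

using LinearAlgebra

# Ad-hoc sign convention for real eigenvector matrices: flip each column
# so that its largest-magnitude entry is positive.
function fix_signs!(V::AbstractMatrix{<:Real})
    for j in axes(V, 2)
        i = argmax(abs.(view(V, :, j)))  # row of the dominant component in column j
        V[i, j] < 0 && (V[:, j] .*= -1)  # flip the whole column if that entry is negative
    end
    return V
end

fix_signs!(eigvecs(Symmetric([1.0 2.0 1.5 2.0; 2.0 2.0 0.0 0.0; 1.5 0.0 1.0 0.0; 2.0 0.0 0.0 3.0])))

Applying the same convention to the MATLAB output would make the sign differences disappear; a permutation can still remain if the eigenvalue orders differ.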


Oh I see, that’s awesome. This is a good bug to have weeded out! Thank you everyone! Closing the topic.

Just for anyone snooping around here: the eigenvector output also differed when tested in Python with NumPy’s linear algebra routines (numpy.linalg.eig).


I ran the same data in Python and MATLAB, and you can see that some columns have opposite signs. Why? Which one is correct?
import numpy as np

a = np.array([[1, 2, 1.5, 2], [2, 2, 0, 0], [1.5, 0, 1, 0], [2, 0, 0, 3]])
e_vals, e_vecs = np.linalg.eig(a)
print(e_vecs)
[[-0.75269385  0.61533397 -0.17149859  0.1593873 ]
 [ 0.40284166  0.41574761 -0.68599434 -0.44077691]
 [ 0.41252215  0.23307326 -0.17149859  0.86376534]
 [ 0.31779874  0.62784942  0.68599434 -0.18498875]]

In MATLAB:

a =

    1.0000    2.0000    1.5000    2.0000
    2.0000    2.0000         0         0
    1.5000         0    1.0000         0
    2.0000         0         0    3.0000

>> [z b] = eig(a)

z =

    0.7527   -0.1594   -0.1715    0.6153
   -0.4028    0.4408   -0.6860    0.4157
   -0.4125   -0.8638   -0.1715    0.2331
   -0.3178    0.1850    0.6860    0.6278


Just read the whole thread above; I think it’s explained quite well.

I’d also like to throw in a video (and the whole series) from Grant Sanderson, who does a great job of visualising such things: Eigenvectors and eigenvalues | Chapter 14, Essence of linear algebra - YouTube
