Unusually bad performance of eigen() compared to eig() in Matlab

Hello everyone,

I’m fairly new to Julia. I am working on implementing SBFEM in Julia, which requires solving many eigenvalue problems. I noticed that my Matlab code runs much (much) faster than my Julia code, even after applying the tips from the performance section of the manual.
Profiling shows that most of the time is spent in the eigenvalue problem (and also in the LU decomposition for solving systems of equations). This is expected; however, Julia runs much slower than Matlab.
To better understand this discrepancy between Julia and Matlab I timed the eigenproblem solver for several different matrix sizes.

Julia Code

using TimerOutputs
using LinearAlgebra

function timer_eigen(n::T, matrix_size::UnitRange{T}) where {T<:Integer}
    # BLAS.set_num_threads(1)
    timer = TimerOutput()
    for m in matrix_size
        Z = rand(Complex{Float64}, m, m)
        for i in 1:n
            𝚲, 𝚿 = @timeit timer "Size $m" eigen(-Z)
        end
    end
    return timer
end

timer_eigen(1000,2:102)

Matlab Code

function timer = timer_eigen(n, matrix_size)

    m = numel(matrix_size);
    timer = zeros(m, 3);
    for j = 1:m
        
        s = matrix_size(j);
        Z = rand(s) + 1i * rand(s);
        timer(j,1) = s;

        for i = 1:n 

            tic;
            [V, L] = eig(-Z);
            timer(j,2) = timer(j,2) + toc;
        end

        timer(j,3) = timer(j,2) / n;
    end
end

What you can see is that (at least for me) Julia takes about twice as long as Matlab to solve eigenproblems.

I have read that OpenBLAS is usually slower than MKL; still, this much of a slowdown is rather disappointing and, I believe, detrimental to Julia.

Can anyone reproduce this discrepancy? Or is it more likely a problem with my machine rather than with OpenBLAS?

EDIT: For completeness’ sake…

Julia Version 1.1.1
Commit 55e36cc308 (2019-05-16 04:10 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, skylake)

Matlab 2018b (with 6 threads).


From your plot it seems that for larger matrix sizes Julia with BLAS.threads = 1 is faster than the default, multi-threaded version. Did you swap the labels perhaps?


This is interesting to point out. I didn’t swap the labels. BLAS using only one thread really is faster than multi-threading on my system. I have run this test multiple times, each giving similar results.


I can confirm that on my workstation (a desktop running Ubuntu on a newish Intel processor), setting BLAS to use a single thread improves performance. I have no idea why, though…
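For anyone who wants to check this on their own machine, here is a minimal sketch; the matrix size of 100 is illustrative and roughly matches the upper end of the benchmark above:

```julia
using LinearAlgebra

# Compare eigen() timing with multi-threaded vs. single-threaded BLAS.
Z = rand(ComplexF64, 100, 100)
eigen(Z)  # warm up so compilation time is not measured

BLAS.set_num_threads(Sys.CPU_THREADS)  # let BLAS use all hardware threads
t_multi = @elapsed eigen(Z)

BLAS.set_num_threads(1)                # force single-threaded BLAS
t_single = @elapsed eigen(Z)

println("multi-threaded: $(t_multi) s, single-threaded: $(t_single) s")
```

A single pair of timings is noisy, of course — averaging over many runs, as in the benchmark above, gives a more trustworthy comparison.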



With

Julia Version 1.3.0-DEV.466
Commit 8d4f6d24c0 (2019-06-28 18:40 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-8705G CPU @ 3.10GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, skylake)
Environment:
  JULIA_EDITOR = "C:\Users\PetrKrysl\AppData\Local\atom\app-1.38.2\atom.exe"  -a
  JULIA_NUM_THREADS = 4

and Matlab 2018b (with 4 threads).

EDIT: Same story with 1.1

Two times slower is within the same order of magnitude; it may be that Matlab uses the Intel MKL library, whose more aggressive optimizations (at least on your CPU) account for the additional performance. Matlab seems really hard to beat on this front.

Perhaps trying MKL.jl will fill some of that gap.


It isn’t really Matlab, but MKL itself that is hard to beat.

What IS surprising is that performance with a single thread is better than threaded for larger matrices.
One would expect that the overhead of threading would get amortized for large matrices.

Can you try even larger matrix sizes?
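Something along these lines would show whether the gap closes; the sizes here are illustrative, not the ones from the original benchmark:

```julia
using LinearAlgebra

# Time eigen() for a few larger sizes to see whether the threading
# overhead is eventually amortized.
results = Dict{Int,Float64}()
for m in (100, 200, 400)
    Z = rand(ComplexF64, m, m)
    eigen(Z)                     # warm-up run, excludes compilation time
    results[m] = @elapsed eigen(Z)
    println("size $m: $(round(results[m] * 1000, digits = 1)) ms")
end
```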

According to your graph, it is not so much that Julia with OpenBLAS is underperforming on my setup; rather, Matlab with MKL is especially overperforming on yours. (Your maximum runtime at size 102 is about 18 ms, while mine in Matlab is about 9 ms.)

I agree. I never expected Matlab to be faster, as the underlying library is also based on LAPACK. Indeed, Matlab uses MKL.
Even though two times slower is within the same order of magnitude, it is very much perceptible, as I will be running an optimization on top of the SBFEM model.

Thank you for pointing out MKL.jl. I will try it out.

Please post your findings from using MKL.jl, I think this is an important issue for many Julia users.

1-thread Julia still faster. Matlab/MKL rather fast, comparatively speaking.

I have also just installed MKL.jl and got a massive performance boost.
Julia now runs comparable to Matlab.

I asked a colleague to test his Matlab, and there Matlab runs comparably to @PetrKryslUCSD’s.

What kind of optimizations are going on that make MKL run that much faster on my machine?
At least Julia is not at fault per se. Maybe Julia should be given the option to use different BLAS/LAPACK implementations by default? MKL.jl works wonders (at least in my case); however, it currently does not allow switching back to OpenBLAS.
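One quick sanity check is to ask Julia which BLAS it is actually linked against; a sketch (the API differs between Julia versions, so this hedges between the two):

```julia
using LinearAlgebra

# Report the BLAS backend: BLAS.vendor() on Julia 1.x of this era
# (:openblas64 by default, :mkl after installing MKL.jl),
# BLAS.get_config() on newer releases.
backend = isdefined(BLAS, :get_config) ? string(BLAS.get_config()) :
                                         string(BLAS.vendor())
println("BLAS backend: ", backend)
```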

EDIT: The unit of the y-axis label was wrong (s → ms).


MKL is a specialized (formerly commercial) LAPACK implementation for Intel architectures by Intel itself. It’s hard for an open source project like OpenBLAS to compete with it.

Unfortunately, Julia can’t ship with MKL as explained here. However, apart from MKL.jl, which you are already aware of, you can also build Julia yourself and link it against MKL, which is pretty straightforward, especially on Unix systems. You can find instructions here.
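If I remember correctly, the source build boils down to a one-line `Make.user` (the flag name is from the build docs of that era — double-check the linked instructions for your version, and make sure MKL’s environment is set up, e.g. via `mklvars.sh`):

```make
# Make.user at the top of the Julia source tree
USE_INTEL_MKL = 1
```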

Indeed, as @crstnbr said, MKL is built by Intel specifically to provide the best available performance. It is important to note that Intel CPUs are proprietary devices, and intimate knowledge of their internals (and a big budget :) ) allows Intel to aggressively optimize its libraries for a competitive edge against AMD and other CPU makers.


@crstnbr thank you for the instructions.

Yes, Intel is able to optimize MKL better. I was only wondering why this has such a profound effect on my computer, as I assumed my hardware wasn’t that much better than everyone else’s.

Thank you for your help.