LinearAlgebra.inv matrix inversion slower than MATLAB?

Why is matrix inversion slower in Julia than in MATLAB?

I am comparing Julia 1.6.3 (2021-09-23), built by Homebrew (v1.6.3_1), with MATLAB 9.11.0.1769968 (R2021b) on a late-2013 MacBook Pro (2.6 GHz dual-core Intel Core i5, 16 GB 1600 MHz DDR3).

The simplistic and unscientific test is

>> A = rand(4000); f = @() inv(A); timeit(f)

ans =

    2.1841

And

julia> using LinearAlgebra, BenchmarkTools

julia> function test(n)
         A = rand(n,n)
         @btime inv($A)
       end
test (generic function with 1 method)

julia> test(4000);
  3.932 s (5 allocations: 124.04 MiB)

The time difference is very significant; did the Homebrew binary not link against the proper fast libraries?

It seems like in the Julia version you are timing both the matrix generation and the inversion, while MATLAB times only the inversion. I’m on my phone so I can’t test whether that accounts for a relevant part of the total time, but it feels like it could.

Also, you can try running it multiple times to see whether compilation was part of that time.
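One quick way to check that (just a sketch): time the same call twice with `@time` — any JIT compilation shows up only in the first run.

```julia
using LinearAlgebra

A = rand(500, 500)
@time inv(A);  # first call: may include JIT compilation
@time inv(A);  # second call: measures only the runtime
```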

EDIT: I misread the code; blame being tired and, as mentioned, being on my phone…

From here: check

using LinearAlgebra
LinearAlgebra.versioninfo()

On my system:

BLAS: libopenblas (OpenBLAS 0.3.10  USE64BITINT DYNAMIC_ARCH NO_AFFINITY Prescott MAX_THREADS=32)
LAPACK: libopenblas64_

I get

julia> LinearAlgebra.versioninfo()
BLAS: libopenblas (OpenBLAS 0.3.17 DYNAMIC_ARCH NO_AFFINITY USE_OPENMP Haswell MAX_THREADS=56)
LAPACK: libopenblas

How can I get Julia to use the MKL library?

The first hit on the search engine without a name is GitHub - JuliaLinearAlgebra/MKL.jl: Intel MKL linear algebra backend for Julia. It looks reasonably trustworthy.


As always, it’s worth noting that you should pretty much never compute an explicit matrix inverse. Instead, use `\` for backsolves, or use factorizations; both approaches will be faster and have lower error.
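For instance (a minimal sketch), solving Ax = b with `\` factorizes and backsolves without ever forming the inverse:

```julia
using LinearAlgebra

A = rand(1000, 1000)
b = rand(1000)

x_inv = inv(A) * b   # forms the full n×n inverse first
x_bs  = A \ b        # LU-factorizes and backsolves: faster, lower error

# the two solutions agree up to roundoff
norm(x_inv - x_bs) / norm(x_bs)
```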


The only exception I vaguely remember is multiple right-hand sides (but lower error still sounds right)…

The MKL.jl experience is particularly nice on Julia >= 1.7, where it is based on libblastrampoline (and MKL_jll.jl). In that case, you can simply compare the performance of inv under OpenBLAS and under MKL:

julia> using LinearAlgebra

julia> using BenchmarkTools

julia> A = rand(4000,4000);

julia> BLAS.get_config()
LinearAlgebra.BLAS.LBTConfig
Libraries:
└ [ILP64] libopenblas64_.0.3.13.dylib

julia> @btime inv($A);
  1.444 s (6 allocations: 124.05 MiB)

julia> using MKL

julia> BLAS.get_config()
LinearAlgebra.BLAS.LBTConfig
Libraries:
└ [ILP64] libmkl_rt.1.dylib

julia> @btime inv($A);
  846.798 ms (6 allocations: 139.68 MiB)

Indeed, with MKL the time becomes

julia> test(4000);
  2.158 s (5 allocations: 139.66 MiB)

Thanks!


Multiple right-hand sides are exactly the case where the best option is to factorize the matrix and perform the solves with the factored version. The time will be comparable to inv, but the result will be more accurate.
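A sketch of that pattern, assuming an LU factorization:

```julia
using LinearAlgebra

A = rand(1000, 1000)
B = rand(1000, 8)    # eight right-hand sides

F = lu(A)            # factorize once: O(n^3)
X = F \ B            # each solve reuses the factors: O(n^2) per right-hand side

# X matches inv(A) * B without ever forming the inverse
```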


Actually, the claim of lower accuracy is not accurate ;-).
When the inversion is done properly, there is no reason for lower accuracy.
See, for example, How Accurate Is inv(A)*b?
Performance-wise, though, it is better to stick with decompositions.

@Oscar_Smith , You may be interested in reading this as well.


inv is a constant factor more expensive than a factorization, though, and hence rather pointless for anything more than a handful of rows and columns.
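A rough way to see that constant factor (assuming LU as the baseline factorization; timings will vary by machine):

```julia
using LinearAlgebra, BenchmarkTools

A = rand(2000, 2000)

@btime lu($A);   # the factorization alone
@btime inv($A);  # the same factorization plus an explicit solve per column of I
```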


I see no real difference here, even with OpenBLAS:

MATLAB R2020a:
>> A = rand(4000); f = @() inv(A); timeit(f)
ans =
       1.2232

julia> test(4000);
  1.249 s (6 allocations: 124.05 MiB)

julia> VERSION
v"1.7.0-rc1"

julia> BLAS.get_config()
LinearAlgebra.BLAS.LBTConfig
Libraries:
└ [ILP64] libopenblas64_.dll

That said, in practice you rarely want to calculate the matrix inverse explicitly, as others noted.


What type of CPU are you using? MKL usually only beats OpenBLAS on Intel CPUs.

Yes, it’s an old-ish Intel CPU (Haswell). I used to notice some edge in favor of MKL in the past, but with recent Julia versions that edge is more like within the margin of error. Maybe I can test again and come back with MKL results in Julia.

julia> versioninfo()
Julia Version 1.7.0-rc1
Commit 9eade6195e (2021-09-12 06:45 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, haswell)
Environment:
  JULIA_DEPOT_PATH = D:\DeepBook\.julia

Out of curiosity: in MATLAB, can you invert a vector? (e.g. rand(4000))

Unfortunately, in MATLAB, calling rand(4000) is a shortcut for creating a 4000×4000 matrix.
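In Julia, by contrast, rand(4000) gives a length-4000 Vector, which is why the benchmark above calls rand(n, n):

```julia
# Julia: rand(4000) is a Vector, not the 4000×4000 matrix MATLAB produces
size(rand(4000))        # (4000,)
size(rand(4000, 4000))  # (4000, 4000)
```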

No.

>> A = rand(1000,1);
>> inv(A)
Error using inv
Matrix must be square.

:scream:


OK, with MKL.jl I see about a 12% improvement over OpenBLAS, still better than MATLAB’s MKL.

julia> BLAS.get_config()
LinearAlgebra.BLAS.LBTConfig
Libraries:
└ [ILP64] libopenblas64_.dll

julia> test(4000);
  1.206 s (6 allocations: 124.05 MiB)

julia> using MKL

julia> BLAS.get_config()
LinearAlgebra.BLAS.LBTConfig
Libraries:
└ [ILP64] mkl_rt.1.dll

julia> test(4000);
  1.075 s (6 allocations: 139.68 MiB)