I am comparing Julia 1.6.3 (2021-09-23), built by Homebrew (v1.6.3_1), and MATLAB 9.11.0.1769968 (R2021b) on a late 2013 MacBook Pro with a 2.6 GHz dual-core Intel Core i5 and 16 GB of 1600 MHz DDR3 RAM.
The simplistic and unscientific test is:
>> A = rand(4000); f = @() inv(A); timeit(f)
ans =
2.1841
And in Julia:
julia> using LinearAlgebra, BenchmarkTools
julia> function test(n)
           A = rand(n,n)
           @btime inv($A)
       end
test (generic function with 1 method)
julia> test(4000);
3.932 s (5 allocations: 124.04 MiB)
The time difference is very significant. Did the Homebrew binary not link against the proper fast BLAS/LAPACK libraries?
It seems like in the Julia version you are timing both the generation and the inversion, while MATLAB only times the inversion. I'm on my phone so I can't test whether that is a relevant part of the total time, but it feels like it could be.
Also, you can try running it multiple times to see whether compilation was part of that time (a quick sketch of that check follows below).
EDIT: I misread the code; blame being tired and, as mentioned, being on my phone…
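For reference, a minimal sketch of that compilation check, assuming a matrix like the one in the original post (no timings reproduced here):

julia> using LinearAlgebra

julia> A = rand(4000, 4000);

julia> @time inv(A);   # first call includes JIT compilation

julia> @time inv(A);   # second call measures only the actual work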
As always, it's worth noting that you should pretty much never use explicit matrix inversion. Instead, use \ for solving linear systems, or use factorizations; both approaches will be faster and have lower error.
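For example, here is a minimal sketch comparing the two approaches; the right-hand side b is just an illustrative vector, not part of the original benchmark:

using LinearAlgebra, BenchmarkTools

A = rand(4000, 4000)
b = rand(4000)

# Solve A*x = b directly: LAPACK factorizes A and backsolves,
# without ever forming the explicit inverse.
@btime $A \ $b

# Solving via the explicit inverse does strictly more work.
@btime inv($A) * $b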
The MKL.jl experience is particularly nice on Julia >= 1.7, where it is based on libblastrampoline (and MKL_jll.jl). In this case, you can simply compare the performance of inv under OpenBLAS and under MKL:
julia> using LinearAlgebra
julia> using BenchmarkTools
julia> A = rand(4000,4000);
julia> BLAS.get_config()
LinearAlgebra.BLAS.LBTConfig
Libraries:
└ [ILP64] libopenblas64_.0.3.13.dylib
julia> @btime inv($A);
1.444 s (6 allocations: 124.05 MiB)
julia> using MKL
julia> BLAS.get_config()
LinearAlgebra.BLAS.LBTConfig
Libraries:
└ [ILP64] libmkl_rt.1.dylib
julia> @btime inv($A);
846.798 ms (6 allocations: 139.68 MiB)
Multiple right-hand sides are the case where the best option is to factorize the matrix once and perform the solves with the factored version, as sketched below. The time will be comparable to inv, but the result will be more accurate.
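A minimal sketch of that pattern (the matrix A and the right-hand sides B here are placeholders):

using LinearAlgebra

A = rand(4000, 4000)
B = rand(4000, 10)   # 10 right-hand sides

F = lu(A)            # factorize once (LU with partial pivoting)
X = F \ B            # reuse the factorization for every column of B

# In exact arithmetic this matches inv(A) * B, but each solve is only
# a pair of triangular substitutions against the stored factors.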
Actually, the claim of lower accuracy is not accurate ;-).
When the inversion is done properly, there is no reason for lower accuracy (a quick residual check is sketched below).
See, for example, How Accurate Is inv(A)*b?
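If you want to check this on your own machine, here is a minimal sketch comparing the relative residuals of the two approaches (the sizes are arbitrary):

using LinearAlgebra

A = rand(1000, 1000)
b = rand(1000)

x_solve = A \ b
x_inv   = inv(A) * b

# Relative residuals norm(A*x - b) / norm(b) for both solutions
norm(A * x_solve - b) / norm(b)
norm(A * x_inv - b) / norm(b)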
Performance-wise, it is better to stick with decompositions.
@Oscar_Smith, you may be interested in reading this as well.
Yes, it's an old-ish Intel CPU (Haswell). I used to notice some edge in favor of MKL in the past, but with recent Julia versions that edge is within the margin of error. Maybe I can test again and come back with MKL results in Julia.
julia> versioninfo()
Julia Version 1.7.0-rc1
Commit 9eade6195e (2021-09-12 06:45 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, haswell)
Environment:
  JULIA_DEPOT_PATH = D:\DeepBook\.julia