You are timing the inversion of a 5000x5000 matrix — this O(n³) operation completely dominates every other operation — so the benchmark has nothing to do with language. It depends entirely on which linear-algebra library is linked and how many threads it is using.
That’s not correct. For double-precision (Float64) calculations, Julia, R, NumPy, MATLAB, etc. all perform matrix inversion with LAPACK, but LAPACK (and the underlying BLAS) comes in several optimized implementations, and any of these languages can link any of them: sometimes OpenBLAS, sometimes MKL, sometimes another library, each with its own tradeoffs. In your benchmark, you are mostly timing which BLAS library was linked. Moreover, different libraries use different numbers of threads by default.
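To make timings comparable across languages, you can pin the BLAS thread count explicitly. A minimal sketch in Julia (the 1000×1000 size is arbitrary; `lu` stands in for any dense LAPACK operation):

```julia
using LinearAlgebra

A = rand(1000, 1000)

# Force single-threaded BLAS so the timing reflects one core.
BLAS.set_num_threads(1)
@time lu(A)

# Same operation with all available cores.
BLAS.set_num_threads(Sys.CPU_THREADS)
@time lu(A)
```

Unless every language in the comparison is pinned to the same thread count, you are benchmarking thread defaults, not languages.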
In Julia, you can find out which BLAS library you are using via:

```julia
using LinearAlgebra
BLAS.vendor()       # e.g. :openblas64 (deprecated since Julia 1.7)
BLAS.get_config()   # on Julia ≥ 1.7: lists the loaded BLAS/LAPACK libraries
```
I’m not sure of the easiest way to determine this in R, though in recent R versions sessionInfo() prints the paths of the BLAS and LAPACK libraries in use.
Basically, if your computations are dominated by dense-matrix operations like solving Ax=b or Ax=λx, it doesn’t much matter what language you use, because every language calls LAPACK and can be configured to link the same BLAS libraries. But if you do enough computational work, you will eventually run into a problem where you have to write your own “inner loops” (performance-critical code), and that is where language choice matters. To do a useful cross-language benchmark, you need to time inner-loop code, not calls to external libraries.
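As a sketch of what “inner loop” means here (the function name is my own): a hand-written summation exercises the language’s compiler rather than whichever BLAS happens to be linked, so timing it actually says something about the language.

```julia
# A hand-written inner loop: summing a vector without calling a library.
# Timing this compares compilers, not linked BLAS libraries.
function mysum(x)
    s = zero(eltype(x))
    @inbounds @simd for xi in x
        s += xi
    end
    return s
end

x = rand(10^7)
@time mysum(x)   # first call includes compilation
@time mysum(x)   # second call is the number to report
```

The same loop ported to R, Python, etc. would give a meaningful cross-language comparison, unlike inv(A), which times LAPACK in every language.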