My code for testing in Python was the following (I know that it is optimized):
from numba import njit, prange
import numpy as np
import time

@njit
def test():
    # Eigendecomposition of 100000 random complex 3x3 matrices.
    # Note: prange only runs in parallel with @njit(parallel=True);
    # with a plain @njit it behaves like an ordinary range.
    for i in prange(0, 100000):
        A = np.random.rand(3, 3) + 1j * np.random.rand(3, 3)
        np.linalg.eig(A)
    return
As a result I got:
773 ms ± 8.37 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
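That output format is what IPython's %timeit magic prints; a minimal sketch of how I measured it (assuming the function is called once beforehand so the numba compilation time is not included):

test()           # warm-up call, triggers JIT compilation
%timeit test()   # reports mean ± std. dev. over several runs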
np.__config__ tells me that MKL is used. But this should also be true for Julia, if I interpreted the output of versioninfo() (BLAS: mkl_rt) correctly.
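For reference, this is roughly how the NumPy side can be checked (np.__config__.show() prints the build configuration; the threadpoolctl part is optional and assumes that package is installed):

import numpy as np
np.__config__.show()   # prints BLAS/LAPACK build info, e.g. MKL entries

# optional, reports the BLAS libraries actually loaded at runtime:
# from threadpoolctl import threadpool_info
# print(threadpool_info())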