NumPy in Python has the function np.linalg.eigvalsh, which computes the eigenvalues of a Hermitian matrix. It speeds up the computation of the eigenvalues when the matrix is symmetric. I just did a quick test in Python:
import numpy as np

M = np.random.randn(30,30)
A = M + M.transpose()
%timeit np.linalg.eigvals(A)
%timeit np.linalg.eigvalsh(A)
which gave the timings:
114 µs ± 177 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
46.2 µs ± 20.2 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Now Julia does seem to take advantage of the fact that the matrix is symmetric:
using LinearAlgebra, BenchmarkTools

M = randn(30,30)
A = M + M'
@btime eigvals($M)   # general solver
@btime eigvals($A)   # symmetry detected automatically
but the timings were:
494.869 μs (13 allocations: 43.69 KiB)
106.240 μs (10 allocations: 19.00 KiB)
Any reason why NumPy should be so much faster, or better ways to take advantage of the fact that my matrix is symmetric? Thanks!
Generally you want to inform Julia that a matrix is symmetric by wrapping it in Symmetric, e.g.

A = Symmetric(M + M')

although it appears that, as you say, the symmetry is being recognized and exploited in this case.
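For concreteness, a minimal sketch of benchmarking with the wrapper (assuming BenchmarkTools is installed; the variable names are just illustrative):

using LinearAlgebra, BenchmarkTools

M = randn(30, 30)
S = Symmetric(M + M')   # the wrapper encodes the symmetry in the type
@btime eigvals($S)      # dispatches directly to the symmetric solver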
As to the speed difference, if you are running on an Intel CPU, it could be due to the fact that your version of NumPy is compiled with MKL, while Julia, by default, uses OpenBLAS. Try

using MKL

prior to your other code and see if it makes a difference.
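Here is a sketch of how you might check and swap the BLAS backend (this assumes Julia 1.7 or later, where the BLAS library can be swapped at runtime, and that the MKL.jl package is installed):

using LinearAlgebra
BLAS.get_config()   # shows the currently loaded BLAS library (OpenBLAS by default)

using MKL           # swaps the backend to MKL
BLAS.get_config()   # should now report MKL

then rerun your benchmark.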
The eigensolver checks ishermitian and adds the Hermitian wrapper for you if it is true, since this check is so fast compared to the eigensolver itself.

But if you wrap the matrix in Symmetric (or more generally Hermitian, if it might be complex), then it skips the check and dispatches directly to the fast algorithm. The main advantage of this is that the result is type-stable, since in this case the compiler knows that the eigenvalues must be real.
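A small sketch of the type-stability point (the names here are just for illustration):

using LinearAlgebra

B = randn(5, 5)
B = B + B'                   # numerically symmetric, but its type is a plain Matrix
v1 = eigvals(B)              # runtime ishermitian check; the return type is not inferable
v2 = eigvals(Symmetric(B))   # inferred as Vector{Float64}: the eigenvalues must be real

You can see the difference with @code_warntype eigvals(B) versus @code_warntype eigvals(Symmetric(B)).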
Realize that Julia, NumPy, Matlab, R, etc. are all calling the same LAPACK routines under the hood. So any timing differences that you see (for a sufficiently large matrix) are generally either:
- Julia is using a different number of threads. See LinearAlgebra.BLAS.get_num_threads() and LinearAlgebra.BLAS.set_num_threads(n), as shown in the snippet after this list,
- or you are linking a different BLAS library. (Julia defaults to the free/open-source OpenBLAS library, but you can also link to MKL via MKL.jl, which is sometimes faster. It can be tricky to find out what BLAS library numpy is using.)
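For example, a minimal snippet to inspect both settings (the thread count of 4 is just a placeholder, not a recommendation):

using LinearAlgebra

BLAS.get_num_threads()    # how many threads the BLAS library is using
BLAS.set_num_threads(4)   # placeholder: e.g. match NumPy's thread count
BLAS.get_config()         # which BLAS library (OpenBLAS, MKL, ...) is linked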