Is it possible to diagonalize matrices with M-series Apple Silicon GPUs?

Hello.

I have to diagonalize Hermitian matrices of large dimension (e.g. 10k x 10k) many times. To reduce computing time, I am considering using GPUs. Since I have a Mac mini with an M1 chip, I would like to try solving eigenproblems on it.

Does Metal.jl support diagonalizing matrices on M-series GPUs?
If not, are there any options to compute eigenvalues on GPUs in Apple Silicons?

Thank you.


After some experiments with MtlArrays, I found that they don't support diagonalization in the simple form that works for CuArrays. Thank you.

Have you tried AppleAccelerate.jl?
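
It won't touch the GPU, but as a minimal sketch of the CPU side (assuming a recent AppleAccelerate.jl that swaps the BLAS/LAPACK backend to Apple's Accelerate via libblastrampoline; the matrix here is just a random test case):

```julia
using LinearAlgebra
using AppleAccelerate   # swaps BLAS/LAPACK to Accelerate (CPU/AMX, not the GPU)

n = 1_000
A = randn(ComplexF64, n, n)
H = Hermitian(A + A')           # Hermitian test matrix

F = eigen(H)                    # dense eigendecomposition via Accelerate's LAPACK
@show F.values[1:3]
```

On M-series chips this can already be noticeably faster than the default OpenBLAS for dense symmetric/Hermitian solves, since Accelerate uses the AMX coprocessor.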

Hello, thank you for suggesting the package. I would like to take advantage of the GPU on Apple silicon, but AppleAccelerate.jl does not seem to provide functions that access the GPU. Is there actually a way to use the GPU with this package?

We were hoping for this functionality as well. It seems that Apple themselves have not implemented Hermitian eigendecomposition on M-series GPUs (see Matrices and Vectors | Apple Developer Documentation; just Cholesky and LU), so until they do, it is doubtful it will be available in Julia. From what I have seen, the Julia GPU libraries currently just wrap the matrix decompositions provided by the backend vendors.

EDIT: I should add that I'm referring to dense, full Hermitian eigendecompositions, which I assumed the original post was about. For targeting a subset of eigenvalues/eigenvectors with Krylov methods on GPU, there are a number of options in Julia, including KrylovKit.jl, and I assume packages like IterativeSolvers.jl and Arpack.jl should work on GPU as well.
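
For example, a minimal sketch of extracting a few extremal eigenpairs with KrylovKit (here on a CPU test matrix; the same call should work when `H` is a GPU array type or a matrix-free function `x -> H*x`, assuming the backend supports the required operations):

```julia
using LinearAlgebra, KrylovKit

n = 2_000
A = randn(ComplexF64, n, n)
H = Hermitian(A + A') / 2       # random Hermitian test matrix

# 5 smallest-real-part eigenvalues; ishermitian enables the Lanczos path
vals, vecs, info = eigsolve(H, 5, :SR; ishermitian = true)
@show vals
@show info.converged
```

This targets a handful of eigenvalues; it is not a substitute for a full dense eigendecomposition.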


I guess that’s a project if anyone wants to write a kernel.

Note that it would be good to have a pure-Julia Hermitian eigensolver, period, whether CPU or GPU. Currently the situation is sad: Arpack is essentially abandoned, and the alternatives don't always work.

That seems a bit hard on the alternatives. I have been using KrylovKit (on GPU, for matrix-free preconditioned shift-invert non-Hermitian eigenvalue problems) and it worked very well.

Same for ArnoldiMethod compared to Arpack. Note that I pinned the Arpack version in BifurcationKit.

Computing eigenvalues with iterative methods is hard, and success is essentially tied to the possibility of making the spectrum "compact". For example, the shift-invert (SI) method essentially does this when applied to operators with unbounded spectra (like some reaction-diffusion operators). Sometimes I compute the spectrum of the exponential of the matrix, but this is essentially the same trick as SI.
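
A minimal sketch of the shift-invert trick (assuming KrylovKit; the shift `σ` and matrix are illustrative): factor `A - σI` once, iterate on its inverse so that eigenvalues near `σ` become the dominant ones, then map back.

```julia
using LinearAlgebra, KrylovKit

n = 500
B = randn(n, n)
A = Symmetric(B + B')           # test matrix

σ = 0.0                         # shift near the eigenvalues we want
F = lu(A - σ * I)               # factor once, reused at every iteration

# eigenvalues μ of (A - σI)⁻¹ are largest in magnitude for λ closest to σ
μs, vs, info = eigsolve(x -> F \ x, randn(n), 5, :LM)

λs = σ .+ inv.(μs)              # map back to eigenvalues of A near σ
@show λs
```

The same pattern works matrix-free as long as you can solve linear systems with `A - σI` (e.g. with a preconditioned iterative solver instead of `lu`).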

In short, I would not expect a generic method to work well for every problem; you have to tailor the method to it (preconditioner, etc.).

Also, I should mention NonlinearEigenproblems.jl.

I appreciate that these methods may work for some problems (albeit with some customized treatment to achieve convergence).

The generalized Hermitian eigenvalue problem is very important in engineering. I could not get either ArnoldiMethod or KrylovKit to solve this: Julia is slower than matlab when it comes to matrix diagonalization? - #15 by PetrKryslUCSD
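
For reference, when the mass matrix is positive definite, the generalized problem K x = λ M x can be reduced to a standard Hermitian one via a Cholesky factorization; a dense sketch (test matrices here are made up, and `eigen(K, M)` does this directly via LAPACK anyway):

```julia
using LinearAlgebra

n = 300
R = randn(n, n)
K = Symmetric(R + R')            # "stiffness": Hermitian
M = Symmetric(R' * R + n * I)    # "mass": Hermitian positive definite

C = cholesky(M)                  # M = L * L'
# Standard form: (L⁻¹ K L⁻ᴴ) y = λ y, with x = L⁻ᴴ y
S = Symmetric(C.L \ K / C.U)     # C.U == C.L'
λ, Y = eigen(S)
X = C.U \ Y                      # generalized eigenvectors

@show norm(K * X[:, 1] - λ[1] * M * X[:, 1])   # should be small
```

The pain point in the linked thread is doing this iteratively for a few smallest eigenvalues of large sparse K, M, where the standard trick is shift-invert on the pencil rather than a dense reduction like this.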

Did you manage to do it in Matlab?

Did you try NonlinearEigenproblems.jl?

Indeed, with Matlab I got results identical to those obtained with Arpack.jl (or SubSIt.jl).

I haven’t tried NonlinearEigenproblems.jl yet. I will have a look. Thanks!