Suggestions needed: diagonalizing large Hermitian sparse matrix

I need to find a few eigenvalues and eigenvectors of some large Hermitian sparse matrices. Right now, I am trying Arpack.jl and KrylovKit.jl. Here are some considerations and my experiences.

  1. Arpack.jl does not have special routines for Hermitian matrices, and is less stable than KrylovKit.jl.
  2. For moderately sized matrices, multithreading works pretty well. For example, the CPU usage (from the `top` command) of my Julia process is roughly 2000% when Arpack.jl diagonalizes 30624x30624 matrices (sparsity 0.005). However, when the matrices get larger (3 million by 3 million, sparsity 0.00012), the CPU usage of both Arpack.jl and KrylovKit.jl drops to roughly 200%.
  3. Right now, KrylovKit.jl takes 2200 seconds to diagonalize a 2883289x2883289 matrix with sparsity 0.00012 and Arpack.jl needs more than 1 hour.

Eventually, I would like to diagonalize a 30000000x30000000 matrix with sparsity of roughly 0.00003. What would be your suggestions to attack such a problem?
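For context, here is a minimal sketch of how I am calling the two packages. The matrix `H` below is a small random Hermitian matrix just for illustration; `nev = 4` and `:SR` (smallest real part) are placeholder choices, not my actual parameters:

```julia
using SparseArrays, LinearAlgebra
using Arpack, KrylovKit

# Small random Hermitian sparse matrix, just to make the example runnable
n = 1000
A = sprandn(ComplexF64, n, n, 0.005)
H = (A + A') / 2            # Hermitian by construction

# Arpack.jl: ask for a few eigenvalues with smallest real part
vals_arpack, vecs_arpack = eigs(H; nev = 4, which = :SR)

# KrylovKit.jl: pass a linear map (here a closure) plus a starting vector;
# ishermitian lets it use the Lanczos algorithm instead of full Arnoldi
vals_kk, vecs_kk, info = eigsolve(x -> H * x, randn(ComplexF64, n), 4, :SR;
                                  ishermitian = true)
```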

Sorry I don’t have an answer for you, but I was wondering if you could also compare ArnoldiMethod.jl with the other two? It’s a pure-Julia equivalent of ARPACK.
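In case it helps with the comparison, ArnoldiMethod.jl uses a two-step API: `partialschur` computes a partial Schur decomposition, and `partialeigen` converts it to eigenpairs. A minimal sketch (the matrix and tolerances are illustrative, not tuned):

```julia
using SparseArrays, LinearAlgebra
using ArnoldiMethod

# Small random symmetric matrix for illustration
n = 1000
A = sprandn(n, n, 0.005)
S = (A + A') / 2

# Step 1: partial Schur decomposition targeting smallest real part
decomp, history = partialschur(S; nev = 4, tol = 1e-8, which = SR())

# Step 2: recover eigenvalues and eigenvectors from the Schur factors
vals, vecs = partialeigen(decomp)
```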

Just to be sure: you only compute a few eigenpairs?

Also, I think that sparse-matrix-times-vector (`sparse * vec`) is not multithreaded in SparseArrays, which would explain the low CPU usage for large matrices.
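One workaround, since KrylovKit accepts any linear map as a function: supply your own threaded matvec. The sketch below is my own suggestion, not an API from any of these packages; `threaded_hmul` is a made-up name. It exploits the fact that for Hermitian `H`, `H*x == H'*x`, and each entry of `H'*x` is an independent dot product with one stored (CSC) column of `H`, so the loop threads without write conflicts:

```julia
using SparseArrays, LinearAlgebra
using Base.Threads: @threads
using KrylovKit

# (H'x)[j] = Σᵢ conj(H[i,j]) x[i]; for Hermitian H this equals (Hx)[j].
# Each j touches only y[j], so the outer loop is safe to parallelize.
function threaded_hmul(H::SparseMatrixCSC, x::AbstractVector)
    rows = rowvals(H)
    vals = nonzeros(H)
    y = similar(x, promote_type(eltype(H), eltype(x)), size(H, 2))
    @threads for j in 1:size(H, 2)
        acc = zero(eltype(y))
        for k in nzrange(H, j)        # nonzeros of column j
            acc += conj(vals[k]) * x[rows[k]]
        end
        y[j] = acc
    end
    return y
end

# Hand the threaded map to KrylovKit instead of the matrix itself
n = 2000
A = sprandn(ComplexF64, n, n, 0.005)
H = (A + A') / 2
evals, evecs, info = eigsolve(x -> threaded_hmul(H, x), randn(ComplexF64, n),
                              4, :SR; ishermitian = true)
```

Whether this actually helps depends on the matrix size; for small matrices the threading overhead can dominate.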

I tested both KrylovKit.jl and ArnoldiMethod.jl with a 2882954x2882954 Hermitian matrix (sparsity: 0.00012). The results are

  • KrylovKit.jl: 4783.532569246 seconds
  • ArnoldiMethod.jl: 3528.127798565 seconds.

The time for KrylovKit.jl is different from what I got in my original post. Possible causes: (1) this is a different matrix; (2) I probably asked for more eigenvalues this time (nev=10).


I just need a few eigenvalues. I think that’s the common case for sparse linear algebra.