Hi. I need to find ALL the eigenvalues and eigenvectors of large Hermitian matrices. About half of the entries are zero, so maybe it is worth using a sparse matrix. I am currently using eigen() from the standard library LinearAlgebra. On my laptop it typically takes 0.5 s to get the result for a 1156×1156 matrix, and I would like to extend the size to about 8000×8000 or even 15000×15000. Is there a better way of doing this? It would also be good to take advantage of a GPU, multithreading, or HPC clusters.
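For reference, a minimal sketch of what I am doing now (the matrix here is a random Hermitian example, not my actual problem):

```julia
using LinearAlgebra

n = 1156
A = randn(ComplexF64, n, n)
H = Hermitian(A + A')   # the Hermitian wrapper dispatches to the symmetric/Hermitian solver

F = eigen(H)            # full spectrum
F.values                # real eigenvalues, sorted in ascending order
F.vectors               # dense n×n matrix of eigenvectors
```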
I don’t think there is an easy way to do this. 50% sparsity is not enough to get benefits from sparse matrices, and full diagonalization produces a dense eigenvector matrix anyway (there may be some memory savings if you only want the eigenvalues and not the eigenvectors). Are these large Hermitian matrices quantum Hamiltonians by any chance? Then they might be a lot sparser than 50%.
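If only the spectrum is needed, `eigvals` skips the dense eigenvector matrix entirely; a small sketch with a random Hermitian matrix standing in for the real one:

```julia
using LinearAlgebra

n = 2000
A = randn(ComplexF64, n, n)
H = Hermitian(A + A')

vals = eigvals(H)   # eigenvalues only; avoids allocating the n×n eigenvector matrix
```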
I usually use a cluster to diagonalize my large Hermitian matrices (~10000×10000). I try to run as many diagonalizations in parallel as the total memory permits, and then set the number of BLAS threads so that all cores are working. For example, on the cluster I use there are 96 cores per machine and I can run 48 diagonalizations simultaneously, so I use 2 BLAS threads per Julia thread. Note this only works properly with MKL. I have never tried using GPUs for diagonalization (the cluster has none).
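A sketch of that pattern, assuming a hypothetical vector `Hs` of Hermitian matrices to diagonalize and Julia started with enough threads (e.g. `julia -t 48`):

```julia
using MKL            # load before LinearAlgebra use so MKL replaces OpenBLAS
using LinearAlgebra

BLAS.set_num_threads(2)   # 2 BLAS threads per Julia thread, e.g. 48 × 2 = 96 cores

# hypothetical input: a batch of random Hermitian matrices
Hs = [Hermitian(A + A') for A in (randn(100, 100) for _ in 1:8)]
results = Vector{Eigen}(undef, length(Hs))

Threads.@threads for i in eachindex(Hs)
    results[i] = eigen(Hs[i])   # each diagonalization runs in its own Julia thread
end
```

The reason this works with MKL and not OpenBLAS is that OpenBLAS calls from multiple Julia threads tend to oversubscribe or serialize, while MKL handles the nested threading more gracefully.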