Large Sparse Hermitian Matrix - Intermediate Eigenvalue Problem

Hi,

I am trying to find eigenvalues and eigenvectors of a rather large, sparse, Hermitian matrix (size ~1e7 × 1e7). I am mainly interested in around 40 eigenvalues in the middle of the spectrum.

Originally I used the ARPACK library to solve this, which works fine with shift-invert: I get correct eigenvalues and eigenvectors, but it takes a very long time. Previously, for smaller problems, I used a JDQR MATLAB solver on my local machine. I have seen a similar implementation in Julia in the JacobiDavidson package. I have installed it and begun testing, but I can't seem to figure out how the output from the jdqr solver itself is structured. I have looked through the GitHub repository and haven't been able to decipher anything useful :frowning: (all the examples are given for the jdqz routine, which is throwing me a bit).

Has anyone gotten this library to work for them? Or is there any other library I should be looking at for this problem?

Here is a snippet of what I am running:

# S is a sparse matrix read in from a file and assigned to A: column 1 is the row
# index, column 2 is the column index, and columns 3 and 4 are the real and
# imaginary parts of the value at that row and column.

using SparseArrays, LinearAlgebra, LinearMaps, JacobiDavidson

A = Hermitian(sparse(Int.(S[:, 1]), Int.(S[:, 2]), complex.(S[:, 3], S[:, 4])))
Aop = LinearMap(A)

target = Near(complex(dE, 0.0))  # Near requires a complex value; all eigenvalues should be real.

n = size(A, 1)
levels = 20
solver = BiCGStabl(n, max_mv_products = 10, l = 2)

pschur, residuals = jdqr(Aop;
    solver = solver,
    target = target,
    pairs = levels,
    tolerance = 1e-8,
    subspace_dimensions = levels+5:2*levels+10,
    max_iter = 25000,
    verbosity = 1
)

Basically, I am confused by the format of pschur, and how to get my eigenvalues and eigenvectors from it.

Any help is appreciated!

Is it possible for you to share the matrix? I think one of the most efficient approaches would be to compute the action of the inverse of the shifted matrix (A - \sigma I)^{-1} on a vector x, rather than wrapping the entire matrix in a LinearMap (this is explained in this post).

If you can share the matrix, I can try to give you an example for this case since I have been doing something similar these past weeks.


Thank you for the reply and useful link, I shall give that one a try!

I have attached a much smaller system for you to take a look at. The outline of the file is as written in my initial post.

Dropbox Matrix

A bit of context for the problem: I am running a finite-difference solver for an 8-band k.p model in 3D. The exact problem we are trying to solve requires a very small step size - hence the large matrix.

Is it real and symmetric? Then perhaps PetrKryslUCSD/VibrationGEPHelpers.jl (Generalized Eigenvalue Problem helper functions) might be of help?

Assuming the matrix A is loaded into memory, you can try the following code snippet, which uses ArnoldiMethod.jl, a native Julia implementation:

using LinearAlgebra, SparseArrays, LinearMaps, ArnoldiMethod

struct ShiftAndInvert{TA}
    A_factor::TA
end

function shiftinvert_operator(A, σ)
    n = size(A, 1)
    # Factorize once up front; every matrix-vector product is then a cheap solve
    T = LinearMap{eltype(A)}(ShiftAndInvert(lu(A - σ * I)), n; ismutating=true)
    return T
end

function (M::ShiftAndInvert)(y, x)
    (; A_factor) = M
    ldiv!(y, A_factor, x)  # computes y = (A - σI)⁻¹ x, reusing the factorization
end


σ = 2.0
T = shiftinvert_operator(A, σ)

decomp, history = partialschur(T; which=:LM, nev=10, tol=1e-10)
history.converged || error("The eigenvalue solver did not converge.")
μ, Vs = partialeigen(decomp)

# λ of the original problem are λ = 1 / μ + σ
λ = 1 ./ μ .+ σ
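If you want to check the transform end-to-end before pointing it at the real matrix, here is a small self-contained run of the same pattern. The tridiagonal test matrix, the shift σ, and the tolerances are made up for illustration:

```julia
using LinearAlgebra, SparseArrays, LinearMaps, ArnoldiMethod

# Small Hermitian test matrix with eigenvalues spread over roughly 1..n
n = 200
A = spdiagm(0 => collect(1.0:n) .+ 0im,
            1 => fill(0.1 + 0.05im, n - 1),
           -1 => fill(0.1 - 0.05im, n - 1))

σ = 100.3                          # target in the middle of the spectrum
F = lu(A - σ * I)                  # factorize once, reuse for every solve
T = LinearMap{eltype(A)}((y, x) -> ldiv!(y, F, x), n; ismutating=true)

decomp, history = partialschur(T; which=:LM, nev=6, tol=1e-10)
μ, Vs = partialeigen(decomp)
λ = 1 ./ μ .+ σ                    # undo the spectral transform

# residuals of the ORIGINAL problem should be tiny, and λ close to σ
resids = [norm(A * Vs[:, i] - λ[i] * Vs[:, i]) for i in eachindex(λ)]
```

The recovered λ should be the six eigenvalues of A closest to σ, with residuals near machine precision.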

For your bigger matrix, an issue could be that you run out of memory in the factorization step (here I just chose lu for simplicity, since I saw that the matrix is not positive definite, which rules out cholesky). If that happens, I have had good experiences in the past with the sparse factorization of Pardiso, which would replace the lu call in the construction of the LinearMap and would then be used by the eigenvalue routine for the matrix-vector products.
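In case it helps, here is a rough, untested sketch of what swapping lu for Pardiso could look like with Pardiso.jl. The function name `pardiso_shiftinvert` is mine, and the calls (`MKLPardisoSolver`, `set_matrixtype!`, `set_phase!`, `get_matrix`, `pardiso`) follow my reading of Pardiso.jl's README, so treat it as a starting point rather than verified code; it also needs MKL installed:

```julia
using Pardiso, SparseArrays, LinearAlgebra, LinearMaps

function pardiso_shiftinvert(A, σ)
    n = size(A, 1)
    ps = MKLPardisoSolver()
    set_matrixtype!(ps, Pardiso.COMPLEX_HERM_INDEF)
    pardisoinit(ps)                   # default iparm settings for this matrix type
    fix_iparm!(ps, :N)
    # Convert the shifted matrix to the triangle/layout Pardiso expects
    B = get_matrix(ps, sparse(A - σ * I), :N)
    # Analyze + factorize once up front (dummy rhs; only the matrix matters here)
    b = zeros(ComplexF64, n)
    y = zeros(ComplexF64, n)
    set_phase!(ps, Pardiso.ANALYSIS_NUM_FACT)
    pardiso(ps, y, B, b)
    # Subsequent calls reuse the factorization and only do the solve
    set_phase!(ps, Pardiso.SOLVE_ITERATIVE_REFINE)
    return LinearMap{ComplexF64}((y, x) -> pardiso(ps, y, B, x), n; ismutating=true)
end
```

The returned LinearMap can then be passed to partialschur exactly like the lu-based operator above.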