# Shift-invert diagonalization in Julia

What is the best way to do shift-invert diagonalization in Julia? I have used `Arpack.jl`, but it doesn’t seem to be much faster, and I am not sure how it handles the inverse part. So I would like to understand a better technique for getting some eigenpairs of a large sparse Hermitian matrix around a specified point in the spectrum, say the middle.

Furthermore, I need this for my research on a random-field spin chain model, where there is a random parameter in my Hamiltonian, the matrix I want to diagonalize. Because of the randomness, I need to diagonalize many instances of the Hamiltonian and finally calculate some averages. So I was wondering if there is a nice way to do this in parallel?
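For context, the kind of loop I have in mind looks roughly like this, with `build_hamiltonian` as a made-up stand-in for my actual spin-chain model and a dense `eigvals` in place of the sparse solver:

```julia
using LinearAlgebra, Random

# Stand-in for one disorder realization of the Hamiltonian;
# a random dense Hermitian matrix replaces the actual spin-chain model.
function build_hamiltonian(rng, n)
    A = randn(rng, n, n)
    return Hermitian(A + A')
end

nsamples = 8
results = Vector{Float64}(undef, nsamples)

# Each iteration is independent, so the disorder loop parallelizes trivially.
Threads.@threads for i in 1:nsamples
    rng = Xoshiro(i)                 # independent, reproducible RNG per realization
    H = build_hamiltonian(rng, 64)
    λ = eigvals(H)                   # for large sparse H, a shift-invert solver would go here
    results[i] = sum(abs2, λ) / length(λ)   # some per-realization observable
end

disorder_average = sum(results) / nsamples
```

Since each realization only writes to its own slot of `results`, there is no shared mutable state between threads (as long as the inner solver itself is thread-safe).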

Thanks.


Perhaps you should look into GitHub - JuliaLinearAlgebra/ArnoldiMethod.jl: Implicitly Restarted Arnoldi Method, natively in Julia; its documentation includes a section on shift-invert diagonalization.
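If I remember right, the pattern there looks roughly like this (using LinearMaps.jl to wrap the factorized solve; the matrix, shift, and sizes below are just placeholders):

```julia
using ArnoldiMethod, LinearAlgebra, SparseArrays, LinearMaps

# Example sparse Hermitian matrix and a shift σ somewhere in its spectrum.
n = 1000
A = sprandn(n, n, 5 / n)
A = (A + A') / 2
σ = 0.0

# Eigenvalues of (A - σI)^-1 that are largest in magnitude correspond to
# eigenvalues of A closest to σ, so we target LM() on the inverse operator.
F = lu(A - σ * I)                    # factor once; every Krylov step reuses it
op = LinearMap{Float64}((y, x) -> ldiv!(y, F, x), n, ismutating = true)

decomp, history = partialschur(op; nev = 6, tol = 1e-9, which = LM())
μ, X = partialeigen(decomp)
λ = σ .+ inv.(μ)                     # map back to eigenvalues of A near σ
```

The key point is that the solver never sees the inverse explicitly: it only needs matrix-vector products, and each product is a triangular solve against the precomputed factorization.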


Thanks for letting me know about this library. It seems it’ll work for me. However, I was also wondering if you could tell me whether this technique is thread-safe. I mean, if I have to do this diagonalization many times with a slightly different matrix, each time calculating some quantity from the obtained eigenvectors.

Yes, I think this package was developed as a thread-safe alternative to ARPACK.

Ok thanks!

Hi, sorry to bother you again. The method described in the reference you provided fails for sparse matrices. Since I am dealing with many large matrices, exploiting their sparsity saves me a lot of memory allocation. Is there any way to make it work? Converting the matrix to dense form before factorization does, in fact, help, but I was wondering if there is a better way.

That is strange; it sounds like a bug. Could you post the error message? It should definitely work for sparse matrices (barring numerical issues such as a singular shifted matrix).

I get this error while running:

```julia
using ArnoldiMethod, LinearAlgebra, LinearMaps

F = factorize(A - s * I)  # factor once, reuse for every solve
A_lmap = LinearMap{eltype(A)}((y, x) -> ldiv!(y, F, x), size(A, 1), ismutating=true)

decomp, = partialschur(A_lmap, nev=n, tol=1e-5, restarts=100, which=LM())
λs_inv, X = partialeigen(decomp)
```

```
ERROR: LoadError: MethodError: no method matching ldiv!(::SuiteSparse.CHOLMOD.Factor{Float64}, ::SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true})
```

However, the error goes away if I use `lu()` instead of `factorize()`; nevertheless, the memory allocation is then very large compared to first converting to a dense matrix!
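For now I have settled on this workaround sketch: keep `factorize`, but do the solve out of place, since the CHOLMOD factor seems not to support in-place `ldiv!` on my Julia version (the matrix below is a made-up sparse Hermitian example):

```julia
using LinearAlgebra, SparseArrays

# Sparse symmetric test matrix (tridiagonal, diagonally dominant) and a shift.
n = 200
A = spdiagm(-1 => fill(1.0, n - 1),
             0 => 4 .+ rand(n),
             1 => fill(1.0, n - 1))
s = 0.0

F = factorize(A - s * I)     # a CHOLMOD factorization for sparse Hermitian input

# `ldiv!(y, F, x)` has no method for this CHOLMOD factor, but the
# out-of-place solve `F \ x` works; copy its result into y instead.
solve! = (y, x) -> copyto!(y, F \ x)

x = randn(n)
y = similar(x)
solve!(y, x)                 # y now holds (A - s*I) \ x, at the cost of one allocation per solve
```

Explicitly trying `ldlt(A - s * I)` might also be worth a shot: for Hermitian matrices an LDLᵀ factorization should typically allocate less than the sparse `lu`, which presumably explains the large allocations I saw with `lu()`.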