Calculating the `k` smallest positive eigenvalues of a matrix with ARPACK

I have a square matrix that is not necessarily positive definite, for example rand(10000,10000), and I’m trying to obtain the k smallest positive eigenvalues with the command eigs() from ARPACK. How can I achieve that, given that the smallest eigenvalues are negative and I want to select the k smallest eigenvalues only among the positive ones? The options which=:SM and which=:SR don’t do the trick. Thank you!

Interior eigenvalues are essentially impossible with Krylov-based methods if you can only multiply with the matrix; you’ll want to use shift-and-invert for this (or in this case only invert, since you are looking for eigenvalues closest to zero, so the shift is zero). Thus, you need to be able to apply the inverse of that matrix efficiently.
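A small stdlib-only sketch of why shift-and-invert works (dense and tiny here, purely for illustration): the eigenvalues of A⁻¹ are the reciprocals of those of A, so the eigenvalues of A closest to the shift (σ = 0 here) become the largest-magnitude, i.e. exterior, eigenvalues of A⁻¹, which Krylov methods converge to first. The matrix below is made up for the demo.

```julia
using LinearAlgebra

# Eigenvalues of A⁻¹ are reciprocals of those of A, so the eigenvalue of A
# nearest zero corresponds to the largest-magnitude eigenvalue of A⁻¹.
A = Symmetric([4.0 1.0 0.0;
               1.0 0.3 2.0;
               0.0 2.0 -3.0])
λ    = eigvals(A)                        # ascending; mixed signs
λinv = eigvals(Symmetric(inv(Matrix(A))))
closest = 1 / λinv[argmax(abs.(λinv))]   # eigenvalue of A nearest zero
```

In practice one never forms inv(A) explicitly; an iterative eigensolver applies a factorization of A inside each matrix-vector product instead.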


Ok :slight_smile: Thanks. Will try to find another way then

If your matrix is dense but small enough to store, e.g. rand(10000,10000), you might as well use eigvecs. Really large matrices usually have some structure (e.g. sparsity) that allows you to solve A \ b more quickly (e.g. sparse-direct methods or iterative solvers), which in turn lets you use shift-and-invert or other methods (e.g. contour-integration methods, which also require shifted linear solves).


ARPACK’s eigs implements shift-and-invert AFAIU, but you have to provide a sigma = ... option indicating the shift (zero in your case, I believe). Don’t use which.

ARPACK will not focus on one side of the shift point, however. Have a look at LOBPCG in IterativeSolvers.jl for this. Also check out the other native-Julia packages for these things, e.g. KrylovKit.jl and ArnoldiMethod.jl

EDIT: example

julia> using Arpack

julia> r = rand(10000, 10000); r += transpose(r);

julia> eigs(r, sigma = 0)[1]
6-element Array{Float64,1}:

Depends on how many negative eigenvalues you have. If you just have a few and want the positive ones among them, just use ARPACK or LOBPCG with a large number of sought eigenvectors. If you have too many of them, so that you can’t afford to compute them all, then yeah, it’s an interior eigenproblem and you’re basically screwed unless you can afford efficient inverses. There’s also a “shift-and-square” method that squares the matrix A to transform the eigenvalues close to 0 into exterior ones, but I wouldn’t try it unless you’re really desperate.


I want to avoid the use of eigvecs because I only need a few eigenvalues and not all eigenvalues.

Ok, that sounds good! Will try to explore your suggestions!!

I understand, but if you have a dense matrix with no structure it might not be any slower to just compute all the eigenvalues than to try to use an iterative method — you can’t get around the fact that you need O(n³) operations for interior eigenvalues, so you are just playing with the constant factor. If your matrices are very large, it is advisable to look for some special structure to exploit.

(One thing you could try to do is to compute the Hessenberg factorization of your dense matrix, and then use the Hessenberg factorization for shift-and-invert solves, which has new efficient solvers in Julia 1.3 — this is useful if you need a lot of different shifts for the same matrix, but doesn’t get around the O(n³) scaling.)
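A minimal sketch of that Hessenberg trick, assuming Julia ≥ 1.3 (the matrix sizes and shifts below are arbitrary): hessenberg(A) costs O(n³) once, and each subsequent shifted solve (F + μ*I) \ b costs only O(n²), so many shifts amortize the factorization.

```julia
using LinearAlgebra

# One-time O(n³) reduction A = Q H Qᴴ; each shifted solve then reuses it.
A = rand(100, 100)
b = rand(100)
F = hessenberg(A)
for μ in (0.0, -0.5, 2.0)
    x = (F + μ*I) \ b        # O(n²) solve of (A + μI) x = b
    @assert (A + μ*I) * x ≈ b
end
```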


I think KrylovKit.jl will do the trick, but I’m stuck with the argument options of the function eigsolve(). The documentation is very complete but, as I always say when I criticize the documentation of Julia: there are no examples with the commands. For example, the simple call eigsolve(rand(1000,1000), howmany=15, which=:SM) doesn’t work and throws the error got unsupported keyword argument "howmany", even though howmany and which are arguments of the function. I even tried eigsolve(rand(1000,1000), [howmany=15, which=:SM]) with no success. I’m sure the option EigSorter(f; rev = false) will do the trick for what I want, instead of which=:SM, but if the command doesn’t work with the basic arguments, then with this one it will be difficult to implement…

Yes, good point indeed! And thanks for the Hessenberg factorization!

Documentation getting out of date is an unfortunate fact of life (and it’s a good example of why the mantra “always document everything a lot” is not always the right thing to do…), and the only thing you can do is open an issue (or, even better, submit a PR). That said, Julia packages have good tests, which often double as examples. They tend to rot less quickly, since CI yells at people when tests break; in this case the package’s test suite should have what you’re looking for.


I totally agree with you! Thanks for the link :slight_smile: GitHub pages are always a better source for that!

Be careful with the character ;. howmany is not a keyword argument!! It should be

eigsolve(rand(1000,1000), 15, :SM) 

Also, you should be careful that to target specific parts of the spectrum, shift-and-invert methods are required; :SM will not (completely) do the trick.

And :SM is not valid…


Ok, perfect! :slight_smile: I think I’m close to the solution! I’m trying to use the option EigSorter to get the k positive eigenvalues I want and tried essai=eigsolve(rand(1000,1000), 15, EigSorter(function pos(x) x[x.>0] end)), but I still get an error! I’m still motivated to find the solution! :stuck_out_tongue: Perhaps the problem is that the eigenvalues I want are not close to the periphery of the spectrum, so maybe it’s better to compute all the eigenvalues, as @stevengj was saying, even though I want to avoid that…

It depends on the size of the matrix; you can use a Cayley transform or the FEAST algorithm. Roughly speaking, power methods for M give you the largest eigenvalues, so the same method applied to M^{-1} will give the smallest ones.

Linear algebra is not simple as soon as the “matrix” is large; see the answers of @stevengj, @juthohaegeman, and @pablosanjose above. Hence, you cannot expect a general method to work in all cases. Rather, you should exploit the specific properties of the matrix.

Is your matrix Hermitian (or, since it is likely real, symmetric)? In that case you can indeed square it and look for the :SR eigenvalues as suggested above. I did not think of this since you used rand(10000,10000) as an example in your initial post. Squaring has a negative impact on convergence speed (it squares the ratio between the gaps in the spectrum), but at least it gives you a way to move eigenvalues from the interior to the periphery. If the matrix is not symmetric and the eigenvalues are complex, you cannot expect squaring to help.
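A tiny stdlib-only sketch of the squaring idea for a real symmetric A (the matrix is made up, and a dense eigen stands in for the iterative solver one would use in practice): eigvals(A²) = eigvals(A).², so eigenvalues of A near zero become the smallest, hence exterior, eigenvalues of the positive-semidefinite A². Signs then have to be recovered afterwards, e.g. via the Rayleigh quotient of the corresponding eigenvector.

```julia
using LinearAlgebra

A = Symmetric([ 2.0  1.0  0.0;
                1.0 -0.2  1.0;
                0.0  1.0 -3.0])
F = eigen(Symmetric(Matrix(A)^2))   # in practice: an iterative solver applied to A²
v = F.vectors[:, 1]                 # eigenvector for the smallest eigenvalue λ² of A²
λ = dot(v, A * v)                   # Rayleigh quotient recovers the signed eigenvalue of A
@assert λ^2 ≈ F.values[1]
```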

For using EigSorter, the function f should be a scalar function; it really works like sort(..., by = f, rev = ...). E.g. :LM is EigSorter(abs, rev = true), and :SR is EigSorter(real).
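Since EigSorter(f; rev) orders eigenvalues the way sort(λ; by = f, rev = rev) would, the sorting logic can be tested with the stdlib alone. A hypothetical scalar f targeting the smallest *positive* eigenvalues maps nonpositive values to Inf so they sort last:

```julia
# Nonpositive eigenvalues get key Inf and sort last; positives ascend first.
f(λ) = real(λ) > 0 ? real(λ) : Inf

λ = [-3.0, 0.5, -0.1, 2.0, 1.2]
sort(λ; by = f)   # positives first, ascending: 0.5, 1.2, 2.0, …
# With KrylovKit this would read: eigsolve(A, k, EigSorter(f))
```

Note the convergence caveat discussed above still applies: these are interior eigenvalues, so a Krylov method with this sorter may converge slowly without shift-and-invert.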

Any improvements in the documentation are welcome.


Yes I’m aware of that… You are right…

It’s a real symmetric matrix. The rand(10000,10000) was just an example to provide reproducible code.

Then I would use the squaring idea, unless indeed there are only a few negative eigenvalues, so that you can just query enough eigenvalues to also get the first positive ones.
