Smallest magnitude sparse generalized eigenvalues

Hi,

I am trying to find the neig=5 smallest eigenvalues (in magnitude) of a generalised eigenvalue problem

(K-val*M)*vec = 0

where K is real symmetric indefinite and M is real symmetric positive definite. This is close to a question in a previous thread, but here I try to use KrylovKit.jl.

My toy problem below could use better forms of K and M than generalised sparse storage, but my real-world problems start with sparse matrices.

The problem is parametrised by N so that the eigenfrequency f (I do mechanics), computed from the eigenvalue, is independent of N (asymptotically, at least). The game: scale up. But moderately: let’s stay on a personal computer for now.

using SparseArrays, KrylovKit

N = 1000

neig = 5

K = spdiagm(N, N, -1 => range(-N, -N, N-1), 0 => range(2N, 2N, N), 1 => range(-N, -N, N-1))
M = spdiagm(N, N, 0 => range(1/N, 1/N, N))

vals, vecs, info = KrylovKit.geneigsolve((K/N,M*N),randn(N),
                        neig,
                        EigSorter(abs; rev = false);                                
                        maxiter     = 200,    
                        krylovdim   = 100,
                        tol         = 1e-4,
                        verbosity   = 3,
                        orth        = KrylovKit.ModifiedGramSchmidtIR(),
                        issymmetric = true,
                        isposdef    = true)

f = sqrt.(vals[1:neig]) ./ (2π) * N

N=1000 works, but I hit the wall before N=10000 :frowning: . I get some gain from scaling the matrices (K/N, M*N) so that they have “kind of one” terms… does this mean I need a preconditioner? “Krylov” would suggest that, but the manual states “KrylovKit.jl does currently not provide a general interface for including preconditioners”.

I also gain from a lax tol: all eigenvalues but the last are returned with better precision than requested anyway.

Any ideas on how to tweak the solver?

For the smallest-magnitude eigenvalues of an indefinite matrix, you usually want to apply Krylov methods to the inverse problem

Kx = \lambda Mx \Longleftrightarrow K^{-1} y = \lambda^{-1} M^{-1} y

where y = Mx, and then find the largest-magnitude \lambda^{-1}. You don’t compute K^{-1} explicitly, but instead form (if possible) its sparse factorization (e.g. sparse LU or sparse LDLᵀ) and use that as a linear operator (e.g. with InverseMap in LinearMaps.jl). (Your M matrix is apparently diagonal, which would make it easy to invert explicitly.)
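
For the toy problem above, a minimal sketch of this transformation could look as follows (my illustration, not code from the thread; it assumes the toy K is invertible and reuses K, M, N, neig from the first post):

using SparseArrays, LinearAlgebra, KrylovKit

F = lu(K)                 # sparse LU factorization of K, computed once
Aop = x -> F \ (M * x)    # applies K⁻¹M to a vector without ever forming K⁻¹

# the largest-magnitude eigenvalues ν of K⁻¹M are 1/λ for the smallest-magnitude λ
nus, xs, info = eigsolve(Aop, randn(N), neig, :LM)
vals = 1 ./ nus           # recover the original generalized eigenvalues

Since K^{-1}Mx = \lambda^{-1}x follows directly from Kx = \lambda Mx, the eigenvectors xs are already the original x; the price is that K^{-1}M is in general no longer a symmetric operator.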


Hi Steven,

Thank you. I had noted that KrylovKit (and, I reckon, iterative eigensolvers in general) would rather deal with the eigenvalues of largest magnitude. And you seem to say that my asking for the smallest eigenvalues is what is giving the solver a hard time. Very important input.

The catch is that my K (in general, as opposed to my toy example) is not necessarily invertible (and may have negative eigenvalues). But OK, I’ll ponder what algebraic trick I can pull to transform my problem into a more palatable form (i.e. find largest eigenvalues). I will have to think very carefully about what properties I can assume K and M to have in general… As you point out, I do not need to invert any matrix, just to provide a matvec for any linear expression.

I’ll take it from there!

:smiley:

Would this help?
PetrKryslUCSD/VibrationGEPHelpers.jl: Generalized Eigenvalue Problem helper functions


Would this? :smiley:
A blueprint on how to use KrylovKit, Arpack and more, to compute eigenmodes in structures!
Thanks!

If it’s not invertible, then the problem is even easier: you know that there is an eigenvalue of zero, and just need to solve a linear system to find it.

(Whereas negative eigenvalues are no problem.)

Not necessarily: in mechanics the fundamental frequency is always non-zero (i.e. the first non-zero eigenvalue after the zeros).

If an object floats in space, it will have 6 zero eigenfrequencies, and 6 “rigid body motion” eigenvectors. Then I am interested in “a few” more eigenfrequencies: “at what notes does it sound if I hit it”. These will be positive eigenvalues.

Then comes the fact that if I apply boundary conditions, I do not do it in the usual manner of “suppressing” or “condensing” some degrees of freedom, which would (typically) lead to a symmetric-positive-definite K. Rather, I introduce Lagrange multipliers (new degrees of freedom, augmented problem), resulting in an indefinite K (negative eigenvalues, but typically no zero eigenvalue). I have not thought through what story the eigenvectors with negative eigenvalues will tell, and I look forward to discovering that.

I’ll explore the ideas in VibrationGEPHelpers now.

One simple option is to add a tiny shift, K + (\text{small}) I, that makes your matrix invertible. Even better, if you have an order-of-magnitude estimate for the lowest nonzero eigenfrequencies, e.g. from dimensional analysis, you could subtract that estimate as a shift. That would not only make the matrix invertible, it would also accelerate convergence of the desired eigenfrequencies.

(For example, a shift strategy to deal with rigid-body modes is mentioned in this article.)
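
A sketch of that shift-and-invert idea on the generalized problem (again my illustration; the shift σ comes from a hypothetical frequency estimate f_guess, which is not given anywhere in this thread):

using SparseArrays, LinearAlgebra, KrylovKit

σ = (2π * f_guess)^2      # hypothetical shift from an assumed estimate f_guess of the lowest frequency
F = lu(K - σ * M)         # factorizable even if K itself is singular (provided σ is not an eigenvalue)
Aop = x -> F \ (M * x)    # applies (K - σM)⁻¹M

# eigenvalues ν of Aop satisfy ν = 1/(λ - σ), so the largest |ν| give the λ closest to σ
nus, xs, info = eigsolve(Aop, randn(N), neig, :LM)
vals = σ .+ 1 ./ nus      # recover the original eigenvalues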

Whether I use the algorithm corresponding to the option KrylovKit in VibrationGEPHelpers, or a modification using an LU factorisation

luK = lu(K)    # UMFPACK sparse LU: luK.L*luK.U == (luK.Rs .* K)[luK.p, luK.q]
L, U, s, p, q = luK.L, luK.U, luK.Rs, luK.p, luK.q
# apply the inverted operator built from the LU factors (remaining eigsolve arguments elided)
vals, vecs, info = KrylovKit.eigsolve(x -> L\((s .* (M * ((U\x)[q])))[p]), ...)
# transform the eigenvectors of the inverted problem back to the original problem
for vecsᵢ ∈ vecs
    vecsᵢ .= (U\vecsᵢ)[q]
end

my toy problem is solved with excellent stability, tested up to N = 1000000, which is an improvement, to say the least. Next step: the real world!
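
(A quick sanity check, my addition rather than part of the original run: reusing vals and vecs from the snippet above, where vals holds the inverted eigenvalues 1/λ and vecs has already been transformed back.)

using LinearAlgebra

for (ν, x) in zip(vals, vecs)
    λ = 1 / ν                                    # undo the eigenvalue inversion
    @show norm(K * x - λ * M * x) / norm(M * x)  # relative residual of the recovered pair
end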

In both cases the problem is transformed into an “inverse” problem (with inverted eigenvalues, as suggested by Steven) that is an ordinary (as opposed to generalized) eigenproblem (as suggested by Petr).

Thank you Petr and Steven!


Thank you, I’ll study this.

You can certainly use the GEP for the same purpose.
Shifting is mentioned in the docs.

The problem with changing to an ordinary eigenproblem is that, if you are not careful, you lose the Hermitian/real-symmetric property. Here, since M is diagonal, you can easily scale by M^{1/2} factors to get a Hermitian ordinary eigenvalue problem if you want.
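
For instance, a sketch of that scaling for a diagonal positive-definite M (my illustration; the names S, zs, xs are assumptions):

using LinearAlgebra, SparseArrays, KrylovKit

S = Diagonal(1 ./ sqrt.(Vector(diag(M))))   # M^(-1/2), cheap because M is diagonal
A = Symmetric(S * K * S)                    # Hermitian ordinary problem: A*z = λ*z with z = M^(1/2)*x

vals, zs, info = eigsolve(A, randn(N), neig, EigSorter(abs; rev = false); ishermitian = true)
xs = [S * z for z in zs]                    # map back to the original eigenvectors x

For the smallest-magnitude eigenvalues one would still want to combine this scaling with the inverse/shift-invert transformation discussed above.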

Oh… the operator x -> L\((s .* (M * ((U\x)[q])))[p]) of my inverted problem will generally not be exactly symmetric, even though K and M are.

But KrylovKit, if told ishermitian = true, stays in real Float64 arithmetic and gives good results. If not, I could always rewrite the above operator to something like x -> U\(L\(M*x)) (ignoring the details here), which must be symmetric (barring finite precision).

The fact that M is banded is just a feature of my toy problem, not of my general problem…

Petr: GEP can do it - no doubt. I am infamous for… learning by doing :smiley: . And my linear algebra got upgraded in the past couple of days, thanks to you both.