# Subset of eigenvalues/vectors for Generalized eigenproblem

Hi all,

In my code I have a generalized eigenvalue problem

eigen(A,B)


where the matrices A and B are both real, symmetric and dense. The size is so far not a big deal (~1500x1500), as I'm still running on my laptop, but it might need to increase in the future.

I wonder if it's possible to calculate only a subset of all the eigenvalues, instead of calculating all of them and selecting afterwards? In particular, I need the n (e.g. n = 5) eigenvalues (and corresponding eigenvectors) with the smallest (most negative) values. All interesting eigenvalues should (by a physical argument) also lie below a certain threshold, so if there is a good way to use that as a selection criterion, I'm also fine with it.

The documentation does not mention these use cases for a generalized eigenvalue problem, and a simple test leads to the following error:

a=rand(3,3);b=rand(3,3);
eigen(a,b,1:2)
ERROR: MethodError: no method matching eigen(::Matrix{Float64}, ::Matrix{Float64}, ::UnitRange{Int64})
Closest candidates are:
eigen(::AbstractMatrix{TA}, ::AbstractMatrix{TB}; kws...) where {TA, TB} at C:\Program Files\Julia-1.8.2\share\julia\stdlib\v1.8\LinearAlgebra\src\eigen.jl:509
eigen(::AbstractMatrix{T}; permute, scale, sortby) where T at C:\Program Files\Julia-1.8.2\share\julia\stdlib\v1.8\LinearAlgebra\src\eigen.jl:235
Stacktrace:
[1] top-level scope
@ REPL[27]:1


If this is only possible for eigenvalues and not for eigenvectors, I would still be interested.

Doesn't look like LAPACK has this, so you're probably out of luck. LAPACK (and Julia) have routines for subsets of eigenvalues of symmetric tridiagonal matrices, so you could possibly hook into those by handling the reduction to tridiagonal form yourself. In any case I would not expect a very large speedup even if it existed, because computing a subset of eigenvalues of a dense matrix is not that much less expensive than computing the full spectrum. Are your matrices really dense without any specific property (e.g. fast matvecs)?

Some numbers on the above:

using LinearAlgebra

N = 1500
A = randn(N, N)
A = A + A'
@time eigen(A)              # full dense symmetric eigenproblem
Atri = SymTridiagonal(A)    # keep only the tridiagonal part
@time eigen(Atri)           # full tridiagonal eigenproblem
@time eigen(Atri, -8, -6.5) # only eigenvalues in a given interval

0.347073 seconds (15 allocations: 52.037 MiB, 0.17% gc time)
0.166876 seconds (14 allocations: 17.556 MiB)
0.000825 seconds (17 allocations: 17.591 MiB)


So it is indeed quite fast to find a small subset of eigenvalues of a tridiagonal matrix, but that step only amounts to about half of the total computation time, the other half being the reduction to tridiagonal form. So at most you can gain a factor of 2.


Thank you for your replies, antoine-levitt.
Regarding the tridiagonal case, it looks like you only considered a standard eigenvalue problem, not a generalized one, right? I'm not sure whether it is still applicable in that case. Would both matrices A and B need to be transformed to SymTridiagonal form?

I had a closer look at my matrices: The left matrix (A) is indeed dense, i.e. there is no obvious zero or sparsity structure. However, the right matrix (B) is block-diagonal.

To clarify, I should maybe add that the size of my matrices is governed by the product of the number of "physical channels" and the size of the "numerical expansion", i.e. something like N = p * n where p ~ 1…10 and n ~ 200. So when I say the right matrix is block-diagonal, I mean it is diagonal in the physical channel index but dense in the numerical expansion index.

So in principle I could maybe invert each of the submatrices (size ~200x200) and multiply it from the left onto the block-row corresponding to each physical channel. That would leave me with a standard eigenvalue problem, but I'm not sure whether the resulting matrix would still be real symmetric, or whether I would gain anything from this.

Is the matrix B positive definite as well? This might help with preserving symmetry.


Yes, it seems that the matrix B is positive definite. At least for the few parameter configurations I have quickly tested.
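One cheap way to make that check more systematic (a minimal sketch; the matrix B below is a random SPD stand-in for the real one): isposdef attempts a Cholesky factorization, which is far cheaper than an eigendecomposition, and cond gives a rough idea of how well-conditioned B is.

```julia
using LinearAlgebra

# Stand-in for the real B (assumption for illustration): random SPD matrix.
M = randn(200, 200)
B = Symmetric(M' * M + 200I)

isposdef(B)   # true iff a Cholesky factorization succeeds
cond(B)       # very large values would flag potential accuracy problems
```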

In addition, I found a function in LAPACK which claims to be doing exactly what I am looking for:
https://netlib.org/lapack/explore-html/d2/d8a/group__double_s_yeigen_ga51bef2d9d58cfff3f1bac9143ccc85b8.html#ga51bef2d9d58cfff3f1bac9143ccc85b8

Maybe this one is not implemented in Julia yet? I guess I will then have a look into how to call Fortran routines directly from Julia. If anyone has good literature/sources, feel free to post them.

IIRC you basically just use ccall. See julia/lapack.jl at master · JuliaLang/julia · GitHub for examples.
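For reference, sygvx does not appear to be wrapped in LinearAlgebra.LAPACK (as of 1.8), so calling it means writing the ccall yourself. Below is a hedged sketch of what such a wrapper for dsygvx might look like, following the argument list in the netlib documentation and the calling conventions in lapack.jl; the name sygvx_smallest and the workspace handling are my own scaffolding, so double-check against lapack.jl before relying on it.

```julia
using LinearAlgebra
using LinearAlgebra: BlasInt
using LinearAlgebra.BLAS: @blasfunc, libblastrampoline

# Sketch: compute eigenvalues 1:nev (ascending) and their eigenvectors for
# the symmetric-definite problem A x = λ B x via LAPACK's dsygvx.
# A and B are copied because dsygvx overwrites its inputs.
function sygvx_smallest(A::Matrix{Float64}, B::Matrix{Float64}, nev::Integer)
    n = size(A, 1)
    A, B = copy(A), copy(B)
    m     = Ref{BlasInt}()
    w     = Vector{Float64}(undef, n)
    Z     = Matrix{Float64}(undef, n, nev)
    ifail = Vector{BlasInt}(undef, n)
    info  = Ref{BlasInt}()
    iwork = Vector{BlasInt}(undef, 5n)
    work  = Vector{Float64}(undef, 1)
    lwork = BlasInt(-1)                      # first pass is a workspace query
    for _ in 1:2
        ccall((@blasfunc(dsygvx_), libblastrampoline), Cvoid,
              (Ref{BlasInt}, Ref{UInt8}, Ref{UInt8}, Ref{UInt8}, Ref{BlasInt},
               Ptr{Float64}, Ref{BlasInt}, Ptr{Float64}, Ref{BlasInt},
               Ref{Float64}, Ref{Float64}, Ref{BlasInt}, Ref{BlasInt},
               Ref{Float64}, Ptr{BlasInt}, Ptr{Float64}, Ptr{Float64},
               Ref{BlasInt}, Ptr{Float64}, Ref{BlasInt}, Ptr{BlasInt},
               Ptr{BlasInt}, Ptr{BlasInt}, Clong, Clong, Clong),
              1, 'V', 'I', 'U', n, A, n, B, n,
              0.0, 0.0, 1, nev, -1.0, m, w, Z, n,
              work, lwork, iwork, ifail, info, 1, 1, 1)
        lwork = BlasInt(work[1])
        resize!(work, lwork)
    end
    info[] == 0 || error("dsygvx returned info = $(info[])")
    w[1:m[]], Z[:, 1:m[]]
end
```

As a sanity check, the returned eigenvalues should match the first nev entries of eigen(Symmetric(A), Symmetric(B)).values.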

We ideally need some kind of flag in eigen(A,B) for Hermitian problems when B is positive-definite, so that we can use the specialized methods. We don’t have a type to represent this, although one could add this to GitHub - JuliaStats/PDMats.jl: Uniform Interface for positive definite matrices of various structures I guess.

Of course, we could just call isposdef(B), I guess — it’s usually going to be 5–10× faster than eigen.

Of course if your matrix B is positive-definite, you could turn it into an ordinary Hermitian eigenproblem with the Cholesky factors. If B = LL^*, then Ax = \lambda B x \implies L^{-1} A L^{-*} y = \lambda y where y = L^* x. So, just do:

using LinearAlgebra

L = cholesky(B).L
C = Hermitian((L \ A) / L')   # C = L⁻¹ A L⁻ᴴ
F = eigen(C, 1:5)             # the five smallest eigenvalues
λ = F.values
X = L' \ F.vectors            # back-transform the eigenvectors: x = L⁻ᴴ y


and you can check that these are indeed generalized eigenvectors of Ax = \lambda Bx — you should find that norm(A * X - B * X * Diagonal(λ)) / (norm(A)*norm(X)) is small.

If the matrix is sufficiently large, I find that this is indeed faster than eigen(A,B), since Cholesky factorization and triangular solves are relatively cheap compared to eigenproblems.

However, the right matrix (B) is block-diagonal.

The above procedure can be made especially efficient in this case, since you can do the Cholesky factorization and triangular solves separately for each block. For n \times n matrices with blocks of size \le b \times b, the blockwise Cholesky factorization then costs only O(nb^2) and the triangular solves O(n^2 b), instead of O(n^3) for dense B.
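A minimal sketch of that blockwise procedure (the sizes and matrices are made up for illustration; in the thread's notation the blocks correspond to the physical channels):

```julia
using LinearAlgebra

p, b = 3, 50                        # number of blocks and block size (made up)
n = p * b
A = randn(n, n); A = A + A'         # dense symmetric A
# Block-diagonal SPD B assembled from p dense SPD blocks.
blocks = [(M = randn(b, b); M' * M + b * I) for _ in 1:p]
B = zeros(n, n)
for (k, Bk) in enumerate(blocks)
    r = (k - 1) * b + 1 : k * b
    B[r, r] = Bk
end

# Factor each block separately: O(p b^3) instead of O(n^3).
Ls = [cholesky(Symmetric(Bk)).L for Bk in blocks]

# C = L⁻¹ A L⁻ᵀ, applying the block factors row- and column-wise.
C = copy(A)
for (k, L) in enumerate(Ls)
    r = (k - 1) * b + 1 : k * b
    C[r, :] = L \ C[r, :]           # left solve on each block-row
end
for (k, L) in enumerate(Ls)
    r = (k - 1) * b + 1 : k * b
    C[:, r] = C[:, r] / L'          # right solve on each block-column
end

F = eigen(Hermitian(C), 1:5)        # five smallest eigenvalues
λ = F.values
X = copy(F.vectors)
for (k, L) in enumerate(Ls)
    r = (k - 1) * b + 1 : k * b
    X[r, :] = L' \ X[r, :]          # back-transform x = L⁻ᵀ y, blockwise
end
```

The residual norm(A * X - B * X * Diagonal(λ)) / (norm(A) * norm(X)) should again come out near machine precision.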


Thank you for your input. Unfortunately, a few other, more urgent things have crossed my way, so I have to postpone the performance optimization for a bit. Nevertheless, this is already some very nice help, thank you! I think the block-diagonal form especially can be exploited further, as I realized recently that among the n bxb submatrices there are only a few (<<n) distinct ones.

The LAPACK routine you found uses the approach that @stevengj described. Depending on your accuracy needs, you will want B to be reasonably well-conditioned for that method; otherwise you can have stability problems that manifest as an unnecessary loss of accuracy in the smaller (in absolute value) generalized eigenvalues. Ignoring symmetry and using the QZ algorithm would avoid that problem. So you might want to be on the lookout for numerical problems unless you know that B is going to be well-conditioned.

I did poke around the code for eigen. After checking that A is symmetric and B positive definite, it does seem to resolve eventually into a call to sygvd in the file symmetriceigen.jl:

function eigen!(A::HermOrSym{T,S}, B::HermOrSym{T,S}; sortby::Union{Function,Nothing}=nothing) where {T<:BlasReal,S<:StridedMatrix}
    vals, vecs, _ = LAPACK.sygvd!(1, 'V', A.uplo, A.data, B.uplo == A.uplo ? B.data : copy(B.data'))
    GeneralizedEigen(sorteig!(vals, vecs, sortby)...)
end


The routine sygvd uses the same trick, but computes all eigenvalue/eigenvector pairs. If you wrapped your matrices in Symmetric(A), or they are exactly symmetric numerically, you are probably already using a relatively fast algorithm that exploits symmetry, and already seeing whatever instability you will see for your problem. I am surprised that eigen defaults to the faster but potentially unstable algorithm for symmetric definite problems.

Checking the condition number of B would be a good idea. Also, if you want some basis of comparison, you can probably trick eigen into using QZ and then compare results. For example, you could multiply the first rows of both A and B by -1. This won't change the generalized eigenvalues or right generalized eigenvectors, but it will necessitate the use of the nonsymmetric generalized eigenvalue routines. You will take a hit on speed, however, even compared to what you are doing now.
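A sketch of that comparison with random stand-in matrices: negating the first row of both A and B multiplies the pencil A - \lambda B from the left by an invertible sign matrix, which preserves the generalized eigenvalues and right eigenvectors but breaks symmetry, so eigen falls back to QZ.

```julia
using LinearAlgebra

n = 100
A = randn(n, n); A = A + A'
M = randn(n, n); B = M' * M + n * I   # symmetric positive definite stand-in

# Symmetric-definite path (dispatches to sygvd), eigenvalues ascending.
λ_sym = sort!(eigen(Symmetric(A), Symmetric(B)).values)

# Flip the first row of both matrices to force the nonsymmetric QZ path.
A2 = copy(A); A2[1, :] .*= -1
B2 = copy(B); B2[1, :] .*= -1
λ_qz = sort!(real.(eigen(A2, B2).values))
```

Comparing λ_sym and λ_qz then gives a rough stability check on the symmetric-definite route.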


Yes, my point was simply that in standard algorithms to compute (symmetric) eigenvalues/eigenvectors, there are two steps: the first is to reduce the matrix to tridiagonal form, and the second is to compute the eigenpairs of this tridiagonal matrix. Only the second step can be optimized when you are only interested in a small subset of eigenvalues, and since the two steps take about the same time, you only gain a factor of 2 at most. So if you add the step of reducing the generalized eigenvalue problem to a standard one, you gain even less.

So I wouldn't actually bother with it, unless that 2x factor is super valuable to you. The real gains are to be had in finding additional structure you can exploit. You say A is dense, but from your formulation it looks like you're solving some kind of PDE? Such matrices usually have some property that enables you to compute matrix-vector products in less than O(N^2) operations, even if they are dense - maybe using a Fourier transform, a multipole expansion, or something like that. You can then use iterative solvers to compute only those eigenpairs you need. Maybe try posting your physical problem here - the intersection of Julia and PDE people is quite large.
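To illustrate the iterative idea with stdlib tools only (dedicated packages like KrylovKit.jl are the practical choice): shift-and-invert iteration finds the eigenpair of A x = \lambda B x closest to a shift σ, and only ever touches the matrices through solves with A - σB and products with B. The toy matrices below are my own construction, chosen so the generalized eigenvalues are exactly -2, -0.5, 1, 3; the poster's physical threshold would be a natural choice of shift in practice.

```julia
using LinearAlgebra

# Shift-and-invert iteration for the generalized eigenpair closest to σ.
function shift_invert(A, B, σ; iters = 200)
    F = lu(A - σ * B)                 # factor once, reuse in every iteration
    x = normalize!(randn(size(A, 1)))
    for _ in 1:iters
        x = normalize!(F \ (B * x))   # x ← (A - σB)⁻¹ B x
    end
    dot(x, A * x) / dot(x, B * x), x  # generalized Rayleigh quotient, vector
end

# Toy problem with known spectrum: A = S D Sᵀ and B = S Sᵀ have
# generalized eigenvalues exactly diag(D).
S = [2.0 0 0 0; 1 2 0 0; 0 1 2 0; 0 0 1 2]
D = Diagonal([-2.0, -0.5, 1.0, 3.0])
A = S * D * S'
B = S * S'
λ, x = shift_invert(A, B, -1.5)       # σ = -1.5 is closest to λ = -2
```

The same structure carries over to matrix-free operators: replace the lu factorization by any fast solver for A - σB and the dense product B * x by a fast matvec.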