Get an eigenvector by eigenvalue using eigvecs: only allowed for SymTridiagonal?

Hi there,

I wonder why the eigvecs method does not work for an A::Symmetric{Float64, Matrix{Float64}}.

using LinearAlgebra
A = SymTridiagonal([1.; 2.; 1.], [2.; 3.])
val3 = eigvals(A)[3]
vec3 = reshape(eigvecs(A, [val3]), (:,)) # ✅ this works
A * vec3 .- val3 * vec3 # double-check correctness: should be ≈ zero vector
# redo the above process with a dense Symmetric matrix
A = Symmetric(Matrix(A))

julia> eigvecs(A, [val3])
ERROR: MethodError: no method matching eigvecs(::Symmetric{Float64, Matrix{Float64}}, ::Vector{Float64})
The function `eigvecs` exists, but no method is defined for this combination of argument types.

Closest candidates are:
  eigvecs(::AbstractMatrix, ::AbstractMatrix; kws...)
   @ LinearAlgebra K:\julia-1.11.4\share\julia\stdlib\v1.11\LinearAlgebra\src\eigen.jl:651
  eigvecs(::SymTridiagonal{<:Union{Float32, Float64, ComplexF64, ComplexF32}, <:StridedVector{T} where T}, ::Vector{<:Real})
   @ LinearAlgebra K:\julia-1.11.4\share\julia\stdlib\v1.11\LinearAlgebra\src\tridiag.jl:326
  eigvecs(::Union{Hermitian{T, S}, Symmetric{T, S}} where {T, S})
   @ LinearAlgebra K:\julia-1.11.4\share\julia\stdlib\v1.11\LinearAlgebra\src\symmetriceigen.jl:251
  ...

Should I use the following method instead? Is this the correct usage?

function my_eigvecs(A::Symmetric, val)
    return nullspace(A - val * one(A))[:, 1]  # a basis vector for the nullspace of A - val*I
end
vec3 = my_eigvecs(A, val3) # retrieve an eigenvector

The eigvecs(A, λ) method is currently only implemented for SymTridiagonal matrices. For other matrix types, there is currently only eigvecs(A), which computes all of the eigenvectors.

In principle, we could easily extend this to real-symmetric/Hermitian matrices, since they can be transformed into real SymTridiagonal matrices by the Hessenberg factorization. This should work:

using LinearAlgebra
import LinearAlgebra: eigvecs, RealHermSymComplexHerm

function eigvecs(A::RealHermSymComplexHerm, λ::AbstractVector{<:Real})
    F = hessenberg(A) # transform to SymTridiagonal form
    X = eigvecs(F.H, λ)
    return F.Q * X    # transform eigvecs of F.H back to eigvecs of A
end
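
As a quick check, here is the method above applied to the matrix from the original question (this assumes the extension has been defined first):

A = Symmetric(Matrix(SymTridiagonal([1.; 2.; 1.], [2.; 3.])))
λ = eigvals(A)
V = eigvecs(A, λ[3:3])  # compute only the eigenvector for λ[3]
A * V ≈ λ[3] * V        # should be true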

For computing only a small subset of the eigenvectors, this seems to be a couple of times faster than computing all of them with eigvecs(A), especially if you don't count the cost of the eigenvalues (e.g. because you already have them for some other reason).

It might be worth putting together a PR to LinearAlgebra.jl if you are interested in this functionality: eigvecs(A::Hermitian, eigvals) method? · Issue #1248 · JuliaLang/LinearAlgebra.jl · GitHub

No. nullspace employs an SVD, which is as costly as computing all the eigenvectors and eigenvalues with eigen. (In fact, the SVD calls eigen for Hermitian matrices.)

As I commented in the LinearAlgebra.jl issue, however, if you are calling eigvals(A) explicitly, you are better off doing the Hessenberg factorization yourself and re-using it for both eigvecs and eigvals, since the Hessenberg factorization is the most expensive part of Hermitian eigensolves.
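
A minimal sketch of that reuse, for the smallest eigenpair (the sequence of calls here is my illustration of the idea, not an official API):

using LinearAlgebra
A = Symmetric(Matrix(SymTridiagonal([1.; 2.; 1.], [2.; 3.])))
F = hessenberg(A)          # the expensive step; done exactly once
λ = eigvals(F.H)           # F.H is SymTridiagonal and similar to A, so same eigenvalues
X = eigvecs(F.H, λ[1:1])   # eigenvector of the tridiagonal form for the smallest eigenvalue
v = F.Q * X[:, 1]          # transform back to an eigenvector of A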

(I really should write a blog post about Hessenberg factorizations in Julia at some point; most people aren’t familiar with this factorization and why it is important.)

The motivation for directly retrieving an eigenvector associated with the minimum eigenvalue is as follows:

Assume A is a real symmetric matrix with size(A) == (n, n).
By definition, if A is not PSD, then we can find a unit vector v such that

transpose(v) * A * v == eigmin(A) < 0

Once we know v, we can add the linear cut @constraint(model, transpose(v) * X * v >= 0), which is one valid inequality implied by the full PSD constraint @constraint(model, X in PSDCone()).
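
Here is a hedged sketch of that separation step, assuming model is an existing JuMP model and X is a square matrix of JuMP variables (the surrounding structure is mine, not part of any proposed API):

using JuMP, LinearAlgebra
X̂ = value.(X)                          # candidate solution from the current relaxation
vals, vecs = eigen(Symmetric(X̂), 1:1)  # smallest eigenpair of the candidate
if first(vals) < 0                      # X̂ is not PSD
    v = vecs[:, 1]                      # unit vector with transpose(v) * X̂ * v == eigmin(X̂) < 0
    @constraint(model, transpose(v) * X * v >= 0)  # linear cut violated by X̂
end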

Therefore, I hope that LinearAlgebra could extend the eigmin function so that it also provides a corresponding eigenvector.

Yes, it is useful, as the motivation above shows.

Ingenuous students probably didn't learn these topics well at school. If so, they should spend more effort learning on their own. If they learn pure theory without applications, they may forget all of it after, say, a year. :slightly_smiling_face:

You can just call eigen(A, 1:1), no?

Ah, yes. But is it efficient? x-ref this.

This API works, but it is not very intelligible to the user.
Why can't it be eigen(A, begin:begin), or, if I want to retrieve the maximum, eigen(A, end:end)? Neither of these works.

Now I'm updating it like this:

import LinearAlgebra
function my_eigmin(A)
    valmin, vecmin = LinearAlgebra.eigen(A, 1:1)  # compute only the smallest eigenpair
    return first(valmin), vecmin[:, 1]
end
# test code begins
A = LinearAlgebra.SymTridiagonal([1.; 2.; 1.], [2.; 3.])
A = LinearAlgebra.Symmetric(Matrix(A))
valmin, vecmin = my_eigmin(A)

Yes. eigen(A, 1:1) only works for Hermitian matrices, where there is a specialized algorithm. The method you are referring to in that comment is eigmin for general matrices, which is not as efficient.

begin and end can refer to the first and last indices only within an indexing expression; there simply isn't a way to pass them generically as function arguments. It might be possible to get a somewhat similar syntax by using EndpointRanges.jl.
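
Failing that, the last index can always be passed explicitly; a minimal sketch for the largest eigenpair (my_eigmax is a hypothetical helper mirroring my_eigmin above):

import LinearAlgebra
function my_eigmax(A)
    n = size(A, 1)                                 # index of the largest eigenvalue
    valmax, vecmax = LinearAlgebra.eigen(A, n:n)   # compute only the largest eigenpair
    return first(valmax), vecmax[:, 1]
end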
