Is there a way to precondition an eigensolver?

Hello everyone. I am solving a nonlinear matrix equation which can be solved iteratively. A single iteration consists of constructing a matrix H, diagonalizing it, and solving a nonlinear equation for the operator eigenvalues. Since the eigenvalues and eigenvectors do not change much between iterations, I am looking for a way to precondition the eigensolver eigen. However, I don’t see any options for that.

Are there alternative dense eigensolvers for Julia?

EDIT

To add a little more detail, the matrix equation we are solving is:
2 \epsilon + a \psi + \psi^3 + \frac{1}{2} (\delta a \psi + \psi \delta a)=0
where a is a number, \epsilon and \delta a are matrices, and \psi is the unknown. The following code solves this problem:

using LinearAlgebra
using IterativeSolvers


# The equation we are trying to solve
F(a,δam,ϵ,ψ) = 2 .* ϵ + 1/2 .* (δam*ψ + ψ*δam) + a*ψ + ψ^3

# Cardano's formula for the real root of λ^3 + a*λ + Λ = 0, which is the
# matrix equation written for a single eigenvalue Λ of H
function λ(a,Λ)
    p = -9*Λ + sqrt(12*a^3 + 81*Λ^2)
    return (-2*3^(1/3)*a + 2^(1/3)*p^(2/3))/(6^(2/3)*p^(1/3))
end

### A relaxation parameter
α = 10.

a = 1.
ωD = 10.
N = 100

ϵ = Diagonal(range(-ωD,stop=ωD,length=N))

δa = zeros(Complex{Float64},N)
δa[1] = 2.1
#δa[60] = 1.2

# Hermitian Toeplitz coupling matrix: δa[m-n] below the diagonal,
# its conjugate above, and zero on the diagonal
δam = Array{Complex{Float64}}(undef,(N,N))
for m in 1:N
    for n in 1:N
        if m-n>0
            δam[m,n] = δa[m-n]
        elseif n-m>0
            δam[m,n] = δa[n-m]'
        else
            δam[m,n] = 0
        end
    end
end

function solveψ(a,δam,ϵ,N,α)
    ψ = zeros(Complex{Float64},(N,N))
    U = Matrix{Complex{Float64}}(I,(N,N))
    H = zeros(Complex{Float64},(N,N))
    δψ = zeros(Complex{Float64},(N,N))
    
    for i in 1:100
        H .= 2 .* ϵ + 1/2 .* (δam*ψ + ψ*δam)
        
        # ### Ordinary eigensolver
        # h = eigen(H)
        # U = h.vectors
        # Λ = h.values

        # ### Diagonalizing as much as possible before calling the eigensolver
        # Hp = U'*H*U
        # @time h = eigen(Hp)
        # U = U*h.vectors
        # Λ = h.values

        ### IterativeSolvers, warm-started with the previous eigenvectors
        iter = LOBPCGIterator(H,true,U)
        h = lobpcg!(iter)
        Λ = h.λ
        U = h.X  # keep the converged eigenvectors for the next warm start

        # Solve the cubic exactly in the current eigenbasis and step toward that solution
        δψ .= U*Diagonal((x -> λ(a,x)).(Λ))*U' - ψ

        if mod(i,10)==0
            @show norm(F(a,δam,ϵ,ψ))
        end

        ### Adaptive relaxation: halve α and retry if the step would increase the residual
        if norm(F(a,δam,ϵ,ψ+α*δψ))>norm(F(a,δam,ϵ,ψ))
            α = α/2
            continue
        end

        ψ .= ψ .+ α .* δψ
    end

    return ψ
end

@time solveψ(a,δam,ϵ,N,α)

Previously I thought that preconditioning would improve performance, but even supplying an initial guess (which is what I previously called preconditioning) does not help.

Are there other changes I could make to improve the performance of the solveψ function?

There’s a Julia implementation of LOBPCG. That might not be what you have in mind when speaking of preconditioning, though. If you give more details about your application you might get better answers.
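For reference, a minimal usage sketch of IterativeSolvers.lobpcg; the test matrix and the number of requested eigenpairs here are illustrative, not from this thread:

using LinearAlgebra, IterativeSolvers

A = Hermitian(rand(100, 100))  # illustrative dense Hermitian test matrix
r = lobpcg(A, true, 2)         # request the 2 largest eigenpairs
r.λ, r.X                       # converged eigenvalues and eigenvectors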


I have used IterativeSolvers.lobpcg from IterativeSolvers.jl with Preconditioners.AMGPreconditioner from Preconditioners.jl before. The latter just wraps AlgebraicMultigrid.jl. The trick is to use the approximate inverse of A - \sigma B as the preconditioner where \sigma is close enough to the eigenvalues satisfying A x = \lambda Bx. That said, if your eigenvectors and eigenvalues are close enough between iterations, you probably won’t need many iterations to converge if you set the starting point properly, so you may not even need a preconditioner. Note that LOBPCG assumes that B is positive definite.
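A minimal sketch of that shift-invert idea for the standard problem (B = I), assuming a sparse symmetric positive definite matrix; the matrix, the shift \sigma, and the number of eigenpairs are placeholders:

using LinearAlgebra, SparseArrays
using IterativeSolvers
using Preconditioners

n = 1000
# Illustrative sparse SPD test matrix (1D Laplacian stencil)
A = spdiagm(0 => 2ones(n), -1 => -ones(n-1), 1 => -ones(n-1))

σ = 0.0                                              # shift near the target eigenvalues
P = AMGPreconditioner{SmoothedAggregation}(A - σ*I)  # approximate inverse of A - σI
r = lobpcg(A, false, 4; P=P)                         # 4 smallest eigenpairs, preconditioned
r.λ

With \sigma = 0 this is just an AMG preconditioner for A itself; in practice \sigma should sit close to the eigenvalues being targeted.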


Thanks. I could get lobpcg to solve the eigensystem. However, it takes the same time as Julia’s built-in eigen, even when I plug in the initial guess of eigenvectors from the previous iteration.

LOBPCG is tailored towards sparse/matrix-free operators. As far as I know, there is no dense eigensolver that can take starting guesses into account (or, equivalently, exploit almost-diagonal matrices).

Starting guesses should be helpful in subspace iteration (block inverse power iteration). And that is applicable to both sparse and dense matrices.
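A minimal sketch of that idea, assuming a Hermitian H and a shift σ close to, but not exactly at, the wanted eigenvalues; the function name, shift handling, and fixed iteration count are illustrative rather than a reference implementation:

using LinearAlgebra

# Hypothetical helper: refine an eigenvector block X0 (e.g. from the previous
# outer iteration) toward the eigenpairs of H closest to the shift σ
function inverse_subspace_iteration(H, X0, σ; iters=10)
    F = lu(H - σ*I)                         # factor once, reuse in every iteration
    X = Matrix(qr(X0).Q)                    # orthonormalize the starting block
    local Λ
    for _ in 1:iters
        X = Matrix(qr(F \ X).Q)             # inverse power step + re-orthonormalization
        R = X' * H * X                      # Rayleigh quotient in the current subspace
        e = eigen(Hermitian((R + R') / 2))  # Ritz values and vectors
        Λ, X = e.values, X * e.vectors
    end
    return Λ, X
end

With a good starting block only a few steps should be needed; the dominant cost is then the single LU factorization, which is typically cheaper than a full dense eigendecomposition by a constant factor.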

Yes, but I have not seen subspace iteration reliably beat LAPACK (though I would love to).

Certainly true without a good guess of the subspace. But with a good guess of the eigenmodes it may be possible.

@Janis_Erdmanis @antoine-levitt I’ve developed a method that does just this: efficiently compute (one, few or all) eigenvectors of a near-diagonal matrix. Useful for your purpose?


It would have been interesting to try at the time, when I was still involved in the project. Now, though, I have defended my PhD and have chosen to follow a different direction, where I no longer need to remember the horrors I experienced with this project/supervisor.

Fortunately, the MWE is self-contained, so one can easily test whether it improves performance.