Eigenvalues and extended precision

I am interested in computing a small proportion (between 3 and 10% of the matrix size) of the eigenvalues of a matrix with extended precision. As a toy example, let’s take the matrix A defined by:

T = Complex{Float128}
N = 64
A = Tridiagonal(rand(T, N - 1), rand(T, N), rand(T, N - 1))

The following eigensolvers do not work:

julia> using Arpack
julia> eigs(A; nev=round(Int, 0.1 * N), which=:LM, ritzvec=false)
ERROR: LoadError: StackOverflowError:
...

from the Arpack.jl package;

julia> using KrylovKit
julia> eigsolve(A, round(Int, 0.1 * N), :LM)
ERROR: LoadError: MethodError: no method matching hschur!(...

from the KrylovKit.jl package;

julia> using ArnoldiMethod
julia> partialschur(A; nev=round(Int, 0.1 * N), which=LM())
ERROR: LoadError: MethodError: no method matching gemv!(...

from the ArnoldiMethod.jl package.

Is there a way/package to compute a small proportion of the eigenvalues with extended precision?


Additional information:

  • I have used Float128 from the Quadmath.jl package, but I get the same errors with Double64 from DoubleFloats.jl and Float64x2 from MultiFloats.jl.
  • eigvals(Matrix(A)) from the standard library LinearAlgebra.jl works with extended precision, but here the size N is small for exposition purposes; in my application it might be as big as 10 000, and I do not need all the eigenvalues, only a small proportion of them. So if I can avoid the huge cost of computing all the eigenvalues, that would be great.
  • All the examples shown above work with T = Complex{Float64}.
  • I know about the powm eigensolver in IterativeSolvers.jl, but it only gives one eigenvalue, and I am interested in a small proportion of them (more than one).

Edit: formatting and added details.

You can get the job done with GenericLinearAlgebra.jl and ArnoldiMethod.jl:

julia> partialschur(A; nev=round(Int, 0.1 * N), which=LM())[1]
PartialSchur decomposition (Complex{Float128}) of dimension 6
eigenvalues:
6-element Vector{Complex{Float128}}:
 1.76732917570178181966857786046259500e+00 + 1.88566438666511933882975520031760772e+00im
 1.65431839639080928590829614759631355e+00 + 1.97529966730726403527537198010151792e+00im
 1.29189845962540662486402509909018053e+00 + 1.85730981985326261230642237981537783e+00im
 1.29393589581771217265696071928253209e+00 + 1.85391512578430311844047859807080534e+00im
 1.03405214172759353767334445901983764e+00 + 1.94335216348051256609276579410600025e+00im
 1.66370260978943467402009678221541037e+00 + 1.42582140642411450137051978349554524e+00im

It’s dog-slow, though: N = 1024 took ~3 minutes, versus 0.3 seconds for Complex{Float64}.
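If the eigenvectors are needed as well, ArnoldiMethod.jl can turn the partial Schur decomposition into Ritz pairs via partialeigen. A sketch, assuming the setup from the question (Quadmath.jl for Float128) and that GenericLinearAlgebra.jl is loaded as described above; the residual check at the end is just an illustration:

```julia
using LinearAlgebra, GenericLinearAlgebra, ArnoldiMethod, Quadmath

T = Complex{Float128}
N = 64

# Random complex tridiagonal matrix, as in the question
A = Tridiagonal(rand(T, N - 1), rand(T, N), rand(T, N - 1))

# Partial Schur decomposition for ~10% of the largest-magnitude eigenvalues
pschur, hist = partialschur(A; nev=round(Int, 0.1 * N), which=LM())

# Recover eigenvalues and eigenvectors from the partial Schur form
vals, vecs = partialeigen(pschur)

# Residual norm of each Ritz pair, as a quick accuracy check
maximum(norm(A * vecs[:, i] - vals[i] * vecs[:, i]) for i in axes(vecs, 2))
```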


If you convert your Tridiagonal matrix to a general Matrix, your code should work using GenericLinearAlgebra.eigvals:

using LinearAlgebra, GenericLinearAlgebra, Quadmath, DoubleFloats

complex_matrix(T, n) = rand(Complex{T}, n) * rand(Complex{T}, n)'
generic_tridiag(m) = Matrix(Tridiagonal(m))

n = 4
m128 = generic_tridiag(complex_matrix(Float128, n));
md64 = Complex{Double64}.(m128);

m128eigvals = eigvals(m128)
md64eigvals = eigvals(md64)

maximum(abs.(m128eigvals .- md64eigvals))

Have you tried KrylovKit?

Can you elaborate on how you use the GenericLinearAlgebra.jl package? From their GitHub I do not understand how to use it in my case.

Yes, I have already seen that eigvals from LinearAlgebra.jl works with extended precision (and you do not need the GenericLinearAlgebra.jl package for eigvals to work), but there are two main issues with this method:

  1. It transforms the tridiagonal matrix into a dense matrix, which becomes prohibitive as the matrix size grows.
  2. It computes all the eigenvalues, but I am only interested in a small proportion of them (between 3 and 10% of the matrix size). So if I could avoid the cost of computing all the eigenvalues, that would be great.

Yes, I tried KrylovKit.jl in my second example (I have edited my post to be clearer, hopefully).

It is possible that I am not using it properly for what I want to do. I have seen that you can give it the matrix-vector product instead of the matrix, but I did not manage to make it work with extended precision; I get the same error.

julia> using KrylovKit, LinearMaps
julia> nb_eig = round(Int, 0.1 * N)
julia> Amap = LinearMap{T}(x -> A * x, N)
julia> eigsolve(Amap, N, nb_eig, :LM)[1]
ERROR: LoadError: MethodError: no method matching hschur!(...

I think what you need in addition is using GenericSchur. At least this is what allows one to use ArnoldiMethod.jl with arbitrary float types.

julia> using ArnoldiMethod, GenericSchur, DoubleFloats

julia> T = Complex{Double64}
Complex{Double64}

julia> N = 64
64

julia> A = Tridiagonal(rand(T, N - 1), rand(T, N), rand(T, N - 1));

julia> pschur,hist = partialschur(A; nev=round(Int, 0.1 * N), which=LM());

You should open an issue, because it should be type-agnostic.

Thank you @fgerick for the answers.

I have found the main issue, which was that the ArnoldiMethod.jl package was pinned at version 0.1.0 because of BifurcationKit.jl. When I removed BifurcationKit.jl and updated ArnoldiMethod.jl to 0.2.0, it worked.

Actually, it even worked without GenericLinearAlgebra.jl and GenericSchur.jl. The following works:

using LinearAlgebra, ArnoldiMethod, Quadmath

T = Complex{Float128}
N = 64

A = Tridiagonal(rand(T, N - 1), rand(T, N), rand(T, N - 1))

pschur, hist = partialschur(A; nev=round(Int, 0.1 * N), which=LM())
display(pschur)
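As a sanity check at small N, the partial spectrum can be compared against the dense eigvals mentioned earlier in the thread. A sketch continuing from the code above; it assumes the PartialSchur result exposes its Ritz values as pschur.eigenvalues (consistent with the printed output earlier in the thread), and matches each one to its nearest dense eigenvalue to avoid ordering issues:

```julia
using LinearAlgebra, ArnoldiMethod, Quadmath

T = Complex{Float128}
N = 64

A = Tridiagonal(rand(T, N - 1), rand(T, N), rand(T, N - 1))
pschur, hist = partialschur(A; nev=round(Int, 0.1 * N), which=LM())

# Dense reference: all eigenvalues via the generic eigvals fallback
dense = eigvals(Matrix(A))

# Distance from each Ritz value to its nearest dense eigenvalue
err = maximum(minimum(abs(p - d) for d in dense) for p in pschur.eigenvalues)
```

For the target sizes (N around 10 000) the dense reference is of course exactly the cost being avoided, so this check only makes sense on small instances.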

Ok thanks, I will.
