I am interested in computing a small proportion (between 3 and 10% of the matrix size) of the eigenvalues of a matrix in extended precision. As a toy example, take the matrix A defined by:
using LinearAlgebra, Quadmath

T = Complex{Float128}
N = 64
A = Tridiagonal(rand(T, N - 1), rand(T, N), rand(T, N - 1))
julia> using KrylovKit
julia> eigsolve(A, round(Int, 0.1 * N), :LM)
ERROR: LoadError: MethodError: no method matching hschur!(...
from the KrylovKit.jl package;
julia> using ArnoldiMethod
julia> partialschur(A; nev=round(Int, 0.1 * N), which=LM())
ERROR: LoadError: MethodError: no method matching gemv!(...
from the ArnoldiMethod.jl package.
Is there a way/package to compute a small proportion of the eigenvalues in extended precision?
Additional information:
I have used Float128 from the package Quadmath.jl, but I could have used Double64 from DoubleFloats.jl or Float64x2 from MultiFloats.jl; I get the same errors with all of them.
eigvals(Matrix(A)) from the standard library LinearAlgebra works in extended precision, but here the size N is small for exposition purposes; in my application it can be as large as 10 000, and I do not need all the eigenvalues, only a small proportion of them. So if I can avoid the huge cost of computing all of them, that would be great.
All the examples shown above work with T = Complex{Float64}.
I know about the eigensolver powm in IterativeSolvers.jl, but it only gives one eigenvalue, and I am interested in a small proportion (> 1) of them.
Yes, I have already seen that eigvals from LinearAlgebra works in extended precision (and you do not need the GenericLinearAlgebra.jl package for eigvals to work), but there are two main issues with this method:
It transforms the tridiagonal matrix into a dense matrix, which becomes prohibitive as the matrix size grows.
It computes all the eigenvalues, while I am only interested in a small proportion of them (between 3 and 10% of the matrix size). So if I could avoid the cost of computing all of them, that would be great.
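To make the dense fallback concrete, here is a minimal sketch of that approach (assuming, as noted above, that eigvals handles the extended-precision element type; the post-hoc selection of the largest-magnitude eigenvalues via partialsort is my own addition for illustration). It pays the full O(N²) memory and O(N³) time cost even though only ~10% of the eigenvalues are kept:

```julia
using LinearAlgebra, Quadmath

T = Complex{Float128}
N = 64
A = Tridiagonal(rand(T, N - 1), rand(T, N), rand(T, N - 1))

# Dense solve: computes all N eigenvalues in extended precision.
λ_all = eigvals(Matrix(A))

# Keep only the ~10% of largest magnitude, discarding the rest.
k = round(Int, 0.1 * N)
λ_lm = partialsort(λ_all, 1:k; by=abs, rev=true)
```

This is exactly the waste I would like to avoid at N ≈ 10 000: all N eigenvalues are computed and N - k of them are thrown away.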
Yes, I tried KrylovKit.jl in my second example (I have edited my post to hopefully make it clearer).
It is possible that I am not using it properly for what I want to do. I have seen that you can give it the matrix-vector product instead of the matrix, but I did not manage to make it work in extended precision; I get the same error.
julia> using KrylovKit, LinearMaps
julia> nb_eig = round(Int, 0.1 * N)
julia> Amap = LinearMap{T}(x -> A * x, N)
julia> eigsolve(Amap, N, nb_eig, :LM)[1]
ERROR: LoadError: MethodError: no method matching hschur!(...
I have found the main issue: the ArnoldiMethod.jl package was pinned at version 0.1.0 because of BifurcationKit.jl. When I removed BifurcationKit.jl and updated ArnoldiMethod.jl to 0.2.0, it worked.
Actually, it even works without GenericLinearAlgebra.jl and GenericSchur.jl. The following works:
using LinearAlgebra, ArnoldiMethod, Quadmath
T = Complex{Float128}
N = 64
A = Tridiagonal(rand(T, N - 1), rand(T, N), rand(T, N - 1))
pschur, hist = partialschur(A; nev=round(Int, 0.1 * N), which=LM())
display(pschur)
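In case it helps anyone landing here: the eigenvalues can then be read directly from the partial Schur decomposition, and eigenvectors recovered from the Schur vectors with partialeigen (both pschur.eigenvalues and partialeigen are part of the ArnoldiMethod.jl API; the residual check at the end is just a sanity test I added):

```julia
using LinearAlgebra, ArnoldiMethod, Quadmath

T = Complex{Float128}
N = 64
A = Tridiagonal(rand(T, N - 1), rand(T, N), rand(T, N - 1))
pschur, hist = partialschur(A; nev=round(Int, 0.1 * N), which=LM())

λ = pschur.eigenvalues              # the converged eigenvalues
vals, vecs = partialeigen(pschur)   # recover eigenvectors from the Schur vectors

# sanity check: residuals ‖A v - λ v‖ should be tiny in Float128 arithmetic
residual = maximum(norm(A * vecs[:, i] - vals[i] * vecs[:, i]) for i in eachindex(vals))
```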