I am new to the Julia language, and I’ve encountered a performance issue with an eigenvalue computation. Please feel free to point out any confusing statements or beginner mistakes!
I need to compute the largest magnitude eigenvalues (since they are complex) for a set of large, dense matrices (size ∼4^8×4^8) with DoubleFloat precision.
My current implementation is extremely slow!
using LinearAlgebra
using DoubleFloats
using GenericLinearAlgebra
using GenericSchur

function biggest_values(S::Matrix{ComplexDF64})::Double64
    λs = eigvals(S)                  # computes ALL eigenvalues — this is the bottleneck
    sort!(λs, by = abs, rev = true)  # sort by magnitude, largest first
    return abs(λs[2])                # |λ2|, the second-largest magnitude
end
I already know that the largest eigenvalue magnitude of my matrices is abs(λ1) = 1. What I need are the top two or three eigenvalues by magnitude, e.g., abs(λ2).
Is there an efficient way to calculate these few largest eigenvalues quickly while maintaining DoubleFloat precision?
You should probably replace the eigvals(S) call, which computes all 4^8 eigenvalues, with an iterative method that computes only the top few, for instance the power method from IterativeSolvers.jl.
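To make the idea concrete, here is a minimal hand-rolled power-iteration sketch (not the IterativeSolvers.jl API — `power_iteration` and its keywords are illustrative names). It only needs matrix–vector products and works for any eltype, including ComplexDF64:

```julia
using LinearAlgebra

# Minimal power iteration: repeatedly apply A and renormalize; the iterate
# converges toward the dominant eigenvector, and the Rayleigh quotient gives
# the corresponding eigenvalue estimate.
function power_iteration(A; tol = 1e-10, maxiter = 1000)
    x = normalize!(rand(eltype(A), size(A, 1)))  # random start vector in A's eltype
    λ = zero(eltype(A))
    for _ in 1:maxiter
        y = A * x
        λnew = dot(x, y)          # Rayleigh quotient x'Ax ≈ dominant eigenvalue
        normalize!(y)
        if abs(λnew - λ) < tol    # stop when the estimate stabilizes
            return λnew, y
        end
        x, λ = y, λnew
    end
    return λ, x
end
```

Convergence is geometric at rate |λ2/λ1|, so this is fast only when the top two magnitudes are well separated.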
Besides the algorithmic discussion, I’d also suggest you first try regular Float64 eltypes without GenericLinearAlgebra to check whether that runs “fast enough”. Your extra precision requirements will almost certainly make things slower than this “normal” baseline.
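A quick sanity check of that baseline might look like this (the size here is a small stand-in, not your actual 4^8):

```julia
using LinearAlgebra

n = 256                  # stand-in size; the real problem is 4^8 = 65536
A = randn(n, n)          # random dense Float64 test matrix
@time λs = eigvals(A)    # LAPACK path: the fastest case you can hope for
# If even the Float64 eigvals call is too slow at the real size, switching
# eltypes or libraries won't help — the algorithm has to change.
```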
Thank you! This method was very effective and fast! But when I want the third and fourth largest eigenvalue magnitudes, this method no longer seems to work. Is there an effective method for finding those?
Use the Arnoldi iteration, a Krylov method that can find several eigenvalues at once.
(The basic power method is the first iterative method that everyone learns in linear algebra, but it is mostly superseded by the superior Arnoldi method. Essentially, Arnoldi accelerates the power method by keeping a “memory” of the space of all the vectors in the iteration and searches this whole “Krylov subspace” for the best eigenvector estimates.)
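To illustrate the “memory” idea, here is a minimal, restart-free Arnoldi sketch (`arnoldi_ritz` is a hypothetical helper, not a package API). It builds an m-dimensional Krylov basis and takes the eigenvalues of the small Hessenberg matrix as Ritz estimates of A’s extremal eigenvalues; for DoubleFloat eltypes the final small eigvals call would need GenericSchur loaded, which you already have:

```julia
using LinearAlgebra

# Build an orthonormal Krylov basis Q of dimension m and the projected
# Hessenberg matrix H = Q'AQ; the eigenvalues of H (Ritz values) approximate
# A's extremal eigenvalues, assuming m ≪ size(A, 1) in real use.
function arnoldi_ritz(A, m; x0 = rand(eltype(A), size(A, 1)))
    n, T = size(A, 1), eltype(A)
    Q = zeros(T, n, m + 1)
    H = zeros(T, m + 1, m)
    Q[:, 1] = normalize(x0)
    for j in 1:m
        v = A * Q[:, j]
        for i in 1:j                      # modified Gram–Schmidt against the basis
            H[i, j] = dot(Q[:, i], v)
            v -= H[i, j] * Q[:, i]
        end
        β = norm(v)
        if β < 1e3 * eps(real(T))         # breakdown: an invariant subspace was found
            return eigvals(H[1:j, 1:j])
        end
        H[j + 1, j] = β
        Q[:, j + 1] = v / β
    end
    return eigvals(H[1:m, 1:m])           # Ritz values ≈ extremal eigenvalues
end
```

In practice you would use a maintained implementation (e.g. a restarted Arnoldi from a package) rather than this sketch, but the core loop is exactly this.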
Another thing you could do is to compute the eigenvalues in Float64 precision first, which is much faster, and then use these as initial guesses for shift-invert steps, or even Newton steps, though I don’t know if there is a good implementation of a Newton eigensolver for general nonsymmetric matrices lying around in Julia right now. (QuadGK.jl has a Newton eigensolver that it uses for arbitrary-precision eigensolves, starting with Float64 initial guesses, but its implementation is only for symmetric matrices.)
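A sketch of that Float64-first idea, with illustrative names (this is not a package API): take a cheap Float64 eigenvalue estimate and refine it with a few shift-inverted Rayleigh-quotient steps at full precision. For a ComplexDF64 matrix the generic LU fallback in LinearAlgebra handles the solve:

```julia
using LinearAlgebra

# Refine a rough eigenvalue guess λ0 (e.g. from a Float64 eigvals call) by
# inverse iteration with a Rayleigh-quotient shift, carried out in A's eltype.
function refine_eigenvalue(A, λ0; steps = 3)
    x = normalize!(ones(eltype(A), size(A, 1)))  # deterministic start; random also works
    λ = convert(eltype(A), λ0)                   # promote the Float64 guess to A's eltype
    for _ in 1:steps
        y = (A - λ * I) \ x       # shift-invert: amplifies the eigenvector near λ
        normalize!(y)
        λ = dot(y, A * y)         # Rayleigh-quotient update of the shift
        x = y
    end
    return λ, x
end
```

For non-normal matrices a two-sided (left/right eigenvector) Rayleigh quotient converges faster, but this one-sided version shows the structure.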