# Is repeated calculation of eigenvalues without overwriting the input matrix or allocating new arrays possible?

In my simulation, I need to repeatedly update a few entries of a square matrix `A` and then calculate the corresponding eigenvalues `λ`. Doing `λ = eigvals(A)` allocates new memory each time, while `λ = eigvals!(A)` overwrites `A`, which is not desired. Is there a way of avoiding both, i.e. preallocating the necessary arrays/memory once, after which the eigenvalues can be calculated in-place (something like `eigvals!(A, λ)`)?


`eigvals!()` just calls directly into LAPACK: https://github.com/JuliaLang/julia/blob/7e35ac60b3e52436912a85d29c5dffca4fbc6ad3/base/linalg/eigen.jl#L170 , so this would only be possible if LAPACK supported an out-of-place eigvals computation (`eigvals(A)` simply creates a copy of `A` and then runs `eigvals!()` on that copy).
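To illustrate the copy-then-mutate relationship, a minimal sketch (using the post-0.7 `LinearAlgebra` names; in the 0.6 era of this thread these functions lived in `Base`):

```julia
using LinearAlgebra  # Julia ≥ 0.7; not needed on 0.6

A = rand(4, 4)
λ1 = eigvals(A)         # allocates an internal copy of A, leaves A intact
λ2 = eigvals!(copy(A))  # the same thing written out explicitly
λ1 ≈ λ2                 # identical LAPACK computation underneath
```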

However, you could do:

```julia
B = similar(A)
for i in 1:100
    B .= A
    eigvals!(B)
end
```


which doesn't allocate any new matrices inside the loop, but does spend some time copying data from `A` to `B`.


Unfortunately, this will still allocate, because the eigenvalue routine allocates temporary work arrays on each call. Some work has been initiated toward providing non-allocating LAPACK wrappers; see https://github.com/JuliaLang/julia/pull/16274. We could make this work for one solver at a time. If we then also added workspace fields to the `Eigen` struct, it might be possible to reuse such a struct for repeated eigenvalue computations without allocation, e.g. we could make something like this possible:

```julia
F = eigfact(A)
for i in 1:100
    rand!(A)
    eigfact!(F, A)
end
```


In the current setup, this would compute the eigenvectors as well, so if we want to go this route we might want to consider a version of `eigfact` that only computes the values.
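One way to see the residual allocations being discussed (a rough sketch using `@allocated`; the exact byte count depends on the Julia/LAPACK version and matrix size):

```julia
using LinearAlgebra

A = rand(50, 50)
B = similar(A)

function step!(B, A)
    B .= A        # reuses B, so no matrix is allocated here
    eigvals!(B)   # but the LAPACK wrapper still allocates work arrays
end

step!(B, A)              # warm up (compilation)
@allocated step!(B, A)   # nonzero: the temporaries allocated per call
```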


@andreasnoack: Although I'm a bit surprised non-allocating LAPACK wrappers are not yet present in (a technical programming language like) Julia, it's good to see that work towards that has been initiated!

However, the pull request/discussion seems to have ended about a year ago. Do you know the current status of this issue?

I also think this would be a great thing to have, so eventually I might take it up myself. But if you don't feel like waiting an indefinite time, I suggest you try to implement it and submit a PR; I am sure people will be happy to help and answer any questions you might have!

(Compared to the cost of computing eigenvalues, the cost of a copy is completely negligible. Hence, I don't really see the need for an out-of-place eigvals computation in LAPACK.)
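A quick way to check this claim on your own machine (a sketch; `copyto!` is the post-0.7 spelling of this thread's `copy!`, and the ratio will vary with matrix size and BLAS build):

```julia
using LinearAlgebra

A = rand(100, 100); A = A + A'   # a 100×100 symmetric matrix
B = similar(A)

copyto!(B, A); eigvals(Symmetric(A))   # warm up both operations
tcopy = @elapsed copyto!(B, A)
teig  = @elapsed eigvals(Symmetric(A))
tcopy / teig   # typically a tiny fraction: the copy is negligible
```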


See also the PR in BandedMatrices.jl for adding LAPACK eigfact overrides: add eigfact methods for SymBandedMatrix by MikaelSlevinsky · Pull Request #35 · JuliaLinearAlgebra/BandedMatrices.jl · GitHub.

@andreasnoack also has a great unregistered package providing native Julia implementations of eigendecompositions (and more):
https://github.com/andreasnoack/LinearAlgebra.jl/

Testing the QL algorithm for symmetric matrices on positive-definite matrices of a few random sizes vs LAPACK:
https://gist.github.com/chriselrod/c36f061df86ff862dc027474c4853248
On one computer the crossover point happened at about n = 35, and on another somewhere between n = 133 and n = 150.
I suspect the former is more representative and that there's something wrong with the LAPACK build in the n > 133 case. Others are free to try.
Both were OpenBLAS. I intend to build Julia with MKL within the next week, and will rerun it then.

So, if your matrices are small enough, andreasnoack's implementation is likely to be faster than LAPACK. Given that, if you really want in-place eigenvector updating, you could modify the code there.
For example, within EigenSelfAdjoint, `full(::EigenQ)` allocates for the vectors.

stevengj is of course right that the time saved by not allocating will be fairly negligible.
For n = 100, the difference between `copy!` and `copy` was around 4 microseconds on my computer, while `eig(::Hermitian)` took about 2,300 microseconds.

Although for algorithms like Optim's NewtonTrustRegion, where you need to recalculate eigenvalues and eigenvectors of fairly similar matrices… could any performance be gained by somehow using the old eigenvectors as a seed / updating them?
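One classical way to exploit such a seed (a hedged sketch, not from this thread: Rayleigh-quotient iteration refining a single eigenpair of a symmetric matrix, started from the previous matrix's eigenvector):

```julia
using LinearAlgebra

# Rayleigh-quotient iteration: refine one eigenpair of a symmetric A,
# seeded with an (approximate) eigenvector x from a nearby matrix.
function rqi(A::Symmetric, x::AbstractVector; maxiter = 10, tol = 1e-12)
    x = x / norm(x)
    μ = dot(x, A * x)            # Rayleigh quotient of the seed
    for _ in 1:maxiter
        y = (A - μ * I) \ x      # shifted solve (allocates; just a sketch)
        x = y / norm(y)
        μnew = dot(x, A * x)
        abs(μnew - μ) ≤ tol * abs(μnew) && return μnew, x
        μ = μnew
    end
    return μ, x
end
```

Near a simple eigenpair the convergence is cubic, so a good seed is typically refined in two or three shifted solves; the caveat is that each solve costs a fresh factorization unless `A` has exploitable structure, so this only pays off for a few eigenpairs, not a full decomposition.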

Thanks for the feedback, everyone!

@stevengj, @Elrod: You are of course right that the cost of copying will (probably) be negligible compared to the actual eigenvalue calculation, so my question is not as important as I first imagined. Thanks for pointing this out!

@Elrod: I've had the same idea of using the previous eigenvalues/eigenvectors as seeds for the next calculation (as I suspect the eigenvalues will not change very much between iteration steps), but unfortunately I have not been able to figure out (by myself or by searching the internet) how to do this efficiently.