Hi, so today I found out that apparently calling `eigen` on a Hermitian 3×3 `SMatrix{ComplexF64}` allocates quite a bit of memory, which I don't quite understand, since there are no allocations for a real matrix:
using StaticArrays, LinearAlgebra, BenchmarkTools;
a = Hermitian(SA[1.0 1+0im 0; 1-0im 1.0 0; 0 0 1]);
b = real(a);
@btime eigen($b); # 98.732 ns (0 allocations: 0 bytes)
@btime eigen($a); # 7.075 μs (13 allocations: 3.73 KiB)
What is the reason for these allocations and is there a way to avoid them?
There likely isn't a special method for complex `eigen` for `SMatrix`, so it is falling back to the generic one.
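One quick way to confirm which method gets hit (a diagnostic sketch, not from the original post) is `@which`:

```julia
using StaticArrays, LinearAlgebra

a = Hermitian(SA[1.0 1+0im 0; 1-0im 1.0 0; 0 0 1])
b = real(a)

# The real case should resolve to StaticArrays' specialized 3x3 method,
# while the complex case resolves to a generic fallback.
@which eigen(b)
@which eigen(a)
```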
Yes, it seems you're right: in the StaticArrays source it says:
# [...]
# TODO extend the method to complex hermitian
@inline function _eig(::Size{(3,3)}, A::LinearAlgebra.HermOrSym{T}, permute, scale) where {T <: Real}
That's a little unfortunate. I'll try my luck with `eigen!`, although I believe it does not remove all allocations. Do you know a good way to get rid of allocations when repeatedly computing eigenvalues of many small matrices?
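Not an answer from this thread, but one partial workaround is to reuse a single dense buffer across the loop, so only the factorization's own outputs allocate rather than a fresh `Matrix` per iteration; a sketch:

```julia
using StaticArrays, LinearAlgebra

# Hypothetical batch of small complex Hermitian matrices.
mats = [Hermitian(SA[1.0 1+0im 0; 1-0im 1.0 0; 0 0 1]) for _ in 1:100]

buf = Matrix{ComplexF64}(undef, 3, 3)  # reused workspace
for A in mats
    copyto!(buf, A)
    w = eigvals!(Hermitian(buf))  # overwrites buf; still allocates the output vector
    # ... use w ...
end
```

This reduces allocations rather than eliminating them, since the LAPACK-backed `eigvals!` still allocates its result.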
Some time ago I dealt with a similar problem, and found this paper (unpublished I think) describing a faster way of computing the eigendecomposition of 3x3 Hermitian matrices.
I wrote an implementation of this, feel free to use it:
https://gist.github.com/jipolanco/cd0be9867511148345144ddcd517d9ba
Note that I only tested this with real Hermitian matrices, but it’s also supposed to work for complex ones. Maybe it’s worth adding this to StaticArrays, though I had small issues with numerical stability (I think) so I wasn’t sure it was a good idea. But that was probably because some of my matrices were badly conditioned…
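For context, the classical closed-form route such papers build on is the trigonometric (Cardano) formula for the real symmetric 3×3 case. A minimal sketch (mine, not the gist's code), which needs extra care when the matrix is nearly a multiple of the identity:

```julia
using StaticArrays, LinearAlgebra

# Eigenvalues of a real symmetric 3x3 matrix via the trigonometric
# (Cardano) formula; a sketch, not the gist's implementation.
function eigvals3x3(A::SMatrix{3,3,Float64})
    q = tr(A) / 3
    p = sqrt(sum(abs2, A - q*I) / 6)
    p == 0 && return SVector(q, q, q)      # A is a multiple of I
    r = det((A - q*I) / p) / 2
    ϕ = acos(clamp(r, -1.0, 1.0)) / 3      # clamp guards against roundoff
    λmax = q + 2p * cos(ϕ)
    λmin = q + 2p * cos(ϕ + 2π/3)
    λmid = 3q - λmax - λmin                # trace identity
    return SVector(λmin, λmid, λmax)       # ascending, like eigvals
end

A = SMatrix{3,3}(2.0, 1.0, 0.0, 1.0, 3.0, 1.0, 0.0, 1.0, 2.0)
eigvals3x3(A) ≈ eigvals(Symmetric(A))  # true; the exact eigenvalues are 1, 2, 4
```

Note that closed forms like this can lose accuracy for ill-conditioned or nearly degenerate spectra, which is exactly the stability concern raised in this thread.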
Thanks for that, I will check it out! I will probably still have to use generic methods that allocate as little as possible, since larger matrices could also occur.
Nice move writing the eigenvalues in terms of (a-b)^2, etc. Seems like the authors of the paper should have done that if they were thinking about numerics at all. I don't have a counter-example, but without an error analysis I wouldn't trust the eigenvalue formulas for the 3\times 3 case. Even with your fix, it looks like there are a lot of places you could run into trouble.
It’s easy to demonstrate that the eigenvector formulas are unstable:
julia> A = Hermitian(SMatrix{3,3}([1.0 0 0; 0 2 0; 0 0 3]) + 1.0e-6 * randn(3,3))
3×3 Hermitian{Float64, SMatrix{3, 3, Float64, 9}}:
0.999999 5.77458e-7 -1.55338e-6
5.77458e-7 2.0 7.14066e-7
-1.55338e-6 7.14066e-7 3.0
julia> e, V = fast_eigen(A)
Eigen{Float64, Float64, SMatrix{3, 3, Float64, 9}, SVector{3, Float64}}
values:
3-element SVector{3, Float64} with indices SOneTo(3):
0.9999990593745025
2.000000993334634
2.999999739964082
vectors:
3×3 SMatrix{3, 3, Float64, 9} with indices SOneTo(3)×SOneTo(3):
1.0 -5.7768e-7 7.76991e-7
-5.77458e-7 -1.0 -7.14066e-7
7.76688e-7 7.14066e-7 -1.0
julia> norm(V'*V - I)
5.324478957430239e-10
julia> A*V - V*Diagonal(e)
3×3 SMatrix{3, 3, Float64, 9} with indices SOneTo(3)×SOneTo(3):
2.22045e-16 2.22374e-10 -6.07619e-10
-1.05879e-22 0.0 4.23516e-22
4.23516e-22 -2.11758e-22 0.0
Both the measure of numerical orthogonality \|V^T V - I\| and the residual \|AV - VD\| should be on the order of the unit roundoff for a stable algorithm. Here, they are off by 6 orders of magnitude. The eigenvectors are well conditioned in this case, since the corresponding eigenvalues are well separated.
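For comparison (my sketch, repeating the same experiment), the stable `eigen` path on the real Hermitian case brings both measures down to roundoff level:

```julia
using StaticArrays, LinearAlgebra

A = Hermitian(SMatrix{3,3}([1.0 0 0; 0 2 0; 0 0 3]) + 1.0e-6 * randn(3, 3))
e, V = eigen(A)

# Both measures should be on the order of 1e-16 for a backward-stable algorithm.
norm(V' * V - I)
norm(A * V - V * Diagonal(e))
```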