# Eigen not working for arbitrary precision BigFloat

I am trying to use the function eigen on a matrix with BigFloat entries, but the method does not work.

julia> matrix = [BigFloat(2) BigFloat(3); BigFloat(3) BigFloat(2)]
julia> matrix
2×2 Matrix{BigFloat}:
 2.0  3.0
 3.0  2.0
julia> eigen(matrix)


produces an error.

The same error does not arise if I simply define my matrix as

julia> matrix2 = [2.0 3.0; 3.0 2.0]


How can I use linear algebra routines with matrices made out of arbitrary precision floating point numbers as entries?

Thank you very much in advance. I am new to Julia and would also appreciate being directed to the right place to request implementations of this sort.


Welcome!

Two points:

1. What error do you get? Please post the full error message.
2. Are you certain you need arbitrary precision? Would a 128-bit float also be enough? It would be much more performant than true BigFloats. Quadmath.jl is one package supporting this.

Have you tried using GenericLinearAlgebra.jl?


Thank you for the response. The error message I get is:

ERROR: MethodError: no method matching eigen!(::Matrix{Complex{BigFloat}}; permute=true, scale=true, sortby=LinearAlgebra.eigsortby)
Closest candidates are:
eigen!(::SymTridiagonal{var"#s832", V} where {var"#s832"<:Union{Float32, Float64}, V<:AbstractVector{var"#s832"}}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/tridiag.jl:280 got unsupported keyword arguments "permute", "scale", "sortby"
eigen!(::SymTridiagonal{var"#s832", V} where {var"#s832"<:Union{Float32, Float64}, V<:AbstractVector{var"#s832"}}, ::UnitRange) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/tridiag.jl:283 got unsupported keyword arguments "permute", "scale", "sortby"
eigen!(::SymTridiagonal{var"#s832", V} where {var"#s832"<:Union{Float32, Float64}, V<:AbstractVector{var"#s832"}}, ::Real, ::Real) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/tridiag.jl:288 got unsupported keyword arguments "permute", "scale", "sortby"
...
Stacktrace:
[1] eigen(A::Matrix{Complex{BigFloat}}; permute::Bool, scale::Bool, sortby::typeof(LinearAlgebra.eigsortby))
@ LinearAlgebra /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/eigen.jl:237
[2] eigen(A::Matrix{Complex{BigFloat}})
@ LinearAlgebra /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/eigen.jl:235
[3] top-level scope
@ none:1


Thank you for notifying me of the Quadmath package. It is certainly helpful in applications where I have already decided what precision to use and it fits inside 128 bits. However, I do want to be able to take exp(log(matrix)) for an arbitrary-precision matrix. The word “need” is strong, but certainly for classroom demonstration purposes I would like to be able to do everything I usually do with arbitrary-precision numbers, vectors, matrices, etc. Later I may decide on a fixed, predefined precision for performance reasons.

I downloaded the Quadmath.jl package and ran the same routine with it, but the eigen method still causes the same problem. It seems to be defined only for Float32 and Float64.

Thank you

No, I hadn’t. Thank you very much for pointing me to the package GenericLinearAlgebra.jl.

However, despite the README looking like exactly what I want (linear algebra routines for BigFloat and Quaternion), using it I encountered the same problem. This got me worried that I’m doing something wrong, so I copy below exactly what I wrote.

julia> using GenericLinearAlgebra
julia> matrix
2×2 Matrix{BigFloat}:
2.0  3.0
3.0  2.0
julia> GenericLinearAlgebra.eigen2(matrix)
ERROR: MethodError: no method matching eigen2(::Matrix{BigFloat})
Closest candidates are:
...
Stacktrace:
[1] top-level scope
@ none:1


Am I using the wrong method? Is GenericLinearAlgebra.eigen2(matrix) the wrong syntax to write this in?

Thank you very much.

Try with Symmetric(matrix)


Just to clarify: Julia’s eigen uses LAPACK under the hood, which only supports basic number types such as Float64. That’s why big floats don’t work by default. However, as pointed out above, the Julia ecosystem provides more type-generic, pure-Julia implementations of eigensolvers that you can use.


AFAIR, you should be able to simply use eigen after loading the package; it adds methods for big floats to the generic eigen function. You might need to use Symmetric(matrix) or Hermitian(matrix), though.
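For concreteness, a minimal sketch of the suggestion above (assuming GenericLinearAlgebra is installed; the exact eigenvalues of this matrix are -1 and 5):

```julia
using LinearAlgebra, GenericLinearAlgebra

# A symmetric BigFloat matrix; the Hermitian wrapper lets dispatch
# pick GenericLinearAlgebra's pure-Julia eigensolver.
A = BigFloat[2 3; 3 2]
F = eigen(Hermitian(A))
F.values    # eigenvalues -1 and 5, now at BigFloat precision
F.vectors   # the corresponding eigenvectors
```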


This didn’t work, but after setting

hmatrix = Hermitian(matrix)

and after loading the GenericLinearAlgebra package, I was able to use eigen as a method.

Both

julia> GenericLinearAlgebra.eigen2(hmatrix)

and even simply

julia> eigen(hmatrix)

worked, and they gave eigenvalues and eigenvectors to many decimal places.

Would you consider it a bug of the LinearAlgebra package that I would get

julia> using LinearAlgebra
julia> matrix = [2 3; 3 0]
julia> bfmatrix = collect(BigFloat, matrix)
julia> ishermitian(bfmatrix)
true


but I would still need to explicitly convert it with Hermitian(bfmatrix) to be able to use the eigen routine?


Thanks, yes after

using GenericLinearAlgebra

as suggested by dlfivefifty above
and after converting it

hmatrix = Hermitian(matrix)

I could simply use

eigen(hmatrix).

Thanks.
(Though I don’t need to explicitly make matrices Hermitian when I’m working with normal Float64 matrices, the GenericLinearAlgebra package seems to require it when I have arbitrary-precision matrices.)

No, that’s not a bug. ishermitian tests for hermiticity in the value sense, not the type sense. No matter the type of a matrix, if it is Hermitian, ishermitian will / should return true. So the function behaves correctly.

The fact that you need to use Hermitian(matrix) in the eigen function is because that’s a requirement from GenericLinearAlgebra. Among other things, it only adds a method eigen(A::Hermitian{<:Real}). Perhaps one could think about relaxing the function signature / dispatch constraint and, based on ishermitian / issymmetric, adding a generic method that does the Hermitian(matrix) / Symmetric(matrix) type transformation automatically (i.e. for the user). Note that this is what LinearAlgebra does at some points, for example here. However, it is very natural in Julia to indicate special properties of your matrices at the type level in order to dispatch to specific implementations.
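The relaxed dispatch described above could look roughly like this (generic_eigen is a hypothetical name, not part of any package):

```julia
using LinearAlgebra, GenericLinearAlgebra

# Hypothetical wrapper: check the property at the value level,
# then tag it at the type level and dispatch as usual.
function generic_eigen(A::AbstractMatrix)
    if ishermitian(A)
        eigen(Hermitian(A))
    elseif issymmetric(A)
        eigen(Symmetric(A))
    else
        error("eigen not implemented for non-Hermitian matrices of type $(eltype(A))")
    end
end
```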

In any case, not a bug.


I think it’s not a bug but a missing feature. GenericLinearAlgebra should overload eigen, check ishermitian, and return eigen(Hermitian(A)); otherwise throw an error: “not implemented for non-Hermitian matrices”.

I’m sure if someone made a PR doing this it would get merged.

Note that the implementation of non-symmetric eigenvalue computation is pretty interesting. I’m not an expert, but I believe there’s something called “Wilkinson’s ad hoc shift”, which I believe is the same as Wilkinson’s shift (which provably converges for SymTridiagonal) applied to Hessenberg-reduced instead of tridiagonalised matrices. Unlike the symmetric case, it doesn’t always converge. I believe LAPACK uses something similar… and there are examples that break it.

It would be fantastic if someone added this to GenericLinearAlgebra, if only that on the mathematical level it might lead to insight.

Note that Mathematica takes a completely different route and reduces the problem to root finding.


The GenericSchur package includes translations of many of the LAPACK eigensystem routines to handle types like BigFloat, including Hermitian and non-symmetric cases, with piratical wrappers eigen! etc. (Diagonalization of Hermitian matrices is a special case of Schur decomposition; it was added in recent versions but not properly advertised in the top-level descriptions.)

I have tried to make it compatible with GenericLinearAlgebra (it preempts some methods by narrower parametrization).
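If GenericSchur behaves as described above, a non-symmetric BigFloat example might look like this sketch:

```julia
using LinearAlgebra, GenericSchur

# Non-symmetric, so the Hermitian/Symmetric wrappers don't apply;
# loading GenericSchur should provide the needed eigen! methods.
A = BigFloat[2 3; 1 0]
F = eigen(A)   # eigenvalues 3 and -1 (roots of λ² - 2λ - 3)
```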


Perfect!

Thank you very much.

Eigenvalues do work now with arbitrary precision, and I don’t need to have Symmetric or Hermitian matrices.

Now I have a follow-up: the reason I wanted to compute eigenvalues was that when I tried to apply exp(log(matrix)) to a BigFloat matrix, I got an error message that eigen wasn’t available.

Now that eigen(matrix) is available, I wonder why exp(matrix) doesn’t work. It had worked with GenericLinearAlgebra.jl (but admittedly only with Hermitian matrices, and after converting the type explicitly as exp(Hermitian(matrix))).


What result do you expect? Elementwise exponentiation? If so, you’ll have to broadcast the function to apply it to each element, like this: exp.(matrix).

Note the dot between the function name and the arguments: this is broadcasting.

Nope, I expect the matrix exponential given by \sum_{n = 0}^\infty \frac{A^n}{n!}, which converges for any element A of a Banach algebra, and in particular for matrices with the operator norm.

For a diagonalizable matrix A = U D U^{-1} with D = \operatorname{diag} (\lambda_1, \lambda_2, \ldots, \lambda_n) , its exponential is simply given as e^{A} = U \operatorname{diag} (e^{\lambda_1}, \ldots, e^{\lambda_n}) U^{-1} .

Therefore I understand why many of the Julia routines for this need the eigen method to work.
And

exp(A)

does indeed give me a meaningful answer for standard (not BigFloat) matrices. I also get that

log(exp(A)) \approx A

in those cases, where log is defined similarly.
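As a toy illustration of the diagonalization formula above (assuming A is diagonalizable; production implementations of exp use scaling and squaring instead):

```julia
using LinearAlgebra

# e^A = U * diag(e^λ₁, …, e^λₙ) * U⁻¹ for diagonalizable A = U D U⁻¹
function exp_via_eigen(A)
    F = eigen(A)
    F.vectors * Diagonal(exp.(F.values)) / F.vectors
end
```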

Did you remember to make it a Hermitian matrix?

This works:

using LinearAlgebra, GenericLinearAlgebra
Q = rand(3,3) |> x -> Hermitian(x' * x)
exp(log(Hermitian(big.(Complex.(Q))))) ≈ Q


(Note that my Q is positive definite.)
