The same issue does not arise if I simply define my matrix as
julia> matrix2 = [2.0 3.0; 3.0 2.0]
How can I use linear algebra routines with matrices made out of arbitrary precision floating point numbers as entries?
Thank you very much in advance. I am new to Julia, and I would also appreciate being directed to the right place to request implementations of this sort.
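For context, the kind of call that fails is, schematically (a sketch, assuming the BigFloat matrix shown later in the thread):

using LinearAlgebra

matrix = BigFloat[2.0 3.0; 3.0 2.0]
eigen(matrix)   # MethodError: the stock eigen methods are LAPACK-backed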
What error do you get? Please post the full error message.
Are you certain you need arbitrary precision? Would a 128-bit float also be enough? It would be much more performant than true BigFloats. Quadmath.jl is one package supporting this.
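For example (a quick sketch; Float128 is the type Quadmath.jl exports):

using Quadmath

x = Float128(2.0)                # 128-bit floating-point number
A = Float128[2.0 3.0; 3.0 2.0]   # basic matrix arithmetic works out of the box
A * A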
Thank you for pointing me to the Quadmath package. It is certainly a help in applications where I have decided what precision to use and it fits inside 128 bits. I do want to be able to take exp(log(matrix)) for an arbitrary-precision matrix. The word "need" is strong, but certainly for classroom demonstration purposes I would like to be able to do everything I usually do with arbitrary-precision numbers, vectors, matrices, etc. Then later I may decide on a fixed, predefined precision for performance purposes.
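For reference, BigFloat precision can be adjusted at runtime; a quick sketch:

setprecision(BigFloat, 256) do   # 256 bits, roughly 77 decimal digits
    big"2.0" + big"3.0"
end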
I downloaded the Quadmath.jl package and ran the same routine with it; however, the method eigen still causes the same problem. It seems to be defined only for Float32 and Float64.
No I hadn't. Thank you very much for pointing me to the package GenericLinearAlgebra.jl.
However, despite the README looking like exactly what I want (linear algebra routines for BigFloat and Quaternion), when using it I encountered the same problem. This actually got me worried that I'm doing something wrong, so I copy below exactly what I wrote.
julia> using GenericLinearAlgebra
julia> matrix
2×2 Matrix{BigFloat}:
 2.0  3.0
 3.0  2.0
julia> GenericLinearAlgebra.eigen2(matrix)
ERROR: MethodError: no method matching eigen2(::Matrix{BigFloat})
Closest candidates are:
eigen2(::SymTridiagonal; tol, debug) at /home/ekiral/.julia/packages/GenericLinearAlgebra/z4KPO/src/eigenSelfAdjoint.jl:622
eigen2(::Hermitian) at /home/ekiral/.julia/packages/GenericLinearAlgebra/z4KPO/src/eigenSelfAdjoint.jl:624
eigen2(::Hermitian, ::Any) at /home/ekiral/.julia/packages/GenericLinearAlgebra/z4KPO/src/eigenSelfAdjoint.jl:624
...
Stacktrace:
[1] top-level scope
@ none:1
Am I using the wrong method? Is GenericLinearAlgebra.eigen2(matrix) the wrong syntax to write this in?
Just to clarify: Julia's eigen uses LAPACK under the hood, which only supports basic number types such as Float64. That's why BigFloats don't work by default. However, as pointed out above, the Julia ecosystem provides more type-generic pure-Julia implementations of eigensolvers that you can use.
AFAIR, you should be able to simply use eigen after loading the package. It adds methods for big floats to the generic eigen function. You might need to use Symmetric(matrix) or Hermitian(matrix), though.
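Something along these lines (a quick sketch, assuming a real symmetric matrix):

using LinearAlgebra, GenericLinearAlgebra

matrix = BigFloat[2.0 3.0; 3.0 2.0]
eigen(Hermitian(matrix))   # generic eigensolver provided by GenericLinearAlgebra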
As suggested by dlfivefifty above, after converting the matrix via
hmatrix = Hermitian(matrix)
I could simply use
eigen(hmatrix)
Thanks.
(Though I don't need to explicitly wrap matrices as Hermitian when I'm working with normal Float64 matrices, the GenericLinearAlgebra package seems to require it when I have arbitrary-precision matrices.)
No, that's not a bug. ishermitian tests for hermiticity in the value sense, not the type sense. No matter the type of a matrix, if it is Hermitian, ishermitian will / should return true. So the function behaves correctly.
The fact that you need to use Hermitian(matrix) in the eigen call is a requirement of GenericLinearAlgebra. Among other things, it only adds a method eigen(A::Hermitian{<:Real}). Perhaps one could think about relaxing the function signature / dispatch constraint and, based on ishermitian / issymmetric, adding a generic method that does the Hermitian(matrix) / Symmetric(matrix) type conversion automatically (i.e. for the user). Note that this is what LinearAlgebra does at some points, for example here. However, it is very natural in Julia to indicate special properties of your matrices at the type level to dispatch to specific implementations.
I think it's not a bug but a missing feature. GenericLinearAlgebra should overload eigen, check ishermitian, and return eigen(Hermitian(A)); otherwise throw an error "not implemented for non-Hermitian matrices".
I'm sure that if someone made a PR doing this it would get merged.
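Something like this hypothetical fallback, sketched (eigen_generic is a made-up name, not actual GenericLinearAlgebra code):

using LinearAlgebra

function eigen_generic(A::AbstractMatrix)
    ishermitian(A) && return eigen(Hermitian(A))   # wrap so dispatch finds the generic method
    error("not implemented for non-Hermitian matrices")
end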
Note the implementation of non-symmetric eigenvalue computation is pretty interesting. I'm not an expert, but I believe there's something called "Wilkinson's ad hoc shift", which I believe is Wilkinson's shift (which provably converges for SymTridiagonal) applied to matrices reduced to Hessenberg form instead of tridiagonal form. Unlike the symmetric case, it doesn't always converge. I believe LAPACK uses something similar… and there are examples that break it.
It would be fantastic if someone added this to GenericLinearAlgebra, if only because at the mathematical level it might lead to insight.
Note Mathematica takes a completely different route and reduces the problem to root finding.
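To make the shift concrete, here is a toy sketch of QR iteration with a Wilkinson shift in the symmetric tridiagonal case, where convergence is provable. This is illustrative only, not the GenericLinearAlgebra implementation:

using LinearAlgebra

# QR iteration with a Wilkinson shift on a symmetric tridiagonal matrix.
function wilkinson_qr_eigvals(T::SymTridiagonal; maxiter=10_000)
    H = Matrix(T)
    n = size(H, 1)
    tol = eps(eltype(H))
    while n > 1
        for _ in 1:maxiter
            # deflate once the trailing off-diagonal entry is negligible
            abs(H[n, n-1]) <= tol * (abs(H[n-1, n-1]) + abs(H[n, n])) && break
            # Wilkinson shift: eigenvalue of the trailing 2x2 block closest to H[n,n]
            δ = (H[n-1, n-1] - H[n, n]) / 2
            s = iszero(δ) ? one(δ) : sign(δ)
            μ = H[n, n] - s * H[n, n-1]^2 / (abs(δ) + hypot(δ, H[n, n-1]))
            Q, R = qr(H[1:n, 1:n] - μ * I)   # generic QR works for BigFloat
            H[1:n, 1:n] = R * Q + μ * I
        end
        n -= 1
    end
    return sort(diag(H))
end

# e.g. wilkinson_qr_eigvals(SymTridiagonal(big.([2.0, 2.0]), big.([3.0])))
# recovers values close to -1 and 5 for the 2x2 example from this thread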
The GenericSchur package includes translations of many of the LAPACK eigensystem routines to handle types like BigFloat, including Hermitian and non-symmetric cases, with piratical wrappers eigen! etc. (Diagonalization of Hermitian matrices is a special case of Schur decomposition; it was added in recent versions but not properly advertised in the top-level descriptions.)
I have tried to make it compatible with GenericLinearAlgebra (it preempts some methods by narrower parametrization).
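For example (a sketch; the eigen call below is what loading GenericSchur enables for BigFloat):

using LinearAlgebra, GenericSchur

A = big.([2.0 3.0; 3.0 2.0])
E = eigen(A)   # works for Matrix{BigFloat}; no Hermitian wrapper needed
E.values       # note: the values may come back as Complex{BigFloat}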
Eigenvalues do work now with arbitrary precision, and I donât need to have Symmetric or Hermitian matrices.
Now I have a follow-up: the reason I wanted to compute eigenvalues was that when I tried to apply exp(log(matrix)) with a BigFloat matrix, I got an error message that eigen wasn't available.
Now that eigen(matrix) is available, I wonder why exp(matrix) doesn't work. It had worked with GenericLinearAlgebra.jl (but admittedly only with Hermitian matrices, and after converting the type explicitly, as exp(Hermitian(matrix))).
What result do you expect? Elementwise exponentiation? If so, you'll have to broadcast the function to apply it to each element, like this: exp.(matrix).
Note the dot between the function name and the arguments - this is broadcasting.
Nope, I expect the matrix exponential given by $\exp(A) = \sum_{n = 0}^\infty \frac{A^n}{n!}$, which converges in general for a Banach-space element $A$, and in particular for matrices with the operator norm.
For a diagonalizable matrix $A = U D U^{-1}$ with $D = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$, its exponential is simply given as $e^{A} = U \operatorname{diag}(e^{\lambda_1}, \ldots, e^{\lambda_n}) U^{-1}$.
Therefore I understand why many of the Julia routines for this need the eigen method to work.
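In code, that diagonalization route looks roughly like this (a sketch for the real symmetric case; expm_via_eigen is a hypothetical helper, not a library function):

using LinearAlgebra, GenericLinearAlgebra

# e^A = U * diag(e^λ1, ..., e^λn) * U' when A = U * diag(λ1, ..., λn) * U'
function expm_via_eigen(A::AbstractMatrix{<:Real})
    E = eigen(Hermitian(A))
    return E.vectors * Diagonal(exp.(E.values)) * E.vectors'
end

expm_via_eigen(BigFloat[2.0 3.0; 3.0 2.0])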
And
exp(A)
does indeed give me a meaningful answer for standard (not BigFloat) matrices. I also get that
$\log(\exp(A)) \approx A$
in those cases, where log has been similarly defined.
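For reference, the Float64 round trip looks like this:

using LinearAlgebra

A = [2.0 3.0; 3.0 2.0]
log(exp(A)) ≈ A   # true: exp and log here are matrix functions, not elementwise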