# Refining eigvals, svdvals

Out of the box, DoubleFloats work with these LinearAlgebra functions:

``````isdiag, ishermitian, isposdef, issymmetric, istril, istriu
norm, det, dot, tr, condskeel, logdet, logabsdet
diag, diagind
factorize, lu, lufact, qr, qrfact
``````

I want to support `eigvals` and `svdvals`. I can obtain Float64 approximations by converting the given matrix to Float64 elements, running `eigvals` or `svdvals` on that, and then converting the result back to DoubleFloat elements.

(a) is there an efficient way to refine the approximations to more accurate results?
(b) is there an efficient way to use the functions that work already to get `eigvals`, `svdvals`?
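As a sketch of that downcast-and-lift approach (the helper name `approx_eigvals` is hypothetical, written generically over the target element type so the same code would cover `Double64`):

```julia
using LinearAlgebra

# Hypothetical helper: compute eigenvalues in Float64, then lift the
# results to a wider element type T (e.g. Double64 from DoubleFloats).
# Note: this only widens the type; the values carry Float64 accuracy.
function approx_eigvals(::Type{T}, A::AbstractMatrix) where {T}
    vals = eigvals(Float64.(A))   # solve in Float64 (real spectrum assumed)
    return T.(vals)               # convert back; no extra digits gained yet
end
```

The same pattern would apply to `svdvals`; the point of question (a) is then whether the extra digits can be recovered by refining these starting values.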

Have you tried GenericSVD and/or Andreas' LinearAlgebra?

I have. Andreas' work is helpful, but its being named LinearAlgebra is problematic now that LinearAlgebra is the name of a standard library. GenericSVD complains that there is no method `LinearAlgebra.Bidiagonal` for the type. I suppose I could copy Andreas' stuff into a sub-module, but that is not good software development practice.

(a) is there an efficient way to refine the approximations to more accurate results?

For eigenvalues, yes, easily: just recompute the Rayleigh quotient in double float, at least for non-degenerate eigenpairs (you might have to do some orthogonalization in the case of degenerate ones). For eigenvectors of a dense matrix, not easily (at least that I can see); you'd just end up reimplementing an eigenvalue algorithm. Also, if someone uses double float, it's likely that they want to target an ill-conditioned problem, so you have to be extra careful in the implementation, which is why porting a tried-and-tested algorithm seems better than post-processing.
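A minimal sketch of that Rayleigh-quotient refinement (hypothetical helper name; assumes a non-degenerate eigenpair whose approximate eigenvector `x` has already been converted to the wide element type):

```julia
using LinearAlgebra

# Refine an eigenvalue estimate by evaluating the Rayleigh quotient
# x'Ax / x'x entirely in the wide element type T. For symmetric A, if x
# is accurate to roughly eps(Float64), the quotient is accurate to
# roughly eps(Float64)^2.
function refine_eigval(A::AbstractMatrix{T}, x::AbstractVector{T}) where {T}
    return dot(x, A * x) / dot(x, x)
end
```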

Even for dense nonsymmetric matrices, you can do a few iterations of inverse iteration, though convergence depends on having some separation in the eigenvalues.

Right, but you have to pick shifts and factor matrices, and I think you just end up rediscovering a degraded version of the QR algorithm.

Just use the Float64 eigenvalues as the shift. Provided there is only a single closest eigenvalue, it will converge exponentially fast.
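The shift-and-invert step suggested here could be sketched as follows (hypothetical helper name; one factorization per eigenvalue, reused across the iterations, all in the wide element type `T`):

```julia
using LinearAlgebra

# Inverse iteration in a wide type T, using a Float64 eigenvalue
# estimate (converted to T) as the shift mu. Assumes mu is closest to
# a single simple eigenvalue of A.
function inverse_iteration(A::AbstractMatrix{T}, mu::T, x::AbstractVector{T};
                           iters::Int = 3) where {T}
    F = lu(A - mu * I)               # one factorization, reused each step
    for _ in 1:iters
        x = F \ x                    # amplify the component along the
        x ./= norm(x)                # targeted eigenvector, then rescale
    end
    lam = dot(x, A * x) / dot(x, x)  # Rayleigh quotient of refined vector
    return lam, x
end
```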

Sure, but factoring a matrix for every eigenvalue is very expensive. Again, that's what the QR algorithm is for.

Andreas' LinearAlgebra.jl doesn't work for dense matrices. I think implementing an LAPACK-like dense QR algorithm might not be straightforward.

I do appreciate the info. If the answer is "implement a …", this will wait for someone more qualified with matrix algorithms to submit a PR (hence [b] of the question).

I'm not sure what this refers to. I believe my package has an LAPACK-like dense QR. Actually, it has a couple of versions.

How do I use your package without it getting mixed up with the stdlib LinearAlgebra (I need both)?

It doesn't appear to work (see conversation https://github.com/ajt60gaibb/FastGaussQuadrature.jl/issues/62#issuecomment-396225097):

``````julia> using LinearAlgebra
INFO: Recompiling stale cache file /Users/sheehanolver/.julia/lib/v0.6/LinearAlgebra.ji for module LinearAlgebra.

julia> eigvals(ones(BigFloat,10,10))
ERROR: MethodError: no method matching eigvals!(::Array{BigFloat,2}; permute=true, scale=true)
Closest candidates are:
eigvals!(::SymTridiagonal{#s268} where #s268<:Union{Float32, Float64}) at linalg/tridiag.jl:190 got unsupported keyword arguments "permute", "scale"
eigvals!(::SymTridiagonal{#s268} where #s268<:Union{Float32, Float64}, ::UnitRange) at linalg/tridiag.jl:196 got unsupported keyword arguments "permute", "scale"
eigvals!(::SymTridiagonal{#s268} where #s268<:Union{Float32, Float64}, ::Real, ::Real) at linalg/tridiag.jl:203 got unsupported keyword arguments "permute", "scale"
...
Stacktrace:
[1] #eigvals#44(::Bool, ::Bool, ::Function, ::Array{BigFloat,2}) at ./linalg/eigen.jl:194
[2] eigvals(::Array{BigFloat,2}) at ./linalg/eigen.jl:193
``````