Hi, is anybody aware of a way to compute the truncated SVD of an extended-precision (128-bit floats, in my case) matrix?
For the floats, I’d like to use either MultiFloats.jl (it seems faster in my testing, so it would be preferred) or DoubleFloats.jl. GenericLinearAlgebra.jl works to an extent but has two major problems that currently prevent me from using it: 1) it gets stuck on certain matrices (GitHub issue) and 2) it computes the entire, untruncated SVD, which is much more expensive.

I’m grateful for any hints or experiences you can share!

I saw this earlier post:

Hi. Given a matrix M, I would like to compute its SVD truncated to rank k. I think this is possible without doing the full SVD. For example, Python has this: https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html. The code I am currently using to do this is given below. The problem is that it computes the full SVD first and then throws out the extra rows/columns, which can be quite costly if k is much smaller than the rank of M. Is there a function in Julia to do this?
usi…
The recommendation is TSVD.jl .
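For context, the slice-after-full-SVD approach the quoted post describes can be sketched in plain `LinearAlgebra` (Float64 here, and names like `Uk`/`Ak` are just for illustration): the full factorization is paid for before anything is discarded, which is exactly what `tsvd(A, k)` from TSVD.jl avoids by computing only the leading k triplets.

```julia
using LinearAlgebra

# Sketch of truncation by slicing a full SVD -- the costly approach:
# svd(A) is computed in full before the trailing components are dropped.
A = randn(6, 4)
k = 2
F = svd(A)
Uk, sk, Vk = F.U[:, 1:k], F.S[1:k], F.V[:, 1:k]
Ak = Uk * Diagonal(sk) * Vk'   # best rank-k approximation of A

# By Eckart-Young, the spectral-norm error equals the (k+1)-th singular value:
@assert opnorm(A - Ak) ≈ F.S[k + 1]
```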

Thanks, but unfortunately TSVD.jl calls LinearAlgebra's `svd`, and so doesn't work for element types not supported by whatever BLAS Julia uses.

You should be able to load `GenericLinearAlgebra`, which would extend `svd` to work with arbitrary element types.


In particular, TSVD.jl calls `svd(::Bidiagonal)`, and it looks like GenericLinearAlgebra.jl provides a method for this.

Thank you! I'll give this a try and report back.

Ok, so I finally got around to trying this:

```
julia> using TSVD
julia> using GenericLinearAlgebra
julia> using DoubleFloats
julia> A = randn(Double64, 3, 5)
3×5 Matrix{Double64}:
1.3801363771568418 1.4219282164212106 0.13499940802352295 -0.16125764163952927 1.6214322111600634
0.04669843558325742 0.7916858087312778 -0.740636655635555 0.2954127906942305 -1.2628958884690384
-0.9117030073895186 0.5428515183127985 -0.10123196960714188 0.9868982725916127 0.2833604981860745
julia> tsvd(A, 2)
ERROR: SVD for lower triangular bidiagonal matrices isn't implemented yet.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:33
  [2] __svd!(B::LinearAlgebra.Bidiagonal{Double64, Vector{Double64}}, U::Matrix{Double64}, Vᴴ::Matrix{Double64}; tol::Double64, debug::Bool)
    @ GenericLinearAlgebra ~/.julia/packages/GenericLinearAlgebra/Bd8s0/src/svd.jl:196
  [3] _svd!(B::LinearAlgebra.Bidiagonal{Double64, Vector{Double64}}; tol::Double64, debug::Bool)
    @ GenericLinearAlgebra ~/.julia/packages/GenericLinearAlgebra/Bd8s0/src/svd.jl:232
  [4] #svd!#38
    @ ~/.julia/packages/GenericLinearAlgebra/Bd8s0/src/svd.jl:499 [inlined]
  [5] svd!
    @ ~/.julia/packages/GenericLinearAlgebra/Bd8s0/src/svd.jl:499 [inlined]
  [6] #svd#191
    @ /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/stdlib/v1.7/LinearAlgebra/src/bidiag.jl:214 [inlined]
  [7] svd
    @ /Applications/Julia-1.7.app/Contents/Resources/julia/share/julia/stdlib/v1.7/LinearAlgebra/src/bidiag.jl:214 [inlined]
  [8] biLanczos(A::Matrix{Double64}, nvals::Int64; maxiter::Int64, initvec::Vector{Double64}, tolconv::Double64, tolreorth::Double64, stepsize::Int64, debug::Bool)
    @ TSVD ~/.julia/packages/TSVD/q42gG/src/svd.jl:175
  [9] _tsvd(A::Matrix{Double64}, nvals::Int64; maxiter::Int64, initvec::Vector{Double64}, tolconv::Double64, tolreorth::Double64, stepsize::Int64, debug::Bool)
    @ TSVD ~/.julia/packages/TSVD/q42gG/src/svd.jl:236
 [10] #tsvd#7
    @ ~/.julia/packages/TSVD/q42gG/src/svd.jl:345 [inlined]
 [11] tsvd(A::Matrix{Double64}, nvals::Int64)
    @ TSVD ~/.julia/packages/TSVD/q42gG/src/svd.jl:345
 [12] top-level scope
    @ REPL[5]:1
```

The relevant part in `GenericLinearAlgebra.jl` is this:

```
if B.uplo === 'U'
    # actual code here, omitted for readability
else
    # Just transpose the matrix
    error("SVD for lower triangular bidiagonal matrices isn't implemented yet.")
end
```

Seems like this would be easiest to fix in `TSVD.jl` by, as the comment suggests, transposing the bidiagonal matrix before feeding it to `GenericLinearAlgebra.jl`, right?
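For what it's worth, the underlying identity is type-generic: the transpose of a lower-bidiagonal matrix is upper bidiagonal, and an SVD of the transpose yields one of the original with the roles of U and V swapped. A minimal sketch of that idea in plain `LinearAlgebra` with Float64 (not the actual TSVD.jl/GenericLinearAlgebra.jl fix, just the math it would rely on):

```julia
using LinearAlgebra

# Recover the SVD of a lower-bidiagonal matrix from the SVD of its
# (upper-bidiagonal) transpose by swapping U and V.
dv = [2.0, 3.0, 4.0]          # diagonal
ev = [0.5, 0.25]              # off-diagonal
B  = Bidiagonal(dv, ev, :L)   # lower bidiagonal: the unsupported case
Bt = Bidiagonal(dv, ev, :U)   # same data, upper bidiagonal == transpose(B)

F = svd(Matrix(Bt))           # SVD of the transpose: Bt = U * Σ * Vᴴ
# Since B = Bt', we get B = V * Σ * Uᴴ: U and V swap, Σ is unchanged.
U, s, V = F.V, F.S, F.U
@assert norm(U * Diagonal(s) * V' - Matrix(B)) < 1e-12
```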