SVD: Better default to gesvd! instead of gesdd!?

Currently, svd defaults to using LinearAlgebra.LAPACK.gesdd! under the hood. I vaguely mentioned some time ago that gesdd! gives incorrect results in some cases, which can be circumvented by explicitly calling LinearAlgebra.LAPACK.gesvd! instead. However, for large enough matrices gesvd! is much slower than gesdd!, which uses a divide-and-conquer approach. Whether the inaccuracies (which can be huge) are due to a bug in the implementation or to a shortcoming of the method itself, I don’t know.

In any case, this naturally raises the question of which method we should default to. It seems to me that we should prefer accuracy over speed and default to gesvd! in svd. Of course, we could introduce a keyword flag, divide or similar, that allows one to switch to the faster method. Just getting wrong results without any warning, as is the case right now, seems bad.
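To illustrate, here is a hypothetical sketch of such a switch; mysvd and the divide keyword are made-up names for illustration, not a concrete API proposal:

using LinearAlgebra

# Hypothetical: dispatch between the two LAPACK routines via a keyword,
# defaulting to the slower but apparently more reliable gesvd!
function mysvd(A::StridedMatrix{Float64}; full::Bool = false, divide::Bool = false)
    job = full ? 'A' : 'S'
    U, S, Vt = divide ? LinearAlgebra.LAPACK.gesdd!(job, copy(A)) :
                        LinearAlgebra.LAPACK.gesvd!(job, job, copy(A))
    return SVD(U, S, Vt)
end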

Note that Octave, which also used to default to gesdd, has recently made precisely this switch, as discussed in this issue.

Loosely adapting the example from this thread, here is an MWE that demonstrates the issue:

using LinearAlgebra

# Companion-like matrix of the coefficient vector c (as in the Octave thread):
# ones on the first subdiagonal, first row set to -c ./ c[1]
function compan(c::AbstractVector{T}) where T
    N = length(c)
    A = diagm(-1 => ones(T, N - 1))
    @inbounds for i in 1:N
        A[1, i] = -c[i] / c[1]
    end
    A
end


function svdvaldiff(N)
    A = compan(reverse(vcat(1, 1 ./ cumprod(1:N, dims=1))))
    # note the copy: gesdd!/gesvd! destroy their input
    svals_gesdd = LinearAlgebra.LAPACK.gesdd!('A', copy(A))[2]
    svals_gesvd = LinearAlgebra.LAPACK.gesvd!('A', 'A', copy(A))[2]
    return maximum(svals_gesdd - svals_gesvd)
end

function gesdd_gesvd_comparison(Nmax=40)
    for N in 1:Nmax
        println("N = $(N), max diff svdvals: ", svdvaldiff(N)) 
    end
end

This gives the following output:

julia> gesdd_gesvd_comparison()
N = 1, max diff svdvals: 0.0
N = 2, max diff svdvals: 0.0
N = 3, max diff svdvals: 0.0
N = 4, max diff svdvals: 0.0
N = 5, max diff svdvals: 0.0
N = 6, max diff svdvals: 0.0
N = 7, max diff svdvals: 0.0
N = 8, max diff svdvals: 0.0
N = 9, max diff svdvals: 0.0
N = 10, max diff svdvals: 0.0
N = 11, max diff svdvals: 0.0
N = 12, max diff svdvals: 0.0
N = 13, max diff svdvals: 0.0
N = 14, max diff svdvals: 0.0
N = 15, max diff svdvals: 0.0
N = 16, max diff svdvals: 0.0
N = 17, max diff svdvals: 0.0
N = 18, max diff svdvals: 0.0
N = 19, max diff svdvals: 0.0
N = 20, max diff svdvals: 0.0
N = 21, max diff svdvals: 0.0
N = 22, max diff svdvals: 0.0
N = 23, max diff svdvals: 0.0
N = 24, max diff svdvals: 0.0
N = 25, max diff svdvals: 749.7516745517077
N = 26, max diff svdvals: 166.767736950586
N = 27, max diff svdvals: 584.3097624800283
N = 28, max diff svdvals: 635.9955873150014
N = 29, max diff svdvals: 752.0372910883041
N = 30, max diff svdvals: 934.3056077045787
N = 31, max diff svdvals: 532.6321090336445
N = 32, max diff svdvals: 644.2861698543643
N = 33, max diff svdvals: 362.0563181932318
N = 34, max diff svdvals: 524.8519471908377
N = 35, max diff svdvals: 681.9577012165146
N = 36, max diff svdvals: 959.8179240905338
N = 37, max diff svdvals: 116.35110997021529
N = 38, max diff svdvals: 510.2091542800206
N = 39, max diff svdvals: 245.12683184534578
N = 40, max diff svdvals: 6.869915826281117

I agree. The Octave discussion you linked mentions a LAPACK issue, so they are working on the problem and may solve/clarify it by the time of the next release of Julia. Perhaps you could open an issue (for Julia) and cross-reference the above one.

I should probably clarify one more thing. The bug (or whatever it is) in gesdd! seems to be present only in full mode ('A'), as indicated in the Octave/LAPACK issues mentioned above. If one replaces the gesdd! call in the MWE by

svals_gesdd = LinearAlgebra.LAPACK.gesdd!('N', copy(A))[2]; # 'N' instead of 'A'

one gets

N = 1, max diff svdvals: 0.0
N = 2, max diff svdvals: 1.1102230246251565e-16
N = 3, max diff svdvals: 0.0
N = 4, max diff svdvals: 7.105427357601002e-15
N = 5, max diff svdvals: 2.842170943040401e-14
N = 6, max diff svdvals: 2.220446049250313e-16
N = 7, max diff svdvals: 0.0
N = 8, max diff svdvals: 2.220446049250313e-16
N = 9, max diff svdvals: 1.1102230246251565e-16
N = 10, max diff svdvals: 9.313225746154785e-10
N = 11, max diff svdvals: 0.0
N = 12, max diff svdvals: 0.0
N = 13, max diff svdvals: 5.551115123125783e-16
N = 14, max diff svdvals: 2.220446049250313e-16
N = 15, max diff svdvals: 0.0
N = 16, max diff svdvals: 0.0
N = 17, max diff svdvals: 2.220446049250313e-16
N = 18, max diff svdvals: 2.220446049250313e-16
N = 19, max diff svdvals: 32.0
N = 20, max diff svdvals: 1.1368683772161603e-13
N = 21, max diff svdvals: 1.1368683772161603e-13
N = 22, max diff svdvals: 2.220446049250313e-16
N = 23, max diff svdvals: 2.220446049250313e-16
N = 24, max diff svdvals: 2.220446049250313e-16
N = 25, max diff svdvals: 5.684341886080802e-14
N = 26, max diff svdvals: 4.336808689942018e-19
N = 27, max diff svdvals: 5.684341886080802e-14
N = 28, max diff svdvals: 1.1102230246251565e-16
N = 29, max diff svdvals: 2.2737367544323206e-13
N = 30, max diff svdvals: 1.1368683772161603e-13
N = 31, max diff svdvals: 5.684341886080802e-14
N = 32, max diff svdvals: 2.220446049250313e-16
N = 33, max diff svdvals: 1.1368683772161603e-13
N = 34, max diff svdvals: 1.1102230246251565e-16
N = 35, max diff svdvals: 9.094947017729282e-13
N = 36, max diff svdvals: 1.1102230246251565e-16
N = 37, max diff svdvals: 2.842170943040401e-14
N = 38, max diff svdvals: 1.1102230246251565e-16
N = 39, max diff svdvals: 1.4210854715202004e-14
N = 40, max diff svdvals: 2.220446049250313e-16

Hence, only Julia’s svd(A, full=true) is affected.

Done, see `svd` gives wrong results (upstream LAPACK bug) · Issue #31023 · JuliaLang/julia · GitHub.


That matrix appears to be pathologically badly conditioned: it has some entries around 1e16, and you’re looking at singular values of order unity. This means that a relative perturbation of the matrix of size 1e-16, as might result from machine precision, will appreciably change singular values around unity. So I’m not sure if this should be called a bug rather than the normal effects of machine precision; sacrificing performance in the non-pathological case seems premature.

Also, I can’t seem to reproduce this in pure Julia. Did I do something wrong?

N = 37
A = compan(reverse(vcat(1, 1 ./ cumprod(1:N, dims=1))))
norm(svd(A, full=true).S - svd(A).S)

gets me 0

I’m on mobile, but I think you shouldn’t take the norm; compare the singular values individually instead.

I took the matrix from the Octave thread. However, I have faced this issue in a concrete physics simulation before. Badly conditioned matrices do show up in real-world applications. In fact, I use SVDs in these scenarios precisely to avoid mixing of scales in matrix products and in matrix inversion.
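For illustration, here is a minimal sketch of that kind of scale-separated inversion via the SVD (assuming some square, badly conditioned A):

using LinearAlgebra

F = svd(A)
# invert factor by factor: the orthogonal factors are perfectly conditioned,
# so all the dangerous scales sit alone in the diagonal 1 ./ F.S
Ainv = F.Vt' * Diagonal(1 ./ F.S) * F.U'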

These matrices are indeed nasty. Neither LAPACK routine gives an accurate decomposition for N > 25 because of rounding errors in Float64. Fortunately, in Julia we have GenericSVD, which does a good job with BigFloat matrices.
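For example, a minimal sketch of obtaining reference singular values this way (slow, but accurate; it reuses the compan helper from above):

using LinearAlgebra, GenericSVD

N = 30
A = compan(reverse(vcat(1, 1 ./ cumprod(1:N, dims=1))))
# GenericSVD extends svd to arbitrary element types, so the BigFloat
# factorization sidesteps the Float64 rounding entirely
refvals = Float64.(svd(big.(A)).S)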


They are identical. norm(a - b) == 0 means a == b; no trickiness here.

Yes, badly conditioned matrices are a fact of life, and one has to deal with them to avoid loss of precision. But there’s not much more LAPACK can do here, I think: different methods give slightly different results, both compatible with the expected error bars. You’re likely to see similar differences across architectures, threading settings, etc.

Of course, my bad…

Turns out Julia is inconsistent: norm(svdvals(A) - svd(A).S) != 0, independent of the value of full.

However, if I look at the implementation, full should only switch gesdd!’s job flag between 'A' and 'S'; it is svdvals that uses 'N'. I don’t know what’s going on here.

I’m far from an expert on this, so someone more qualified than me should judge whether this is a bug or not. In my physics simulation, switching from gesdd! to gesvd! made all the difference: before the change everything was wrong, afterwards the results were correct. So it’s hard for me to be satisfied with “neither routine works” / “they are equally bad”.

Along these lines, how would you explain the following observation:

N = 37
A = compan(reverse(vcat(1, 1 ./ cumprod(1:N, dims=1))))

using GenericSVD
B = BigFloat.(A)
vals = svd(B).S

display("gesdd:")
# note the copy: gesdd! destroys its input, so without it the subsequent
# gesvd! call would operate on garbage
display(Float64.(LinearAlgebra.LAPACK.gesdd!('A', copy(A))[2] - vals))
println()

display("gesvd: ")
display(Float64.(LinearAlgebra.LAPACK.gesvd!('A', 'A', copy(A))[2] - vals))
println()

which gives

"gesdd:"
38-element Array{Float64,1}:
 -545.9257605455043 
  180.8251664583235 
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
    ⋮               
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
  133.97039557171414
   81.05435621882137
"gesvd: "
38-element Array{Float64,1}:
 -545.9257605455043    
   95.97867447307135   
    5.3787786348333455 
    5.156126423112092  
    5.006756486183942  
    4.871019085393854  
    4.461591733737439  
    4.181477907103048  
    3.980235090671698  
    3.84337889798058   
    3.7403253010761617 
    3.5582886967320233 
    3.3537939211414782 
    ⋮                  
    0.8753460484851945 
    0.7662725689736753 
    0.664655536722611  
    0.5026707900598488 
    0.40756319405669306
    0.3737408827688582 
   -0.0506366143480812 
   -0.13352353403151718
   -0.2954480014961395 
   -0.5336271002375245 
   -0.77719691671636   
   -0.655727106867289  

The latter looks much better to me.

The error bounds for SVD imply that the singular values are only good to within about ±350 for N=37 in Float64, so both routines are actually performing as advertised in that regard. (One might quibble about the largest s.v. in both cases on your machine, but it’s still close.) In this particular case, one is lucky that many singular values are closer for gesvd! (in fact on my machine they are even closer than your results, consistent with @antoine-levitt’s point above). But note that the relative errors for the second-largest and smallest s.v. are pretty bad for both, so common applications of the SVD would be misleading here. This just isn’t a good problem for Float64.
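For reference, that ±350 figure can be estimated directly. A minimal sketch, assuming the standard backward-error model for dense SVD routines, i.e. absolute errors of order p(N) * eps() * sigma_1:

using LinearAlgebra

N = 37
A = compan(reverse(vcat(1, 1 ./ cumprod(1:N, dims=1))))
sigma1 = opnorm(A)       # the 2-norm of A, i.e. the largest singular value
bound  = eps() * sigma1  # absolute uncertainty carried by every singular value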

I don’t mean to assert that gesdd! and gesvd! are generally equally reliable, just that this example is not conclusive. It sounds as if you have some interesting matrices in your applications; if you could post some examples they might help to elucidate the difference.


In fact, a similar case is discussed explicitly in http://www.netlib.org/lapack/lug/node97.html: “Thus large singular values (those near sigma_1) are computed to high relative accuracy and small ones may not be.” The surprise for me here is actually that gesvd performs better than expected, which is good but probably shouldn’t be relied upon.

What’s your exact application, i.e. what do you need to do to the matrix?


This is inconsistent with `\`, which uses LU instead of QR because it prefers speed over accuracy. So probably the default should remain the faster one.

I think that in general these decisions should be more nuanced, and depend on the degree of inaccuracy and the speed gain.


OK, I managed to revive my two-year-old code and extract an MWE that displays a qualitative difference between gesdd! and gesvd! when they are used to stabilize a chain of matrix multiplications.

Given a matrix B, the task is to calculate the matrix product chain B * B * B * B * ⋯ (eventually I want to invert this chain and calculate the determinant of the inverse, but I guess this doesn’t matter here).

If we perform the multiplication naively (left plot), we lose the lowest singular values due to mixing of scales. Note that the finite-precision artifacts (at around 60 multiplications) kick in when the lowest singular values reach roughly the floating-point precision limit.

To avoid this issue, we stabilize the procedure by computing an SVD in every iteration, separating the scales. However, only with gesvd! (right plot) do we see the expected result; with gesdd! (middle plot) the problem persists.

MWE:

using PyCall, PyPlot, LinearAlgebra
@pyimport matplotlib.ticker as ticker

B = [0.9631774046507462 0.048118777932499635 0.002403935950466714 0.04811877793249931 0.09599796038960963 0.004795902100335193 0.00023959547538980293 0.004795902100335332 0.00956792420011803 0.00047799794475834705 2.3880000553206727e-5 0.00047799794475778125 0.0959979603896096 0.004795902100335486 0.0002395954753902608 0.004795902100335878 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.048118777932499635 0.9631774046507456 0.04811877793249987 0.002403935950467196 0.004795902100335861 0.09599796038960962 0.0047959021003353515 0.0002395954753906275 0.0004779979447579591 0.00956792420011812 0.0004779979447577643 2.388000055312064e-5 0.0047959021003361885 0.09599796038960963 0.0047959021003355865 0.00023959547538973634 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.002403935950466714 0.04811877793249987 0.9631774046507452 0.048118777932499295 0.00023959547538959772 0.0047959021003353254 0.0959979603896097 0.004795902100335392 2.388000055344459e-5 0.0004779979447581358 0.009567924200118488 0.0004779979447580419 0.0002395954753897907 0.004795902100335456 0.09599796038961016 0.004795902100335967 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.04811877793249931 0.002403935950467196 0.048118777932499295 0.9631774046507461 0.0047959021003360055 0.0002395954753902414 0.004795902100335805 0.095997960389609 0.0004779979447578188 2.3880000553252446e-5 0.00047799794475820914 0.00956792420011871 0.0047959021003361755 0.00023959547539000118 0.0047959021003356125 0.09599796038960898 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.09599796038960963 0.004795902100335861 0.00023959547538959772 0.0047959021003360055 0.9631774046507459 0.04811877793249995 0.0024039359504669402 0.04811877793249928 0.09599796038960942 0.004795902100335522 0.00023959547539085065 0.004795902100336073 0.009567924200118628 0.0004779979447581598 2.388000055308229e-5 0.0004779979447580583 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.004795902100335193 0.09599796038960962 0.0047959021003353254 0.0002395954753902414 0.04811877793249995 0.9631774046507461 0.04811877793250019 0.0024039359504671003 0.0047959021003356785 0.09599796038960914 0.004795902100335187 0.00023959547539012586 0.000477997944758397 0.009567924200118712 0.0004779979447579929 2.388000055378612e-5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.00023959547538980293 0.0047959021003353515 0.0959979603896097 0.004795902100335805 0.0024039359504669402 0.04811877793250019 0.9631774046507459 0.048118777932499454 0.00023959547538995215 0.004795902100335738 0.09599796038960945 0.004795902100335804 2.388000055351635e-5 0.00047799794475794286 0.009567924200118543 0.00047799794475805626 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.004795902100335332 0.0002395954753906275 0.004795902100335392 0.095997960389609 0.04811877793249928 0.0024039359504671003 0.048118777932499454 0.9631774046507461 0.00479590210033561 0.0002395954753902095 0.0047959021003353905 0.0959979603896089 0.0004779979447580881 2.3880000552118516e-5 0.0004779979447578195 0.009567924200117978 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.00956792420011803 0.0004779979447579591 2.388000055344459e-5 0.0004779979447578188 0.09599796038960942 0.0047959021003356785 0.00023959547538995215 0.00479590210033561 0.9631774046507453 0.048118777932500585 0.002403935950466673 0.0481187779325001 0.09599796038960902 0.004795902100335514 0.00023959547538961477 
0.004795902100335855 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.00047799794475834705 0.00956792420011812 0.0004779979447581358 2.3880000553252446e-5 0.004795902100335522 0.09599796038960914 0.004795902100335738 0.0002395954753902095 0.048118777932500585 0.9631774046507456 0.04811877793249896 0.002403935950467324 0.004795902100335815 0.09599796038960987 0.004795902100335308 0.00023959547538959935 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 2.3880000553206727e-5 0.0004779979447577643 0.009567924200118488 0.00047799794475820914 0.00023959547539085065 0.004795902100335187 0.09599796038960945 0.0047959021003353905 0.002403935950466673 0.04811877793249896 0.9631774046507457 0.048118777932499086 0.00023959547539017492 0.0047959021003355804 0.09599796038960905 0.004795902100335983 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.00047799794475778125 2.388000055312064e-5 0.0004779979447580419 0.00956792420011871 0.004795902100336073 0.00023959547539012586 0.004795902100335804 0.0959979603896089 0.0481187779325001 0.002403935950467324 0.048118777932499086 0.963177404650746 0.004795902100335326 0.00023959547538994093 0.004795902100335743 0.09599796038960902 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.0959979603896096 0.0047959021003361885 0.0002395954753897907 0.0047959021003361755 0.009567924200118628 0.000477997944758397 2.388000055351635e-5 0.0004779979447580881 0.09599796038960902 0.004795902100335815 0.00023959547539017492 0.004795902100335326 0.9631774046507464 0.048118777932499905 0.0024039359504668153 0.04811877793249932 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.004795902100335486 0.09599796038960963 0.004795902100335456 0.00023959547539000118 0.0004779979447581598 0.009567924200118712 0.00047799794475794286 2.3880000552118516e-5 0.004795902100335514 0.09599796038960987 0.0047959021003355804 0.00023959547538994093 0.048118777932499905 0.963177404650745 0.048118777932499884 0.002403935950466461 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.0002395954753902608 0.0047959021003355865 0.09599796038961016 0.0047959021003356125 2.388000055308229e-5 0.0004779979447579929 0.009567924200118543 0.0004779979447578195 0.00023959547538961477 0.004795902100335308 0.09599796038960905 0.004795902100335743 0.0024039359504668153 0.048118777932499884 0.9631774046507459 0.048118777932499315 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.004795902100335878 0.00023959547538973634 0.004795902100335967 0.09599796038960898 0.0004779979447580583 2.388000055378612e-5 0.00047799794475805626 0.009567924200117978 0.004795902100335855 0.00023959547538959935 0.004795902100335983 0.09599796038960902 0.04811877793249932 0.002403935950466461 0.048118777932499315 0.9631774046507473 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.9631774046507459 -0.0959979603896101 0.009567924200118063 -0.09599796038960919 -0.04811877793249977 0.0047959021003357106 -0.0004779979447580722 0.004795902100335954 0.0024039359504662056 -0.00023959547538962708 2.38800005536094e-5 -0.00023959547539008502 -0.04811877793249954 0.004795902100335876 -0.0004779979447581868 0.004795902100335842; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0959979603896101 0.9631774046507463 -0.09599796038960923 0.00956792420011795 0.00479590210033555 -0.04811877793250019 0.004795902100335323 -0.0004779979447578979 
-0.00023959547539012795 0.0024039359504670526 -0.00023959547538991325 2.3880000553715824e-5 0.004795902100335467 -0.04811877793249935 0.004795902100335896 -0.00047799794475811004; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.009567924200118063 -0.09599796038960923 0.963177404650746 -0.09599796038960962 -0.0004779979447578395 0.004795902100335537 -0.04811877793249961 0.004795902100335916 2.3880000553300117e-5 -0.00023959547539028687 0.0024039359504665187 -0.00023959547538977485 -0.00047799794475783845 0.004795902100335813 -0.04811877793249962 0.004795902100335811; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.09599796038960919 0.00956792420011795 -0.09599796038960962 0.9631774046507457 0.00479590210033584 -0.00047799794475798536 0.004795902100336278 -0.048118777932499524 -0.00023959547538984473 2.388000055350761e-5 -0.0002395954753899119 0.0024039359504664567 0.004795902100335905 -0.00047799794475835626 0.004795902100336193 -0.048118777932499614; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.04811877793249977 0.00479590210033555 -0.0004779979447578395 0.00479590210033584 0.9631774046507465 -0.09599796038960896 0.009567924200117685 -0.09599796038960907 -0.048118777932499954 0.0047959021003352465 -0.00047799794475821906 0.004795902100336257 0.0024039359504666635 -0.00023959547539018227 2.388000055351761e-5 -0.0002395954753895482; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0047959021003357106 -0.04811877793250019 0.004795902100335537 -0.00047799794475798536 -0.09599796038960896 0.9631774046507452 -0.09599796038960884 0.00956792420011878 0.004795902100335861 -0.04811877793250033 0.004795902100335711 -0.00047799794475756154 -0.00023959547538969806 0.002403935950466428 -0.00023959547538983226 2.3880000553934592e-5; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0004779979447580722 0.004795902100335323 -0.04811877793249961 0.004795902100336278 0.009567924200117685 -0.09599796038960884 0.9631774046507464 -0.09599796038960885 -0.0004779979447580733 0.004795902100335925 -0.04811877793250011 0.004795902100335748 2.388000055362013e-5 -0.00023959547539026475 0.0024039359504664805 -0.00023959547538962662; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.004795902100335954 -0.0004779979447578979 0.004795902100335916 -0.048118777932499524 -0.09599796038960907 0.00956792420011878 -0.09599796038960885 0.963177404650745 0.004795902100335986 -0.0004779979447581103 0.004795902100335992 -0.04811877793249953 -0.00023959547539023258 2.3880000553186046e-5 -0.0002395954753903041 0.002403935950466679; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0024039359504662056 -0.00023959547539012795 2.3880000553300117e-5 -0.00023959547538984473 -0.048118777932499954 0.004795902100335861 -0.0004779979447580733 0.004795902100335986 0.9631774046507452 -0.09599796038960969 0.009567924200118537 -0.09599796038960912 -0.04811877793249956 0.00479590210033607 -0.0004779979447581861 0.00479590210033581; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.00023959547538962708 0.0024039359504670526 -0.00023959547539028687 2.388000055350761e-5 0.0047959021003352465 -0.04811877793250033 0.004795902100335925 -0.0004779979447581103 -0.09599796038960969 0.9631774046507461 -0.0959979603896091 0.009567924200117853 0.004795902100335603 -0.04811877793249963 0.00479590210033595 -0.00047799794475808386; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.38800005536094e-5 
-0.00023959547538991325 0.0024039359504665187 -0.0002395954753899119 -0.00047799794475821906 0.004795902100335711 -0.04811877793250011 0.004795902100335992 0.009567924200118537 -0.0959979603896091 0.9631774046507453 -0.09599796038960987 -0.0004779979447580813 0.004795902100335885 -0.048118777932499204 0.004795902100335838; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.00023959547539008502 2.3880000553715824e-5 -0.00023959547538977485 0.0024039359504664567 0.004795902100336257 -0.00047799794475756154 0.004795902100335748 -0.04811877793249953 -0.09599796038960912 0.009567924200117853 -0.09599796038960987 0.9631774046507457 0.004795902100335232 -0.00047799794475815053 0.004795902100336965 -0.04811877793249964; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.04811877793249954 0.004795902100335467 -0.00047799794475783845 0.004795902100335905 0.0024039359504666635 -0.00023959547538969806 2.388000055362013e-5 -0.00023959547539023258 -0.04811877793249956 0.004795902100335603 -0.0004779979447580813 0.004795902100335232 0.9631774046507473 -0.09599796038960959 0.009567924200117964 -0.0959979603896097; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.004795902100335876 -0.04811877793249935 0.004795902100335813 -0.00047799794475835626 -0.00023959547539018227 0.002403935950466428 -0.00023959547539026475 2.3880000553186046e-5 0.00479590210033607 -0.04811877793249963 0.004795902100335885 -0.00047799794475815053 -0.09599796038960959 0.9631774046507461 -0.09599796038960948 0.00956792420011772; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 -0.0004779979447581868 0.004795902100335896 -0.04811877793249962 0.004795902100336193 2.388000055351761e-5 -0.00023959547538983226 0.0024039359504664805 -0.0002395954753903041 -0.0004779979447581861 0.00479590210033595 -0.048118777932499204 0.004795902100336965 0.009567924200117964 -0.09599796038960948 0.9631774046507465 -0.09599796038960973; 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.004795902100335842 -0.00047799794475811004 0.004795902100335811 -0.048118777932499614 -0.0002395954753895482 2.3880000553934592e-5 -0.00023959547538962662 0.002403935950466679 0.00479590210033581 -0.00047799794475808386 0.004795902100335838 -0.04811877793249964 -0.0959979603896097 0.00956792420011772 -0.09599796038960973 0.9631774046507472]

function calc_product_chain(B, N)
    bs = size(B, 1)
    R = Matrix{Float64}(I, bs, bs)

    svs = zeros(bs, N)   # log singular values after each multiplication
    for k in 1:N
        R = B * R        # naive product: the scales mix freely
        S = LinearAlgebra.LAPACK.gesvd!('A', 'A', copy(R))[2]
        svs[:, k] = log.(S)
    end
    return (R, svs)
end

function calc_product_chain_stabilized(B, N, svdmethod)
    bs = size(B, 1)
    U = Matrix{Float64}(I, bs, bs)
    S = ones(bs)
    Vt = Matrix{Float64}(I, bs, bs)

    svs = zeros(bs, N)   # log singular values after each multiplication
    for k in 1:N
        # keep the scales isolated in the diagonal S at every step
        U = (B * U) * Diagonal(S)
        U, S, Vtnew = svdmethod(U)
        Vt = Vtnew * Vt
        svs[:, k] = log.(S)
    end
    return (U, S, Vt, svs)
end

function plot_product_chain_svs(B, N)
  T = calc_product_chain(B, N)
  svs = T[2]
  fig = figure(figsize=(20,7))
  b = fig[:add_subplot](131)
  b[:yaxis][:set_major_locator](ticker.MaxNLocator(symmetric=true))
  b[:plot](copy(svs[:,:]'))
  b[:set_xlabel]("# multiplications")
  b[:set_ylabel]("log singular values of matrix product chain")
  b[:set_title]("naive")
    
  T = calc_product_chain_stabilized(B, N, x -> LinearAlgebra.LAPACK.gesdd!('A', x))
  svs = T[4]
  c = fig[:add_subplot](132, sharey=b, sharex=b)
  c[:plot](copy(svs[:,:]'))
  setp(c[:get_yticklabels](), visible=false)
  c[:set_xlabel]("# multiplications")
  c[:set_title]("stabilized, gesdd!")
    
  T = calc_product_chain_stabilized(B, N, x -> LinearAlgebra.LAPACK.gesvd!('A', 'A', x))
  svs = T[4]
  c = fig[:add_subplot](133, sharey=b, sharex=b)
  c[:plot](copy(svs[:,:]'))
  setp(c[:get_yticklabels](), visible=false)
  c[:set_xlabel]("# multiplications")
  c[:set_title]("stabilized, gesvd!")

  # tight_layout()
  subplots_adjust(wspace = 0.0)
  show()
  println(maximum(svs))
  nothing
end

plot_product_chain_svs(B, 150)

Your matrix is symmetric and you’re interested in repeated multiplications, so don’t you want an eigenvalue decomposition rather than an SVD? I’m not sure I understand your code but it looks like you’re assuming that U==Vt, which is not the case e.g. for negative eigenvalues.
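For a genuinely symmetric B, a minimal sketch of that eigendecomposition route (one factorization yields every power, with the scales kept diagonal):

using LinearAlgebra

F = eigen(Symmetric(B))  # B == F.vectors * Diagonal(F.values) * F.vectors'
Bpow(k) = F.vectors * Diagonal(F.values .^ k) * F.vectors'
# the singular values of B^k are then abs.(F.values) .^ k, no SVD needed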


This is a simplified example taken out of context. I’m not assuming U == Vt. Most of the time my B will be complex-valued and not symmetric/Hermitian. But I observe the qualitative difference even in this simpler case, which is why I simplified.

On a side note, I ended up using QR decompositions eventually, mainly for speed reasons.
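For reference, here is a minimal sketch of what such a QR-based stabilization can look like; calc_product_chain_qr is a made-up name for illustration, not the actual simulation code:

using LinearAlgebra

# Same idea as calc_product_chain_stabilized, with QR in place of SVD:
# maintain B^k == Q * Diagonal(D) * T with Q orthogonal and all the
# scale separation confined to the positive diagonal D.
function calc_product_chain_qr(B, N)
    bs = size(B, 1)
    Q = Matrix{Float64}(I, bs, bs)
    D = ones(bs)
    T = Matrix{Float64}(I, bs, bs)
    for k in 1:N
        F = qr((B * Q) * Diagonal(D))
        Q = Matrix(F.Q)
        D = abs.(diag(F.R))              # assumes diag(F.R) has no exact zeros
        T = Diagonal(1 ./ D) * F.R * T   # unit-scale upper-triangular factor
    end
    return (Q, D, T)
end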

On looking deeper, I see that LAPACK also has a Jacobi method for the SVD (as mentioned in Calling Lapack's Jacobi SVD - dgesvj). This approach is not mentioned in the Users’ Guide, but it (often?) gives singular values with elementwise relative accuracy close to unit roundoff. It works for all of the singular values of the companion matrices described above until N > 64, where things are so bad that NaNs are generated.
