Iteration of SVD factorization

I was surprised to see that iterating an SVD gives U, S, and V instead of U, S, and Vt. After all, it is U * Diagonal(S) * Vt that restores the factorized matrix.

Returning Vt would also make more sense to me from a technical point of view, as the algorithm produces Vt, whereas V has to be (lazily) calculated.

Note that because of this, one unfortunately can’t unpack an SVD like so:

U, S, Vt = svd(M)
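
For concreteness, here is a minimal sketch (using LinearAlgebra) of what the iteration actually yields and how the reconstruction then looks:

```julia
using LinearAlgebra

M = rand(4, 3)
F = svd(M)

# Destructuring the factorization gives U, S, V -- not Vt:
U, S, V = F
M ≈ U * Diagonal(S) * V'    # reconstruction needs the adjoint of V

# The transposed factor is still available as a property:
F.Vt == F.V'                # true
```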

Is there any deeper reason behind this choice, or would it make sense to change it? It would be a breaking change, though, wouldn’t it?

If it makes any difference, there is no performance penalty in the reconstruction if V is returned instead of V'. As to the choice of returning V, I suppose it may be just simplicity: the canonical reconstruction formula is USV' if U, S, V = svd(X).
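
As a rough illustration of why there is no penalty (arbitrary sizes): V is a lazy adjoint of the stored Vt, so taking V' unwraps straight back to the matrix the algorithm produced, and both reconstructions perform the same multiplications:

```julia
using LinearAlgebra

X = rand(200, 100)
F = svd(X)

V  = F.V     # lazy adjoint wrapper around the stored Vt, no copy made
Vt = F.Vt    # the matrix the algorithm actually computes

# Equivalent reconstructions, same cost:
X ≈ F.U * Diagonal(F.S) * V'
X ≈ F.U * Diagonal(F.S) * Vt
```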

This is true, but maybe I don’t want to reconstruct the matrix but rather perform operations on Vt directly.

I fail to see why USV' is better or more canonical than USVt. I’d say it’s more natural to expect svd(X) to unfold into the components which, when recombined, give back the original matrix. But maybe that’s a matter of taste.

You might as well ask why Hermitian eigenvector decompositions A = QΛQ' return the matrix Q whose columns are the eigenvectors rather than Q'. Of course, Q and Q' contain exactly the same information, but when thinking about vectors that form a basis it is conventional to put the basis vectors as the columns. With column-major storage, this also means that accessing a particular basis vector Q[:,k] involves contiguous memory access.
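
For comparison, a minimal sketch of that convention for a (real) Hermitian matrix:

```julia
using LinearAlgebra

A = Symmetric(rand(5, 5))    # real symmetric, i.e. Hermitian
Λ, Q = eigen(A)              # eigenvalues and eigenvectors (as columns of Q)

A ≈ Q * Diagonal(Λ) * Q'     # A = QΛQ'
q3 = Q[:, 3]                 # k-th basis vector: a contiguous column slice
```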

In the same way, both the U and V vectors of the SVD are basis vectors (and are in fact eigenvectors of AA' and A'A respectively), and one of the main ways in which you use the SVD is to examine the singular vectors corresponding to large singular values, so it is conventional to access them by columns. Moreover, the basis vectors U and V play analogous roles, so it would be odd to transpose one and not the other.
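
A short illustration of both points (arbitrary sizes; the singular values come back in descending order, so column 1 is the dominant one):

```julia
using LinearAlgebra

A = rand(6, 4)
U, S, V = svd(A)

# Singular vectors for the largest singular value, accessed by column:
u1, v1 = U[:, 1], V[:, 1]

# They are eigenvectors of AA' and A'A respectively, with eigenvalue S[1]^2:
A * A' * u1 ≈ S[1]^2 * u1
A' * A * v1 ≈ S[1]^2 * v1
```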
