# Should ::SVD \ ::Vector fall back on pinv?

In the back of my mind, I was wrongly assuming that `svd(A) \ b` would fall back on an efficient `pinv(A) * b`. For instance, I find it surprising that

```julia
julia> qr(zeros(3,3), Val(true)) \ ones(3)
3-element Array{Float64,1}:
 0.0
 0.0
 0.0
```

gives the expected result, while

```julia
julia> svd(zeros(3,3)) \ ones(3)
3-element Array{Float64,1}:
 NaN
 NaN
 NaN
```

doesn’t. `SVD` is supposed to be the factorization most robust to singularity of `A`, and I see little purpose in calling `\` on an SVD other than to benefit from this immunity to singular and near-singular matrices. Is this a reasonable feature request?
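For concreteness, here is a minimal sketch of what such a fallback could look like. `pinv_solve` is a hypothetical helper, not part of Base, and its tolerance is modeled loosely on `pinv`'s default cutoff rather than the exact formula:

```julia
using LinearAlgebra

# Hypothetical helper (not in Base): solve in the minimum-norm least-squares
# sense from an existing SVD by zeroing the reciprocals of small singular
# values, much as pinv does.
function pinv_solve(F::SVD, b::AbstractVector; rtol = length(F.S) * eps(eltype(F.S)))
    tol = rtol * maximum(F.S)
    Sinv = [s > tol ? inv(s) : zero(s) for s in F.S]
    return F.V * (Sinv .* (F.U' * b))
end

pinv_solve(svd(zeros(3, 3)), ones(3))  # returns zeros(3) instead of NaNs
```

With something like this, the singular example above would return the minimum-norm solution `[0.0, 0.0, 0.0]` rather than `NaN`s.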


I think that `\` with `svd` falls back to `ldiv!` as it should, but I am surprised about the `qr` result: I implicitly expect that

```julia
A \ B ≈ F(A) \ B
```

for all factorizations `F` (with the only difference being numerical error etc).
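That expectation is easy to sanity-check for a well-conditioned square matrix; a quick sketch (the matrix here is arbitrary):

```julia
using LinearAlgebra

A = [2.0 1.0; 1.0 3.0]   # arbitrary well-conditioned (and positive definite) matrix
b = [1.0, 2.0]
x = A \ b

# All factorizations agree, up to rounding, when A is well-conditioned:
@assert lu(A) \ b ≈ x
@assert qr(A) \ b ≈ x
@assert svd(A) \ b ≈ x
@assert cholesky(A) \ b ≈ x   # valid because A is symmetric positive definite
```

The point of this thread is that this invariance quietly breaks down in the singular case.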

Let’s take a few examples of `A \ b` that question expectations:

1. `A` is square and gracefully non-singular.
```julia
julia> Matrix(1.0I, 2, 2) \ ones(2)
2-element Array{Float64,1}:
 1.0
 1.0
```
1. `A` is square and near or actually singular.
```julia
julia> [1.0 0.0; 0.0 0.0] \ ones(2)
2-element Array{Float64,1}:
 1.0
 Inf
```
1. `A` is tall and full column rank.
```julia
julia> [1.0; 0.0] \ ones(2)
1.0
```
1. `A` is tall and column rank deficient.
```julia
julia> zeros(2) \ ones(2)
0.0
```
1. `A` is wide and full row rank.
```julia
julia> [1.0 0.0] \ ones(1)
2-element Array{Float64,1}:
 1.0
 0.0
```
1. `A` is wide and row rank deficient.
```julia
julia> zeros(1, 2) \ ones(1)
2-element Array{Float64,1}:
 0.0
 0.0
```

I suppose we all expect `A \ b` to work for cases 3 and 4, giving a least-squares solution, and for cases 5 and 6, giving a least-norm solution. Given that we expect `A \ b` to just work in cases 4 and 6, it also makes sense for it to give some reasonable solution in case 2. In fact, case 2 is simply the intersection of cases 4 and 6, much like case 1 is the intersection of cases 3 and 5. The most logical solution to expect in case 2 would then be `pinv(A) * b`.
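For case 2, using the same singular matrix as above, that would look like:

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 0.0]   # case 2: square and singular
b = ones(2)

pinv(A) * b   # minimum-norm least-squares solution: [1.0, 0.0]
```

The solvable component of `b` is recovered exactly, and the component in the null space contributes zero instead of `Inf`.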

I understand that SVD or pivoted QR is not the most efficient way to solve a generic square system, and that `\` defaults to an LU factorization for efficiency. But if I explicitly compute the SVD factorization first and then call `\ b`, I see no reason for it to still give me a bad solution, hence my “wrong” assumption in the OP.

Mathematically you make a reasonable point, but (near-)singularity can be difficult to detect and deal with numerically.

I think the following behavior (which I think describes the current state) is a reasonable compromise:

1. for non-square matrices, do least squares/norm, depending on the shape,
2. for square matrices, if the matrix is not invertible, your result won’t make sense; if you care about this you need to choose an appropriate method,
3. if you want a `pinv`, you have to ask for it specifically.

That is true for a raw matrix, but not if I give `\` the SVD: there, detecting (near-)singularity is a simple check of the singular values, much like how `pinv` does it.
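To illustrate: once the `SVD` object is in hand, a rank check is a cheap scan of `F.S`. The tolerance below is an assumption in the spirit of `pinv`'s default cutoff, not the exact Base formula:

```julia
using LinearAlgebra

F = svd([1.0 0.0; 0.0 1e-20])              # numerically rank-1
tol = length(F.S) * eps(eltype(F.S)) * maximum(F.S)
numerical_rank = count(s -> s > tol, F.S)  # the tiny singular value is dropped
```

No extra factorization work is needed; the expensive part was already paid for when `svd` was called.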