In the back of my mind, I was wrongly assuming that svd(A) \ b would fall back on an efficient pinv(A) * b. For instance, I find it surprising that
```julia
julia> svd(zeros(3, 3)) \ ones(3)
3-element Array{Float64,1}:
 NaN
 NaN
 NaN
```
doesn’t. SVD is supposed to be the most stable of all factorizations with respect to the singularity of A, and I see no purpose in calling \ on an SVD factorization other than to benefit from this immunity to singular and near-singular matrices. Is this a reasonable feature request?
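A minimal reproduction of the contrast: pinv handles the all-zero matrix gracefully, returning the minimum-norm least-squares solution, while the SVD-based \ divides by the zero singular values and produces NaNs.

```julia
using LinearAlgebra

A = zeros(3, 3)
b = ones(3)

# pinv is immune to singularity: for a zero matrix the
# minimum-norm least-squares solution is the zero vector.
x = pinv(A) * b            # [0.0, 0.0, 0.0]

# whereas svd(A) \ b divides by the zero singular values,
# which is where the NaNs in the REPL output above come from.
```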
I suppose we all expect A \ b to work for cases 3 and 4, giving a least-squares solution, and for cases 5 and 6, giving a least-norm solution. So given that we expect A \ b to just work in cases 4 and 6, it also makes sense for it to give some reasonable solution in case 2. In fact, case 2 is simply the intersection of cases 4 and 6, much like case 1 is the intersection of cases 3 and 5. Of course, the most logical solution to expect in case 2 would then be pinv(A) * b.
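To make the two behaviors concrete, here is a sketch (with small matrices I made up for illustration) verifying the defining properties of the pinv(A) * b solution in both regimes: in the overdetermined case it satisfies the normal equations, and in the underdetermined case it solves the system exactly with smaller norm than any other solution.

```julia
using LinearAlgebra

# Overdetermined (tall) system: pinv gives the least-squares solution,
# which satisfies the normal equations A' * (A*x - b) = 0.
A  = [1.0 0.0; 0.0 1.0; 1.0 1.0]
b  = [1.0, 2.0, 0.0]
x  = pinv(A) * b
@assert norm(A' * (A * x - b)) < 1e-10

# Underdetermined (wide) system: pinv gives the least-norm solution.
A2 = [1.0 1.0 0.0; 0.0 1.0 1.0]
b2 = [1.0, 1.0]
x2 = pinv(A2) * b2
@assert A2 * x2 ≈ b2                 # it is an exact solution,
n  = nullspace(A2)[:, 1]             # and adding any null-space
@assert norm(x2) < norm(x2 + n)      # component only increases the norm.
```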
I understand that using SVD or pivoted QR to solve an arbitrary square system is not the most efficient choice, and that \ defaults to an LU factorization for efficiency. But if I explicitly compute the SVD factorization first and then call \ b on it, I see no reason for it to still give me a bad solution; hence my “wrong” assumption in the OP.
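For reference, the behavior I am requesting is cheap to get once the factorization is in hand: truncate singular values below a tolerance and solve with the remaining factors, exactly as pinv does. The helper name svd_ldiv and the rtol default below are my own hypothetical choices, not anything in LinearAlgebra; it is only a sketch of what \ on an SVD object could do.

```julia
using LinearAlgebra

# Hypothetical helper (not part of LinearAlgebra): solve F \ b the way
# pinv(A) * b would, dropping singular values below a relative tolerance.
function svd_ldiv(F::SVD, b;
                  rtol = eps(eltype(F.S)) * maximum(F.S))
    k = count(s -> s > rtol, F.S)    # numerical rank
    # Use only the first k singular triplets; with k == 0 (e.g. a zero
    # matrix) this correctly returns the zero vector instead of NaNs.
    F.Vt[1:k, :]' * ((F.U[:, 1:k]' * b) ./ F.S[1:k])
end

F = svd(zeros(3, 3))
svd_ldiv(F, ones(3))    # → [0.0, 0.0, 0.0] rather than NaNs
```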