Let’s take a few examples of `A \ b` that question expectations:

1. `A` is square and gracefully nonsingular.

   ```julia
   julia> using LinearAlgebra

   julia> Matrix(1.0I, 2, 2) \ ones(2)
   2-element Array{Float64,1}:
    1.0
    1.0
   ```

2. `A` is square and near or actually singular.

   ```julia
   julia> [1.0 0.0; 0.0 0.0] \ ones(2)
   2-element Array{Float64,1}:
    1.0
    Inf
   ```

3. `A` is tall and full column rank.

   ```julia
   julia> [1.0; 0.0] \ ones(2)
   1.0
   ```

4. `A` is tall and column rank deficient.

   ```julia
   julia> zeros(2) \ ones(2)
   0.0
   ```

5. `A` is wide and full row rank.

   ```julia
   julia> [1.0 0.0] \ ones(1)
   2-element Array{Float64,1}:
    1.0
    0.0
   ```

6. `A` is wide and row rank deficient.

   ```julia
   julia> zeros(1, 2) \ ones(1)
   2-element Array{Float64,1}:
    0.0
    0.0
   ```
I suppose we all expect `A \ b` to work for cases 3 and 4, giving a least-squares solution, and for cases 5 and 6, giving a minimum-norm solution. So given that we expect `A \ b` to just work in cases 4 and 6, it also makes sense for it to give some reasonable solution in case 2. In fact, case 2 is simply the intersection of cases 4 and 6, much like case 1 is the intersection of cases 3 and 5. Of course, the most logical solution to expect in case 2 would then be `pinv(A) * b`.
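To make the case 2 expectation concrete: `pinv` truncates the zero singular value, so it gives the minimum-norm least-squares solution for the same singular matrix instead of `Inf`. A quick check:

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 0.0]  # the singular matrix from case 2
b = ones(2)

# pinv drops the zero singular value, so we get the
# minimum-norm least-squares solution rather than Inf:
x = pinv(A) * b         # [1.0, 0.0]
```

Note that `[1.0, 0.0]` is exactly what cases 4 and 6 would each suggest: it solves the solvable first row and puts a zero in the unconstrained second component.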
I understand that using the SVD or a pivoted QR is not the most efficient way to solve a generic square system, and that `\` defaults to an LU factorization for efficiency. But if I explicitly compute the SVD factorization first and then call `\ b` on it, I see no reason for it to still give me a bad solution, hence my “wrong” assumption in the OP.
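This is the kind of call I mean; whether the SVD-backed solve truncates the zero singular value (and thus agrees with `pinv(A) * b`) or divides by it depends on the Julia version, so I don’t assert the output here:

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 0.0]  # the singular matrix from case 2
b = ones(2)

F = svd(A)  # explicit SVD factorization of A
x = F \ b   # either truncates the zero singular value,
            # giving pinv(A) * b = [1.0, 0.0], or divides
            # by it, depending on the Julia version
```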