# Left matrix division with singular matrix

I am confused by the result of left matrix division (the backslash operator `\`, documented under `Base.:\` in the Linear Algebra section of the Julia manual) when the matrix is singular:

```julia
julia> [1.0 0; 0 0] \ randn(2)
2-element Array{Float64,1}:
  0.44879273949885823
 -Inf
```

I found this related closed issue: https://github.com/JuliaLang/julia/issues/17591. Was there another discussion whose conclusion was that a solution with infinite norm, `[0.4488, -Inf]`, makes more sense than a finite one, e.g. `[0.4488, 0]`?

The behavior of the polyalgorithm depends on the matrix properties. When the matrix is square, we don't give a minimum-norm solution; we just try to solve the system. The square problem is generally solved with an LU factorization, but if the matrix is triangular or diagonal we solve it directly. However, it looks like we don't check for zeros in the diagonal solve here. The expected behavior would have been an exception.
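A minimal sketch of why the diagonal fast path can return `Inf` (my illustration, assuming the diagonal case reduces to elementwise division as described above):

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 0.0]   # singular; also diagonal, so the fast path applies
b = [0.5, 0.25]

# The diagonal solve amounts to elementwise division by the diagonal,
# so a zero pivot silently yields Inf (or NaN for a zero numerator):
x = b ./ diag(A)          # [0.5, Inf]

# An LU factorization, by contrast, can detect the zero pivot:
F = lu(A; check = false)  # check = false suppresses the SingularException
issuccess(F)              # false: the factorization found a zero pivot
```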

Is this due to a fundamental mathematical reason, or is it an implementation detail?

In both cases, square and tall rectangular systems, one is trying to find the linear combination of columns equal to the right-hand side. If I were to add a row of zeros to both the matrix and the right-hand side, why does the solution change?

```julia
julia> b = rand(2);

julia> [1.0 0; 0 0; 0 0] \ vcat(b, 0)
2-element Array{Float64,1}:
 0.9770292062075274
 0.0

julia> [1.0 0; 0 0] \ b
2-element Array{Float64,1}:
 0.9770292062075274
 Inf
```

I would prefer a behaviour similar to that of a rectangular matrix. Does that make sense?

That should be covered by the discussion in https://github.com/JuliaLang/julia/issues/17591. Notice that you can always control the behavior by explicitly choosing the factorization, e.g.

```julia
julia> b = rand(2);

julia> qr([1.0 0; 0 0; 0 0], Val(true)) \ [b; 0]
2-element Array{Float64,1}:
 0.8324318906464716
 0.0

julia> qr([1.0 0; 0 0], Val(true)) \ b
2-element Array{Float64,1}:
 0.8324318906464716
 0.0
```
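Another factorization that works here (my example, not from the thread) is the SVD, whose `\` method likewise returns the minimum-norm least-squares solution:

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 0.0]
b = [0.5, 0.25]

# `\` on an SVD object computes the minimum-norm least-squares solution,
# so the singular direction gets 0 rather than Inf:
x = svd(A) \ b            # ≈ [0.5, 0.0]
```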

I see your comment here: https://github.com/JuliaLang/julia/issues/17591#issuecomment-235010516

Why does pivoting of R change the solution of a problem with a singular matrix?

Because `\` for a pivoted QR dispatches to a method that computes a minimum norm solution (in the dense case).
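For comparison (my illustration, not from the thread), the Moore-Penrose pseudoinverse yields the same minimum-norm solution:

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 0.0]
b = [0.5, 0.25]

# The pseudoinverse zeroes out the component along the singular direction,
# matching the pivoted-QR minimum-norm solution:
x = pinv(A) * b           # ≈ [0.5, 0.0]
```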

I am sorry to be a nudnik, but to me pivoting means swapping rows in a matrix during Gaussian elimination. Is pivoting a standard term in the context of backslash? If so, I will open a pull request against the documentation (in the noteworthy differences or in the QR factorization section), because this behaviour is not self-evident, at least coming from a MATLAB background.

I think it’s pretty standard. In QR, you swap columns instead of rows; see, e.g., QR factorization with column pivoting. It is also supported by MATLAB’s `qr`.
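A small sketch of how column pivoting reveals rank deficiency (my example; `ColumnNorm()` is the spelling on Julia ≥ 1.7, while older versions used `Val(true)` as in the snippets above):

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 0.0]       # rank 1
F = qr(A, ColumnNorm())      # column-pivoted QR: A[:, F.p] ≈ F.Q * F.R

# Pivoting orders the columns by norm, so rank deficiency shows up
# as (near-)zeros at the end of R's diagonal:
rank_est = count(x -> abs(x) > 1e-12, diag(F.R))   # 1
```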


Thank you and many apologies for the delay.

Would something like this be OK? https://github.com/JuliaLang/julia/pull/29976