How is division by a singular matrix performed in Julia, even though it looks theoretically invalid?

These are two matrices A1 and A2:

julia> A1 = [2 4 6; 5 8 3; 2 1 8; 7 6 3]
4×3 Matrix{Int64}:
 2  4  6
 5  8  3
 2  1  8
 7  6  3

julia> A2 = reshape(collect(11:19), (3, 3))
3×3 Matrix{Int64}:
 11  14  17
 12  15  18
 13  16  19

Here A2 is a singular matrix, as we can check:

julia> LinearAlgebra.det(A2)
0.0

But in Julia we can still compute

julia> A1 / A2
4×3 Matrix{Float64}:
  3.375       -0.0833333  -2.625
 -1.1259e15    2.2518e15  -1.1259e15
  1.1259e15   -2.2518e15   1.1259e15
 -2.81475e14   5.6295e14  -2.81475e14

How does Julia evaluate this result, although division by a singular matrix is invalid?
By contrast, the equivalent computation with inv(A2) fails:

julia> A1 * inv(A2)
ERROR: SingularException(3)

which makes sense. So, what is the meaning of the result from A1 / A2 here?
Thank you…

You can ask the help console with ?:

help?> /
search: / //

[...]

  A / B


  Matrix right-division: A / B is equivalent to (B' \ A')' where \ is the left-division
  operator. For square matrices, the result X is such that A == X*B.

  See also: rdiv!.

  Examples
  ≡≡≡≡≡≡≡≡

  julia> A = Float64[1 4 5; 3 9 2]; B = Float64[1 4 2; 3 4 2; 8 7 1];
  
  julia> X = A / B
  2×3 Matrix{Float64}:
   -0.65   3.75  -1.2
    3.25  -2.75   1.0
  
  julia> isapprox(A, X*B)
  true
  
  julia> isapprox(X, A*pinv(B))
  true

You have a clue at the very end: the result is equivalent to multiplying by the pseudoinverse of B, which always exists. This is also what backslash does, i.e. A \ B: it finds a least-squares solution to the linear system, but that does not require B to be invertible.
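
For example, with a small made-up overdetermined system (the B and y below are just illustrative data, not from the question):

using LinearAlgebra

B = [1.0 1.0; 1.0 2.0; 1.0 3.0]    # 3×2: not square, so not invertible
y = [1.0, 2.0, 2.0]

x = B \ y                           # least-squares solution of B*x ≈ y
x ≈ pinv(B) * y                     # expected: true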

Oh. Thank you for your reply.
So, how is A1 / A2 calculated here? Is it A1 * pinv(A2)? But

julia> A1 * pinv(A2)
4×3 Matrix{Float64}:
  3.22222   0.222222  -2.77778
 -5.27778  -0.111111   5.05556
  6.0       0.333333  -5.33333
 -7.88889  -0.222222   7.44444

it is not the same result. Anyway, is it possible to express A1 / A2 in some other way that makes the evaluation easier to understand?

Julia does use the pseudoinverse when A2 is not square. If A2 is square, it uses an LU factorization of A2. Actually, to be more precise, it first takes a conjugate transpose and returns

copy(adjoint(adjoint(A2) \ adjoint(A1)))

The operator \ then computes an LU factorization of adjoint(A2) and uses forward and backward substitution to apply the inverse. It does this in floating point, and with floating-point rounding there is no clear distinction between a singular matrix and a matrix that is merely close to singular. In general, if you compute the LU factorization of a matrix A, you do not get the exact LU factorization of A, but rather an LU factorization of a nearby matrix:

P(A+E) = LU

where E represents a backward error due to rounding. At this point you are effectively applying the inverse of a matrix A+E that is nearly, but not exactly, singular. That means dividing by some very small numbers, which produces the large entries you saw in A1 / A2.
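
To make this concrete, here is a quick check you can run with the A1 and A2 from above (the lu call just factorizes adjoint(A2) explicitly for illustration, and the exact value of the tiny pivot depends on rounding on your machine):

using LinearAlgebra

# Right-division is defined in terms of left-division on the adjoints,
# so this should reproduce A1 / A2 exactly.
X = copy(adjoint(adjoint(A2) \ adjoint(A1)))
X == A1 / A2                        # expected: true

# Factorize the (singular) transposed system; check=false lets us inspect
# the factors even if a pivot happened to come out exactly zero.
F = lu(Float64.(adjoint(A2)); check=false)
F.U[3, 3]                           # a rounding-sized pivot (~1e-16), not an exact zero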

If you do know that your matrix is singular, using the pseudoinverse might be more appropriate, but Julia doesn't do that automatically for square matrices. If you want to avoid these numerical errors, you could do something like the following with rational matrices; in that case you will definitely get a SingularException.
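
A minimal sketch, reusing the A1 and A2 from the top of the thread (A1r and A2r are just names for the rational copies):

# Convert to exact rational arithmetic; BigInt avoids any overflow worries.
A1r = Rational{BigInt}.(A1)
A2r = Rational{BigInt}.(A2)

# With exact arithmetic the LU factorization hits a true zero pivot,
# so this should throw a SingularException instead of returning huge numbers.
A1r / A2r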
