Why does Julia systematically fail when doing the \ operation?

The problem Ax=b for square A is solved by the \ function. With that in mind, I’ve tried to do the following:

        A = rand(1:4,3,3)
        x = fill(1.0, 3)
        b = A * x 
        A\b

For some reason, the code seems to work at times. But sometimes it throws the following error:

LinearAlgebra.SingularException(3)

Stacktrace:
 [1] checknonsingular
   @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/factorization.jl:19 [inlined]
 [2] checknonsingular
   @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/factorization.jl:21 [inlined]
 [3] #lu!#136
   @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/lu.jl:85 [inlined]
 [4] #lu#140
   @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/lu.jl:273 [inlined]
 [5] lu (repeats 2 times)
   @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/lu.jl:272 [inlined]
 [6] \(A::Matrix{Int64}, B::Vector{Float64})
   @ LinearAlgebra /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/generic.jl:1136
 [7] top-level scope
   @ In[208]:4
 [8] eval
   @ ./boot.jl:360 [inlined]
 [9] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
   @ Base ./loading.jl:1116

So, I tried to understand what is happening. I executed the code 10,000,000 times and found that it failed about 11% of the times it was executed.

using Printf

i = 0
test = 10000000
for x in 1:test
    try
        A = rand(1:4, 3, 3)
        x = fill(1.0, 3)
        b = A * x
        A \ b
    catch
        i = i + 1
    end
end

fail_percentage = (i/test)*100

@printf "this code has failed in %.2f%%" fail_percentage
this code has failed in 10.95%

Can someone explain to me what is happening here?


As in Why does Julia fails to solve linear system systematically? - Stack Overflow. This is not a Julia mistake (sorry for doubting you, Julia). It’s actually a property of linear systems: there would be no solution if the matrix A were singular.


No. The system does have a solution, doesn’t it? You generated it yourself. Obviously, with your data A, x, b the equality is satisfied, so the claim that no solution exists is clearly incorrect.

True, the matrix A is singular, but that is not the end of the world. Depending on whether b is in the range of A or not, a solution exists or not. Do not give up your search for one 🙂

For convenience, here is an example code:

julia> using LinearAlgebra

julia> A = [1.0 2.0 3.0;
            2.0 3.0 5.0;
            3.0 4.0 7.0]
3×3 Matrix{Float64}:
 1.0  2.0  3.0
 2.0  3.0  5.0
 3.0  4.0  7.0

julia> x = [1.0, 2.0, 3.0]
3-element Vector{Float64}:
 1.0
 2.0
 3.0

julia> b = A*x
3-element Vector{Float64}:
 14.0
 23.0
 32.0

julia> x = A\b
ERROR: SingularException(3)
Stacktrace:
 [1] checknonsingular
   @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/factorization.jl:19 [inlined]
 [2] checknonsingular
   @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/factorization.jl:21 [inlined]
 [3] #lu!#136
   @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/lu.jl:85 [inlined]
 [4] #lu#140
   @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/lu.jl:273 [inlined]
 [5] lu (repeats 2 times)
   @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/lu.jl:272 [inlined]
 [6] \(A::Matrix{Float64}, B::Vector{Float64})
   @ LinearAlgebra /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/generic.jl:1136
 [7] top-level scope
   @ none:1
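
To make the claim about b being in the range of A concrete, here is a minimal sketch (my own, not from the post) of one way to test it, reusing the A and b defined just above: b lies in the range of A exactly when appending it as an extra column does not increase the rank.

using LinearAlgebra

# b ∈ range(A) iff the augmented matrix [A b] has the same rank as A
in_range(A, b) = rank([A b]) == rank(A)

in_range(A, b)   # true here, since b was constructed as A*x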

Before we get to a solution (method) in Julia, I also tried MATLAB with the same data:

>> A = [1.0 2.0 3.0;
     2.0 3.0 5.0;
     3.0 4.0 7.0];

x = [1.0, 2.0, 3.0]';

>> b = A*x

b =

    14
    23
    32

>> x = A\b
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND =
3.289550e-18. 
 

x =

    0.5000
    1.5000
    3.5000

Interesting, isn’t it? It gives a solution that differs from the one we started with. And a solution it is:

>> A*x-b

ans =

     0
     0
     0

Does using BigFloat produce a solution? Is the problem to do with Float64?

julia> A = big.( [1.0 2.0 3.0;
                   2.0 3.0 5.0;
                   3.0 4.0 7.0] )
3×3 Matrix{BigFloat}:
 1.0  2.0  3.0
 2.0  3.0  5.0
 3.0  4.0  7.0

julia> x = big.( [1.0, 2.0, 3.0 ] )
3-element Vector{BigFloat}:
 1.0
 2.0
 3.0

julia> b = A*x
3-element Vector{BigFloat}:
 14.0
 23.0
 32.0

julia> x = A\b
3-element Vector{BigFloat}:
 2.666666666666666666666666666666666666666666666666666666666666666666666666666678
 3.666666666666666666666666666666666666666666666666666666666666666666666666666644
 1.333333333333333333333333333333333333333333333333333333333333333333333333333339

Hm, ok, I’m quite confused now. Aren’t our vectors different? Mine is a 3-dimensional vector of ones. Does this condition change things a little bit?

I’m confused too! Matrices are not my strong point in mathematics.

I generated my own data. It is just that you used a random matrix A, and I wanted the behaviour of the problem to always be the same. While fixing the matrix A, I also arbitrarily assigned the other two, x and b. But this should not have an impact on the general outcome.

It appears that the convention adopted by Julia is that the backslash in x = A\b with a square matrix A really stands exclusively for solving with the inverse of the matrix (computed via an LU factorization, as the stack trace above shows). If A is singular (or detected as close to singular), the call x = A\b fails. In such situations, SVD is your friend:

julia> G = svd(A)
SVD{Float64, Float64, Matrix{Float64}}
U factor:
3×3 Matrix{Float64}:
 -0.332289   0.850246   0.408248
 -0.549449   0.177311  -0.816497
 -0.766609  -0.495624   0.408248
singular values:
3-element Vector{Float64}:
 11.218599757513006
  0.3781791648532636
  1.9743360888694275e-16
Vt factor:
3×3 Matrix{Float64}:
 -0.332574  -0.479504  -0.812078
 -0.745695   0.660865  -0.08483
  0.57735    0.57735   -0.57735

julia> xₛ = G\b
3-element Vector{Float64}:
 0.9999999999999953
 2.000000000000006
 2.999999999999999

Obviously, when the backslash is used on the SVD object (a variable of type SVD), it does the standard routine: it discards the singular values that are small enough, together with the corresponding columns of the U and V matrices, and then multiplies b by the inverse of this reduced SVD of A.
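
For illustration, here is a rough sketch of that routine done by hand, reusing A, b and G = svd(A) from above; the tolerance tol is my own choice and not necessarily the one the SVD backslash uses.

tol = maximum(G.S) * length(G.S) * eps(Float64)
Sinv = [s > tol ? 1/s : 0.0 for s in G.S]   # invert only the "large" singular values
x_manual = G.V * (Sinv .* (G.U' * b))       # ≈ the result of G \ b above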

In fact, the same ideas are behind the concept of a pseudoinverse. And Julia can compute it too:

julia> x = pinv(A)*b
3-element Vector{Float64}:
 1.0
 2.0
 2.9999999999999996

It is just that the computation is a bit less efficient (you need to allocate memory for the pseudoinverse first, then compute it, and then multiply the b vector by it).
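
If several right-hand sides have to be solved with the same A, one option (a sketch, not something discussed above) is to compute the SVD once and reuse the factorization object rather than forming pinv(A) or re-factorizing each time:

G = svd(A)                          # factorize once
xs = [G \ rhs for rhs in (b, 2b)]   # reuse the factorization for each right-hand side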

A bit cheaper (compared to the SVD-based solution) is a solution based on a QR decomposition:

julia> F = qr(A)
LinearAlgebra.QRCompactWY{Float64, Matrix{Float64}}
Q factor:
3×3 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}}:
 -0.267261   0.872872   0.408248
 -0.534522   0.218218  -0.816497
 -0.801784  -0.436436   0.408248
R factor:
3×3 Matrix{Float64}:
 -3.74166  -5.34522   -9.08688
  0.0       0.654654   0.654654
  0.0       0.0       -1.11022e-16

julia> xᵣ = F\b
3-element Vector{Float64}:
  4.000000000000012
  4.99999999999999
 -0.0

Although algorithms are relevant, as discussed, let’s try to understand the 10.95% failure rate. It arises because A = rand(1:4,3,3) is only slightly random: the entries are integers from 1:4, so there are only 4^3 = 64 possible columns. For the LU-based \ to succeed, A must be invertible, i.e. its columns (or rows) must be linearly independent, and it is easy for them not to be. In fact, it appears about 11% of the possible matrices are singular.

Consider the possibilities. The second column has a 1/64 chance of repeating the first. The third column also has a 1/64 chance of repeating the first, plus a 1/64 chance of repeating the second. That yields just under a 3/64 chance of a column repeating another, or nearly 5 of the 11 percentage points already. It is also possible for one column to be the sum or difference of the other two, or of small multiples thereof. That is not so easy to count by hand, but brute force works (see the sketch below), since there are only 4^9 = 262,144 possibilities to try, far fewer than the 10 million random matrices the OP tested. With luck, the random number generator might have sampled every unique matrix roughly 40 times.
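
A brute-force sketch of that count (my own code, using the closed-form 3×3 determinant so that singularity of the integer matrices is detected exactly, without any floating-point tolerance):

# exact 3×3 determinant by cofactor expansion
det3(A) = A[1,1]*(A[2,2]*A[3,3] - A[2,3]*A[3,2]) -
          A[1,2]*(A[2,1]*A[3,3] - A[2,3]*A[3,1]) +
          A[1,3]*(A[2,1]*A[3,2] - A[2,2]*A[3,1])

# enumerate all 4^9 = 262_144 matrices with entries in 1:4 and count the singular ones
n_singular = count(det3(reshape(collect(t), 3, 3)) == 0
                   for t in Iterators.product(ntuple(_ -> 1:4, 9)...))
println(n_singular / 4^9)   # should come out close to the observed ~11%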

Others have pointed out that you can use SVD, pinv, and QR when the standard LU-based \ throws a SingularException. But be mindful that just because they don’t fail doesn’t mean they give the “right” answer. When the system is ill-conditioned, these other methods are designed to give a unique and sensible solution, such as the minimum-norm x. But here, with integer entries, ill-conditioned probably means exactly singular, where there are infinitely many solutions, and no algorithm will resolve them perfectly or generally yield an integer x. (With real numbers, ill-conditioning means extremely high sensitivity to error, say in A or b, and even then the pseudoinverse isn’t necessarily the answer you want.)

No matter the algorithm, it’s generally advisable to know when the system is ill-conditioned and to choose the right approach accordingly, or to make sure the system was not set up incorrectly in the first place.

Note that the OP could likely have avoided ill-conditioned matrices with A = rand(3,3): Float64 entries allow (EDIT: a lot of) unique columns, so the probability of linear dependence is vanishingly small.
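
As a concrete (purely illustrative) pattern for the “know when it is ill-conditioned” advice above, one can inspect the condition number before deciding how to solve. The function name and the threshold below are my own, not anything from Julia itself:

using LinearAlgebra

# use the ordinary LU-based \ when A is well conditioned,
# otherwise fall back to a minimum-norm (pseudoinverse) solution
function solve_guarded(A, b; maxcond = 1e12)
    return cond(A) < maxcond ? A \ b : pinv(A) * b
end

A = rand(1:4, 3, 3)
b = A * fill(1.0, 3)
x = solve_guarded(A, b)   # avoids the SingularException seen earlier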


Singular matrices squish the space (here R^3) into a smaller subspace (a plane, a line or even a point, depending on the rank, i.e. 3 minus the dimension of the null space). There is a whole family of vectors x for which Ax has the same value.

If our b is indeed reached by A (there exists x such that Ax = b), that x is not unique: there are infinitely many solutions. So solving the linear system with a numerical library may produce different results that are all valid, as the sketch below illustrates.
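
A small sketch (my own, reusing the singular A and b from earlier in the thread) that makes this concrete: take any particular solution and add multiples of a null-space vector; every such vector solves the system.

using LinearAlgebra

A = [1.0 2.0 3.0; 2.0 3.0 5.0; 3.0 4.0 7.0]   # the singular matrix from above
b = [14.0, 23.0, 32.0]

n  = nullspace(A)[:, 1]   # a basis vector of the one-dimensional null space
x₀ = pinv(A) * b          # one particular (minimum-norm) solution
for t in (-1.0, 0.0, 2.5)
    println(norm(A * (x₀ + t*n) - b))   # ≈ 0 for every t: infinitely many solutions
end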
