So, I tried to understand what is happening. I executed the code 10,000,000 times and found that it failed about 11% of the time.

using Printf

i = 0                  # failure counter
test = 10000000
for trial in 1:test
    try
        A = rand(1:4, 3, 3)   # random 3×3 matrix with integer entries in 1:4
        x = fill(1.0, 3)      # the known solution: a vector of ones
        b = A * x
        A \ b                 # throws SingularException when A is singular
    catch
        global i = i + 1      # `global` is required when this runs as a script
    end
end
fail_percentage = (i / test) * 100
@printf "this code has failed in %.2f%%\n" fail_percentage

No. The system does have a solution, doesn’t it? You generated it yourself. Obviously, with your data A, x, and b the equality is satisfied, so your claim about the nonexistence of a solution is clearly incorrect.

True, the matrix A is singular. But that is not the end of the world: depending on whether b is in the range of A or not, a solution may or may not exist. Do not give up your search for one.
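Whether b lies in the range of A can be checked directly, for instance via the Rouché–Capelli criterion: the system Ax = b is consistent iff appending b to A does not increase the rank. A minimal sketch in Julia, using the fixed matrix that appears later in this thread:

```julia
using LinearAlgebra

A = [1.0 2.0 3.0;
     2.0 3.0 5.0;
     3.0 4.0 7.0]
b = [14.0, 23.0, 32.0]

# Rouché–Capelli: Ax = b is consistent iff augmenting A with b
# does not raise the rank.
rank(A) == rank([A b])   # true ⇒ b is in the range of A, a solution exists
```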

Before we arrive at a solution method in Julia, I also tried Matlab with the same data:

>> A = [1.0 2.0 3.0;
2.0 3.0 5.0;
3.0 4.0 7.0];
>> x = [1.0, 2.0, 3.0]';
>> b = A*x
b =
14
23
32
>> x = A\b
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 3.289550e-18.
x =
0.5000
1.5000
3.5000

Interesting, isn’t it? It gives a solution that differs from the one we started with. And a solution it is:
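The check (sketched here in Julia rather than Matlab) confirms that the returned vector solves the system just as well:

```julia
A = [1.0 2.0 3.0;
     2.0 3.0 5.0;
     3.0 4.0 7.0]
b = [14.0, 23.0, 32.0]
x_matlab = [0.5, 1.5, 3.5]   # the solution Matlab returned

A * x_matlab ≈ b             # true: this vector satisfies the system too
```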

Hm, OK, I’m quite confused now. Aren’t our vectors different? Mine is a 3-dimensional vector of ones. Does this change things a little bit?

I generated my own data. You used a random matrix A, whereas I wanted the behaviour of the problem to always be the same, so I fixed A. I also arbitrarily assigned the other two, x and b. But this should not have an impact on the general outcome.

It appears that the convention adopted by Julia is that the backslash in x = A\b with a square matrix A stands exclusively for a direct solve (computed, presumably, via an LU decomposition). If A is singular (or detected to be close to singular), the call x = A\b fails with a SingularException. In such situations, SVD is your friend:
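A minimal sketch with the fixed matrix from the Matlab session above: svd(A) returns an SVD factorization object, and backslash applied to it performs the tolerance-based solve described below.

```julia
using LinearAlgebra

A = [1.0 2.0 3.0;
     2.0 3.0 5.0;
     3.0 4.0 7.0]
b = [14.0, 23.0, 32.0]

F = svd(A)   # an object of type SVD
x = F \ b    # ≈ [1.0, 2.0, 3.0], the minimum-norm solution
```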

When the backslash is applied to an SVD object (a variable of type SVD), it follows the standard routine: it drops all sufficiently small singular values together with the corresponding columns of the U and V matrices, and then multiplies b by the inverse of this reduced SVD of A.

In fact, the same ideas are behind the concept of a pseudoinverse. And Julia can compute it too:

julia> x = pinv(A)*b
3-element Vector{Float64}:
1.0
2.0
2.9999999999999996

It is just that the computation is a bit less efficient (you need to allocate memory for the pseudoinverse first, then compute it, and then multiply the b vector with it).

A bit cheaper than the SVD-based solution is one based on a QR decomposition:
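A sketch of the QR route, again with the fixed matrix from above. ColumnNorm() requests a column-pivoted, rank-revealing QR (available from Julia 1.7; earlier versions spell it qr(A, Val(true))):

```julia
using LinearAlgebra

A = [1.0 2.0 3.0;
     2.0 3.0 5.0;
     3.0 4.0 7.0]
b = [14.0, 23.0, 32.0]

F = qr(A, ColumnNorm())   # pivoted QR copes with the rank deficiency
x = F \ b                 # a valid solution: A * x ≈ b
```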

Although the algorithms discussed above are relevant, let’s try to understand the 10.95% failure rate. It arises because A = rand(1:4,3,3) is only slightly random: with integer entries in 1:4, there are only 4^3 = 64 possible columns. A uniquely solvable linear system needs the columns (or rows) to be linearly independent, and it’s easy for them not to be. In fact, it appears about 11% of the possible matrices are singular.

Consider the possibilities. The second column has a 1/64 chance of repeating the first. The third column likewise has a 1/64 chance of repeating the first, plus 1/64 of repeating the second. That yields just under a 3/64 chance of a column repeating another, or nearly 5 of the 11 percentage points already. It’s also possible for one column to be the sum or difference of the other two, or small multiples thereof. That is not so easy to count by hand, but brute force works, since there are only 4^9 = 262,144 possibilities to try. That’s far fewer than the 10 million random matrices OP tested; with luck, the random number generator might have sampled every unique matrix about 40 times.
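Since there are so few cases, the brute-force count is easy to sketch. Using an exact integer determinant (cofactor expansion) sidesteps any floating-point tolerance question:

```julia
# Exact determinant of a 3×3 integer matrix, by cofactor expansion.
det3(A) = A[1,1]*(A[2,2]*A[3,3] - A[2,3]*A[3,2]) -
          A[1,2]*(A[2,1]*A[3,3] - A[2,3]*A[3,1]) +
          A[1,3]*(A[2,1]*A[3,2] - A[2,2]*A[3,1])

# Enumerate all 4^9 = 262,144 matrices with entries in 1:4
# and count the singular ones.
function singular_fraction()
    singular = 0
    total = 0
    for entries in Iterators.product(ntuple(_ -> 1:4, 9)...)
        A = reshape(collect(entries), 3, 3)
        total += 1
        singular += (det3(A) == 0)
    end
    return singular / total
end

frac = singular_fraction()
println("fraction of singular matrices: ", frac)   # close to the observed 11%
```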

Others have pointed out that you can use SVD, pinv, and QR when the standard LU-based \ yields SingularException. But be mindful that just because they don’t fail doesn’t mean they give the “right” answer. When the system is ill-conditioned, these other methods are designed to give a unique and sensible solution, such as minimum norm x. But here with integer entries, ill-conditioned probably means exactly singular, where there are infinite possible solutions, and no algorithm will resolve them perfectly or generally yield integer x. (With real numbers, ill-conditioning means extremely high sensitivity to error, say in A or b, and even then the pseudo-inverse isn’t necessarily the answer you want.)

No matter the algorithm, it’s generally advisable to know when the system is ill-conditioned and choose the right approach accordingly, or make sure the system is not set up wrong.

Note that OP could likely have avoided singular matrices altogether with A = rand(3,3): with Float64 entries the probability of exact linear dependence is vanishingly small, since there are (EDIT: a lot of) unique columns.

Singular matrices squish the space (here R^3) into a smaller subspace: a plane, a line, or even a point, depending on the dimension of the null space (3 minus the rank). There is a whole family of vectors x for which Ax has the same value.

If b is indeed in the image of A (there exists an x such that Ax = b), that x is not unique: there are infinitely many solutions. So solving the linear system with a numerical library may produce different results, all of them valid.
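That family can be written down explicitly: any particular solution plus an arbitrary multiple of a null-space vector of A. A sketch with the fixed matrix from this thread:

```julia
using LinearAlgebra

A = [1.0 2.0 3.0;
     2.0 3.0 5.0;
     3.0 4.0 7.0]
b = [14.0, 23.0, 32.0]

n = nullspace(A)[:, 1]   # the null space is one-dimensional here, spanned by ∝ [1, 1, -1]
x0 = pinv(A) * b         # one particular (minimum-norm) solution

# Every x0 + t*n solves the system, for any t:
for t in (-2.0, 0.0, 3.5)
    println(norm(A * (x0 + t * n) - b))   # all ≈ 0
end
```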