Finding the rank of an augmented matrix

I want to determine if a system of equations has no solutions.
According to Wikipedia, a system of equations has no solutions if its augmented matrix has a higher rank than its coefficient matrix.

In Julia the natural way to check this is

rank(hcat(A,b)) > rank(A)

But it feels like there should be a better way to check the rank of the augmented matrix than to copy the coefficient matrix with hcat.



\operatorname{rank} \left( \begin{bmatrix} \boldsymbol{A} & \boldsymbol{b} \end{bmatrix} \right) > \operatorname{rank} \left( \boldsymbol{A} \right) \iff \boldsymbol{b} \notin \mathcal{R} \left( \boldsymbol{A} \right)

where \mathcal{R} \left( \boldsymbol{A} \right) is the column space of \boldsymbol{A}.
So basically if you solve the equation \boldsymbol{A} \boldsymbol{x} = \boldsymbol{b} and the residual is smaller than your threshold, then you have your answer.

Alternatively, you could project \boldsymbol{b} onto the column space and check whether it stays the same.
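A minimal sketch of the residual check described above, with toy data assumed for illustration (a `has_solution` helper name I'm making up here):

```julia
using LinearAlgebra

# Toy data: the column space of A is the xy-plane in R^3,
# so b_in lies in it and b_out does not.
A = [1.0 0.0; 0.0 1.0; 0.0 0.0]
b_in  = [2.0, 3.0, 0.0]
b_out = [2.0, 3.0, 1.0]

# Least-squares solve via `\`; the residual is zero (up to roundoff)
# exactly when b is in the column space of A.
has_solution(A, b; rtol = 1e-8) = norm(A * (A \ b) - b) <= rtol * norm(b)

has_solution(A, b_in)   # true
has_solution(A, b_out)  # false
```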

I wouldn’t recommend using rank for this.

Basically, you want to check whether b is in the column space of A, which is equivalent to checking whether b is perpendicular to the left nullspace of A (the Fredholm alternative). If N is a matrix whose columns are a basis for the left nullspace of A (the nullspace of A^*), corresponding to N = nullspace(A') in Julia, then you want N^* b = 0.
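As a sketch of this left-nullspace check (toy matrix assumed: the third row of A is zero, so the left nullspace is spanned by e_3):

```julia
using LinearAlgebra

A = [1.0 2.0; 3.0 4.0; 0.0 0.0]
N = nullspace(A')        # orthonormal basis for the left nullspace of A

b = [1.0, 2.0, 0.0]      # third component zero, so b is in the column space
norm(N' * b)             # ≈ 0, i.e. b ⟂ left nullspace, so Ax = b is solvable
```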

The practical difficulty of all such computations (including the computation of rank), however, is that floating-point roundoff errors make it impossible to do any such checks exactly. (Unless you are doing exact calculations with rational numbers, computer-algebra systems, or similar.) Life in the real world can be quite different from hand calculations in first-year linear-algebra courses, where almost all the numbers are integers (or fractions on rare occasions)!

So, in practice you might check that \Vert N^*b \Vert \ll \Vert b \Vert, assuming N is an orthonormal basis (so that it doesn’t affect the scaling). In Julia, you can check this as:

norm(nullspace(A')'b) <= rtol * norm(b)

for some relative tolerance rtol. You would then compute the solution using the pseudo-inverse with the same tolerance: x = pinv(A, rtol=rtol)*b. However, nullspace, pinv, and rank all use the SVD under the hood, so you might as well just compute the pseudo-inverse solution first

x = pinv(A, rtol=rtol)*b

and then check whether x is an “exact” solution to your desired precision, i.e. check something like:

norm(A*x - b) <= rtol * norm(b)
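Putting those two steps together on a rank-deficient toy example (data assumed purely for illustration):

```julia
using LinearAlgebra

# Rank-deficient A: the second column is twice the first (rank 1).
A = [1.0 2.0; 2.0 4.0; 3.0 6.0]
b = [1.0, 2.0, 3.0]     # equal to the first column, so a solution exists
rtol = 1e-8

# Pseudo-inverse solution, dropping singular values below rtol.
x = pinv(A; rtol = rtol) * b

# Residual check: small residual means b ∈ R(A) to the desired precision.
norm(A * x - b) <= rtol * norm(b)   # true
```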

Understanding the SVD is pretty crucial to understanding the solution of rank-deficient systems.

(You could also compute x = A \ b, which is faster than the pinv method — it uses pivoted QR factorization for non-square A — but doesn’t allow you to specify a tolerance for dropping small singular values in cases where A is nearly rank deficient. The QR factorization also gives you an orthonormal basis \hat{Q} for the column space, so you could check whether \hat{Q} (\hat{Q}^* b) \approx b (since \hat{Q} \hat{Q}^* is projection onto the column space). And there are other ways to do it, too, of course.)
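A sketch of that QR projection check, again with toy data assumed (A has full column rank here, so the thin Q from an unpivoted QR spans the column space):

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 1.0; 0.0 0.0]   # column space is the xy-plane
b = [1.0, 2.0, 3.0]               # has a component outside the column space

F = qr(A)
Qhat = Matrix(F.Q)                 # thin Q: orthonormal basis for R(A)
proj = Qhat * (Qhat' * b)          # orthogonal projection of b onto R(A)

norm(proj - b) <= 1e-8 * norm(b)   # false: Ax = b has no solution
```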