I wouldn't recommend using `rank` for this.

Basically, you want to check whether b is in the column space of A, which is equivalent to checking whether b is perpendicular to the left nullspace of A (the Fredholm alternative). If N is a matrix whose columns are a basis for the left nullspace of A (the nullspace of A^*), corresponding to `N = nullspace(A')` in Julia, then you want N^* b = 0.
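For instance, here is a small sketch of that check (the matrix and vectors are made up for illustration):

```
using LinearAlgebra

A = [1.0 2.0; 2.0 4.0]   # rank-1 matrix: the second column is twice the first
N = nullspace(A')        # orthonormal basis for the left nullspace of A

b_in  = [1.0, 2.0]       # in the column space of A
b_out = [1.0, 0.0]       # not in the column space of A

norm(N'b_in)             # ≈ 0, up to roundoff
norm(N'b_out)            # clearly nonzero
```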

The practical difficulty of all such computations (including the computation of `rank`), however, is that floating-point roundoff errors make it impossible to do any such checks exactly (unless you are doing exact calculations with rational numbers, computer-algebra systems, or similar). Life in the real world can be quite different from hand calculations in first-year linear-algebra courses, where almost all the numbers are integers (or fractions on rare occasions)!
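(With exact rational arithmetic, a rank comparison *is* reliable. Here is a sketch using a hypothetical `exactrank` helper, a plain Gaussian elimination over `Rational` entries, since the built-in `rank` expects a floating-point matrix:)

```
# Hypothetical helper (not in the standard library): exact rank of a
# Rational matrix via Gaussian elimination with row swaps.
function exactrank(A::AbstractMatrix{<:Rational})
    A = copy(A)
    m, n = size(A)
    r = 0
    for j in 1:n
        p = findnext(!iszero, A[:, j], r + 1)   # pivot at or below row r+1?
        p === nothing && continue
        r += 1
        A[[r, p], :] = A[[p, r], :]             # swap pivot row into place
        for i in r+1:m
            A[i, :] -= (A[i, j] / A[r, j]) * A[r, :]
        end
    end
    return r
end

A = Rational{BigInt}[1 2; 2 4]
b = Rational{BigInt}[1, 2]
exactrank([A b]) == exactrank(A)   # true exactly when b is in the column space
```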

So, in practice you might check that \Vert N^*b \Vert \ll \Vert b \Vert, assuming N is an orthonormal basis (so that it doesn't affect the scaling). In Julia, you can check this as:

```
norm(nullspace(A')'b) <= rtol * norm(b)
```

for some relative tolerance `rtol`. You would then compute the solution using the pseudo-inverse with the same tolerance: `x = pinv(A, rtol=rtol)*b`. However, both `nullspace` and `pinv` use the SVD (along with `rank`), so you might as well just compute the pseudo-inverse solution first

```
x = pinv(A, rtol=rtol)*b
```

and *then* check whether `x` is an "exact" solution to your desired precision, i.e. check something like:

```
norm(A*x - b) <= rtol * norm(b)
```
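Putting those two steps together, here is a minimal end-to-end sketch (with a made-up rank-deficient A and a consistent b):

```
using LinearAlgebra

A = [1.0 2.0; 2.0 4.0; 3.0 6.0]   # 3×2 matrix of rank 1
b = [1.0, 2.0, 3.0]               # consistent: b = A * [1, 0]

rtol = 1e-8                       # tolerance chosen for this example
x = pinv(A, rtol=rtol) * b        # minimum-norm least-squares solution
norm(A*x - b) <= rtol * norm(b)   # true: b is (numerically) in the column space
```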

Understanding the SVD is pretty crucial to understanding the solution of rank-deficient systems.

(You could also compute `x = A \ b`, which is faster than the `pinv` method (it uses pivoted QR factorization for non-square A), but it doesn't allow you to specify a tolerance for dropping small singular values in cases where A is nearly rank-deficient. The QR factorization also gives you an orthonormal basis \hat{Q} for the column space, so you could check whether \hat{Q} (\hat{Q}^* b) \approx b (since \hat{Q} \hat{Q}^* is projection onto the column space). And there are other ways to do it, too, of course.)