Nullspace Function Vs QR decomposition

I am attempting to compute a nullspace for a matrix A. I have two potential techniques for doing this.

One is to use a permuted QR decomposition, and the other is to use the nullspace function.

These two approaches give different nullspace bases, which drastically impacts some experiments further down the line.

I was wondering if someone could point me to documentation describing the algorithm behind the LinearAlgebra.nullspace function, so I can compare it against my permuted-QR approach and see why they differ so greatly.

Thank you!


The basic problem is that there is no fully sensible way of computing the rank of a numerical matrix. nullspace uses the SVD, which is somewhat more robust, but the fundamental issue is that there is no numerically stable way to do this reliably.
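For reference, `nullspace` essentially thresholds the singular values of an SVD and returns the right singular vectors below the threshold. Here is a minimal sketch of that idea (the actual implementation also accepts `atol`/`rtol` keywords; `svd_nullspace` is just an illustrative name):

```julia
using LinearAlgebra

# Sketch of an SVD-based nullspace, similar in spirit to
# LinearAlgebra.nullspace: count singular values above a tolerance to get
# the numerical rank, and return the remaining right singular vectors.
function svd_nullspace(A::AbstractMatrix; rtol = min(size(A)...) * eps(real(float(eltype(A)))))
    F = svd(A; full = true)
    tol = isempty(F.S) ? zero(rtol) : rtol * first(F.S)  # relative to σ_max
    r = count(>(tol), F.S)                               # numerical rank
    return F.Vt[r+1:end, :]'                             # orthonormal nullspace basis
end
```

Note that the result is only defined up to the choice of `rtol`: nudge the tolerance across a small singular value and the dimension of the returned basis changes.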

When you say “no sensible way of computing rank”: I was under the impression that numerical rank means counting singular values above a certain tolerance. Also, to my understanding there are numerically stable ways of computing these matrix factorizations (modified Gram–Schmidt for QR, for example).

One extra thought is that, as my matrix gets large, storing it in memory becomes problematic. Are there any matrix free / iterative procedures currently out there for computing a nullspace? Some looking online did not have a lot of results.


You can actually solve Ax = 0 with conjugate gradients, provided A is symmetric and does have a zero eigenvalue. I don’t know how you would find a full basis, though, without knowing its dimension ahead of time.
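To illustrate the idea (a sketch, not production code): running CG on Ax = 0 from a random starting vector drives the residual to zero, and for a symmetric positive semidefinite A the iterate converges to the component of the starting vector lying in null(A). A hand-rolled CG makes this self-contained:

```julia
using LinearAlgebra

# Sketch: CG applied to Ax = 0 starting from x0. For symmetric positive
# semidefinite A, the range-space part of x0 is annihilated and the
# iterate converges to the projection of x0 onto null(A).
function cg_nullvector(A, x0; tol = 1e-10, maxiter = size(A, 1))
    x = copy(x0)
    r = -A * x                      # residual for b = 0
    p = copy(r)
    rs = dot(r, r)
    for _ in 1:maxiter
        sqrt(rs) < tol && break
        Ap = A * p
        α = rs / dot(p, Ap)
        x .+= α .* p
        r .-= α .* Ap
        rs_new = dot(r, r)
        p .= r .+ (rs_new / rs) .* p
        rs = rs_new
    end
    return x                        # ≈ projection of x0 onto null(A)
end
```

If x0 happens to have (nearly) no component in the nullspace, the result is (nearly) zero, which is one reason several random starts are needed to build up a basis.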


This will happen. Do you know if it’s possible if I have a bound on the rank? Could I compute at least n nullspace vectors with conjugate gradients?

In theory yes, by starting with different random vectors. In practice, my recollection is that CG can sometimes experience breakdown for semidefinite matrices due to floating-point issues, so care is required. You might be better off with iterative eigensolver methods like LOBPCG or Lanczos.
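For example, with KrylovKit.jl (matrix-free: A only needs to support multiplication) you can ask a Lanczos-type solver for the smallest eigenpairs and keep those with eigenvalues near zero. A sketch, assuming KrylovKit’s `eigsolve` API and an invented helper name:

```julia
using LinearAlgebra, KrylovKit   # KrylovKit.jl must be installed

# Sketch: approximate nullspace vectors of a symmetric A via a Krylov
# eigensolver, by requesting the k smallest real eigenvalues (:SR) and
# keeping eigenvectors whose eigenvalues are (numerically) zero.
function krylov_nullbasis(A, k; tol = 1e-8)
    n = size(A, 1)
    vals, vecs, info = eigsolve(A, randn(n), k, :SR; issymmetric = true)
    keep = [v for (λ, v) in zip(vals, vecs) if abs(λ) < tol]
    return isempty(keep) ? zeros(n, 0) : reduce(hcat, keep)
end
```

Caveat: a single-vector Krylov method can struggle to resolve a degenerate (multi-dimensional) nullspace from one starting vector, which is where block methods earn their keep.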

When you say “different nullspace bases”, what do you mean? The basis is not unique, of course, since you can multiply it by any unitary matrix.
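A quick way to check whether two computed bases actually span the same subspace (as opposed to merely differing by a unitary rotation) is to compare the orthogonal projectors they define; a sketch, assuming both bases have orthonormal columns:

```julia
using LinearAlgebra

# Two orthonormal bases N1, N2 (as matrix columns) span the same subspace
# iff their orthogonal projectors coincide: ‖N1*N1' − N2*N2'‖ ≈ 0.
same_subspace(N1, N2; tol = 1e-8) =
    size(N1, 2) == size(N2, 2) && opnorm(N1 * N1' - N2 * N2') < tol
```

If this returns false for your QR and SVD results, the two methods disagree about the subspace itself (likely a rank/threshold discrepancy or a bug), not just about the choice of basis.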

As @Oscar_Smith alludes to, numerical computations of nullspace bases and ranks involve an arbitrary threshold on the smallest singular value (or the smallest R diagonal entry for QR), so even running the same algorithm on different machines (much less entirely different algorithms like QR vs. SVD) will sometimes give different dimensions/ranks. You’ll need to think carefully about your problem if you don’t want your algorithm to be sensitive to this issue.

There isn’t any built-in function to compute the nullspace from a QR factorization (though perhaps there should be?), so without seeing your code it’s not clear whether you might also have a bug there.
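For comparison, here is one way such a function could look: take a column-pivoted QR of Aᵀ, estimate the rank from R’s diagonal, and return the trailing columns of Q. This is a sketch (the `qr_nullspace` name is mine, and `ColumnNorm()` assumes Julia ≥ 1.7); note the rank threshold here is just as arbitrary as the SVD one:

```julia
using LinearAlgebra

# Sketch: nullspace of A from a pivoted QR of Aᵀ. If Aᵀ P = Q R with
# numerical rank r, then range(Aᵀ) ≈ span(Q[:, 1:r]), so the trailing
# columns Q[:, r+1:end] span null(A).
function qr_nullspace(A; rtol = min(size(A)...) * eps(real(float(eltype(A)))))
    n = size(A, 2)
    F = qr(Matrix(A'), ColumnNorm())
    d = abs.(diag(F.R))
    tol = isempty(d) ? zero(rtol) : rtol * maximum(d)
    r = count(>(tol), d)                       # numerical rank from R's diagonal
    Q = F.Q * Matrix{Float64}(I, n, n)         # materialize the full n×n Q
    return Q[:, r+1:end]
end
```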

See also the discussions in How to find the linearly independent columns (rows) of a matrix - #14 by stevengj and How to find the linearly independent columns (rows) of a matrix - #25 by mstewart on rank from QR.


Block methods for the eigenproblem are probably fastest, but only if the matrix isn’t (too) indefinite. And you still have to (over)estimate the dimension of the null space when picking the block size.

You could also try partialschur from ArnoldiMethod.jl with DoubleFloats.jl numbers if you need more accuracy at reasonable speed, with the caveat that it is not a block method, so it may not find a full basis for the nullspace.


I see. Thanks for the long response! I will definitely use some of these ideas for solving my problem.

I will also look into this. Thanks!