How to find the linearly independent columns (rows) of a matrix

Without having checked, a theorem making LU reliable given a bound on the condition number and on the largest and/or smallest matrix entries wouldn't surprise me. So it depends on the OP's use case, as does the appetite for speed versus the risk of a wrong answer.

I fear that most proposals here may be misleading. The authors of the paper you linked are not very clear on this, but I think you are really facing a problem that belongs to computational algebra, not numerical analysis.

In numerical analysis / numerical linear algebra, there is no real concept of "linear independence". You can of course ask about the rank of a matrix – but that amounts to haggling over spectral gaps and tolerances, e.g.:

help?> LinearAlgebra.rank
  rank(A::AbstractMatrix; atol::Real=0, rtol::Real=atol>0 ? 0 : n*ϵ)
  rank(A::AbstractMatrix, rtol::Real)

  Compute the rank of a matrix by counting how many singular values of A have magnitude greater than max(atol, rtol*σ₁) where σ₁ is A's largest singular value. atol and rtol
  are the absolute and relative tolerances, respectively. The default relative tolerance is n*ϵ, where n is the size of the smallest dimension of A, and ϵ is the eps of the
  element type of A.
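
To see what "haggling over spectral gaps" means in practice, here is a small example of mine (a sketch; the exact printed values will vary across machines, but the orders of magnitude should hold):

using LinearAlgebra

A = [1.0 2.0;
     2.0 4.0 + 1e-12]        # second row ≈ 2 × first row
@show svdvals(A)              # roughly [5.0, 2.0e-13]: a big spectral gap
@show rank(A)                 # 2 with the default tolerance
@show rank(A; rtol=1e-10)     # 1 – the "rank" depends on the tolerance you pick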

Instead, you must think hard about where the entries in your matrix come from: Many zeros will be structural, with a causal story telling you that they really are exactly zero. Many entries will be exact integers, coming from ultimately topological constraints and conservation laws – say Kirchhoff's laws or stoichiometry for chemical systems.

Other entries will be real numbers – things like viscosities or resistances or reaction rate coefficients. These will satisfy no relations – they will be "generic" (it doesn't matter whether in the sense of Baire or Lebesgue).

Or maybe they will satisfy relations – but all these relations will come with a story, e.g. because you are studying a critical point, or because these are reaction rates coming from the law of mass action. In that case, this story is part of your problem! If you try to handwave it away, you miss the point of the exercise.

Now there are various approaches. A good keyword for what you’re looking at is “linear matroid”.

A good intro for handling mixed matrices (where some entries are structural zeros/integers and others are generic real numbers / free monomials) is Matrices and Matroids for Systems Analysis | SpringerLink. Consider skimming it a little before you commit to reading it – the Murota monograph is quite readable, but long and idiosyncratic, and I fear it is only known to a tiny niche, maybe ahead of its time. The introduction and organization of the book are already insightful. Unfortunately I don't know of any implementations of the combinatorial canonical form other than the reference C implementation by Murota himself :(

The sagemath matroid module is pretty good; last time I worked in that region of maths was before I came to julia, so I can’t really say whether an analogue exists here.

A pretty good low-effort way of dealing with mixed matrices is the following: select a random medium-sized prime p (e.g. 128 bits), do all your computations over \mathbb{Z}_p, and fill in the generic real numbers with random residues.
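
For concreteness, here is a minimal self-contained sketch of that recipe (the function name rank_mod_p and the convention that `missing` marks a generic entry are mine, not from any package; Primes.jl is assumed to be installed):

using Primes  # for nextprime

# Rank over Z_p by plain Gaussian elimination (illustrative, not optimized).
function rank_mod_p(A::AbstractMatrix{BigInt}, p::BigInt)
    M = mod.(A, p)
    m, n = size(M)
    r = 0                                   # rank found so far
    for col in 1:n
        k = findfirst(i -> !iszero(M[i, col]), r+1:m)
        k === nothing && continue           # no pivot in this column
        piv = r + k
        r += 1
        M[[r, piv], :] = M[[piv, r], :]     # move the pivot row up
        inv_piv = invmod(M[r, col], p)
        for i in r+1:m                      # eliminate below the pivot
            f = mod(M[i, col] * inv_piv, p)
            M[i, :] .= mod.(M[i, :] .- f .* M[r, :], p)
        end
    end
    return r
end

p = nextprime(big(2)^127)                   # a 128-bit prime
# `missing` marks a generic real entry; integers are exact structural values.
pattern = [1 missing 0;
           2 missing 0;
           0 3       missing]
A = BigInt[x === missing ? rand(big(0):p-1) : mod(big(x), p) for x in pattern]
@show rank_mod_p(A, p)                      # equals the generic rank with high probability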

Two keywords for related questions you may have are: polynomial identity testing / the Schwartz–Zippel lemma, and black-box factorization.
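
For reference, the fact that makes the randomization trustworthy is the Schwartz–Zippel lemma: if P \ne 0 is a polynomial of total degree d and you evaluate it at points drawn uniformly from a finite set S, then \Pr[P(r_1, \dots, r_n) = 0] \le d / |S|. Each minor of the matrix is such a polynomial in the generic entries, so with a 128-bit prime the probability that a random evaluation makes genuinely independent columns look dependent is astronomically small.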

===========
Or are you in the simpler setting where all entries are exact integers? Then just make them Rational.
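
A minimal illustration of why exact Rational arithmetic sidesteps the tolerance question entirely (the matrix here is my own toy example):

using LinearAlgebra

A = Rational{BigInt}[1 2 3;
                     4 5 6;
                     7 8 9]        # third row = 2*(second row) - (first row), exactly
@show det(A)                       # exactly 0//1 – no epsilon haggling
@show det(float.(A))               # typically a tiny nonzero number in Float64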

3 Likes

I think this is a fair point. Ultimately, what I am concerned with getting out of the calculation is simply "Which entries of the final \hat{Q} and \hat{P} are nonzero?", so I can display the nonzero entries as edges of a graph between nodes represented by Y and U. The final values don't matter to me so much as knowing what ends up being 0 and what doesn't.

Then I recommend the \mathbb{Z}_p approach. Look at Nemo.jl
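
A minimal sketch of what that might look like with Nemo (the modulus and matrix are arbitrary choices of mine; double-check the current Nemo API):

using Nemo                          # assumed installed

p = ZZ(2)^127 - 1                   # 2^127 - 1 is prime (a Mersenne prime)
F = GF(p)                           # the finite field Z_p
A = matrix(F, [1 0 2;
               0 1 3;
               1 1 5])              # exact integer entries, reduced mod p
A[1, 2] = F(rand(1:BigInt(p) - 1))  # a "generic" entry as a random residue
@show rank(A)                       # exact rank over Z_p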

If performance becomes critical, consider using sagemath instead of Julia: sagemath wraps the excellent linbox-team/fflas-ffpack (FFLAS-FFPACK – Finite Field Linear Algebra Subroutines / Package), and as far as I understand no Julia wrapper exists.

Also consider skimming or even reading Murota. The linked book is an absolute monument, an entry to a fantastical wonderland of new ideas you can discover, open questions and research avenues, and new ways of looking at the world.

(ultimately I also went with \mathbb{Z}_p + randomization instead of matroids – it's so much easier to implement, has a pretty fast runtime, and you don't impose a huge weirdness tax on your readers)

2 Likes

To expand on the potential problems with QR factorization with column pivoting that @stevengj alluded to, the standard example is due to Kahan. The following code generates the matrix and compares the results of QR factorization and the SVD:

using LinearAlgebra

n = 100
c = 1.0
s = 2.0
c, s = [c, s] / norm([c, s])            # normalize so that c^2 + s^2 = 1
C = triu(-c * ones(n, n), 1) + I        # unit upper triangular, -c above the diagonal
S = Diagonal([s^(j - 1) for j in 1:n])  # geometric scaling of the rows
K = S * C                               # Kahan's matrix
F = qr(K, ColumnNorm())                 # QR with column pivoting
@show F.R[n, n]                         # the last pivot, a cheap estimate of σ_min
@show minimum(svdvals(K))               # the true smallest singular value

On my system this gives the output

F.R[n, n] = 1.5957114308101243e-5
minimum(svdvals(K)) = 3.615747500695125e-21

So the matrix is closer to a rank-deficient matrix, by many orders of magnitude, than is revealed by R_{nn}. The example is somewhat delicate in that it is constructed so that no pivoting should occur, but depending on rounding the algorithm might pivot anyway. For example, starting with s = 2.5 prior to normalization, QR with pivoting did actually do some pivoting and achieved results comparable to the SVD when I ran it. Despite improvements to the LAPACK pivoted QR routines in other respects, the algorithm can still fail to detect near rank deficiency on Kahan's matrix.

However, while imperfect, QR with column pivoting is considered a reasonably reliable way to identify near rank deficiency in most cases. The SVD is always the most reliable, but pivoted QR isn't very likely to be off by many orders of magnitude unless you have set out to break the algorithm or are very unlucky.

For LU factorization, as already noted, partial pivoting is not at all reliable for this. (It also breaks on Kahan's matrix, and on many simpler examples.) LU with complete pivoting is much better, and you can prove bounds on the size of the pivots when factoring a nearly rank-deficient matrix. But the bounds are no better than for QR, and it's also not 100% reliable. Also, as far as I know, Julia doesn't provide access to the LAPACK routines for LU with complete pivoting, so that option is not very convenient.
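
For illustration, reusing K from the snippet above (K is upper triangular, so partial pivoting performs no row swaps and the pivots are just the diagonal of K):

Flu = lu(K)                        # LU with partial pivoting (Julia's default)
@show minimum(abs.(diag(Flu.U)))   # ≈ 1.6e-5 here, just like R[n,n] – nowhere near σ_min ≈ 3.6e-21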

4 Likes