Say I have a matrix A. How would I go about finding the linearly independent columns (or rows) of A? I.e., I'm looking for a set of steps I can use to get either the indices which correspond to the linearly independent columns of the matrix, or a new matrix consisting of those columns. (I assume I could apply the same steps to transpose(A) to get the linearly independent rows; if not, please note what I need to do differently in that case.)
Can you clarify the question a little? If the matrix is singular, then there is not a unique choice of indices to obtain linearly independent column vectors.
In general, the LinearAlgebra package has a lot of useful tools.
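For example, everything used later in this thread (qr, lu, rank, svdvals, nullspace) lives in that one standard-library package:

using LinearAlgebra   # provides qr, lu, rank, svdvals, nullspace, ...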
According to this reference, compute the QR factorization:
Q, R = qr(A)
The columns of A are linearly independent if and only if R has no zero diagonal entries.
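A minimal sketch of that check (the explicit tolerance is my own addition; in exact arithmetic you would test for exact zeros):

using LinearAlgebra

A = rand(4, 3)   # any example matrix
Q, R = qr(A)
# the columns of A are linearly independent iff no diagonal entry of R is
# (numerically) zero; the 1e-12 cutoff here is an arbitrary illustration
all(d -> abs(d) > 1e-12, diag(R))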
To get the linearly independent columns of a matrix A, use the QR decomposition with pivoting:
julia> using LinearAlgebra
julia> A = [1.0 -2.0 -4.0  -7.0;
            2.0  4.0  0.0  -6.0;
            3.0  1.0 -5.0 -14.0;
            0.0  2.0  2.0   2.0]
4×4 Matrix{Float64}:
 1.0  -2.0  -4.0   -7.0
 2.0   4.0   0.0   -6.0
 3.0   1.0  -5.0  -14.0
 0.0   2.0   2.0    2.0
julia> qra = qr(A, Val(true))
QRPivoted{Float64, Matrix{Float64}, Vector{Float64}, Vector{Int64}}
Q factor:
4×4 LinearAlgebra.QRPackedQ{Float64, Matrix{Float64}, Vector{Float64}}:
 -0.414644   0.512849     0.746547   0.0878822
 -0.355409  -0.736769     0.361409  -0.44748
 -0.829288  -0.00361161  -0.489788   0.269026
  0.11847   -0.440617     0.268623   0.84833
R factor:
4×4 Matrix{Float64}:
 16.8819  -1.1847    6.04196      -3.61333
  0.0     -4.85762  -2.91457      -0.971524
  0.0      0.0       1.51007e-15  -9.99909e-16
  0.0      0.0       0.0          -4.08126e-17
permutation:
4-element Vector{Int64}:
 4
 2
 3
 1
The matrices in this decomposition are accessed as qra.Q, qra.R, and qra.P, and the permutation vector as qra.p.
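A quick way to see how the matrix P and the vector p encode the same reordering (right-multiplying by P permutes the columns of A):

julia> A*qra.P == A[:, qra.p]
true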
In theory, this is a QR decomposition with pivoting, A = Q*R*P', where P is the permutation matrix defined by the permutation returned above. Since a permutation matrix is orthogonal, its inverse is its transpose, P^{-1} = P'. Hence the product Q*R is the QR decomposition of the matrix A*P (after multiplying on the right by P, the inverse of P'), i.e. the linearly independent columns of the matrix A are the columns of A*P given by the first two coordinates of the permutation vector:
julia> (A*qra.P)[:, qra.p[1:2]]
4×2 Matrix{Float64}:
 1.0  -2.0
 2.0   4.0
 3.0   1.0
 0.0   2.0
How do you know to use just the first two columns of A*P? Is that from looking at the diagonal of R, as per the previous comment?
Because of roundoff errors in floating-point arithmetic, this can be a tricky question to give a precise answer to, similar to the question of trying to compute the rank of a matrix in finite precision.
For example, the vectors [1, 0] and [1, 1e-15] are nearly parallel, but are technically linearly independent in infinite precision. But what if the 1e-15 resulted from a roundoff error in some other calculation, and it would actually have been zero if you had done the calculation exactly?
(The nullspace function can tell you what linear dependencies exist up to some specified tolerance.)
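For instance, with a matrix built from those two nearly-parallel vectors (my own illustrative example):

using LinearAlgebra

B = [1.0 1.0;
     0.0 1e-15]          # columns are [1, 0] and [1, 1e-15]
nullspace(B)             # at the tight default tolerance the columns count as independent
nullspace(B, atol=1e-12) # at this looser tolerance the near-dependency appears as a null vector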
Can you give some more context on what you are trying to accomplish?
Oh, not the first two columns of A*P. You said the columns selected by the first two elements of the permutation vector. So more generally, if I'm understanding correctly, I would want to use the columns of A*P corresponding to the first rank(A) elements of the permutation vector.
The first two entries on the main diagonal of the matrix R are significantly different from zero (the other two are at roundoff level). Hence the matrix A*P has two independent columns, and these columns are

(A*qra.P)[:, qra.p[1:2]]
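A quick way to count the significantly nonzero diagonal entries (the 1e-8 cutoff is an arbitrary choice for illustration):

julia> count(d -> abs(d) > 1e-8, diag(qra.R))
2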
I am trying to follow a procedure outlined in this paper on the third page (Definition 2), which starts with taking a matrix C and reordering the rows so that the first rank(C) rows are linearly independent. From there, I need the submatrix consisting of those first rank(C) rows in order to construct other matrices and proceed.
Wouldn't A*P already be permuted so that the first rank(A) columns are linearly independent? Why do I need to index A*qra.P with the permutation vector? I would expect what I want is either (A*qra.P)[:, 1:rank(A)] or A[:, qra.p[1:rank(A)]], and that those two should be equivalent.
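For what it's worth, on the 4×4 example above the two expressions do agree, since A*qra.P is exactly A[:, qra.p]:

julia> (A*qra.P)[:, 1:2] == A[:, qra.p[1:2]]
true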
It also depends how much performance you need… if this is not a performance-critical component of the algorithm, you could do something kind of silly / naive like

using LinearAlgebra: rank

function get_independent_col_idxs(A)
    idxs = empty(axes(A, 2))
    for ix ∈ axes(A, 2)
        cols = [idxs; ix]
        # keep column ix only if it is independent of the columns kept so far
        rank(A[:, cols]) == length(cols) && push!(idxs, ix)
    end
    return idxs
end
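A quick check on a small made-up matrix (column 2 is twice column 1, so it should be skipped):

julia> get_independent_col_idxs([1.0 2.0 3.0;
                                 2.0 4.0 5.0])
2-element Vector{Int64}:
 1
 3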
It's exactly how I explained. A*qra.P = qra.Q*qra.R is the usual QR decomposition of the matrix A*qra.P. But we need the independent columns not of this product, but of A. To get the independent columns of A, just "unpermute" the columns in A*P, i.e. take the columns p[1:2].
Please search the web for QR decomposition with column pivoting to familiarize yourself with this method.
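Concretely, with the permutation vector [4, 2, 3, 1] shown earlier, taking the columns p[1:2] of A selects original columns 4 and 2:

julia> A[:, qra.p[1:2]]
4×2 Matrix{Float64}:
  -7.0  -2.0
  -6.0   4.0
 -14.0   1.0
   2.0   2.0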
I would believe you, but I have an example that I've been playing with (it's a 40×70 matrix, so I won't post it here) where rank(A) is 40 but rank((A*qra.P)[:, qra.p[1:rank(A)]]) (as you suggest) is 32, which doesn't seem correct. The submatrix selected shouldn't drop rank. On the other hand, (A*qra.P)[:, 1:rank(A)] and A[:, qra.p[1:rank(A)]] both give me rank-40 matrices (which match each other), which seems more believable.
Yes, I think you can use something like:

using LinearAlgebra

function independent_cols(A; rtol=size(A,2)*eps(eltype(A)))
    QR = qr(A, ColumnNorm())
    atol = rtol * abs(QR.R[1,1])
    # use the pivoted QR in place of rank(A) (which computes the SVD);
    # note R has only min(m,n) diagonal entries
    k = min(size(A)...)
    r = something(findfirst(i -> abs(QR.R[i,i]) <= atol, 1:k), k + 1) - 1
    return QR.p[1:r]
end

to get a list of the independent columns up to some tolerance.
PS. For the algorithm in the linked paper, note that the expression R_1 = C_1^T (C_1 C_1^T)^{-1} is a badly conditioned way to compute the pseudo-inverse of C_1; you would generally be better off using the QR factorization (of C^T or C_1^T; you already have the former from computing the independent rows of C, so you might as well re-use it) or the SVD for this. In general, the formulations in that paper appear oriented towards exact arithmetic and seem like they would benefit from some re-formulation using the SVD or pivoted QR.
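To illustrate the conditioning point, a small made-up sketch (C1 and b are arbitrary; pinv is the SVD-based pseudo-inverse from LinearAlgebra):

using LinearAlgebra

C1 = [1.0 2.0 3.0; 4.0 5.0 6.0]   # full row rank, 2×3
b  = [1.0, 2.0]

x_bad  = C1' * ((C1 * C1') \ b)   # C1^T (C1 C1^T)^{-1} b: amplifies errors by cond(C1)^2
x_good = pinv(C1) * b             # SVD-based application of the pseudo-inverse
x_bad ≈ x_good                    # true here, but the two diverge when C1 is ill-conditioned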
PPS. Seems like rank(QR) should be implemented. I filed an issue: rank(::QRPivoted) method? · Issue #53214 · JuliaLang/julia · GitHub
To check it, just take a matrix in which 3 columns are independent vectors and the other columns are linear combinations of those three. I'm out and writing on my phone, so I cannot give a more convincing example.
For example, these matrices should both have rank 3:
julia> A = rand(10,3);
julia> B = [A[:,1] A[:,1]+A[:,2] A[:,3] A[:,2] A[:,1]+A[:,2]+A[:,3]];
julia> independent_cols(A) # from my post above
3-element Vector{Int64}:
1
3
2
julia> independent_cols(B)
3-element Vector{Int64}:
5
1
2
Note that which independent columns it picks, and in which order, are not necessarily the first 3. There are multiple choices, depending on the random entries of A, and it picks the "most" independent according to a greedy column-norm pivoting strategy. But it is clearly returning valid choices.
LU decomposition might be enough (as the orthogonality of QR is not really used):

using LinearAlgebra

function independent_cols_LU(A; atol = size(A,2)^2 * eps(float(eltype(A))))
    L, U, p = lu(A'; check=false)
    # keep the columns of A whose pivots on the diagonal of U are numerically nonzero
    return p[findall(.!isapprox.(diag(U), 0; atol))]
end
For example:
julia> A = 1.0*rand(1:10, 40,40);
julia> A[:, 33] = A[:, 32] + A[:,34];
julia> join(sort(independent_cols_LU(A)),',')
"1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,35,36,37,38,39,40"
(Note column 34 is missing, which is consistent with the construction: any one of columns 32, 33, and 34 could have been dropped.)
On the A above, LU benchmarked 6x faster than QR.
Note that to get independent rows it is even simpler: replace A' in the function with A.
P.S. Not thoroughly tested!
This is known to fail for some matrices with the standard partial-pivoting LU. In particular, Peters and Wilkinson (1970) give the following example matrix:
PW(n) = Diagonal(ones(n)) - triu(ones(n,n), 1)
which has 1 on the diagonal and -1 above the diagonal. It is already upper triangular, so LU factorization does nothing: diag(lu(PW(n)).U) == ones(n). However, it is numerically ill-conditioned for large enough n.
For example, PW(50) has a condition number around 10^{16} because it has a single small singular value, leading to a numerical rank of 49.
julia> cond(PW(50))
1.1606586600070784e16
julia> show(IOContext(stdout, :limit=>true, :displaysize=>(15,80)), "text/plain", svdvals(PW(50)))
50-element Vector{Float64}:
30.91044380772692
10.394699940469549
6.344808050306397
4.6452130160333915
3.726992461199156
⋮
1.5013383776136788
1.500749471046912
1.500332036431184
1.5000828505183934
2.663181249820547e-15
The SVD-based rank function correctly determines the numerical rank of 49, as does the QR-based independent_cols function above, but the independent_cols_LU function incorrectly finds a rank of 50:
julia> rank(PW(50))
49
julia> length(independent_cols(PW(50)))
49
julia> length(independent_cols_LU(PW(50)))
50
There are apparently more reliable rank-revealing variants of LU based on different pivoting strategies (Pan, 2000), but I don’t think they are implemented in LAPACK?
There are cases where rank-revealing QR based on column-norm pivoting fails, too, but an improved QR pivoting algorithm described in that link was included in LAPACK 3.1.0 in 2006. The gold standard is still SVD, of course.
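For reference, a minimal sketch of that gold standard, i.e. the numerical rank from the singular values (essentially what rank(A) computes; the default tolerance here is my own rough choice):

using LinearAlgebra

function numerical_rank(A; rtol = minimum(size(A)) * eps(float(eltype(A))))
    σ = svdvals(A)
    # count singular values above a threshold relative to the largest one
    return count(>(rtol * first(σ)), σ)
end

On PW(50) this agrees with rank and reports 49.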
So what is the rank of PW(50)? (I mean, the value 49 means one column gets dropped as a linear combination of the others, which in the exact, non-numerical sense it can't be.)
Anyway, if the OP's use case does not run into such matrices, perhaps the LU speed is worth it.
In exact arithmetic, the rank is 50. But you can approximately reconstruct the 1st column from the subsequent 49 columns to about 16 digits of accuracy, so in Float64 precision it is essentially indistinguishable from a rank-49 matrix. We say that the "numerical rank" is 49.
The problem is that LU with the standard pivoting strategy is unreliable: it is not clear when it will work and when it won't. Is the extra speed worth the risk?