The last comparison makes sense, because one needs a pivoted QR in the rank-deficient case.
Perhaps there is a kwarg in qr(::SparseMatrixCSC) that I can use to get the same result as in the dense, fully pivoted case, without an intermediate conversion of A to a dense matrix.
You have a 12 \times 10 matrix A of rank 5, so the least-squares problem \min_x \Vert Ax - b\Vert_2 (which is what A \ b and your other expressions solve) has infinitely many solutions.
For dense matrices, Julia’s A \ b (along with qr(A) \ b and pinv(A) * b) computes the minimum-norm solution to the least-squares problem.
However, for sparse matrices, Julia’s (SuiteSparse’s) A \ b (and qr(A) \ b) computes a sparse “basic” solution (one with at least n − rank(A) zero entries), rather than the minimum-norm one.
In your case, you’ll notice that Matrix(A) \ b has 4 zero entries and norm ≈ 1.57, whereas A \ b has 5 zero entries and norm ≈ 2.62.
This behavior is documented, for example in the docstring for \.
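For instance, a minimal sketch with hypothetical random data (not your actual A):

using LinearAlgebra, SparseArrays

A = sparse(randn(12, 5) * randn(5, 10))   # 12×10 with rank 5
b = randn(12)

xd = Matrix(A) \ b   # dense path: minimum-norm least-squares solution
xs = A \ b           # sparse path (SPQR): sparse "basic" solution

norm(A * xd - b) ≈ norm(A * xs - b)   # both minimize the residual ...
norm(xd) ≤ norm(xs)                   # ... but only xd has minimum norm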
Many thanks! I should have read the documentation more carefully. Could you have a look at the logic of the following? I did some partial testing, and it seems to return the correct solution.
using LinearAlgebra, SparseArrays

# Compute the minimum-norm (pseudoinverse) solution of the least-squares problem
function pinvsol(A::SparseMatrixCSC, b::AbstractVector)
    m, n = size(A)
    F = qr(A)          # SuiteSparse.SPQR.QRSparse
    rnk = rank(F)
    if rnk == n        # full column rank: the QR solution is already minimum-norm
        return F \ b
    end
    # rank-deficient case: A[:, F.pcol] = Q * R with R = [R11 R12; 0 0]
    R11 = F.R[Base.OneTo(rnk), Base.OneTo(rnk)]
    R12 = F.R[Base.OneTo(rnk), rnk+1:end]
    S = sparse(R11 \ R12)
    xb = R11 \ (F.Q[:, Base.OneTo(rnk)]' * b[F.prow])   # basic solution (x2 = 0)
    p = n - rnk
    # choose x2 to minimize ||x1||^2 + ||x2||^2, where x1 = xb - S*x2
    x2 = [S; -sparse(I, p, p)] \ [xb; zeros(p)]
    x1 = xb - S * x2
    x = [x1; x2]
    x[F.pcol] = x      # undo the column permutation
    return x           # minimum-norm least-squares solution for sparse A
end
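For reference, the kind of partial test I had in mind (hypothetical random data; the tolerance is arbitrary):

A = sparse(randn(12, 5) * randn(5, 10))   # rank-5 test matrix
b = randn(12)
x = pinvsol(A, b)

# a least-squares solution satisfies the normal equations A'(Ax - b) = 0,
# and the minimum-norm one should match the dense pseudoinverse solution:
isapprox(norm(A' * (A * x - b)), 0; atol = 1e-8)
x ≈ pinv(Matrix(A)) * b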
Unfortunately, this creates a dense matrix. (F.Q is not stored explicitly, but as a composition of sparse or low-rank operations so that you can compute F.Q times a vector efficiently. Trying to pull out a subset of the columns of Q discards this structure.)
Instead, I would compute (F.Q' * b[F.prow])[1:rnk], which should be equivalent and efficient without constructing a dense matrix.
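Something along these lines (a sketch reusing the names from your code):

qtb = (F.Q' * b[F.prow])[1:rnk]   # Q' applied as an operator; no dense Q is formed
xb  = R11 \ qtb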
Note that S is also not a sparse matrix in general, since inv(R11) is typically not sparse. Usually you want to compute the action of such matrices on vectors rather than forming the matrix itself. But it may be okay for your purposes if you are only interested in cases where the rank is very low, so that a dense inverse of R11 is acceptable?
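For example, a minimal sketch of the action-on-vectors approach (applyS is a hypothetical helper name, not part of any API):

using LinearAlgebra, SparseArrays

# apply S = R11 \ R12 to a vector with one sparse triangular solve,
# without ever forming the (generally dense) matrix inv(R11) * R12:
applyS(R11, R12, v) = UpperTriangular(R11) \ (R12 * v)

# e.g. the update x1 = xb - S*x2 becomes
x1 = xb - applyS(R11, R12, x2)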
I’ve just skimmed your code looking for common sparse-matrix gotchas; I haven’t really looked at the structure of your algorithm. Note that there are some papers on minimum-norm solutions with sparse matrices that may be helpful, for example “Solution of Sparse Underdetermined Systems of Linear Equations” (1984).
Many thanks for your help. The logic of this implementation follows the book by Björck (Section 2.7.4).
This is actually an oversight on my part. Generally, I would avoid this if possible.
I had a look at the paper you sent, and it appears that the complete orthogonal decomposition is a good candidate to implement. Could you outline the “best” (simplicity vs. speed) approaches to this problem that people have come up with since Björck’s book?