The matrix is singular. In Ax=b terms, the right-hand-side vector b is in the range of A, so solutions to Ax=b exist, but they are not unique. Different algorithms will produce different answers x, each of which satisfies Ax=b.
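A minimal NumPy sketch of this non-uniqueness (the 2×2 matrix here is just an illustrative example, not the OP's matrix):

```python
import numpy as np

# A singular system: rank(A) = 1, but b lies in the range of A,
# so Ax = b has infinitely many solutions.
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([2.0, 4.0])

# Two different answers produced by different approaches:
x_pinv = np.linalg.pinv(A) @ b    # minimum-norm solution: [1, 1]
x_basic = np.array([2.0, 0.0])    # a "basic" solution with a zero entry

# Both satisfy Ax = b:
print(np.allclose(A @ x_pinv, b))   # True
print(np.allclose(A @ x_basic, b))  # True
```

The pseudoinverse picks out the solution of smallest 2-norm; other solvers are free to return any other solution on the same affine line.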
Why do you think it is not a good idea?
Computation-wise it might be better to use direct methods to find the solution.
But accuracy-wise, and certainly if one needs the least-norm solution of the least-squares problem, there is nothing wrong with it.
By the way, MATLAB has the function lsqminnorm() which uses the complete orthogonal decomposition (COD) to compute the minimum norm least squares solution.
If an equivalent is available in Julia, one could use it as well, as it should be faster than pinv(), which uses the SVD.
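The same COD-vs-SVD comparison can be sketched in SciPy terms (an assumption for illustration: LAPACK's gelsy driver, which uses a complete orthogonal factorization, stands in for lsqminnorm, and gelsd, which uses the SVD, stands in for pinv):

```python
import numpy as np
from scipy.linalg import lstsq

# Rank-deficient least-squares problem: A is 6x8 with rank 4.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 8))
b = rng.standard_normal(6)

x_cod, *_ = lstsq(A, b, lapack_driver="gelsy")  # complete orthogonal decomposition
x_svd, *_ = lstsq(A, b, lapack_driver="gelsd")  # SVD-based

# Both drivers return the minimum-norm least-squares solution (up to rounding):
print(np.allclose(A @ x_cod, A @ x_svd))
print(np.isclose(np.linalg.norm(x_cod), np.linalg.norm(x_svd)))
```

The COD route avoids the full SVD, which is where the speed advantage comes from.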
SuiteSparseQR, which is used when A is sparse, returns a basic least-squares solution, i.e. one with many zeros (hence the zeros that you observe). LAPACK, used in the dense case, returns the minimum-norm solution.
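The basic-vs-minimum-norm distinction can be sketched in NumPy (the 2×3 matrix here is just an illustrative example):

```python
import numpy as np

# Rank-deficient A: column 3 = column 1 + column 2, so rank(A) = 2.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0]])
b = np.array([3.0, 9.0])

# A "basic" solution uses only r = rank(A) columns, leaving the rest zero.
x_basic = np.zeros(3)
x_basic[:2] = np.linalg.solve(A[:, :2], b)   # x_basic = [1, 1, 0]

# The minimum-norm solution (LAPACK, via lstsq) has all entries nonzero here.
x_min, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(A @ x_basic, b))                       # True: both solve Ax = b
print(np.allclose(A @ x_min, b))                         # True
print(np.linalg.norm(x_min) < np.linalg.norm(x_basic))   # True
```

This is why a sparse QR backend and a dense LAPACK backend can disagree on x while both returning valid solutions.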
Great, so we agree that accuracy-wise there is no advantage.
Regarding speed, well, I don’t fully agree with the analysis you linked to.
He argues that one can solve the linear system by taking advantage of the special properties of the matrix. Yet the same can be done when calculating the inverse.
Take a tridiagonal matrix, for instance. You can solve the system directly using known methods, but you can also apply the same tricks to calculate the inverse.
Solving a general N×N tridiagonal system Ax=b takes O(N) time and memory. Computing the inverse of A takes O(N²) time and memory because there are N² nonzero elements of the inverse to compute. This is better than O(N³), but still far worse than not computing the inverse at all.
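The O(N) solve is easy to demonstrate with SciPy's banded LAPACK solver (the 1D-Laplacian-style tridiagonal matrix here is just an example):

```python
import numpy as np
from scipy.linalg import solve_banded

# Tridiagonal system solved in O(N) time and memory via the banded solver,
# never forming the (dense) inverse.
N = 1000
main = 2.0 * np.ones(N)
off = -1.0 * np.ones(N - 1)
b = np.ones(N)

# Banded storage: row 0 = superdiagonal, row 1 = diagonal, row 2 = subdiagonal.
ab = np.zeros((3, N))
ab[0, 1:] = off
ab[1, :] = main
ab[2, :-1] = off

x = solve_banded((1, 1), ab, b)   # O(N) solve

# Verify against the dense matrix (built only to check the residual):
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
print(np.allclose(A @ x, b))  # True
```

The inverse of this matrix is fully dense, so merely writing it down already costs O(N²).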
For a general dense matrix, it’s true that directly computing the inverse is not terrible, maybe a factor of 2 more time than just computing the LU factorization and within a small constant factor of the accuracy. But there’s usually no compelling reason to do it, and it’s arguably a bad habit because it will lead you into trouble for sparse and structured matrices.
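The standard alternative to forming the inverse is to factor once and reuse the factorization, sketched here with SciPy (sizes are arbitrary for illustration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Factor once, solve many times -- the usual replacement for inv(A) @ B.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
B = rng.standard_normal((200, 5))    # five right-hand sides

lu, piv = lu_factor(A)               # one O(N^3) factorization
X = lu_solve((lu, piv), B)           # each subsequent solve is only O(N^2)

print(np.allclose(A @ X, B))  # True
```

For dense A the extra cost of explicitly inverting is a modest constant factor, as the post says; the habit only becomes truly costly for sparse or structured matrices.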
I totally agree with you.
But in the OP's case he had no choice but to use the pseudoinverse.
Then he was warned not to do it, just as most of us are warned in our first numerical-analysis course not to invert matrices when solving linear equations. Usually we are given the hint that this has to do with accuracy and not only with speed.
I was just pointing out that this warning might be exaggerated.
Oh, I just noticed your question. Yes, I need the solution with minimal norm. The matrix I use is the transpose of the Laplacian of a graph and it can be very large but also very sparse, so I’m convinced there has to be a better way of doing that.
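For large sparse singular systems like this, one option worth noting is a Krylov solver: LSQR started from zero converges to the minimum-norm least-squares solution without ever forming pinv. A sketch, under the assumption that a small path-graph Laplacian stands in for the OP's matrix (L is symmetric here, so L.T = L, but the transpose is kept to mirror the problem statement):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

# Path-graph Laplacian on n nodes: singular (null space = constant vector).
n = 100
e = np.ones(n)
diag = np.r_[1.0, 2.0 * np.ones(n - 2), 1.0]
L = sp.diags([-e[:-1], diag, -e[:-1]], [-1, 0, 1], format="csr")

# b sums to zero, so it lies in range(L) and the system is consistent.
b = np.zeros(n)
b[0], b[-1] = 1.0, -1.0

# LSQR iterates stay in the row space of L.T, so with x0 = 0 the limit
# is the minimum-norm solution.
x = lsqr(L.T, b, atol=1e-12, btol=1e-12)[0]

print(np.allclose(L.T @ x, b, atol=1e-8))
print(abs(x.sum()) < 1e-6)   # orthogonal to the constant null vector
```

This keeps everything sparse and matrix-free, which matters at the sizes you describe; whether it beats a sparse QR approach in practice depends on the conditioning of the Laplacian.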
As I understand it, the latest algorithms to compute minimum-norm solutions using SuiteSparseQR are found in the spqr_rank package described in this paper. In particular, section 2.7 of the paper says that the COD-based algorithm typically requires more time and memory (about a factor of 2 from fig. 6) than the newer algorithm. Both spqr_rank and the older Factorize package are BSD-licensed by Tim Davis and are part of SuiteSparse. Would be nice to port spqr_rank to Julia. (See also #20409.)