I need to get any solution of a bunch of underdetermined systems A*x = b quickly and cheaply. My first thought is to compute a minimum-norm solution using the LQ decomposition of A (where A itself can serve as workspace to minimize allocations) with something like this:

using LinearAlgebra
m = 5; n = 10
A = randn(m, n)
b = randn(m)
x = similar(A, n)
ldiv!(x, lq!(A), b)

but this results in a DimensionMismatch error, which I think is due to the Q factor of the LQ decomposition being square.

Since I want to make this operation as cheap as possible, I considered calling LAPACK.gels! directly, but the documentation says that b is updated with the solution. How does this work in the minimum-norm case, when b has fewer entries than the solution x?

Right now my best clue is to iterate over the LQ decomposition of A (allocating more memory than necessary) and perform the matrix products manually:

using LinearAlgebra
m = 5; n = 10
A = randn(m, n)
b = randn(m)
L, Q = lq!(A)
ldiv!(LowerTriangular(L), b)
Q'b

Note: using a preallocated x and trying mul!(x, Q', b) also produces a DimensionMismatch error.
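(For what it's worth, the in-place multiplication does seem to work if I zero-pad to length n first, since Q' is conceptually n×n; a sketch of what I mean:)

```julia
using LinearAlgebra

m, n = 5, 10
A = randn(m, n)
b = randn(m)
A0, b0 = copy(A), copy(b)        # keep originals only to verify the result

F = lq!(A)                       # overwrites A with the packed factors
ldiv!(LowerTriangular(F.L), b)   # b now holds L \ b

x = zeros(n)
copyto!(x, b)                    # x = [L \ b; zeros(n - m)]
lmul!(F.Q', x)                   # apply Q' in place: x is the minimum-norm solution

@assert A0 * x ≈ b0
```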

Just do x = A \ b; it gives you the minimum-norm solution by default when A has more columns than rows. If you need to solve a sequence of right-hand sides for the same A, use the pivoted QR factorization:
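Something like the following (a sketch; on recent Julia versions, ColumnNorm() requests column pivoting, and the factorization object can then be reused for every right-hand side):

```julia
using LinearAlgebra

m, n = 5, 10
A = randn(m, n)          # full row rank almost surely
F = qr(A, ColumnNorm())  # pivoted QR, computed once

# Solve several right-hand sides with the same factorization:
for _ in 1:3
    b = randn(m)
    x = F \ b            # minimum-norm solution for full-rank A
    @assert A * x ≈ b
end
```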

It is nice to see that.
I was under the impression Julia's \ was identical to MATLAB's.
Yet MATLAB's \ isn't guaranteed to return the least-norm solution for underdetermined systems.
I looked at the documentation: both use QR, yet Julia states it uses pivoted QR. Is that the trick?
Does it cost more (performance-wise) than regular QR?

I was looking for a solution that minimizes memory allocation because I'm solving several systems in a loop. Since my systems have full rank, the cheaper LQ factorization can achieve the same result as pivoted QR.
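A quick check that the two agree on a full-rank system (building the LQ minimum-norm solution by hand here, just for comparison):

```julia
using LinearAlgebra

m, n = 5, 10
A = randn(m, n)          # full row rank almost surely
b = randn(m)

x_pqr = A \ b            # pivoted-QR minimum-norm solution
F = lq(A)
x_lq = F.Q' * [LowerTriangular(F.L) \ b; zeros(n - m)]  # LQ minimum-norm solution

@assert x_pqr ≈ x_lq
@assert A * x_lq ≈ b
```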

I was wondering if there is any function available which can use the LQ factorization itself, without constructing the L and Q matrices explicitly.

The dense QR and LQ solvers in LinearAlgebra all seem to materialize a new triangular matrix, against Lucas' wish to avoid allocation. IIUC the corresponding LAPACK routines use views into the packed factorizations, so why don't the Julia versions?

The new LQ solver works in-place when you call ldiv!.
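For instance (on recent Julia versions; the right-hand side goes in the first m entries of a length-n buffer, which ldiv! then overwrites with the minimum-norm solution, matching the storage convention documented for LAPACK's dgels):

```julia
using LinearAlgebra

m, n = 5, 10
A = randn(m, n)
b = randn(m)
A0 = copy(A)         # keep a copy only to verify the result

xb = zeros(n)        # buffer: b goes in, the solution comes out
xb[1:m] .= b

F = lq!(A)           # factor in place, reusing A as workspace
ldiv!(F, xb)         # xb now holds the minimum-norm solution

@assert A0 * xb ≈ b
```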

The pivoted-QR solver needs to call tzrzf!, which requires a new array. (Maybe the subsequent line constructing an UpperTriangular matrix could use a view, though? Better yet, maybe this factorization could be cached in the QRPivoted for subsequent calls?)

(The pivoted-QR ldiv! implementation is fairly old: it derives from the / function in #3315 based on LAPACK's xgelsy, which was ported to an "in-place" function in #5526 without changing the fact that it allocated new matrices. It would probably be useful to revisit it, cc @andreasnoack.)

It's been a while, but I guess I couldn't figure out a way to avoid the allocations. I think the LAPACK version uses workspace for this step. However, it might very well be possible to reduce the number of allocations here.