Solution of underdetermined systems using LAPACK

I need to get any solution of a bunch of underdetermined systems A*x = b quickly and cheaply. My first thought is to compute a minimum-norm solution using the LQ decomposition of A (which can be used as workspace to minimize allocations), with something like this:

using LinearAlgebra
m = 5; n = 10
A = randn(m, n)
b = randn(m)
x = similar(A, n)
ldiv!(x, lq!(A), b)

but this results in a DimensionMismatch error, which I think is due to the square matrix Q of the LQ decomposition.

Since I want to make this operation as cheap as possible, I considered calling LAPACK.gels! directly, but the documentation says that b is updated with the solution. How does this work in the minimum norm case, when the size of b is less than the size of the solution x?

Right now my best idea is to iterate over the LQ decomposition of A (allocating more memory than necessary) and perform the matrix products manually:

using LinearAlgebra
m = 5; n = 10
A = randn(m, n)
b = randn(m)
L, Q = lq!(A)
ldiv!(LowerTriangular(L), b)  # b ← L \ b
x = Q' * b                    # pads b with zeros to length n (allocates)

Note: using a preallocated x and trying mul!(x, Q', b) also produces a DimensionMismatch error.

Is there any better way than this?

Since these systems have infinitely many solutions (a whole subspace), do you need the one with the minimum norm, or would an arbitrary one do?

It can be any solution. I chose the minimum norm for convenience with the LAPACK functions.

I assume A has full rank, but I don’t know in advance which columns of A form a nonsingular submatrix.

Just do x = A \ b — it gives you the minimum-norm solution by default when A has more columns than rows. If you need to solve a sequence of right-hand sides for the same A, use the pivoted QR factorization:

QR = qr(A, Val(true))  # qr(A, ColumnNorm()) on Julia ≥ 1.7
x = QR \ b
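For instance, factoring once and reusing the factorization for several right-hand sides might look like the following sketch (using `ColumnNorm()`, the spelling of pivoted QR on Julia ≥ 1.7; older versions use `Val(true)`):

```julia
using LinearAlgebra

m, n = 5, 10
A = randn(m, n)
F = qr(A, ColumnNorm())      # pivoted QR, factored once

for _ in 1:3
    b = randn(m)
    x = F \ b                # reuse the factorization per right-hand side
    @assert A * x ≈ b        # x solves the system (A has full row rank)
end
```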

If you search discourse for “minimum norm” you will find more discussions of these topics.


It is nice to see that.
I was under the impression Julia’s \ is identical to MATLAB’s.
Yet MATLAB’s solution isn’t guaranteed to return the least-norm solution for underdetermined systems.
I looked at the documentation; both use QR, yet Julia states it uses pivoted QR. Is that the trick?
Does it cost more (performance-wise) than regular QR?

I was looking for a solution that minimizes memory allocation because I’m solving several systems in a loop. Since my systems have full rank, the cheaper LQ factorization can achieve the same result as pivoted QR.

I was wondering if there is any function available which can use the LQ factorization itself, without constructing the L and Q matrices explicitly.

Seems like it is just a bug / missing feature. lq(A) \ b should be perfectly well defined.
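On recent Julia versions this does appear to work (the fix landed after this discussion, if I recall correctly), returning the minimum-norm solution directly:

```julia
using LinearAlgebra

A = randn(5, 10)
b = randn(5)
x = lq(A) \ b        # minimum-norm solution on recent Julia versions
@assert A * x ≈ b
```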


The dense QR and LQ solvers in LinearAlgebra all seem to materialize a new triangular matrix, against Lucas’ wish to avoid allocation. IIUC the corresponding LAPACK routines use views into the packed factorizations, so why don’t the Julia versions?
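A view-based LQ solve along those lines might look like this — a sketch, assuming L occupies the lower triangle of `F.factors` in the packed factorization:

```julia
using LinearAlgebra

m, n = 5, 10
A = randn(m, n)
b = randn(m)
A0 = copy(A)                 # keep a copy only to check the result

F = lq!(A)                   # packed LQ; A is reused as workspace
x = zeros(n)
copyto!(x, b)                # place b in the first m entries of x
# Solve L*y = b in place, viewing L inside the packed factors:
ldiv!(LowerTriangular(view(F.factors, 1:m, 1:m)), view(x, 1:m))
lmul!(F.Q', x)               # x = Q' * [y; 0], applied in place
@assert A0 * x ≈ b
```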

The new LQ solver works in-place when you call ldiv!.
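If I’m reading the current LinearAlgebra code correctly, that means the original attempt now works as written:

```julia
using LinearAlgebra

m, n = 5, 10
A = randn(m, n)
b = randn(m)
A0 = copy(A)             # keep a copy only to check the result

x = zeros(n)
ldiv!(x, lq!(A), b)      # A is consumed as workspace; x gets the minimum-norm solution
@assert A0 * x ≈ b
```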

The pivoted-QR solver needs to call tzrzf!, which requires a new array. (Maybe the subsequent line constructing an UpperTriangular matrix could use a view, though? Better yet, maybe this factorization could be cached in the QRPivoted for subsequent calls?)

(The pivoted-QR ldiv! implementation is fairly old — it derives from the / function in #3315 based on LAPACK’s xgelsy, which was ported to an “in-place” function in #5526 without changing the fact that it allocated new matrices. It would probably be useful to revisit it, cc @andreasnoack.)

It’s been a while, but I guess I couldn’t figure out a way to avoid the allocations. I think the LAPACK version uses workspace for this step. However, it might very well be possible to reduce the number of allocations here.