# Julia Linear System Solver (`mldivide()`)

According to Julia’s documentation, the matrix division operator `\` uses, in this case, the pivoted QR algorithm to return the minimum-norm solution.

MATLAB’s documentation says it also uses a QR solver, but it doesn’t say which kind. Moreover, MATLAB doesn’t guarantee to return the minimum-norm solution, and in most cases it doesn’t.

My questions are:

1. Does Julia return the minimum-norm solution simply because it uses pivoted QR, or is there an additional step on top of that?
2. Performance-wise, does this have any effect on the speed of the operation? Could it be faster if there were no requirement for the minimum-norm solution?

Since MATLAB is closed-source, it’s difficult to know for certain what algorithm it is using. For non-square `x = A \ b` my understanding is that it calls `lscov`, which uses QR but does not find the minimum-norm solution. Instead, it finds a “basic” solution in which `x` is as sparse as possible (by some algorithm that they don’t document, but which is probably similar to the one described here).

Julia also uses QR, but in a different way (dating back to #3315 in 2013): it implements the algorithm from LAPACK’s `xgelsy` to obtain the minimum-norm solution from the pivoted QR factorization.

So, basically, the difference lies mainly in the post-processing, i.e. how you use the factorization. I expect that the costs are pretty similar, since in both cases most of the time is spent (i) performing the QR factorization, (ii) doing a triangular solve, and (iii) multiplying by `Q'`. However, in the minimum-norm case there is some additional work in determining the rank and calling `xtzrzf`, so it is probably slower by a small factor.
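As a concrete check (a small sketch, not from the thread): for a wide matrix with full row rank, Julia’s `A \ b` agrees with the SVD-based pseudoinverse solution `pinv(A) * b`, which is the minimum-norm solution by definition:

```julia
using LinearAlgebra

# Underdetermined system: 2 equations, 4 unknowns (wide, full row rank).
A = [1.0 2.0 3.0 4.0;
     5.0 6.0 7.0 8.0]
b = [1.0, 2.0]

x = A \ b             # pivoted-QR path, minimum-norm solution
x_pinv = pinv(A) * b  # SVD-based minimum-norm solution, for comparison

norm(A * x - b)       # ≈ 0: x solves the system
norm(x - x_pinv)      # ≈ 0: x is indeed the minimum-norm solution
```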

If you have a “wide” matrix that is not rank-deficient and you want the minimum-norm solution, it will soon be faster (> 2× for large `A`) to do `lq(A) \ b` (#34350). The xgelsy algorithm with pivoted QR is more general in that it handles rank-deficient `A`, however.
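For the full-rank wide case, the LQ route looks like this (a sketch; both calls should agree up to roundoff since each returns the minimum-norm solution):

```julia
using LinearAlgebra

A = randn(100, 1000)  # wide matrix, full row rank with probability 1
b = randn(100)

x1 = A \ b            # pivoted QR: general, handles rank deficiency
x2 = lq(A) \ b        # LQ: faster for full-rank wide A, also minimum-norm

norm(x1 - x2)         # small: both give the minimum-norm solution
```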


(I have little doubt that Julia’s pivoted-QR solver could be further optimized, though, since it hasn’t really changed since the first draft in 2013. e.g. it doesn’t have any special-case code for the common situation of a full-rank `A`, which could probably use a faster method.)


If I want a solution, any solution, with the highest speed, which function should I use?

For what kind of problem?

I guess for:

1. `A` is square. No other property.
2. `A` is not square, overdetermined. No other property.
3. `A` is not square, underdetermined. No other property.
4. `A` is not square, overdetermined, full rank. No other property.

For the case where `A` is underdetermined and full rank (no other property), you wrote that the optimal choice is `lq(A)`.

If by “no other property” you mean that you don’t know whether it is rank-deficient, that is hard to answer in general. A typical default would be using pivoted QR, but you might need something fancier like an SVD-based solution to regularize your problem.
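To illustrate the pivoted-QR default (requested explicitly with `ColumnNorm()` in Julia ≥ 1.7; older versions used `qr(A, Val(true))`), here is a sketch with a deliberately rank-deficient system:

```julia
using LinearAlgebra

A = [1.0 2.0 3.0;
     2.0 4.0 6.0]          # rank-deficient: row 2 = 2 × row 1
b = [1.0, 2.0]             # consistent right-hand side (b[2] = 2 b[1])

F = qr(A, ColumnNorm())    # pivoted QR factorization
rank(A)                    # 1: the factorization's solve must cope with this
x = F \ b                  # rank-aware minimum-norm solve

norm(A * x - b)            # ≈ 0 despite the rank deficiency
```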

As I mentioned above, I’m guessing the minimum-norm solution from pivoted QR is slightly slower than getting a basic solution from pivoted QR, but we don’t currently have a function to give us a basic solution as far as I know, so you’d have to implement it yourself.
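If you wanted to roll your own, a basic solution can be extracted from the pivoted QR factorization roughly like this. This is only a sketch, assuming `A` has full row rank; `basic_solution` is a hypothetical helper, not a library function, and real code would detect the rank from the diagonal of `F.R` rather than calling `rank(A)` separately:

```julia
using LinearAlgebra

# "Basic" (sparse, not minimum-norm) solution from pivoted QR.
# ASSUMPTION: A has full row rank, so b is in the column space of A.
function basic_solution(A::AbstractMatrix, b::AbstractVector)
    F = qr(A, ColumnNorm())
    r = rank(A)                      # naive rank detection, for illustration
    # Solve R[1:r,1:r] * y = (Q'b)[1:r]; all other components stay zero.
    y = UpperTriangular(F.R[1:r, 1:r]) \ (F.Q' * b)[1:r]
    x = zeros(eltype(y), size(A, 2))
    x[F.p[1:r]] .= y                 # scatter back through the column permutation
    return x
end

A = [1.0 2.0 3.0 4.0;
     5.0 6.0 7.0 8.0]
b = [1.0, 2.0]
x = basic_solution(A, b)

norm(A * x - b)         # ≈ 0
count(!iszero, x)       # 2 nonzeros: a basic solution, not minimum-norm
```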

`A` is not square, over determined, Full rank. No other property.

If it is far from rank-deficient, i.e. it is very well conditioned, so that you are okay with losing twice as many digits to roundoff errors, you could use the normal equations, as I mentioned elsewhere. Otherwise, probably `qr(A) \ b` (i.e. non-pivoted QR).
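The two options side by side (a sketch; the normal equations square the condition number, which is why they are only safe for well-conditioned `A`):

```julia
using LinearAlgebra

# Tall, well-conditioned least-squares problem.
A = randn(1000, 50)
b = randn(1000)

x_qr = qr(A) \ b                         # non-pivoted QR: backward stable
x_ne = cholesky(Symmetric(A'A)) \ (A'b)  # normal equations: faster, but
                                         # effective condition is cond(A)^2

norm(x_qr - x_ne)   # small when cond(A) is modest
```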

For very small matrices, of course, matters change because you don’t want the overhead of LAPACK. (I’m not sure where the crossover point occurs, but certainly below 10×10 you want simple triple loops or even something unrolled like StaticArrays.)
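The “simple triple loops” idea can be sketched as plain Gaussian elimination with partial pivoting, avoiding any LAPACK call. `tiny_solve!` here is a hypothetical illustration, not a library function; in practice StaticArrays.jl does this better with unrolled, stack-allocated kernels:

```julia
# Naive Gaussian elimination with partial pivoting for tiny dense systems.
# Mutates its arguments; pass copies if you need to keep A and b.
function tiny_solve!(A::Matrix{Float64}, b::Vector{Float64})
    n = length(b)
    for k in 1:n-1
        # Partial pivoting: bring the largest remaining pivot to row k.
        p = k - 1 + argmax(abs.(@view A[k:n, k]))
        if p != k
            A[k, :], A[p, :] = A[p, :], A[k, :]
            b[k], b[p] = b[p], b[k]
        end
        for i in k+1:n
            m = A[i, k] / A[k, k]
            for j in k:n
                A[i, j] -= m * A[k, j]
            end
            b[i] -= m * b[k]
        end
    end
    # Back substitution on the upper-triangular system.
    x = similar(b)
    for i in n:-1:1
        s = b[i]
        for j in i+1:n
            s -= A[i, j] * x[j]
        end
        x[i] = s / A[i, i]
    end
    return x
end

A = [2.0 1.0; 1.0 3.0]
b = [3.0, 5.0]
x = tiny_solve!(copy(A), copy(b))   # [0.8, 1.4]
```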
