Sorry to come late to this thread, but I’m surprised that no one seems to have given the correct explanation here for the performance differences between the various methods to solve $\min_x \Vert Ax-b\Vert_2$:
- `x̂ = (A'A)\(A'b)` is the “normal equations” approach. (You could also use `cholesky(Hermitian(A'A))\(A'b)` to explicitly take advantage of the fact that `A'A` is SPD.) This (and variations thereof) is the fastest approach, but it is also the least accurate. In particular, it squares the condition number of `A`, so it doubles the number of digits you lose to roundoff errors and similar. You should only use this method if you know that you have a well-conditioned `A`.
- `x̂ = A \ b` is equivalent to `qr(A, Val{true}()) \ b` (spelled `qr(A, ColumnNorm()) \ b` in recent Julia versions): it uses a pivoted QR factorization. This is slower than the normal-equations approach, but it is much more accurate for badly conditioned matrices because it doesn’t square the condition number.
- `x̂ = pinv(A) * b` uses the SVD of `A` to apply the pseudo-inverse. This is the slowest method, but it gives you the most control over accuracy for ill-conditioned matrices, because you can specify a tolerance in `pinv` to regularize the problem by dropping noisy singular values. All three methods are compared numerically in the sketch after this list.
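
To make the tradeoffs concrete, here is a minimal sketch comparing the three methods on a deliberately ill-conditioned problem. The Vandermonde matrix, the sizes, and the `rtol` value are illustrative choices of mine, not anything specific from this thread:

```julia
using LinearAlgebra

# Hypothetical ill-conditioned problem: a Vandermonde matrix, whose
# condition number grows rapidly with the number of columns.
m, n = 100, 8
t = range(0, 1, length=m)
A = t .^ (0:n-1)'            # m×n Vandermonde matrix
x_exact = ones(n)
b = A * x_exact              # consistent system, so the exact LS solution is x_exact

x_ne = (A'A) \ (A'b)                     # normal equations: fastest, squares cond(A)
x_ch = cholesky(Hermitian(A'A)) \ (A'b)  # same, exploiting that A'A is SPD
x_qr = A \ b                             # pivoted QR: the default
x_pi = pinv(A; rtol=1e-12) * b           # SVD, dropping singular values below rtol

# Forward errors: the normal-equations variants typically lose about twice
# as many digits as QR/SVD here, since cond(A'A) = cond(A)^2.
@show cond(A)
for x in (x_ne, x_ch, x_qr, x_pi)
    println(norm(x - x_exact) / norm(x_exact))
end
```

If you push `n` higher, `cholesky` will eventually throw a `PosDefException` once roundoff makes `A'A` numerically indefinite, which is itself a symptom of the squared condition number.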
QR is the default choice used for `A \ b` (in Julia and many other systems) because it is reliable without being too slow.
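
A rough way to quantify the difference (a heuristic, not a sharp bound): Float64 gives you about 16 significant digits to spend, and the two approaches spend them at different rates:

```julia
# Heuristic digit-loss estimate for a least-squares solve in Float64:
κ = cond(A)
digits_lost_qr = log10(κ)       # QR / SVD work with A itself
digits_lost_ne = 2 * log10(κ)   # normal equations work with A'A, and cond(A'A) = cond(A)^2
```

So with `cond(A) ≈ 1e6`, QR leaves you roughly 10 good digits, while the normal equations leave only about 4.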