Hello,

I am working on the orthogonal matching pursuit (OMP) algorithm to select a sparse set of features to solve Ax = b. In this algorithm, we greedily add a new column (i.e. feature) at each step. The residual at step k is r^{(k)} = A^{(k)}x^{(k)} - b, where A^{(k)} contains the columns of A selected up to step k.
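For context, a minimal (deliberately naive) OMP loop might look like the sketch below. The function name `omp_naive` and its structure are illustrative, not from any package; note it uses the residual b - Ax, which differs from the r^{(k)} above only by sign. The point is the full least-squares re-solve at every step, which is exactly the per-step cost the question is about avoiding.

```julia
using LinearAlgebra

# Naive OMP sketch: at each step, pick the column of A most correlated
# with the current residual, then re-solve the least-squares problem on
# the selected columns from scratch (the costly step).
function omp_naive(A, b, k)
    S = Int[]                     # indices of selected columns
    r = copy(b)                   # residual (sign convention: b - Ax)
    for _ in 1:k
        j = argmax(abs.(A' * r))  # most correlated column
        push!(S, j)
        x = A[:, S] \ b           # dense least-squares solve, repeated each step
        r = b - A[:, S] * x
    end
    return S, r
end
```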

We want to avoid solving the least-squares problem A^{(k)}x^{(k)} = b from scratch at each step just to compute the residual norm \delta^{(k)} = ||r^{(k)}||_2. To do so, Baptista et al. sequentially update a QR factorization A^{(k)} = Q^{(k)} R^{(k)} and use it to compute \delta^{(k)} efficiently (equations (14)–(18) of their paper).
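One way the column-append update can be sketched is with a single Gram–Schmidt step against the current Q; I am not claiming this matches equations (14)–(18) in detail, and `qr_append` below is a hypothetical helper, not from any package. The useful identity is that for the least-squares residual, \delta^2 = ||b||_2^2 - ||Q^T b||_2^2, so each appended orthonormal column q simply shrinks \delta^2 by (q^T b)^2, with no new solve:

```julia
using LinearAlgebra

# Append column a to the thin QR factorization A = Q*R via one
# Gram–Schmidt step:
#   w = Q'a,  v = a - Q*w,  rho = ||v||
# The new Q gains the column v/rho; the new R gains the column [w; rho].
# Assumes a is linearly independent of the columns of Q (rho > 0).
function qr_append(Q, R, a)
    w = Q' * a
    v = a - Q * w
    rho = norm(v)
    Qnew = hcat(Q, v / rho)
    Rnew = [R w; zeros(1, size(R, 2)) rho]
    return Qnew, Rnew
end
```

After each append, the residual norm update is just `delta2 -= (Qnew[:, end]' * b)^2`, so \delta^{(k)} comes for free with the factorization update.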

There are a few options in Julia for updating a QR factorization when a new column is appended to the right of A^{(k)}:

- `QRupdate.jl` (https://github.com/mpf/QRupdate.jl), but this one only updates the R factor (not Q), and the R-only update squares the condition number.

- `GeneralQP.jl` (https://github.com/oxfordcontrol/GeneralQP.jl/blob/master/src/linear_algebra.jl), but it does not seem to be maintained anymore (last commit in June 2019).

Is there a better option for performing this QR update?