How do I proceed with QR decomposition in Julia? I tried to port some Python code, but I'm having trouble with the qr function from Julia's LinearAlgebra module: I'm running into issues with matrix sizes and dimensions.
import numpy as np
from numpy.linalg import qr, inv

X = df.values[:, :-1].flatten()                   # predictor values (df is a pandas DataFrame)
X1 = np.vstack([X, np.ones(len(X))]).transpose()  # design matrix with an intercept column
Y = df.values[:, -1]                              # response vector
Q, R = qr(X1)
b = inv(R).dot(Q.transpose()).dot(Y)              # b = R^{-1} Q^T Y
print(b)
Hello, thanks for your reply. I tested it, but I'm still having trouble porting my code. I'll post the entire code here and the output.
import pandas as pd
from numpy.linalg import qr, inv
import numpy as np

df = pd.read_csv("https://bit.ly/3goOAnt", delimiter=",")

X = df.values[:, :-1].flatten()                   # predictor values
X1 = np.vstack([X, np.ones(len(X))]).transpose()  # design matrix with an intercept column
Y = df.values[:, -1]                              # response vector

Q, R = qr(X1)
b = inv(R).dot(Q.transpose()).dot(Y)              # b = R^{-1} Q^T Y
print(b)
Python did some things behind the scenes that I can't figure out how to reproduce in the code I'm porting. Maybe the transpose is not necessary, but it's hard to tell. I'm debugging both versions at the same time; I'm almost sure there is a transpose somewhere it shouldn't be.
It looks like you are trying to solve the linear system Ay = b. If so, the \ operator is probably what you are looking for, unless of course you need the QR factors again later on.
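For instance, here's a minimal sketch of a direct port using \ (the CSV.jl/DataFrames.jl usage and the one-predictor-column layout are assumptions based on your Python snippet):

using CSV, DataFrames, LinearAlgebra, Downloads

df = CSV.read(Downloads.download("https://bit.ly/3goOAnt"), DataFrame)
X1 = [df[:, 1] ones(nrow(df))]   # design matrix: predictor plus intercept column
Y = df[:, end]                   # response vector
b = X1 \ Y                       # least-square solution (pivoted QR under the hood)
println(b)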
This originates from two notions of the QR decomposition of rectangular matrices (the "full-size" and the "economy-size"). I didn't look at the snippet you posted, but that's something to keep in mind when comparing with other languages (I'm not sure what Python does).
Python needs to transpose the stacked matrix to generate a 10×2 one:
X1 = np.vstack([X, np.ones(len(X))]).transpose()
Julia doesn't need that, as [x y] already creates it with the right orientation. That is one issue I found here, checking the values one by one. When I finish I'll post the code here.
But it's interesting how NumPy deals with matrices... you need to be aware of these caveats.
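To illustrate the shape difference I was checking (a small sketch with dummy data):
julia> x = rand(10);
julia> size([x ones(10)])   # hcat builds the 10×2 design matrix directly
(10, 2)
julia> size(vcat(x', ones(10)'))   # a vstack-style construction needs the transpose
(2, 10)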
Even if you want the QR factors too, you can take the factorization object and use \ for the least-square solution:
julia> using LinearAlgebra
julia> X = rand(100,3); y = rand(100);
julia> F = qr(X);
julia> F \ y ≈ X \ y ≈ F.R \ (Matrix(F.Q)' * y) ≈ F.R \ (F.Q' * y)[1:size(X,2)]
true
One of the tricky things about the QR factorization is that there are two variants when the m \times n matrix X is "tall" (m > n), as is typical in least-square problems: the "thin" variant X = \hat{Q}\hat{R}, in which \hat{Q} is the same "tall" m \times n size as X and \hat{R} is square, and the "full" variant X = QR, in which Q is a square unitary m \times m matrix and R is "tall" m \times n. If you do F = qr(X) then:
F.Q represents the square unitary matrix Q from the "full" QR factorization. (This is also what you get from Q, R = qr(X).) Technically, it doesn't actually store this matrix explicitly; instead, it has a compact representation of the linear operator Qx, so that F.Q * x and/or F.Q' * y (= Q^* y) can be computed quickly. (Essentially, this is what the underlying numerical algorithms "natively" give you.)
Matrix(F.Q) gives you \hat{Q}, the "thin" QR factor, as an explicit matrix. Computing this is not free, however (it basically works by multiplying F.Q by the first n columns of the identity matrix I), so you generally avoid constructing it if you can.
F.R always gives you the square upper-triangular matrix \hat{R} from the thin QR. (This is also what you get from Q, R = qr(X).)
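A quick way to see these sizes (a small illustrative sketch):
julia> X = rand(100, 3); F = qr(X);
julia> size(F.Q)   # represents the full, square 100×100 Q
(100, 100)
julia> size(Matrix(F.Q))   # materializes the thin 100×3 \hat{Q}
(100, 3)
julia> size(F.R)   # the square 3×3 upper-triangular \hat{R}
(3, 3)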
When you find the least-square solution from QR, what you want is \hat{R}^{-1} \hat{Q}^* y using the thin QR factors. In particular, this means that if you want to compute \hat{Q}^* y you cannot simply use F.Q' * y, and instead need to either use something like Matrix(F.Q)' * y or, more efficiently, (F.Q' * y)[1:n].
It's generally better to use F \ y or, even better, X \ y (which uses a pivoted QR factorization automatically) to solve least-square problems:
F \ y will deal with the size of the Q matrix for you efficiently. It will also do the upper-triangular solve \hat{R}^{-1} \text{(vector)} efficiently: since this is an upper-triangular system, you can solve it in O(n^2) operations using backsubstitution. (Get out of the habit of inverting matrices! inv(A) * b is almost never going to be the best way to solve a linear system.)
X \ y is even better, because it defaults to a pivoted QR factorization (corresponding to F = qr(X, ColumnNorm())), which deals better with badly conditioned X (e.g. nearly parallel columns). That's not what qr(X) gives you by default, because it involves an extra permutation matrix compared to ordinary QR, i.e. it is a factorization XP = QR where P is a permutation. See also Efficient way of doing linear regression - #33 by stevengj
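For example, a minimal sketch checking that the pivoted factorization gives the same least-square solution:
julia> X = rand(100, 3); y = rand(100);
julia> Fp = qr(X, ColumnNorm());   # pivoted QR: X[:, Fp.p] ≈ Fp.Q * Fp.R (up to rounding)
julia> Fp \ y ≈ X \ y
true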
Even if you want to explicitly compute \hat{R}^{-1} z for some z, it makes my eyes water to see people write inv(R) * z (in Julia) or inv(R).dot(z) (in Python). Get out of the habit of inverting matrices. At the very least, do R \ z (in Julia) or scipy.linalg.solve(R, z) (in Python). Better yet, take advantage of the fact that R is upper-triangular by doing UpperTriangular(R) \ z (in Julia) or scipy.linalg.solve_triangular(R, z) (in Python).
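A quick check that the triangular solve matches the explicit inverse (dummy data; the + 3I just keeps R well conditioned):
julia> R = UpperTriangular(rand(3, 3) + 3I); z = rand(3);
julia> R \ z ≈ inv(R) * z   # same answer, but backsubstitution is O(n^2) and never forms inv(R)
true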
This is even worse than just inverting R (which you should also generally avoid): by putting the parentheses like this, you are doing a matrix-matrix multiplication followed by a matrix-vector multiplication, rather than two matrix-vector multiplications. (AB)x is much slower than A(Bx) when A and B are matrices and x is a vector.
If your matrices are small and you don't care about performance, then maybe this doesn't matter, but you really want to get into the habit of thinking more carefully about this kind of thing in case your matrices ever get big. In any case, there are better ways to solve least-square problems, as I explained above.
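A concrete way to see the parenthesization difference (sizes chosen only for illustration):

using LinearAlgebra

A = rand(2000, 2000); B = rand(2000, 2000); x = rand(2000)

# (A*B)*x forms a 2000×2000 matrix product first: O(n^3) work.
# A*(B*x) does two matrix-vector products instead: O(n^2) work.
@assert (A * B) * x ≈ A * (B * x)   # identical results (up to roundoff), very different cost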
PS. That being said, inv(R) * Q' does automagically pick the right size of Q for you: it ends up corresponding to \hat{R}^{-1} \hat{Q}^* from the thin QR. But it's even easier to do QR \ Y or X \ Y, which do the right thing.
The code is just for learning Julia. I'm porting a lot of exercises I have in Python, but your tips about performance are much appreciated. I will do it like you demonstrated, because in the real world we will be doing linear regressions on large samples. Thank you.