Efficient IRLS regression

#1

I was inspecting GLM.jl's IRLS implementation, but I haven't figured out from the code how many matrix decompositions it performs per fit.

For given A::AbstractMatrix{<:Real}, b::AbstractVector{<:Real}, and wts::AbstractWeights, the weighted linear system Ax = b with weight matrix W = Diagonal(w) has the least-squares solution x = (A^{\top}WA)^{-1}A^{\top}Wb.

Suppose I compute F::QRCompactWY = qrfact(sqrt.(w) .* A) and solve x = F \ (sqrt.(w) .* b). For new weights w_{2}, can I solve the new system by reusing the already computed F::QRCompactWY, or do I need to compute a new factorization every iteration? This study suggests the best option would be a Q-less decomposition.
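To make the question concrete, here is the setup sketched in NumPy rather than Julia (the random data and sizes are illustrative, not from any package): the QR route factorizes sqrt.(w) .* A, while the normal-equations route only needs A^{\top}WA, which is what the "Q-less" idea exploits. Both give the same solution for a fixed w.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))   # tall-and-skinny design matrix
b = rng.normal(size=100)
w = rng.uniform(0.5, 2.0, size=100)  # positive weights

# Route 1: QR of sqrt(W)*A, as in the question.
sA = np.sqrt(w)[:, None] * A
Q, R = np.linalg.qr(sA)
x_qr = np.linalg.solve(R, Q.T @ (np.sqrt(w) * b))

# Route 2: normal equations, x = (A^T W A)^{-1} A^T W b.
# Only the small k-by-k matrix A^T W A is factorized; Q is never needed.
x_ne = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))

print(np.allclose(x_qr, x_ne))  # the two routes agree
```

Note that sqrt.(w) .* A changes in every entry when w changes, which is the crux of the reuse question below.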


#2

I am not sure there is a way to re-use the QR factorization, since stretching the coordinates of orthogonal vectors by different amounts destroys their orthogonality. But how big is A^{\top}WA? If it is small, you can just factorize/invert it each iteration, possibly using StaticArrays.jl if it is of fixed size. If it is large and the weights are not changing aggressively, then consider IterativeSolvers.jl from the second iteration onwards, since the previous solution would be a good starting point for the next one. One advantage of iterative solvers is that you don't have to multiply out the matrices; you can just define a linear operator that applies the matrices one by one to a vector, at the cost of some loss of accuracy.
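The warm-started iterative approach can be sketched as follows, again in NumPy/SciPy for illustration (in Julia this would be IterativeSolvers.jl's `cg!` with a LinearMap). The reweighting formula here is a hypothetical L1-style rule, just to make the weights change between iterations; the point is the matrix-free operator and the `x0` warm start.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)
A = rng.normal(size=(500, 20))
b = rng.normal(size=500)

x = np.zeros(20)
for it in range(5):
    # Hypothetical IRLS reweighting (illustrative, not GLM.jl's rule).
    w = 1.0 / (1.0 + np.abs(A @ x - b))
    # Matrix-free operator v -> A^T (W (A v)): the matrices are applied
    # one by one; A^T W A is never formed explicitly.
    op = LinearOperator((20, 20), matvec=lambda v: A.T @ (w * (A @ v)))
    rhs = A.T @ (w * b)
    # Warm start from the previous iterate: if the weights change slowly,
    # CG needs only a few iterations after the first solve.
    x, info = cg(op, rhs, x0=x)
```

Since A^{\top}WA is symmetric positive definite for positive weights, conjugate gradients is the natural choice here.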


#3

I found this cool post with a few variants that achieve efficient IRLS.
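For reference, the Q-less variant mentioned in #1 amounts to an IRLS loop that re-forms the small matrix A^{\top}WA each iteration and takes a Cholesky solve, never an m-by-k QR. A minimal NumPy sketch, assuming a standard L_p reweighting rule (the function name and the epsilon smoothing are illustrative choices, not from GLM.jl):

```python
import numpy as np

def irls(A, b, p=1.0, iters=20, eps=1e-8):
    """Hypothetical IRLS sketch for min ||Ax - b||_p.
    Each iteration forms A^T W A and does a Cholesky ("Q-less") solve;
    only a k-by-k factorization is computed per iteration."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # unweighted start
    for _ in range(iters):
        r = A @ x - b
        w = (np.abs(r) + eps) ** (p - 2)       # standard L_p reweighting
        AtWA = A.T @ (w[:, None] * A)          # small k-by-k, SPD
        L = np.linalg.cholesky(AtWA)
        y = np.linalg.solve(L, A.T @ (w * b))
        x = np.linalg.solve(L.T, y)
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(200, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true + 0.01 * rng.normal(size=200)
x = irls(A, b)  # recovers x_true up to the noise level
```

The per-iteration cost is dominated by forming A^{\top}WA (O(mk^2)); the Cholesky factorization itself is only O(k^3), which is why this variant wins when k is small relative to m.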