Initialization of vector of matrix decompositions

For one of my projects, I need to perform many (up to several hundred) QR decompositions of dense, tall complex matrices. These decompositions are later used in linear algebra operations performed on views of sub-blocks.

After trying qr, qrfact and their in-place (bang) versions, I am facing a storage issue.
The packed QR format, which returns objects of type e.g. LinearAlgebra.QRCompactWY, seems efficient both in terms of allocation and for later manipulation when used directly, but in my case I will have to work with subarrays of these objects.
Is it therefore still better to store the decompositions this way, or to store the Q and R matrices separately?
Also, my initial idea for storing the decompositions was to initialize a vector of them.
However, while one can initialize a vector of matrices using a comprehension, e.g.

```julia
[Matrix{ComplexF64}(undef, i, j) for _ in 1:k]
```

it does not seem possible to do the same for

```julia
[LinearAlgebra.QRCompactWY{Float64,Array{Float64,2}}(undef, i, j) for _ in 1:k]
```

How should this storage be pre-allocated?

You can’t fully preallocate a QR factorization using the Julia wrappers anyway; e.g., try:

```julia
using LinearAlgebra

a = randn(2, 2)
b = qr!(a)   # b shares its storage with a
a .= 0      # this also wipes out b's factors
```
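To make the aliasing explicit, here is a small extension of that snippet (the concrete matrix is only for illustration):

```julia
using LinearAlgebra

a = [1.0 2.0; 3.0 4.0]
b = qr!(a)                 # b.factors is the very same array as a
R_before = copy(b.R)       # R is non-trivial at this point
a .= 0                     # overwrites the storage b refers to
@assert b.factors === a    # shared storage
@assert b.R == zeros(2, 2) # the factorization has been destroyed
@assert R_before != b.R
```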

From there, I see only two alternatives:

  • iteratively push! the decompositions into the vector
  • pre-allocate dense Q and R matrices
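Sketches of what I have in mind for both options (the sizes m, n, k are placeholders; Matrix(F.Q) materializes the thin Q):

```julia
using LinearAlgebra

m, n, k = 100, 10, 4
As = [randn(ComplexF64, m, n) for _ in 1:k]

# Option 1: push! the factorizations into a vector; taking the element type
# from a first factorization keeps the vector concretely typed
F1 = qr(As[1])
Fs = typeof(F1)[]
for A in As
    push!(Fs, qr(A))
end

# Option 2: pre-allocate dense Q and R matrices and fill them in place
Qs = [Matrix{ComplexF64}(undef, m, n) for _ in 1:k]
Rs = [Matrix{ComplexF64}(undef, n, n) for _ in 1:k]
for i in 1:k
    F = qr(As[i])
    Qs[i] .= Matrix(F.Q)   # thin Q (m × n)
    Rs[i] .= F.R
end
```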

Do you concur?

My point is that in terms of pre-allocation, you can’t do better than `ret = [qr!(arr[i]) for i = 1:N]` (which overwrites arr[i] and allocates memory for the Q factors), unless you bypass Julia’s qr wrapper and go straight for the LAPACK calls.
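A sketch of that LAPACK route, assuming the internal LAPACK.geqrt! wrapper and the QRCompactWY constructor (both are internals and may change between Julia versions): every buffer is allocated up front, and the factorization itself runs in place.

```julia
using LinearAlgebra
using LinearAlgebra: LAPACK, QRCompactWY

m, n, k = 100, 10, 4
arr = [randn(ComplexF64, m, n) for _ in 1:k]          # overwritten by the factors below
A0  = copy(arr[1])                                    # keep one original for checking
Ts  = [Matrix{ComplexF64}(undef, n, n) for _ in 1:k]  # block-reflector storage (nb = n)

ret = map(1:k) do i
    LAPACK.geqrt!(arr[i], Ts[i])   # in-place blocked QR (xGEQRT)
    QRCompactWY(arr[i], Ts[i])     # wrap the buffers, no further copies
end

@assert Matrix(ret[1].Q) * ret[1].R ≈ A0
```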


I tested the method you proposed, but it is overall slower than directly allocating the matrices, apparently because of the cost of accessing views of the factorization objects afterwards.
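For reference, a sketch of the dense-storage access pattern (the sizes and the sub-block product are illustrative): once Q is materialized as an ordinary matrix, views of its sub-blocks are strided arrays, so operations on them can dispatch to BLAS instead of going through the implicit Q representation.

```julia
using LinearAlgebra

m, n = 100, 10
A = randn(ComplexF64, m, n)
F = qr(A)
Qd = Matrix(F.Q)   # materialize the thin Q (m × n) once
Rd = F.R           # Rd is an ordinary dense matrix here

# A view of a sub-block of the dense Q is a cheap strided array
Qblock = view(Qd, 1:50, 1:5)
y = Qblock' * ones(ComplexF64, 50)
@assert size(y) == (5,)
```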