I have to compute the LU decomposition of a time-dependent matrix. My idea was to create a buffer first and then compute the LU decomposition in place when needed, but after googling and searching through some threads I still haven't found the right function in the LinearAlgebra.jl package.

Below is an MWE of my use case. I would like to replace LinearAlgebra.lu with something like LinearAlgebra.lu! and figure out exactly how to set up a buffer that works with it.

using Distributions, LinearAlgebra, Random
T=100
n=3
rng = Xoshiro(1)
d = InverseWishart(10, [1.5 0.8 0.5 ; 0.8 2. .5 ; .5 .5 1.] )
for t in Base.OneTo(T)
    mat = rand(d)
    mat_lu = LinearAlgebra.lu(mat)
    # do some computations with mat_lu
end

Does this only work with sparse matrices, or also out of the box with a standard LU factorization as the buffer? I.e., the following does not work (continuing from the MWE):

buffer = LinearAlgebra.lu( rand(d) )
mat = rand(d)
LinearAlgebra.lu!(buffer, mat) # MethodError: no method matching lu!(::LU{Float64, Matrix{Float64}, Vector{Int64}}, ::Matrix{Float64})

I couldn't link to it, but I think the second docstring for lu! on this page is more generic. From what I understand, both the L and U factors can be stored in the matrix itself.

I think the two-argument variant only works for sparse matrices; it is intended for factorizing another sparse matrix with the same sparsity structure as the previous one.
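As a sketch of what that sparse variant is for (this goes through the UMFPACK method from SparseArrays; treat the details as approximate):

```julia
using SparseArrays, LinearAlgebra

A = sparse([2.0 1.0 0.0; 1.0 3.0 1.0; 0.0 1.0 2.0])
F = lu(A)          # symbolic analysis + numeric factorization

B = copy(A)
B.nzval .*= 2.0    # new values, identical sparsity pattern
lu!(F, B)          # numeric refactorization, reusing the symbolic analysis

x = F \ [1.0, 2.0, 3.0]
```

The point is that the expensive symbolic analysis of the sparsity pattern is done once and only the numeric part is redone, which is why both matrices must share the same pattern.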

In your case I think you need to keep the reference to mat like so:

mat = rand(d)
buffer = LinearAlgebra.lu(mat)
rand!(d, mat)                    # draw the next sample into the same storage
buffer = LinearAlgebra.lu!(mat)  # factorize in place

You could maybe wrap this logic into a struct to make it more ergonomic to use.
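For example, something along these lines (just a sketch; LUBuffer and refactorize! are names I made up):

```julia
using Distributions, LinearAlgebra, Random

# Hypothetical wrapper: owns the sample storage and its latest factorization.
mutable struct LUBuffer{T}
    mat::Matrix{T}   # overwritten by each draw, then by the LU factors
    fact::LU
end

LUBuffer(d) = (m = rand(d); LUBuffer(m, lu(m)))

function refactorize!(b::LUBuffer, d)
    rand!(d, b.mat)       # draw a new sample in place
    b.fact = lu!(b.mat)   # factorize in place (b.mat now holds the factors)
    return b.fact
end

d = InverseWishart(10, [1.5 0.8 0.5; 0.8 2.0 0.5; 0.5 0.5 1.0])
buf = LUBuffer(d)
F = refactorize!(buf, d)
x = F \ ones(3)
```

Note that after refactorize! the sample itself is gone — the storage holds the factors — so this only works if you don't need the raw matrix afterwards.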

Thanks a lot! This still seems to be allocating, though, i.e.:

using BenchmarkTools
mat = rand(d)
buffer = LinearAlgebra.lu(mat)
rand!(d, mat)
buffer = LinearAlgebra.lu!(mat)
@btime LinearAlgebra.lu!($mat) #186.438 ns (1 allocation: 80 bytes)

I assume this is because mat is a standard Matrix, while LinearAlgebra.lu! returns an LU{Float64, Matrix{Float64}, Vector{Int64}}.

Is there a way around this? With a single-argument function, I assume I would have to provide an LU buffer up front — but then I would already have to factorize the time-dependent covariance matrix into an LU decomposition, which is exactly what I want to avoid in the first place.

Judging by the source code of lu!, I don't think there is any way around the allocation of ipiv without manual hacking.
Note that, strictly speaking, the ! doesn't guarantee the absence of allocation: it just warns the user that some inputs might be overwritten.
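If you do want to hack it: I believe sufficiently recent Julia versions accept a preallocated pivot vector in LAPACK.getrf!, and you can wrap the result in an LU object by hand. This leans on internals, so treat it as a sketch rather than a supported API:

```julia
using LinearAlgebra
using LinearAlgebra: BlasInt

A = [4.0 3.0 0.0; 6.0 3.0 1.0; 0.0 2.0 5.0]
A0 = copy(A)                      # keep the original around for checking
ipiv = Vector{BlasInt}(undef, 3)  # preallocated pivot buffer, reusable across calls

# The two-argument getrf! (recent Julia only, as far as I know) writes the
# factors into A and the pivots into ipiv, so neither gets reallocated.
factors, ipiv, info = LAPACK.getrf!(A, ipiv)
F = LU(factors, ipiv, info)       # manual wrapping; the LU constructor is internal-ish

x = F \ [1.0, 2.0, 3.0]
```

The remaining allocation is the LU wrapper itself, which is tiny.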

FastLapackInterface.jl provides ways to separate memory allocation from (a subset of) LAPACK computations. It lets you create a "workspace" which can be reused across several calls to the same functions on similar matrices.
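Going by its documentation, usage looks roughly like this (LUWs is the LU workspace type; treat the exact call shape as approximate):

```julia
using LinearAlgebra, FastLapackInterface

A = [1.2 2.3 0.5; 6.2 3.3 1.1; 0.4 0.9 2.2]
A0 = copy(A)   # keep the original for checking the solve

ws = LUWs(A)   # allocate the LAPACK workspace (including ipiv) once

# getrf! with a workspace overwrites A with the factors and reuses ws;
# splatting the result into LU gives back the usual factorization object.
F = LU(LAPACK.getrf!(ws, A)...)

x = F \ [1.0, 2.0, 3.0]
```

In a loop you would keep ws alive and call getrf! on each new matrix of the same size, so only the very first factorization pays the allocation cost.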