Avoiding memory allocation for solves with multiple right-hand sides

Hi everyone,

I am new to Julia and I am trying to understand why a solve using the backslash operator allocates memory even though the RHS vector is preallocated, the solution vector is preallocated, and the factorisation (here a Cholesky factorisation) is precomputed. Is there a way to avoid this behaviour?

Below is a minimal example.

Cheers and thanks for the hints!

using LinearAlgebra, SparseArrays

function Successive_solves!( x, Kc, b, nsolves )
    for i=1:nsolves
        x  .= Kc\b 
    end 
    return
end

function ThreePointStencil( N, dx )
    # Assemble matrix (not the part of interest)
    cC =  2.0/dx^2 * ones(Float64, N); cC[1]   = 3.0/dx^2; cC[end]   = 3.0/dx^2
    cW = -1.0/dx^2 * ones(Float64, N); cW[1]   = 0.0
    cE = -1.0/dx^2 * ones(Float64, N); cE[end] = 0.0
    iC = 1:N
    iW = ones(Int64, N); iW[2:end]   .= iC[1:end-1]
    iE = ones(Int64, N); iE[1:end-1] .= iC[2:end]
    I  = [iC[:]; iC[:]; iC[:]]
    J  = [iC[:]; iW[:]; iE[:]]
    V  = [cC[:]; cW[:]; cE[:]]
    K  = sparse( I, J, V )
    droptol!(K, 1e-9)
    return K
end


# Input
N  = 1000000
dx = 1

# Sparse matrix
K = ThreePointStencil( N, dx )

# Allocate arrays
b = rand(N)           # Right-hand-side array 
x = zeros(Float64, N) # Solution array

# Factor
Kc = cholesky(K)

# One solve
@time x  .= Kc\b  # why does this allocate memory although the factorisation is done and the vectors (x, b) are preallocated?

# Successive solves
nsolves = 10
@time Successive_solves!( x, Kc, b, nsolves ) # then, memory allocation depends on the number of solves...

It allocates because the operation Kc\b on the right-hand side allocates a new vector for its result before the broadcast copies it into x. You either have to use an in-place version or, if the matrices are small, use static arrays:

julia> using BenchmarkTools

julia> M = rand(3,3);  b = rand(3);

julia> @btime x = $M \ $b;
  346.107 ns (3 allocations: 384 bytes)

julia> using StaticArrays

julia> M = @SMatrix(rand(3,3));  b = @SVector(rand(3));

julia> @btime x = $M \ $b;
  5.228 ns (0 allocations: 0 bytes)

Probably this is what you are looking for: Non-allocating matrix inversion - #7 by stevengj


Thanks a lot for the insights!

I guess I really need an in-place version of backslash rather than StaticArrays, since I want to work with large systems.
So, does an in-place version of backslash exist, or does one really need to call low-level LAPACK functions as in the stevengj post?

Initially I was looking for an in-place version of cs_ltsolve() and cs_lsolve() from SuiteSparse, but I have not yet found how to call them from within Julia. These functions are really nice to use in combination with CHOLMOD.

LinearAlgebra.ldiv! is what you need.

ldiv!(Y, A, B) -> Y
Compute A \ B in-place and store the result in Y, returning the result.
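
For instance, here is a minimal sketch with a dense system (the matrix and sizes are illustrative): factor once, then the in-place solve reuses the preallocated x and leaves b untouched.

using LinearAlgebra

A = rand(100, 100) + 100I   # illustrative, diagonally dominant dense matrix
b = rand(100)
x = zeros(100)

Alu = lu(A)              # factor once (this allocates)
ldiv!(x, Alu, b)         # warm-up call (compilation)
@time ldiv!(x, Alu, b)   # the solve itself reports 0 allocations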


I would cache the LU factorisation of K and use it with ldiv!. A bit like

function Successive_solves!( x, K, b, nsolves )
    Klu = lu(K)  # factor once, outside the loop
    for i=1:nsolves
        ldiv!(x, Klu, b)  # in-place solve: no per-iteration allocation
    end
    return
end
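
A quick check with the stencil from the first post (a sketch with a smaller N; after a warm-up call, the allocations @time reports come from the lu(K) call inside the function rather than from the solve loop, so they no longer grow with nsolves):

K = ThreePointStencil(1000, 1.0)
b = rand(1000)
x = zeros(1000)
Successive_solves!(x, K, b, 10)        # warm-up (compilation)
@time Successive_solves!(x, K, b, 10)  # allocations come from lu(K), not the loop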

Thank you all!

OK, I got ldiv!() to work with an LU factorisation, cool! However, it does not seem to work with a Cholesky factorisation, as it returns:
ERROR: LoadError: MethodError: no method matching ldiv!(::Vector{Float64}, ::SuiteSparse.CHOLMOD.Factor{Float64}, ::Vector{Float64})

Actually, if one takes inspiration from the example given in the doc of ldiv!() and changes A to a symmetric positive-definite matrix, it seems that ldiv!() works with a Cholesky factorisation when A is dense but not when it is sparse, see below:

using LinearAlgebra, SparseArrays

A = [2.0 -1.0 0.0; -1.0 2.0 -1.0; 0.0 -1.0 2.0];
Ac = cholesky(A)            # dense Cholesky: works
# Ac = cholesky(sparse(A))  # sparse Cholesky (CHOLMOD): MethodError
X = [1; 2.5; 3];
Y = zero(X);
ldiv!(Y, Ac, X);

Any hints?

EDIT: Actually, it looks like there is no trivial way to combine ldiv!() with a sparse Cholesky factorisation:
https://github.com/JuliaLang/SuiteSparse.jl/issues/19
I wonder whether the original SuiteSparse CS functions (like cs_ltsolve) are available from within Julia’s SuiteSparse implementation…

I did that for my library, see here

Pr = cholesky(L1)
using SuiteSparse
# we need this "hack" to be able to use Pr as a preconditioner.
LinearAlgebra.ldiv!(P::SuiteSparse.CHOLMOD.Factor{Float64}, v) = -(P \ v)
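# note: this is not truly in-place (P \ v still allocates); the overload merely
# provides the two-argument ldiv! signature the Krylov solver calls on the preconditioner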

# rtol must be small enough to pass the Fold points and to get precise eigenvalues
# we know that the jacobian is symmetric so we tell the solver
ls = GMRESKrylovKit(verbose = 0, rtol = 1e-9, maxiter = 150, ishermitian = true, Pl = Pr)

Thanks! If I get it right, you actually overload the ldiv!() function such that it can work with Cholesky factors. Is that right?

Would there also be a hack for the 3 argument version of ldiv!() such that the entries of v are not replaced by the entries of the solution vector?
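
For illustration, something like this hypothetical overload is what I have in mind (in the same spirit as the hack above; P \ v would still allocate internally, but v is left untouched and the result lands in x):

LinearAlgebra.ldiv!(x::AbstractVector, P::SuiteSparse.CHOLMOD.Factor{Float64}, v::AbstractVector) = copyto!(x, P \ v)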