How can I help Julia better allocate memory on sparse matrix operations?

question
performance

#1

I have a function in a for loop along the lines of this:

function spmat_ops!(B, C, D, E, F, G, b, x)
    # Upper case vars are SparseMatrixCSC and lower case are Vectors
    A = B*C + D*E*F + G
    x = somefastsolver(A,b)
end

function main()
    for i = 1:100
        spmat_ops!(B, C, D, E, F, G, b, x)
        b += x
    end
end

The SparseMatrixCSC operations are causing a lot of large allocations, which slows the code down quite a bit. Is there any way to alleviate this?


#2

What’s going on here is that a bunch of temporary matrices are being constructed in the process of calculating A. To make this explicit, the line A = B*C + D*E*F + G gets expanded into something like:

T1 = B*C
T2 = E*F
T3 = D*T2
T4 = T1 + T3
A = T4 + G

So you end up generating four temporary matrices, allocating a bunch of memory. What to do about this, I’m not exactly sure. Julia uses UMFPACK/SuiteSparse for sparse matrix factorizations, and there might be a way to provide the output matrix for each operation (possibly reusing a single temporary matrix several times), but then you have to figure out the sparsity pattern of the output matrix, which is non-trivial for matrix-matrix multiplication.
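To see the expansion concretely, here is a runnable sketch with small random matrices (not the original problem’s data); each binary operation allocates a fresh SparseMatrixCSC:

```julia
using SparseArrays

n = 4
B = sprand(n, n, 0.5); C = sprand(n, n, 0.5)
D = sprand(n, n, 0.5); E = sprand(n, n, 0.5)
F = sprand(n, n, 0.5); G = sprand(n, n, 0.5)

# The one-liner A = B*C + D*E*F + G becomes, step by step:
T1 = B*C        # temporary 1
T2 = E*F        # temporary 2
T3 = D*T2       # temporary 3
T4 = T1 + T3    # temporary 4
A  = T4 + G     # final result

A == B*C + D*E*F + G    # the expansion reproduces the one-liner
```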

Bottom line, I would look at the SuiteSparse documentation, see if the capability exists there, and then have a look at Julia’s base/sparse/cholmod.jl to see if the capability is exposed in Julia.


#3

Thanks for the feedback. I’ll look at the SuiteSparse docs and look through cholmod.jl.

The temporary arrays’ structures should be unchanging, and I do know their sparsity patterns.

I’m curious, for example, whether T1 is created and then modified directly, or whether arrays I, J, V are created first and then T1 = sparse(I, J, V), in which case it seems like preallocating T1 would be a waste of space.
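For reference, the sparse(I, J, V) construction mentioned above looks like this (a minimal sketch with made-up values):

```julia
using SparseArrays

I = [1, 2, 3]         # row indices of the nonzeros
J = [1, 2, 3]         # column indices
V = [1.0, 2.0, 3.0]   # nonzero values
T1 = sparse(I, J, V)  # 3×3 sparse matrix with three nonzeros on the diagonal
```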


#4

First of all, I suspect you want x[:] = ...; as written, x = somefastsolver(A, b) just rebinds the local variable x, so the caller never sees the result.
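The distinction between x = ... and x[:] = ... inside a function matters here; a toy sketch (hypothetical function names, not the original solver):

```julia
function rebind!(x)
    x = [10.0, 20.0]        # rebinds the local name only; caller's array untouched
    return x
end

function assign_in_place!(x)
    x[:] = [10.0, 20.0]     # writes into the caller's array
    return x
end

a = [1.0, 2.0]
rebind!(a)            # a is still [1.0, 2.0]
assign_in_place!(a)   # a is now [10.0, 20.0]
```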

If somefastsolver is an iterative solver, then you are probably better off storing A as a type. Say (I’m simplifying because I don’t want to type so much):

type MyOperator
    B
    C
    G
end

function Base.:*(A::MyOperator, x::AbstractVector)
    # now perform in-place sparse matrix-vector products using `A_mul_B!`
end
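In current Julia the same idea can be written with struct and the five-argument mul! (a sketch, shrunk to A = B*C + G for brevity; the buffer tmp is preallocated once so each application is allocation-free):

```julia
using SparseArrays, LinearAlgebra

# Lazy operator representing A = B*C + G without ever forming A.
struct MyOperator{T}
    B::SparseMatrixCSC{T,Int}
    C::SparseMatrixCSC{T,Int}
    G::SparseMatrixCSC{T,Int}
    tmp::Vector{T}            # preallocated buffer for C*x
end

MyOperator(B, C, G) = MyOperator(B, C, G, zeros(eltype(B), size(C, 1)))

function LinearAlgebra.mul!(y::AbstractVector, A::MyOperator, x::AbstractVector)
    mul!(A.tmp, A.C, x)       # tmp = C*x   (in place)
    mul!(y, A.B, A.tmp)       # y   = B*tmp (in place)
    mul!(y, A.G, x, 1, 1)     # y  += G*x   (5-arg mul!: y = 1*G*x + 1*y)
    return y
end

Base.:*(A::MyOperator, x::AbstractVector) = mul!(similar(x, size(A.B, 1)), A, x)
```

Any iterative solver that only needs matrix-vector products can then take a MyOperator in place of the assembled matrix.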

#5

Perfect, the in-place sparse functions were what I was looking for. The operator overloading is an interesting solution I might consider.
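For reference, the in-place sparse matrix-vector product looks like this in current Julia (a minimal sketch):

```julia
using SparseArrays, LinearAlgebra

A = sprand(100, 100, 0.05)
x = rand(100)
y = zeros(100)

mul!(y, A, x)   # y = A*x, written into the preallocated buffer
y ≈ A*x         # true
```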