Allocations in sparse matrices

Hello, I have code where I use ‘normal’ matrices (of type Array{Float64,2}) and SparseMatrixCSC{Float64} matrices. Let’s say

A::SparseMatrixCSC{Float64} = sparse(1.0I, 100, 100)
a::Float64 = 2.0

If I then create other matrices, for example if B is an already-defined SparseMatrixCSC (including its type) and I assign

B = sparse(a*I, 100, 100) .- A

then this operation, which changes the values of B, generates allocations.

I am new to sparse matrices and to analyzing allocations, so do you have any advice on how (if possible) to get zero allocations?

EDIT: I found that some basic arithmetic operations with sparse matrices also generate allocations.

For example,

A .= spdiagm(600, 600, 0 => centdiag) .+ 
        spdiagm(600, 600, -1 => lowdiag[1:Ia*Jz-1]) .+
        spdiagm(600, 600, 1 => updiag[2:Ia*Jz]) .+ B

where A and B are predefined SparseMatrixCSC{Float64} and centdiag, lowdiag, updiag are Arrays of floats, generates almost 2.7 MiB of allocations (according to @timeit from TimerOutputs). Do you know a better approach to dealing with sparse matrices, or how to reduce allocations?

I believe you are creating a new matrix on the right-hand side. Space needs to be allocated for it.


Indeed, if you want to update B in-place, you should do something like

B .= A1 .+ A2

However, note that such an operation will still allocate in the sparse case unless B, A1 and A2 have the same sparsity pattern (and even then it might, I’m not 100% sure). Are you working with matrices whose sparsity patterns are identical?
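One way to check this on concrete matrices is to wrap the broadcast in a function and measure with @allocated (a minimal sketch; sprand and the printed byte count are just for illustration, and the numbers will depend on your Julia version):

```julia
using SparseArrays

A1 = sprand(100, 100, 0.05)
A2 = copy(A1)             # identical sparsity pattern to A1
B  = copy(A1)             # preallocated destination, same pattern

# Wrap in a function so @allocated measures only the broadcast itself.
add!(B, A1, A2) = (B .= A1 .+ A2; B)

add!(B, A1, A2)                    # warm up (compilation allocates)
bytes = @allocated add!(B, A1, A2)
println("sparse broadcast allocated $bytes bytes")
```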


Note that

julia> sparse(a*I, 100, 100) ≈ a*I
true

No need to construct the intermediate sparse matrix (if a scaled identity is really what you need in your application).

EDIT: I think I have to withdraw my comment because, while true, the UniformScaling doesn’t work well when broadcasting.
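For completeness, the non-broadcast operators do accept a UniformScaling, so with plain - (rather than .-) the intermediate identity can still be skipped (a small sketch, assuming a scaled identity is really what is wanted):

```julia
using LinearAlgebra, SparseArrays

a = 2.0
A = sparse(1.0I, 100, 100)

B = a*I - A                    # plain `-` understands UniformScaling
@assert B ≈ sparse(a*I, 100, 100) - A
# a*I .- A, by contrast, does not broadcast the way a matrix would
```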

Yes and no. Yes, because A has the same pattern as sparse(a*I, 100, 100) (in fact A = sparse(1.0I, 100, 100)). B is an auxiliary matrix that stores the result of sparse(a*I, 100, 100) .- A (B is also initially defined as sparse(I, 100, 100)). I could avoid defining B and use sparse(a*I, 100, 100) .- A directly, but to keep the code clearer I would prefer to keep B as an auxiliary matrix.

Why do you need to keep both the matrix and the matrix minus identity?

In the example you provided

A .= spdiagm(600, 600, 0 => centdiag) .+ 
        spdiagm(600, 600, -1 => lowdiag[1:Ia*Jz-1]) .+
        spdiagm(600, 600, 1 => updiag[2:Ia*Jz]) .+ B

This will naturally allocate. Every time you call spdiagm you are creating a new sparse matrix, and this allocates.

Then, every time you add two matrices, you allocate a new one.

If you have three sparse matrices C, A, and B with the same sparsity pattern, and you want to compute C = A + B, the way to do this without allocations is

C.nzval .= A.nzval .+ B.nzval

In the general case, adding or multiplying two sparse matrices will always involve allocation (even if you try to do the operations in place). The reason is that you do not know the sparsity pattern of the output matrix before actually adding the two matrices, so you have to dynamically resize the output matrix as you perform the operation (or use a two-stage method and resize only once). Furthermore, the algorithms for sparse matrix addition/multiplication evaluate each column of the output matrix by first computing it in a temporary dense vector and then copying the non-zero values of that temporary vector into the sparse output matrix. This temporary array has to be allocated.
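To make the same-pattern case concrete, here is a small sketch that checks the patterns actually match before touching nzval (samepattern is a hypothetical helper name):

```julia
using SparseArrays

A = sprand(100, 100, 0.05)
B = copy(A); B.nzval .= 2.0   # same pattern as A, different values
C = copy(A)                   # preallocated output with the same pattern

# The nzval trick is only valid if all three patterns really match.
samepattern(X, Y) = X.colptr == Y.colptr && X.rowval == Y.rowval
@assert samepattern(A, B) && samepattern(A, C)

C.nzval .= A.nzval .+ B.nzval   # allocation-free elementwise addition
```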

To summarize, when dealing with sparse matrices it is very hard to avoid allocations. The functions implemented in SparseArrays.jl always perform some allocations: they always allocate two dense vectors, plus possibly some dynamic resizes of the output matrix.
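Applying this to the tridiagonal example earlier in the thread: build the pattern with spdiagm once, then overwrite the stored values in place on each update (a sketch; update_tridiag! is a hypothetical helper and assumes the diagonal lengths never change):

```julia
using SparseArrays

n = 6
centdiag = rand(n); lowdiag = rand(n - 1); updiag = rand(n - 1)

# Build the tridiagonal pattern once, outside the hot loop.
A = spdiagm(0 => centdiag, -1 => lowdiag, 1 => updiag)

# Overwrite the stored values in place, walking the CSC structure.
function update_tridiag!(A, centdiag, lowdiag, updiag)
    rows = rowvals(A)
    vals = nonzeros(A)
    for j in axes(A, 2), k in nzrange(A, j)
        i = rows[k]
        vals[k] = i == j     ? centdiag[j] :
                  i == j + 1 ? lowdiag[j]  :   # subdiagonal entry (j+1, j)
                               updiag[j-1]     # superdiagonal entry (j-1, j)
    end
    return A
end
```

Each call then reuses A’s buffers; adding a same-pattern B afterwards can be done with the nzval trick above.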