Spdiagm allocation madness

I am seeing surprisingly large allocations from spdiagm. The compute
times are OK for what I need to do, so perhaps this is a case where large
allocations do no harm. Here’s the story.

I compute the discrete negative Laplacian with homogeneous Dirichlet
boundary conditions on the unit square with an n x n grid.

function Lap2d(n)
    h = 1/(n+1);
    maindiag = 4*ones(n^2)/(h*h);
    sxdiag = -ones(n^2-1)/(h*h);
    sydiag = -ones(n^2-n)/(h*h);
    for iz = n:n:n^2-1
        sxdiag[iz] = 0.0;
    end
    L2d = spdiagm(-n => sydiag, -1 => sxdiag, 0 => maindiag,
                   1 => sxdiag, n => sydiag);
    return L2d
end

The sparse matrix has roughly 5n^2 nonzeros, so it should take about 5
vectors of length n^2, or 40 n^2 bytes.

Not exactly …

julia> n=1000;

julia> @btime Lap2d($n);
  129.733 ms (81 allocations: 358.46 MiB)

So I need 40 MB to store the matrix and am allocating 9 times that.
Why?

The timings are fast enough for what I need, so performance is fine.


spdiagm uses sparse! to create a SparseMatrixCSC. The result size will be about 100 MB instead of 40 MB: each stored entry carries an 8-byte Int row index in addition to its 8-byte value, plus there is the column-pointer array. It would be possible to construct the CSC representation directly and avoid all the extra allocations, but it might not be worth the trouble.
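For what it's worth, constructing the CSC arrays directly for this particular pentadiagonal matrix might look like the following sketch (the function name Lap2d_csc and the column-by-column layout are mine, not from the thread):

```julia
using SparseArrays

# Build colptr/rowval/nzval directly for the 2-D negative Laplacian with
# homogeneous Dirichlet BCs on an n x n grid, lexicographic ordering.
# Each column j couples to j-n, j-1, j, j+1, j+n, except across grid-column
# breaks -- the same positions the sxdiag zeros mark in the spdiagm version.
function Lap2d_csc(n)
    N = n^2
    hm2 = (n + 1.0)^2                  # 1/h^2
    nz = 5N - 4n                       # exact nonzero count
    colptr = Vector{Int}(undef, N + 1)
    rowval = Vector{Int}(undef, nz)
    nzval  = Vector{Float64}(undef, nz)
    p = 1
    for j in 1:N
        colptr[j] = p
        if j > n                       # neighbor in previous grid column
            rowval[p] = j - n; nzval[p] = -hm2; p += 1
        end
        if j > 1 && (j - 1) % n != 0   # lower neighbor, skip column breaks
            rowval[p] = j - 1; nzval[p] = -hm2; p += 1
        end
        rowval[p] = j; nzval[p] = 4hm2; p += 1   # main diagonal
        if j < N && j % n != 0         # upper neighbor
            rowval[p] = j + 1; nzval[p] = -hm2; p += 1
        end
        if j <= N - n                  # neighbor in next grid column
            rowval[p] = j + n; nzval[p] = -hm2; p += 1
        end
    end
    colptr[N + 1] = p
    return SparseMatrixCSC(N, N, colptr, rowval, nzval)
end
```

This allocates only the three CSC arrays themselves, and as a side effect it also skips the explicit zeros that the spdiagm version stores on the ±1 diagonals.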

Multiplying a matrix with a scalar allocates a new matrix for the result. Use fill instead of ones.
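A quick way to see the difference (my own illustration, not from the thread; `@allocated` is in Base):

```julia
# 4*ones(n) first allocates ones(n), then a second array for the product;
# fill(4.0, n) writes the value into a single fresh array.
f_ones(n) = 4 * ones(n)
f_fill(n) = fill(4.0, n)

f_ones(10); f_fill(10)             # warm up so compilation isn't counted
bytes_ones = @allocated f_ones(10^6)
bytes_fill = @allocated f_fill(10^6)
# bytes_ones should be roughly twice bytes_fill
```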


That’s progress, thanks. This version saves me some storage and is somewhat faster.

function Lap2d(n)
    h = 1/(n+1);
    hm2 = (n+1.0)^2;
    maindiag = fill(4*hm2, n^2);
    sxdiag = fill(-hm2, n^2-1);
    sydiag = fill(-hm2, n^2-n);
    for iz = n:n:n^2-1
        sxdiag[iz] = 0.0;
    end
    L2d = spdiagm(-n => sydiag, -1 => sxdiag, 0 => maindiag,
                   1 => sxdiag, n => sydiag);
    return L2d
end
julia> @btime Lap2d(1000);
  99.829 ms (69 allocations: 312.69 MiB)