# Sparse matrix and sparse subarray subtraction returns full arrays

Hi! I am writing a routine to recursively invert a big sparse matrix that is known to be well-behaved and invertible with this recursive method. Specifically, this pertains to the Green's function of a sparse Hamiltonian for nanoelectronics modelling.

Unfortunately, when I take a `@view` of a sparse matrix, the product of a sparse matrix and the resulting subarray returns a full array, greatly reducing the efficiency of this method.

The function is defined below, and here is a test case to profile it:

```julia
using SparseArrays, LinearAlgebra

f(n) = recInv(sprand(n, n, 1/n))

@time f(2000);
```
```julia
function recInv(M::Union{SparseMatrixCSC, SubArray}, sizecutoff::Int=128)
    ny, nx = size(M)
    if nx <= sizecutoff || ny <= sizecutoff
        return sparse(inv(Array(M)))
    else
        hxu = Int(round(nx/2)) + 1; hyu = Int(round(ny/2)) + 1
        hxl = hxu - 1; hyl = hyu - 1
        A = @view M[1:hyl, 1:hxl];  U = @view M[1:hyl, hxu:nx]
        V = @view M[hyu:ny, 1:hxl]; C = @view M[hyu:ny, hxu:nx]
        Cinv = recInv(C, sizecutoff)
        Γ = Cinv * V; Δ = sparse(U * Cinv)
        expThatBreaks = A .- Δ * V
        Σ = recInv(sparse(expThatBreaks), sizecutoff)
        Minv = [Σ          -Σ*Δ;
                -Γ*Σ   Cinv .+ Γ*Σ*Δ]
        return sparse(Minv)
    end
end
```
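For context, the recursion is built on the standard 2×2 block-inverse identity using the Schur complement Σ = inv(A − U·C⁻¹·V). Here is a small dense sanity check of that identity; the matrix values are arbitrary illustrative choices, not from my actual problem:

```julia
using LinearAlgebra

# M = [A U; V C], with C and the Schur complement S = A - U*inv(C)*V invertible.
A = [4.0 1.0; 0.0 3.0]; U = [1.0 0.0; 0.0 1.0]
V = [0.5 0.0; 0.0 0.5]; C = [5.0 0.0; 1.0 4.0]
M = [A U; V C]

Ci = inv(C)
S  = A - U*Ci*V
Si = inv(S)

# Same block layout as recInv: [Σ -Σ*Δ; -Γ*Σ Cinv + Γ*Σ*Δ]
Minv = [Si          -Si*U*Ci;
        -Ci*V*Si    Ci + Ci*V*Si*U*Ci]

maximum(abs.(Minv - inv(M)))  # ≈ 0 up to roundoff
```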

If one takes a look at `expThatBreaks`, `Δ*V` is returning a full `Array`, though I see no reason why it should. Do I need to write my own routines for multiplying sparse arrays by subarrays?

I can’t reproduce:

```julia
julia> using SparseArrays

julia> A = sprand(10, 10, 0.1);

julia> @views A * A[:, 1:3]
10×3 SparseMatrixCSC{Float64, Int64} with 3 stored entries:
 0.0486689   ⋅         0.537533
  ⋅           ⋅          ⋅
  ⋅           ⋅          ⋅
  ⋅          0.367537    ⋅
  ⋅           ⋅          ⋅
  ⋅           ⋅          ⋅
  ⋅           ⋅          ⋅
  ⋅           ⋅          ⋅
  ⋅           ⋅          ⋅
  ⋅           ⋅          ⋅
```

Could you make a similar simple example that illustrates your problem?

Huh. Okay, let me mess around with this for a bit; it might well be on my end.

Okay. I apologize, I realize my premise was false. My issue arises in the following MWE with sparse matrix / sparse subarray subtraction, not multiplication:

```julia
using SparseArrays, LinearAlgebra

exmat(n) = sprand(n, n, 1/n) + 5I(n);
A = exmat(50); B = exmat(100);
Bview = @view B[1:50, 1:50];
C = A .- Bview
display(typeof(A))
display(typeof(Bview))
display(typeof(C))
```

Which returns

```julia
SparseMatrixCSC{Float64, Int64}
SubArray{Float64, 2, SparseMatrixCSC{Float64, Int64}, Tuple{UnitRange{Int64}, UnitRange{Int64}}, false}
Matrix{Float64} (alias for Array{Float64, 2})
```

Can anything be done to prevent the subtraction from returning a non-sparse matrix?

Not using views seems to be the current workaround: `A .- B[1:50,1:50]` works fine. Of course, this allocates a bit of extra memory.
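To make the workaround concrete, here is a sketch extending the MWE above. Slicing the sparse matrix (a copy) preserves sparsity, and converting the view back into a `SparseMatrixCSC` with `sparse(...)` before subtracting also works; both trade an extra allocation for a sparse result:

```julia
using SparseArrays, LinearAlgebra

exmat(n) = sprand(n, n, 1/n) + 5I(n)
A = exmat(50); B = exmat(100)

# Slicing copies, but keeps the result sparse:
C1 = A .- B[1:50, 1:50]

# Alternatively, rebuild a SparseMatrixCSC from the view first:
Bview = @view B[1:50, 1:50]
C2 = A .- sparse(Bview)

C1 isa SparseMatrixCSC  # true
C1 == C2                # true
```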

Probably there is a missing broadcast method in the SparseArrays package — can you file an issue?

PS. Note that `sprand(n,n,1/n)+5I` should be faster than `sprand(n,n,1/n)+5I(n)`. In Julia, `I` can be used as a “magic matrix” that automatically chooses its size according to what it is added to, and there are specialized methods for adding it to most matrix types. `I(n)` allocates a `Diagonal` matrix, which isn’t too bad (only the diagonal entries are stored), but it doesn’t take advantage of the fact that all of the diagonal entries are the same.
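A quick illustration of the difference; the two expressions agree, and the `5I` path simply skips materializing the intermediate `Diagonal`:

```julia
using SparseArrays, LinearAlgebra

n = 1000
S = sprand(n, n, 1/n)

M1 = S + 5I     # UniformScaling: no Diagonal matrix is materialized
M2 = S + 5I(n)  # I(n) first allocates a Diagonal of length n

M1 == M2  # identical results
```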

Ah, fair enough. That is somewhat unfortunate to hear, since I think my code is scaling as O(n^2) on account of the memory allocations.

Sure, I will file an issue then.

Thank you for the insight!

The copying workaround won’t hurt the asymptotic scaling, since `A + B` has to allocate anyway for the result — it only changes the constant factor if you convert the `B` view into a copy.

(If you really want to cut down on allocations for this type of code, you need to think more deeply about the CSC data structure used to store sparse matrices.)
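For anyone curious, here is a minimal look at what a `SparseMatrixCSC` actually stores (field names as in the standard SparseArrays struct); allocation-lean algorithms work directly over these three arrays:

```julia
using SparseArrays

# 3×3 matrix with entries (1,1)=10, (3,1)=20, (2,3)=30
S = sparse([1, 3, 2], [1, 1, 3], [10.0, 20.0, 30.0], 3, 3)

S.colptr  # [1, 3, 3, 4]: column j's entries occupy positions colptr[j]:(colptr[j+1]-1)
S.rowval  # [1, 3, 2]: row index of each stored entry, column by column
S.nzval   # [10.0, 20.0, 30.0]: the stored values themselves
```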