# Efficiently check if a sparse vector is a column of a sparse matrix

I have a sparse matrix A and a sparse vector x and want to check if the vector is a column of the matrix.

```julia
using SparseArrays, BenchmarkTools
A = spdiagm(1:1000);
x = sparsevec([950], [950], 1000);
B = Array(A);  # dense
y = Array(x);  # dense

julia> @btime $x in eachcol($A);
  2.586 ms (0 allocations: 0 bytes)

julia> @btime $y in eachcol($B);
  339.167 μs (0 allocations: 0 bytes)
```

Using dense arrays is much faster.

I wrote a simple loop and narrowed the slowdown down to view:

```julia
function foo(x, A)
    for i in axes(A, 2)
        if x == view(A, :, i)
            return true
        end
    end
    return false
end

julia> @btime foo($x, $A);
  2.282 ms (0 allocations: 0 bytes)
```

So, I dropped the view and got the fastest version so far:

```julia
function foo2(x, A)
    for i in axes(A, 2)
        if x == A[:, i]
            return true
        end
    end
    return false
end

julia> @btime foo2($x, $A);
  90.281 μs (1900 allocations: 118.75 KiB)
```

The vast number of allocations hints that there might be a faster implementation. Any suggestions?

You could always loop over the low-level rowvals and nonzeros of the sparse matrix.

Ideally, @view on a sparse-CSC matrix would create a sparse vector with views of the data. I tried defining this:

```julia
using SparseArrays
using SparseArrays: AbstractSparseMatrixCSC, getcolptr

function Base.view(x::AbstractSparseMatrixCSC, ::Colon, j::Integer)
    checkbounds(x, :, j)
    r1 = convert(Int, getcolptr(x)[j])
    r2 = convert(Int, getcolptr(x)[j+1]) - 1
    return @views SparseVector(size(x, 1), rowvals(x)[r1:r2], nonzeros(x)[r1:r2])
end
```

but it fails because SparseVector requires its index and value arguments to be Vector and doesn’t accept views. It would be nice to generalize the type to allow other AbstractVector types.


> You could always loop over the low-level rowvals and nonzeros of the sparse matrix.

That improves the performance:

```julia
function foo3(x, A)
    for j in axes(A, 2)
        r1 = SparseArrays.getcolptr(A)[j]
        r2 = SparseArrays.getcolptr(A)[j+1] - 1
        if isequal(rowvals(A)[r1:r2], rowvals(x)) && isequal(nonzeros(A)[r1:r2], nonzeros(x))
            return true
        end
    end
    return false
end

julia> @btime foo3($x, $A);
  23.000 μs (951 allocations: 59.44 KiB)
```

I don’t know if it’s possible to get 0 allocations, but this is fast enough for my purpose right now.

I agree that it would be best if view(::AbstractSparseMatrixCSC,::Colon,j::Integer) would return a sparse vector viewing the data.

Try putting @views on this line (or on a surrounding block) so that you don’t make a copy with slices. Also, isequal is probably not the function you want here:

```julia
julia> isequal(-0.0, 0.0)
false
```

Just use rowvals(A)[r1:r2] == rowvals(x) etc.

I have Integer coefficients, so isequal should be fine(?)
But it doesn’t hurt to replace it with ==.

It took me some time to figure out how to use view on rowvals and nonzeros, but

```julia
@views if rowvals(A)[r1:r2] == rowvals(x)
    if nonzeros(A)[r1:r2] == nonzeros(x)
        return true
    end
end
```

works!

```julia
julia> @btime foo3($x, $A)
  3.287 μs (0 allocations: 0 bytes)
```

Thank you again for the help!

There is the small issue of non-structural zeros which, if you have any, might confuse this test.
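To illustrate (a constructed example, not from the thread): a vector with an explicitly stored zero is element-wise equal to one without it, yet its stored data differ, so a comparison of the raw nonzeros misses the match.

```julia
using SparseArrays

x = sparsevec([950], [950], 1000)
# Same mathematical vector, but with an explicitly stored
# (non-structural) zero at index 1:
z = SparseVector(1000, [1, 950], [0, 950])

x == z                      # true: element-wise equality ignores storage
nonzeros(x) == nonzeros(z)  # false: [950] vs [0, 950]
```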


Just put @views on the whole block of code, and it will change every [...] slice into a view, including expressions like rowvals(A)[r1:r2].

(@views is one of those syntaxes that seems to be under-used in Julia code … it’s a lot cleaner than putting @view or view(...) on every slicing expression.)
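A minimal illustration of the effect (variable names are made up for the sketch):

```julia
# @views rewrites every slicing expression in the block into a view,
# so no copies are made:
v = collect(1:10)
@views begin
    a = v[2:5]
end
a isa SubArray   # true: v[2:5] became view(v, 2:5)
a == 2:5         # true, and mutating v is visible through a
```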


Thanks! I fixed it using dropzeros!(x) in my first line, which doesn’t add any allocations.

Unfortunately, that changes x, which works for me, but it is obviously not a feasible replacement for x in eachcol(A). I don’t have any non-structural zeros in A, but for a general solution, that would also have to be taken into account.
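For reference, a small constructed sketch of what dropzeros! does: it removes explicitly stored zeros in place, so the stored structure again matches the mathematical content.

```julia
using SparseArrays

z = SparseVector(4, [1, 3], [0.0, 2.0])  # explicitly stored zero at index 1
nnz(z)         # 2: the stored zero counts as a stored entry
dropzeros!(z)  # drop stored zeros in place (mutates z)
nnz(z)         # 1
```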


The zeros might have to be dropped in A too (non-structural zeros on either side of the equality may mess it up)… I see you have already taken note of this.

Another point is the difficulty of benchmarking this operation, as the test will vary widely in timing depending on the specific instance of x and A (and the correct value of x in eachcol(A)).

An efficient implementation of in should stop as soon as it finds the first match; there is no point in comparing the remaining columns since the result won’t change. For benchmarking, one should probably consider the worst case, i.e., no match and a full search. In my example above, I chose one of the last columns to force a long search, but I confess that the choice was ad hoc.
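One way to set up such a worst case (a sketch; colsearch is a hypothetical stand-in for foo3 with the @views fix applied): a vector that matches no column forces the loop to scan every column before returning false.

```julia
using SparseArrays

# Stand-in for foo3 with the @views fix applied:
function colsearch(x, A)
    for j in axes(A, 2)
        r1 = SparseArrays.getcolptr(A)[j]
        r2 = SparseArrays.getcolptr(A)[j+1] - 1
        @views if rowvals(A)[r1:r2] == rowvals(x) &&
                  nonzeros(A)[r1:r2] == nonzeros(x)
            return true
        end
    end
    return false
end

A = spdiagm(1:1000)
x_hit  = sparsevec([950], [950], 1000)  # found at column 950: early exit
x_miss = sparsevec([950], [951], 1000)  # never found: full scan (worst case)

colsearch(x_hit, A)   # true
colsearch(x_miss, A)  # false
```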

Whether x in eachcol(A) should be true when x differs from a column only by non-structural zeros is indeed a design question. I would suggest treating that as true, because it is in line with x == A[:,i] and with the dense version from my initial post, y in eachcol(B).

Here is a little shortcut optimization (assuming no non-structural zeros):

```julia
function foo4(x, A)
    for j in axes(A, 2)
        r1 = SparseArrays.getcolptr(A)[j]
        r2 = SparseArrays.getcolptr(A)[j+1] - 1
        r2 - r1 + 1 == nnz(x) || continue
        @views if rowvals(A)[r1:r2] == rowvals(x)
            if nonzeros(A)[r1:r2] == nonzeros(x)
                return true
            end
        end
    end
    return false
end
```

It exploits the fact that a matching column must have the same number of stored non-zeros as the vector, skipping the comparison otherwise.

A relevant test:

```julia
julia> B = sprand(100, 100, 0.2);

julia> y = B[:, 100];

julia> @btime foo3($y, $B);
  272.737 ns (0 allocations: 0 bytes)

julia> @btime foo4($y, $B);
  82.257 ns (0 allocations: 0 bytes)
```
```julia
julia> @btime $x in eachcol($A);
  1.008 ms (0 allocations: 0 bytes)

julia> @btime $y in eachcol($B);
  200.541 μs (0 allocations: 0 bytes)

julia> @btime any($A' * $x .≈ $x' * $x)
  39.750 μs (988 allocations: 31.47 KiB)
true
```

This can give a false positive if a column of A is equal to x plus something orthogonal to x.
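A small constructed counterexample: with x = [1, 0], the column [1, 1] equals x + [0, 1], and [0, 1] is orthogonal to x, so the dot-product test reports a match even though no column equals x.

```julia
using SparseArrays

x = sparsevec([1], [1.0], 2)   # x = [1, 0]
A = sparse([1.0 0.0;
            1.0 0.0])          # column 1 is [1, 1] = x + [0, 1]

any(A' * x .≈ x' * x)  # true: a false positive
x in eachcol(A)        # false: no column equals x
```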


Very true!