Threaded loop throws an error when using a sparse matrix, but not with a dense matrix

I’m having trouble with a parallelized for loop that replaces the values of a sparse matrix. Here is a simple example that reproduces the error:

```julia
using SparseArrays

function test(θ, Ŵ_prime, Ŵ)
    # Maximum of the weighted difference and the index where it occurs.
    value, index = findmax((θ .^ 0.5) .* (Ŵ_prime .- Ŵ))
    return (value = value, index = index)
end

gridsize  = 100
θ_vec     = ones(gridsize)
Ŵsol      = rand(gridsize)
A_θ_e_new = spzeros(Float64, gridsize, gridsize)

Threads.@threads for i in eachindex(Ŵsol)
    idx              = test(θ_vec, Ŵsol, Ŵsol[i])[2]
    A_θ_e_new[i,idx] = 1.0
end
```

This produces the following error message:

```
nested task error: BoundsError: attempt to access 77-element Vector{Int64} at index [59]
```

However, if I change the sparse matrix

```julia
A_θ_e_new = spzeros(Float64, gridsize, gridsize)
```

to the dense matrix

```julia
A_θ_e_new = zeros(Float64, gridsize, gridsize)
```

the code works fine. Any suggestions as to what is causing the error? Thank you!

The problem is that inserting new non-zeros into a sparse matrix is not thread-safe, whereas writing into a dense matrix (where every element already has its own preallocated storage) is.
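One common workaround is to do the threaded part into a preallocated dense buffer and only build the sparse matrix afterwards, in a single serial step. Here is a minimal sketch, reusing `test`, `θ_vec`, `Ŵsol`, and `gridsize` from your example:

```julia
using SparseArrays

# Each thread writes only to its own slot of a preallocated dense vector,
# so nothing shared is resized inside the loop.
cols = Vector{Int}(undef, gridsize)
Threads.@threads for i in eachindex(Ŵsol)
    cols[i] = test(θ_vec, Ŵsol, Ŵsol[i]).index
end

# Assemble the sparse matrix once, serially, from (row, column, value) triplets.
A_θ_e_new = sparse(1:gridsize, cols, ones(gridsize), gridsize, gridsize)
```

Since every row gets exactly one entry, the `sparse(I, J, V, m, n)` constructor can build the whole matrix in one pass, with no concurrent mutation at all.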

Thank you! I didn’t know this. Is there a reason for it? Should we expect this limitation to be permanent?

In the docs (Sparse Arrays · The Julia Language) it says that sparse matrices are stored in compressed sparse column (CSC) format, with the actual non-zero values kept in a flat vector. Setting an element that is currently zero inserts into and resizes that vector, which is why it is not thread safe. The resizing only happens on writes: reads are likely thread safe, but writes are unlikely to ever become thread safe.
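You can see that resizing directly with the exported `nonzeros` accessor, which returns the flat vector of stored values. A small illustration:

```julia
using SparseArrays

A = spzeros(Float64, 3, 3)
println(length(nonzeros(A)))  # 0: no stored values yet

A[1, 2] = 1.0                 # inserting a brand-new non-zero...
println(length(nonzeros(A)))  # 1: the flat value vector was resized

A[1, 2] = 5.0                 # overwriting an already-stored entry...
println(length(nonzeros(A)))  # still 1: an in-place write, no resize
```

Two threads inserting new entries at the same time would both be resizing and shifting this one shared vector, which is exactly where the race in your loop comes from.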
