Can dicts be threadsafe?

dict = Dict{Int64,Vector{Float64}}()
nentries = 500_000

Threads.@threads for n in 1:nentries
    dict[n] = [rand() for i in 1:100]
end

@assert length(dict) == nentries

The assertion fails for me in Julia 1.1.0, as the dict is shorter than the number of entries. Is there a way to safely use dicts with threads?


1.3 (nightly) might have it.


The assertion passes in 1.3 alpha, certainly looks like dicts are threadsafe in the new build. Thank you!

edit: Forgot to export the JULIA_NUM_THREADS environment variable; the assertion does not pass after all. Curiously, 1.3 seems to be handling my actual code better than this minimal example.


push!(dict, n => rand(100)) works but seems to be slower when using multiple threads.

I don’t think this uses multithreading? I don’t see any locks or anything radically new in the Dict implementation in Julia Base in the current master, so I would be surprised if they are thread safe automagically.

There is a prototype package ThreadSafeDataStructures.jl which unfortunately is not operational currently. I would like to take a stab at it at some point, because I am also interested in having thread-safe arrays and dictionaries. FWIW, LRUCache.jl is thread safe since v1.0 and satisfies most of the dictionary interface, but it might be less efficient depending on what you want to use it for.


Basic data structures are not and probably never will be threadsafe—it introduces way too much overhead when you don’t need thread safety. If you want to safely share a dict between threads, you can use a lock, or better still, take a page out of Go’s playbook and have a single task which manages the dict and let other tasks send it messages to tell it to update the dict instead.
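For the first option, a minimal sketch of guarding a shared Dict with a `ReentrantLock` (the sizes and names here are just for illustration):

```julia
dict = Dict{Int,Vector{Float64}}()
lk = ReentrantLock()

Threads.@threads for n in 1:1000
    v = [rand() for _ in 1:100]  # do the expensive work outside the lock
    lock(lk) do
        dict[n] = v              # only the insertion is serialized
    end
end

@assert length(dict) == 1000
```

Keeping the work outside the critical section keeps contention low; only the brief `setindex!` is serialized.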


It’s interesting to know of this branch at ThreadSafeDataStructures.jl.

Indeed, there was a PR corresponding to this, but it is closed (unmerged):
https://github.com/JuliaCollections/ThreadSafeDataStructures.jl/pull/1

So to update, add, and delete, I should use a task and communicate via a channel.
Is it thread-safe to retrieve data from anywhere else?

This is the right approach. Long ago, in the beginning, Java also had thread-safe collections like java.util.Hashtable (they are still there but are basically deprecated), but in Java 1.2 it switched policy and introduced new thread-unsafe collections (HashMap, ArrayList, …), leaving it up to the user to add synchronization. Why? Performance overhead in the single-threaded case.


What is the benefit of doing it this way (with channels) instead of using packages like:

https://github.com/wherrera10/ThreadSafeDicts.jl/blob/master/src/ThreadSafeDicts.jl

https://github.com/JuliaCollections/ThreadSafeDataStructures.jl

Using ThreadSafeDicts.jl works perfectly fine.

include("ThreadSafeDicts.jl")  # the file linked above
using .ThreadSafeDicts

dict = ThreadSafeDict{Int64,Vector{Float64}}()
nentries = 500_000

Threads.@threads for n in 1:nentries
    dict[n] = [rand() for i in 1:100]
end

@assert length(dict) == nentries

length(dict)

No one is saying that you will get the wrong result. It's just that wrapping a lock around every operation is usually not that good of a synchronization strategy (in terms of performance and code maintainability).

With regard to performance, you pay the cost of the lock for any operation and you will also block while waiting for the lock to be released. In the channel case, you can probably go and do other work since putting something into a channel doesn’t necessarily block you.

With regards to code maintainability, for the lock strategy, you need duplicated data structures that just add locks around all the methods. This can be error-prone and toggling between the threaded and unthreaded case can be annoying since you probably want to use the non-locked version of the data structure when you run non-threaded code.

In my opinion, the channel approach is a more composable strategy. You add the synchronization on top of the existing machinery; you don't modify the machine itself. As a silly example, let's say people want to put things into a cupboard, but only one person is allowed to put things in there at a time. You can either have a lock with a single key, where everyone fights for the key and tries to put things in there, or you can put a guard outside the cupboard, to whom everyone leaves their things before going about their other business, and it is up to the guard to nicely put things into the cupboard. Putting it like that, it seems likely that the guard can do a better job, since he fully controls the insertion process and might e.g. insert things in bulk if possible.
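A minimal sketch of that "guard" idea: a single task owns the Dict and drains the channel, inserting in bulk when several requests are already waiting (names and sizes here are illustrative, not from any package):

```julia
requests = Channel{Pair{Int,Float64}}(1024)

guard = Threads.@spawn begin
    d = Dict{Int,Float64}()
    for kv in requests               # wakes whenever something arrives
        d[kv.first] = kv.second
        # bulk step: drain whatever else is already queued, without blocking
        while isready(requests)
            kv2 = take!(requests)
            d[kv2.first] = kv2.second
        end
    end
    d                                # returned once the channel is closed
end

Threads.@threads for n in 1:1000
    put!(requests, n => rand())      # producers never touch the Dict
end
close(requests)
@assert length(fetch(guard)) == 1000
```

The producers only ever call `put!`, which is thread-safe on a `Channel`; the Dict itself is touched by exactly one task, so it needs no lock.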


So is this the way to go?

@enum DictAction::UInt8 begin
    DictAction_Add
    DictAction_Remove
    DictAction_Get
end

function thread_safe_dict_via_channels(
    d::Dict,
    request_channel::Channel  # carries (action, key, value) tuples
)
    response_channel = Channel(100)

    Threads.@spawn for request in request_channel
        if request[1] == DictAction_Add
            d[request[2]] = request[3]
        elseif request[1] == DictAction_Remove
            delete!(d, request[2])
        elseif request[1] == DictAction_Get
            # Always respond, so the requester never blocks on a missing key.
            put!(response_channel, get(d, request[2], nothing))
        end
    end
    return response_channel
end

Also note that a thread-safe data structure is usually not what you want. Unless the problem you want to solve needs only a single shared data structure, you usually care about synchronizing between multiple pieces of state rather than just one. Admittedly, this is more likely to be the case for a Dict than for a single atomic variable, but I believe a lot of real-world problems will be complex enough to need multiple "basic" data structures to maintain their state. That's why you can't just add a lock to each individual data structure and call it a day, even from a correctness point of view, without even considering performance.


Edit: note that OTOH, you can add a global lock and call it a day, correctness-wise.
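A minimal sketch of that global-lock idea, assuming two related structures that must stay consistent with each other (the names `by_id`, `by_name`, and `register!` are made up for illustration):

```julia
const STATE_LOCK = ReentrantLock()
const by_id   = Dict{Int,String}()
const by_name = Dict{String,Int}()

function register!(id::Int, name::String)
    lock(STATE_LOCK) do
        by_id[id] = name      # both updates happen under the same lock,
        by_name[name] = id    # so no task ever sees them half-applied
    end
end

Threads.@threads for i in 1:100
    register!(i, "item$(i)")
end
@assert length(by_id) == length(by_name) == 100
```

A lock per Dict could not give this guarantee: another task could observe `by_id` updated while `by_name` still lacks the entry.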


So you’re saying that package is (usually) worthless? I’m not sure: are locks used even for read-only operations, or are those fast (is that why you wrote “usually”)? [Does so-called “lock-free programming” have any relevance here?]

For users/developers, not too up-to-speed with threading, would you recommend just not using it (start julia without -t), or if you want to learn, is there any way to protect all data structures with locks, to at least get correct (slow) code that you can later optimize, by gradually reducing locking? I think you’re saying use Channels as an alternative, which I need to look into.

One workaround is to preallocate dictionary entries!

nentries = 500_000
dict = Dict(1:nentries .=> Ref([]))  # every key pre-inserted, so no rehashing during the loop

Threads.@threads for n in 1:nentries
    dict[n] = [rand() for i in 1:100]
end
@assert count(isempty, values(dict)) == 0

Minimal example of saving the results of multithreaded work in a Dict, via a Channel:

using Base.Threads

ch = Channel(Inf)  # `Inf` size, so no blocking on `put!`
@threads for i in 1:100
    put!(ch, i => work(i))  # `work` stands for your per-item computation
end
d = Dict()
while !isempty(ch)
    i, w = take!(ch)
    d[i] = w
end