I have experienced similar issues when I started tasks with @spawn that began their life by waiting for something.
I guess your issue may be related: the scheduler sees those threads as available and reuses them instead of round-robining. As far as I could tell this is not a bug, although it was very annoying. I ended up using your solution from here:
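For illustration, here is a minimal sketch of the pattern I mean (the names are mine): a task that blocks right after being spawned, so that its thread looks free to the scheduler.

wake = Channel{Int}(1)
t = Threads.@spawn begin
    take!(wake)   # the task blocks immediately after being spawned
    println("woke up on thread ", Threads.threadid())
end
# while the task is blocked, the scheduler may treat its thread as free
# and place the next spawned task there as well
put!(wake, 1)     # wake the task
wait(t)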
Now my case is a little bit different, since I'm developing an actor library and I want parallel actors to use all available threads.
I could use the solution you mentioned to start tasks on predetermined threads (sketched below). But then I would have to do the load balancing myself, which I don't want to. In a first test Threads.@spawn seems to work for me, but Channel(..., spawn=true) does not.
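For reference, a sketch of the kind of pinning I mean, based on the internal jl_set_task_tid hook that e.g. ThreadPools.jl's @tspawnat relies on in this Julia version (spawnat is a hypothetical helper name):

# pin f to thread tid (1-based) as a sticky task;
# jl_set_task_tid is an undocumented runtime internal
function spawnat(tid, f)
    task = Task(f)
    task.sticky = true
    ccall(:jl_set_task_tid, Cvoid, (Any, Cint), task, tid - 1)
    schedule(task)
    return task
end

The load balancing is then up to the caller, e.g. spawnat(mod1(i, Threads.nthreads()), work) in a loop, which is exactly what I would rather avoid.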
Yes, it seems that there is a difference (I can confirm it on Linux, v1.5.2).
It may be worth opening an issue; I would also be happy to see more deterministic scheduling behavior when there are only a few threaded tasks.
On the other hand, I am not sure that filling all threads with tasks as soon as possible is the best strategy for every situation, so promising it in Base would not be wise. Maybe that's why the docs of @spawn say "any available thread".
After looking at the Threads.@spawn macro, I guessed that the spawning may be linked to the scope in which the macro is executed. If I execute the same for loop in a local scope, I get the same behavior as observed for Channel(..., spawn=true):
# assumes `using Base.Threads`; `tinfo` and the results channel `me`
# are defined in the earlier posts: each task first blocks on its
# channel, then reports its threadid() to `me`
function dospawn()
    for i in 1:nthreads()
        ch = Channel(1)
        Threads.@spawn tinfo(ch)   # the task starts its life waiting in tinfo
        yield()                    # give the freshly spawned task a chance to run
        put!(ch, me)               # wake it and hand it the results channel
    end
end
julia> dospawn()

julia> map(x->take!(me), 1:nthreads())
8-element Array{Int64,1}:
 2
 3
 7
 6
 4
 2
 7
 3
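For comparison, a Channel(..., spawn=true) analogue of the loop above could look like this (the body is my sketch, reusing the `me` channel from the earlier posts):

function dospawn_ch()
    for i in 1:nthreads()
        ch = Channel(1; spawn=true) do c
            take!(c)                      # start life waiting, as in tinfo
            put!(me, Threads.threadid())  # then report where we landed
        end
        yield()
        put!(ch, 1)
    end
end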