Apart from the obvious (one is a loop macro, the other one isn't), `tforeach` gives you:

- load balancing by default (it creates `2*nthreads()` tasks by default)
- more control over the "chunking", that is, how the iteration interval is divided into tasks (`@threads :dynamic/:static` always creates `nthreads()` tasks); see the sketch after this list
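For illustration, here is a minimal sketch contrasting the two. The `nchunks` keyword shown for `tforeach` is my assumption of how the chunking is tuned; the exact keyword name may differ between OhMyThreads versions, so check the docs of the version you're using:

```julia
using OhMyThreads: tforeach
using Base.Threads: @threads, nthreads

results = zeros(100)

# Base.Threads: always creates nthreads() tasks, one contiguous chunk each
@threads for i in 1:100
    results[i] = sin(i)
end

# OhMyThreads: creates 2*nthreads() tasks by default, which helps load
# balancing when iterations have uneven cost; the chunking can be tuned
# via a keyword (shown here as `nchunks`; exact name may vary by version)
tforeach(1:100; nchunks = 4 * nthreads()) do i
    results[i] = sin(i)
end
```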
We want to add more features, like new schedulers, perhaps automatic pinning of threads, and more in the future. As for the “macro API”, it gives us some flexibility to make some patterns, like using TLS, a bit simpler (although we generally prefer the higher order function API).
OhMyThreads.jl also supports reductions, which `@threads` doesn't (although `@distributed` does for multiprocessing). The `@threaded` macro will automatically expand to `tmapreduce` (rather than `tforeach`) if you provide a `reducer` argument, e.g.
```julia
julia> @threaded reducer=(+) for i in 1:3
           sin(i)
       end
1.8918884196934453
```
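The same reduction can also be written directly with the higher-order function API; a minimal sketch:

```julia
using OhMyThreads: tmapreduce

# map sin over 1:3 and combine the results with + in parallel
tmapreduce(sin, +, 1:3)  # ≈ 1.8918884196934453
```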
But now it’s time for me to stop (1) promoting our package and (2) derailing this thread any further.