I would like to experiment with parallelizing an embarrassingly parallel computation on an iterable, using the new multithreading in Julia 1.3. I would prefer to leave return-type determination to Julia, so I would rather not preallocate an output array.
Do I just need to `@spawn` and then `fetch`, as in
```julia
using Base.Threads: @spawn, threadid, nthreads  # fetch lives in Base, not Base.Threads
@show nthreads()

# Spawn one task per element, then fetch all results;
# map infers the result element type for us.
function ploop(f, itr)
    map(fetch, map(i -> @spawn(f(i)), itr))
end

# Same spawning pattern, but reduce with op as results are fetched.
function pmapreduce(f, op, itr)
    mapreduce(fetch, op, map(i -> @spawn(f(i)), itr))
end

f(i) = (@show threadid(); sleep(rand()); Float64(i))

ploop(f, 1:10)
pmapreduce(f, +, 1:10)
```
(The motivation for this approach is that in practice `f` itself can use threads, and the whole thing hopefully composes neatly.)
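To make the composition point concrete, here is a minimal sketch of what I have in mind, where `inner` is a hypothetical stand-in for an `f` that spawns its own tasks. Since Julia's task scheduler runs all tasks on one shared thread pool, the nested `@spawn` calls should compose without oversubscribing threads:

```julia
using Base.Threads: @spawn

# Hypothetical f that is itself internally parallel:
# it spawns nested tasks and reduces over them.
function inner(i)
    parts = map(j -> @spawn(i * j), 1:4)  # nested parallelism inside f
    sum(fetch, parts)                      # = i * (1 + 2 + 3 + 4) = 10i
end

# Outer parallel mapreduce, same pattern as above.
function pmapreduce(f, op, itr)
    mapreduce(fetch, op, map(i -> @spawn(f(i)), itr))
end

# Outer and inner tasks share the same thread pool.
pmapreduce(inner, +, 1:10)  # sum of 10i for i in 1:10 == 550
```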