I am looking for a @distributed reducer for ... a version which simply parallelizes the function as much as possible, i.e. using both the available machines and the available threads. Does @distributed already do this?
If not, is there a package which supports this? (I looked into the Transducers.jl ecosystem, but there the two are also separate.)
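For context, this is the Distributed stdlib pattern I mean; as far as I know it only spreads work across worker processes, not across the threads inside each worker:

```julia
using Distributed
addprocs(4)  # worker processes on the local machine

# @distributed with a reducer: each worker computes partial sums,
# which are then combined with (+). Processes only, no threading.
total = @distributed (+) for i in 1:1_000_000
    i^2
end
```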
Dagger.jl is your best (and only?) bet.
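Not a ready-made wrapper, but a minimal hand-rolled sketch of a reduction with Dagger tasks (the chunk boundaries here are illustrative); Dagger's scheduler can place these tasks on both worker processes and the threads within them:

```julia
using Distributed
addprocs(2)                      # worker processes (could be remote machines)
@everywhere using Dagger

# Split the work into chunks and spawn one Dagger task per chunk;
# Dagger decides where each task runs (which process and thread).
chunks = [1:250, 251:500, 501:750, 751:1000]
partials = [Dagger.@spawn sum(r) for r in chunks]

# Combine the partial results on the caller.
total = sum(fetch.(partials))    # 500500
```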
I thought so…
Has someone built a @distributed-like wrapper around Dagger.jl?
I did a little research and it is not super straightforward. The best workaround is to take DTables.jl and do a reduce on it, but it is not exactly like @distributed. I guess it would be nice to have such functionality inside Dagger.jl itself.
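A rough sketch of that DTables.jl workaround (the table contents and chunk size are made up; check the DTables.jl docs for the exact reduce keyword arguments):

```julia
using Distributed
addprocs(2)
@everywhere using DTables

# Partition a source table into chunks of 25 rows; Dagger schedules
# the per-chunk work across processes and threads.
dt = DTable((a = collect(1:100),), 25)

# reduce returns a Dagger task; fetch it to get the result,
# which is a NamedTuple with one entry per column.
r = fetch(reduce(+, dt))  # e.g. (a = 5050,)
```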