Behavior of worker pool in pmap

pmap

#1

Hi everyone, I’ve been coding in Julia for a while, but I haven’t used any of the parallelization features beyond the Distributed package. My experience with parallelization is mainly with OpenMP in Fortran. I want to fully understand the behavior of pmap in the current version, since I can’t find enough information in the documentation.

My main question is how pmap assigns threads to each of the parallel processes. In the canonical example in the documentation (here), pmap takes svd as an input and evaluates it in parallel. svd uses (I think) either the MKL or OpenBLAS library and probably has some internal parallelization depending on how that library is configured. Does pmap limit the number of cores used for each parallel evaluation? For example, does worker 1 get 2 cores, worker 2 get 2 cores, etc.? Does pmap do this dynamically? If not, doesn’t that mean that each of the individual processes is requesting threads from a common pool on the machine? And how would I make sure that doesn’t happen with Julia’s parallelization? My mental model of pmap, however, is that it evaluates each task as though it were completely independent, similar to running the same thing on different computers.
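To make the question concrete, here is a minimal sketch (my own, not from the docs) of the situation being asked about: each pmap worker is a separate OS process with its own BLAS thread pool, and nothing limits the combined thread count unless you cap it yourself, e.g. with `BLAS.set_num_threads`. The `addprocs(2)` count and the matrix sizes are arbitrary assumptions.

```julia
using Distributed
addprocs(2)                          # assumption: two local worker processes

@everywhere using LinearAlgebra
@everywhere BLAS.set_num_threads(1)  # cap BLAS threads on every worker

mats = [rand(50, 50) for _ in 1:4]
res = pmap(svd, mats)                # each svd runs single-threaded on one worker
```

Without the `set_num_threads` call, each worker’s BLAS would independently use its default thread count, which is where oversubscription can come from.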

I ask this because I have a task that (I think) is ideally suited for pmap. It involves many functions, and one of them (svd, etc.) is parallelized with the @distributed macro. I wasn’t sure if just specifying the worker pool in pmap would automatically assign threads in the smartest way, or if I needed some other implementation. If anyone has extra details about how pmap assigns resources from the worker pool, I’d really appreciate whatever you know. I’m sorry if this was answered elsewhere.
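For reference, pmap does accept an explicit pool, which restricts which processes it schedules onto (it does not manage threads within them). A minimal sketch, assuming four local workers have been added:

```julia
using Distributed
addprocs(4)                          # assumption: four local worker processes

# Restrict pmap to an explicit pool of two workers; the other two
# workers stay free for other tasks.
pool = WorkerPool(workers()[1:2])
squares = pmap(x -> x^2, pool, 1:8)
```

The pool only controls *which processes* receive tasks; any threading inside each task (BLAS, `Threads.@threads`, etc.) is still up to that process.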


#2

As far as I know, pmap doesn’t use threads. It executes tasks on separate worker processes running on the available cores.
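A quick way to check this (my own sketch, assuming two workers are added): `myid()` reports the process id, and when workers exist, pmap dispatches every item to a worker process (id ≥ 2), never to a thread on the master.

```julia
using Distributed
addprocs(2)                      # assumption: two local worker processes

# Each task reports which process it ran on; with workers available,
# the master (id 1) only orchestrates and never appears in the result.
ids = pmap(_ -> myid(), 1:6)
```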


#3

Khm, AFAIK workers aren’t “executed”; they are representations of computational resources (separate processes).

The intended pattern, in idiomatic form (keeping the ellipses as placeholders):

@everywhere function parfunc(x, y1, ..., yK)
    return SingleWorkerFunc(y1, ..., yK; kwarg1=..., kwarg2=..., ..., kwargM=...)
end

ArrayOfAny = pmap(parfunc, CollectionOf_x, CollectionOf_y1, ..., CollectionOf_yK)

with length(CollectionOf_x) == length(CollectionOf_y1) == … == length(CollectionOf_yK).
I think we could even drop CollectionOf_x and the x argument of parfunc, since pmap already has the other collections to iterate over.
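To illustrate the multi-collection behavior concretely (a minimal sketch of my own): pmap zips its collections element-wise, like map, and with no added workers it simply runs on the current process.

```julia
using Distributed

# Two collections are consumed in lockstep, one pair per task.
res = pmap((x, y) -> x + y, 1:3, 10:10:30)  # → [11, 22, 33]
```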