Can you communicate with those distributed processes? Are they running on your local machine? I guess so…
Is it possible that you have used a lot of memory and have started swapping on your local machine?
It may have to do with the overhead of pmap. In general, pmap should be used when the function g does a large amount of work, enough to offset the communication overhead. In your case, apparently, g runs much faster than the time it takes to send the work out to the different processes. If you want to parallelize a fast function (such as your g), either use low-level primitives such as @spawnat and fetch, or a macro like @distributed on your for loop.
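To make the low-level option concrete, here is a minimal sketch of @spawnat and fetch. The function g here is just a hypothetical stand-in for yours, and the worker count is an arbitrary choice for a local machine:

```julia
using Distributed

# Spin up local worker processes if we only have the main process
# (2 workers is an arbitrary choice for this sketch).
nprocs() == 1 && addprocs(2)

# A deliberately cheap stand-in for the poster's function g.
@everywhere g(x) = x + 1

# @spawnat schedules the call on a specific worker and returns a Future;
# fetch blocks until the result is available and retrieves it.
f = @spawnat workers()[1] g(10)
result = fetch(f)
```

Note that even this round trip pays a serialization cost, which is why for truly tiny per-call work a @distributed loop (which batches iterations across workers) tends to fare better.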
@affans makes an excellent point. If you are going to parallelize by distributing functions, you need to give each worker a 'decent' amount of work to do. My apologies to non-native English speakers.
As @affans says, the time taken to do the task should be greater than the time needed to communicate, i.e. to send out and return the data.
I still think that pmap is not the way to go here (in the context of your function). See this excerpt from the documentation:
Julia’s pmap is designed for the case where each function call does a large amount of work. In contrast, @distributed for can handle situations where each iteration is tiny, perhaps merely summing two numbers. Only worker processes are used by both pmap and @distributed for for the parallel computation. In case of @distributed for, the final reduction is done on the calling process.
Can you try @distributed for and see if it speeds up your result?
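For reference, a minimal sketch of what a @distributed for loop with a reduction looks like; the loop body here (counting even numbers) is just a placeholder for a cheap per-iteration computation like yours:

```julia
using Distributed

# Make sure some local workers exist (2 is an arbitrary choice).
nprocs() == 1 && addprocs(2)

# @distributed with a reduction operator (+) partitions the range across
# workers in batches; the value of the loop body on each iteration is
# reduced with (+), and the final reduction happens on the calling process.
total = @distributed (+) for i in 1:1_000_000
    iseven(i) ? 1 : 0
end
```

Because iterations are handed out in batches rather than one call at a time, the per-iteration communication overhead that hurts pmap here is amortized away.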