Hi all,

I know next to nothing about parallel processing, so apologies if this question is obvious. I start Julia with `julia -p 2`. Here is a broken, simplified version of what my code does:

```
d = read(some_filepath)                  # d is really big and takes up most of my RAM
some_vector = read(some_other_filepath)
@everywhere n = 1
while n <= N
    @everywhere current_d_small = some_function_available_everywhere(d, n)
    @everywhere anonymous_function = (x -> another_function_available_everywhere(x, current_d_small))
    y = pmap(anonymous_function, some_vector)
    n += 1
end
```

This code obviously doesn’t work because of the line:

```
@everywhere current_d_small = some_function_available_everywhere(d, n)
```

This line errors because `d` is not available everywhere. But I can’t make `d` available everywhere, since a single copy of it takes up most of my RAM.

For the actual work done in `pmap` I only need `current_d_small`, which is *much* smaller than `d`, to be available to all workers. I’ve read the docs, but can’t seem to work out how, on each outer iteration, to get a copy of `current_d_small` to each of the workers so I can make the `pmap` call.
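In case it helps, here is a toy version of what I imagined, based on the `$`-interpolation I saw mentioned for `@everywhere` (all the names here are made up, and I don’t know whether this is idiomatic or even correct):

```julia
using Distributed
addprocs(2)

# made-up stand-in for another_function_available_everywhere
@everywhere toy_function(x, small) = x + sum(small)

d = collect(1.0:100.0)   # stands in for the big object; lives on the master only
some_vector = [1, 2, 3]

results = []
for n in 1:2
    # the small slice is computed on the master, where d lives
    current_d_small = d[1:n]
    # copy just the small object to every worker via $-interpolation
    @everywhere current_d_small = $current_d_small
    # each pmap call can now see current_d_small
    y = pmap(x -> toy_function(x, current_d_small), some_vector)
    push!(results, y)
end
```

But I may be misreading the docs, so please point out if this pattern is wrong.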

Any help would be much appreciated. Also, bear in mind that I know very little about parallel processing, so maybe I should be using something completely different from `pmap` here.

Thanks,

Colin