How to create SharedArrays on other workers

I am trying to create a SharedArray on a different worker in Julia 1.3.0.
Consider the following code snippet:

using Distributed
addprocs(1)

using SharedArrays

f = @spawnat 2 SharedVector{Float64}(10)
fetch(f)

If I try to execute it, I get an error:

ERROR: On worker 2:
UndefVarError: SharedVector not defined

What am I doing wrong? Is it possible to create SharedArrays on other workers at all?

The reason I am trying to do it in the first place is the following.
For my parallel computation, I allocate several SharedArrays from the master process and then pass them to the workers. If I run this computation on a NUMA machine, there will be a problem for the workers which live on a NUMA node different from the one occupied by the master process (but maybe I am wrong). I think that the problem could be solved if the Arrays backing the SharedArrays were created on the respective workers.
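A minimal sketch of the pattern I mean (assuming two local workers; the array is created once on the master and then filled by the workers):

using Distributed
addprocs(2)
@everywhere using SharedArrays

# Created once on the master; the same shared-memory segment is
# mapped into every worker on this machine.
S = SharedVector{Float64}(10)

# Workers write to the array in parallel.
@sync @distributed for i in 1:length(S)
    S[i] = myid()
end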

Hello, @Gregstrq.

I think you have to create the SharedArray first, and later you can work with it from the different processes. It makes no sense to try to create the shared vector on only one process: a shared vector is a vector whose values are shared between several processes, see Parallel Computing · The Julia Language

Take into account, anyway, that the processing time must be expensive enough to compensate for the cost of distribution; if not, it is better to use threads.
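For comparison, a minimal threads-based sketch (threads share memory within a single process, so no SharedArray is needed; start Julia with, e.g., julia -t 4):

using Base.Threads

v = zeros(Float64, 10)

# Each thread fills part of the same in-process array;
# no inter-process communication is involved.
@threads for i in eachindex(v)
    v[i] = i^2
end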

Check this thread:

In that thread, the function was not defined on the other worker.
Here, using SharedArrays should have defined the necessary structures on all of the workers.

I don’t think so.
Here and there it should be

@everywhere using SharedArrays
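Applied to the snippet above, the corrected version would be:

using Distributed
addprocs(1)

@everywhere using SharedArrays  # load *and* bring into scope on all workers

f = @spawnat 2 SharedVector{Float64}(10)
fetch(f)  # now returns a 10-element SharedVector{Float64}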

I think you have to create the SharedArray first, and later you can work with it from the different processes. It makes no sense to try to create the shared vector on only one process: a shared vector is a vector whose values are shared between several processes, see Parallel Computing · The Julia Language

I meant that I construct the SharedArrays on the master process, but they are indeed shared across all workers.
My problem is with memory allocation: on a machine with non-uniform memory access (NUMA), the cores have faster access to some parts of RAM and slower access to others.
I am not sure, but I think that when you create a SharedArray on some process, the memory is allocated in the region of RAM local to the CPU this process runs on. If you then use this SharedArray from a process running on another NUMA node, access to the underlying memory would be slower.
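If that is the issue, one thing that might help (a sketch based on the documented init keyword; whether it actually places the pages locally depends on the OS using a first-touch allocation policy) is to let each worker touch its own chunk first:

using Distributed
addprocs(2)
@everywhere using SharedArrays

# `init` runs on every process that maps the array, so each worker
# writes (first-touches) only its own local index range.
S = SharedArray{Float64}((10,); init = A -> A[localindices(A)] .= 0.0)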

I don’t think so.
Here and there it should be

@everywhere using SharedArrays

My bad, you were right: using causes the module to be loaded on all processes, but it is brought into scope only on the process executing the statement.
I should have checked the docs more thoroughly.


If someone else encounters a similar problem, I should also point out that an alternative to

@everywhere using SharedArrays

is to use

f = @spawnat 2 SharedArrays.SharedVector{Float64}(10)
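For completeness, the full qualified variant would look like this (a sketch; it relies on the fact that using SharedArrays on the master loads the module on all processes, even though the name is brought into scope only locally):

using Distributed
addprocs(1)

using SharedArrays  # loads the module on all processes, in scope only here

# The fully qualified name means worker 2 needs no `using` of its own.
f = @spawnat 2 SharedArrays.SharedVector{Float64}(10)
fetch(f)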

I may be talking out of turn here. I have a lot of experience with cpusets and the corresponding memory sets on a large NUMA machine (a 768-core SGI UV 1000).
These days you would use cgroups for this.
Which then leads to containerisation.
So I think the answer to what you want to accomplish is to run the Julia processes in containers, which then have isolated memory spaces, with each container's memory pinned to the particular CPUs it is allocated to.


Do you know of a good guide about using Julia with cgroups?