Difference between SharedArrays and DistributedArrays

Hello,

I am struggling with understanding the difference between SharedArrays and DistributedArrays.

I am pretty new to parallelism, and I do not understand what is meant by this passage from the documentation:

Shared Arrays use system shared memory to map the same array across many processes. While there are some similarities to a DArray, the behavior of a SharedArray is quite different. In a DArray, each process has local access to just a chunk of the data, and no two processes share the same chunk; in contrast, in a SharedArray each “participating” process has access to the entire array. A SharedArray is a good choice when you want to have a large amount of data jointly accessible to two or more processes on the same machine.

SharedArrays: I understand that the same array is copied for each of the cpus. So, if I have a vector with N components and P cpus, I will end up with a stored vector of N*P components.

DistributedArrays: I understand that only one array exists and each cpu has access to a portion of the array.

I would like to understand what the difference is between these two types of data. In case it helps, I intend to run my code on a single machine with multiple cpus. Specifically, I have an array of dimensions (d,N) with N d-dimensional initial conditions to be run through Newton’s method. I would like to run each initial condition on a different cpu.
Pseudocode:

xIni = rand(d, N)      # array of dimensions (d,N) with initial conditions
xOpt = zeros(d, N)     # array of dimensions (d,N) to store optimal solutions
for n in 1:N
    xOpt[:, n] = RunNewton(xIni[:, n])
end

Could you please tell me what the difference is between the two types of data, and which one is the most useful in my case?

Thank you very much in advance.

Best regards.

No, SharedArrays are shared, not copied.
If you have a SharedVector of length N on P cpus, you will only need memory for those N elements, even though all P cpus have access.

Here’s how I would describe the two types of arrays:

  • SharedArrays: Every worker sees the entire array.
  • DistributedArrays: Every worker sees only “its own share” of the array.

Your understanding of SharedArrays is hence wrong: SharedArrays share the underlying memory (who would have thought? :exploding_head: ). Such an array requires storage for only N components, not N*P, and if one worker updates or overwrites a component, then the other workers also see that change.
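
To make this concrete, here is a minimal sketch (the two local workers and the array length are arbitrary choices) showing that a write from one worker is visible everywhere:

using Distributed
addprocs(2)                          # two local worker processes
@everywhere using SharedArrays

s = SharedArray{Int}(4)              # one shared block of memory, length 4
fill!(s, 0)

# let the first worker overwrite an entry ...
remotecall_fetch(() -> (s[1] = 42), workers()[1])

# ... and the change is immediately visible on the master and all other workers
s[1]                                 # == 42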

I’d say the main disadvantage of each type of memory is as follows:

  • SharedArrays: Requires synchronisation, i.e. you must make sure that no worker tries to update an entry while another is reading it. Getting this wrong is known as a race condition, and if you do get it wrong, your code will silently produce wrong results, which is the worst possible type of bug you can have.
  • DistributedArrays: Requires explicit communication between the workers. It’s a bit more work to set up, but on the other hand you can be sure that you will not encounter a race condition (see the sketch after this list).
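
For comparison, here is a rough sketch of what the DistributedArrays side looks like; it assumes the DistributedArrays.jl package is installed and uses its distribute and localpart functions:

using Distributed
addprocs(2)                            # two local worker processes
@everywhere using DistributedArrays    # assumes DistributedArrays.jl is installed

A = distribute(collect(1.0:8.0))       # split an ordinary vector across the workers

# each worker holds (and directly works on) only its own chunk:
for w in workers()
    chunk = remotecall_fetch(() -> localpart(A), w)
    println("worker $w holds ", chunk)
end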

I’d say SharedArrays are easier in your case. Avoiding race conditions is trivial (each worker reads from and writes to memory locations which are completely independent), and the result is easier to handle if everything lives in the same address space.
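
In your setting that could look roughly like the following sketch; the sizes, the number of workers, and the RunNewton body are placeholders you would replace with your own:

using Distributed
addprocs(4)                           # e.g. 4 local workers; adjust to your machine
@everywhere using SharedArrays

d, N = 3, 100                         # placeholder problem sizes
xIni = SharedArray{Float64}(d, N)     # initial conditions, visible to all workers
xOpt = SharedArray{Float64}(d, N)     # optimal solutions, visible to all workers
xIni .= rand(d, N)

@everywhere function RunNewton(x0)
    # placeholder for your actual Newton iteration
    return x0
end

@sync @distributed for n in 1:N
    # each iteration writes only to its own column, so there is no race condition
    xOpt[:, n] = RunNewton(xIni[:, n])
end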

Ok, I understand. Thank you for your reply.

Oh, I see. Thank you very much for your detailed reply. I will code with SharedArrays then. Thank you very much again.

If I may revive this not-so-old thread, what then is the difference between a SharedArray and a standard array?

SharedArrays are shared across processes (separate workers in Julia). Obviously SharedArrays require shared memory, so this usually means separate workers on the same machine rather than workers distributed across a cluster.

Standard arrays are local to each process (Julia worker) only, but they can be “shared” across threads.

On a shared-memory machine I don’t really see an argument for using multiple workers. I believe Julia’s threading mechanisms will be much more effective, and then ordinary arrays are quite sufficient.
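
For completeness, a threads-based version with ordinary arrays could look roughly like this sketch (start Julia with several threads, e.g. julia -t 4; RunNewton is again a placeholder):

d, N = 3, 100                 # placeholder problem sizes
xIni = rand(d, N)             # ordinary array of initial conditions
xOpt = zeros(d, N)            # ordinary array for the solutions

function RunNewton(x0)
    # placeholder for the actual Newton iteration
    return x0
end

Threads.@threads for n in 1:N
    # each iteration touches only its own column, so there is no data race
    xOpt[:, n] = RunNewton(xIni[:, n])
end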