Dividing a 1D domain into subdomains using Distributed Arrays (boundary element implementation)

Hello everyone,

I am new to Julia and this is a baby-step towards writing boundary element simulations with distributed computing.

I wish to divide a 1-D line (a sine curve) into subdomains (elements) and distribute them using DArrays.
Question 1: I don’t understand how the I->fill syntax below works. The result looks intuitive, but I am not entirely sure how the code knows which values of ‘I’ to pick.

using Distributed, DistributedArrays, InteractiveUtils

# Add 4 processors only once
if nprocs() == 1
     addprocs(4)
else
     println("Only add 4 processors")
end

@everywhere using Distributed, DistributedArrays, InteractiveUtils

npts = 17
nelm = npts - 1
x = DArray(I->fill(myid(), length.(I)), (1, nelm), workers()[1:nworkers()], [1,nworkers()])
julia> x
1×16 DArray{Int64, 2, Matrix{Int64}}:
 2  2  2  2  3  3  3  3  4  4  4  4  5  5  5  5

This is also related to the I that appears in the Game of Life cellular-automaton example in the DistributedArrays.jl documentation (Introduction · DistributedArrays.jl).
I don’t understand how the code knows which values of I to iterate over in DArray(size(d), procs(d)) do I.
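To make Question 1 concrete, here is a small probe (just a sketch, using the 4-worker setup above) that prints the I each worker receives. As far as I can tell from the documentation, the constructor partitions the global index space (1, nelm) across the workers according to the dist argument, and calls the init function once on each worker with I bound to a tuple of index ranges describing that worker's block, so fill(myid(), length.(I)) allocates a local chunk of exactly that size:

probe = DArray((1, nelm), workers()[1:4], [1, 4]) do I
    # I is a tuple of global index ranges, e.g. (1:1, 5:8) on the second worker
    println("worker ", myid(), " received I = ", I)
    fill(myid(), length.(I))
end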

Question 2: How do I implement the following two methods using DArrays?

The sine curve

x = LinRange(-π,π,npts)

is discretized as

17-element LinRange{Float64}:
 -3.14159,-2.74889,-2.35619,-1.9635,-1.5708,-1.1781,-0.785398,-0.392699,0.0,0.392699,0.785398,1.1781,1.5708,1.9635,2.35619,2.74889,3.14159

which I wish to store on 4 processors as

Method 1 (store 4 elements, i.e. 5 nodes, in each processor, with the shared boundary nodes duplicated)
4 elements in Proc 1 → -3.14159, -2.74889, -2.35619, -1.9635, -1.5708
4 elements in Proc 2 → -1.5708, -1.1781, -0.785398, -0.392699, 0.0
4 elements in Proc 3 → 0.0, 0.392699, 0.785398, 1.1781, 1.5708
4 elements in Proc 4 → 1.5708, 1.9635, 2.35619, 2.74889, 3.14159

As each processor stores whole elements (both endpoints of every element are local), no communication with the other processors is needed to compute element properties such as normal vectors; a sketch of this appears after the final code block below.

Method 2 (store the nodes without duplication, 4 per processor)
4 nodes in Proc 1 → -3.14159, -2.74889, -2.35619, -1.9635
4 nodes in Proc 2 → -1.5708, -1.1781, -0.785398, -0.392699
4 nodes in Proc 3 → 0.0, 0.392699, 0.785398, 1.1781
5 nodes in Proc 4 → 1.5708, 1.9635, 2.35619, 2.74889, 3.14159

Here the end nodes are not stored twice, so some communication is required when computing properties of the elements that straddle a processor boundary. Method 2 is similar to the Game of Life cellular-automaton example in the documentation of DistributedArrays.jl.
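For reference, here is one way I imagine Method 2 could look with the do-block constructor, letting the library choose the chunk boundaries (only a sketch under the 4-worker setup above; the split it picks for 17 nodes need not be exactly the 4/4/4/5 listed):

xg = LinRange(-π, π, 17)    # global node coordinates
Dnodes = DArray((17,), workers()[1:4], [4]) do I
    # I[1] is the range of global node indices assigned to this worker
    collect(xg[I[1]])
end
# check which nodes each worker actually received
[@fetchfrom p localindices(Dnodes) for p in workers()[1:4]]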

Any help is greatly appreciated. Thank you.

I have gotten it to work, but it may not be written in the most elegant way as I am relatively new to Julia (< 1 month). I still don’t understand how to initialize such arrays using dfill(v, args...) = DArray(I->fill(v, map(length,I)), args...) or lines such as DArray(size(d), procs(d)) do I; it is not clear to me how the I values are picked:
https://juliaparallel.github.io/DistributedArrays.jl/stable/#Distributed-Array-Operations-1
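For what it is worth, this is how I currently read that dfill definition (same 4-worker assumption as above): the trailing args... are passed straight to the DArray constructor, which derives the I ranges from its dims/procs/dist arguments and calls the init function once per worker, so map(length, I) is simply the size of that worker's local chunk:

dfill(v, args...) = DArray(I -> fill(v, map(length, I)), args...)
z = dfill(3.14, (1, 16), workers()[1:4], [1, 4])
# on the third worker, for example, the init function is called with
# I = (1:1, 9:12) and fill(3.14, (1, 4)) allocates that 1×4 local chunk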

If someone can comment on any better way to do this, I will be grateful.

using Distributed, DistributedArrays, InteractiveUtils, LinearAlgebra

# Add 4 processors only once
if nprocs() == 1
     addprocs(4)
else
     println("Only add 4 processors")
end

@everywhere using Distributed, DistributedArrays, InteractiveUtils, LinearAlgebra

npts = 16
# Method 1: 16 nodes on [-π, π], 4 per worker, nothing stored twice
# (note this is the no-duplication layout described under Method 2 above)
x1 = Array{Float64}(undef, (npts, 1))
x1[:, 1] = -π:2π/(npts-1):π

# Remember p runs over the workers 2 to 5, not 1 to 4, so
# x1[4p-7:4p-4] selects nodes 1:4, 5:8, 9:12, 13:16
xsplit = [@spawnat p x1[4p-7:4p-4] for p in workers()[1:4]]
# linear-index slices of x1 are Vectors, so D1 is a 16-element 1-D DArray
D1 = DArray(xsplit)
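Because nothing is stored twice in D1, a property of an element that straddles two chunks needs one ghost node from the neighbouring worker. Here is a sketch of computing all 15 element lengths this way (the names elens and Elens1 are mine; indexing a DArray with a global index performs the remote fetch):

elens = [@spawnat p begin
    gi  = localindices(D1)[1]       # global node indices owned by this worker
    loc = localpart(D1)
    if last(gi) < length(D1)
        ghost = D1[last(gi) + 1]    # one remote fetch from the neighbouring chunk
        diff(vcat(loc, ghost))
    else
        diff(loc)                   # the last worker has no right neighbour
    end
end for p in workers()[1:4]]
Elens1 = DArray(elens)              # 15 element lengths on the 16-node grid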

# Method 2: 17 nodes on [-π, π], 5 per worker; the shared boundary nodes
# 5, 9, 13 are stored twice (the duplicated layout described under Method 1
# above), so each worker holds 4 whole elements
x2 = Array{Float64}(undef, (npts+1, 1))
x2[:, 1] = -π:2π/npts:π

# Remember p runs over the workers 2 to 5, not 1 to 4, so
# x2[4p-7:4p-3] selects nodes 1:5, 5:9, 9:13, 13:17
xsplit2 = [@spawnat p x2[4p-7:4p-3] for p in workers()[1:4]]
D2 = DArray(xsplit2)   # a 20-element 1-D DArray (duplicates included)
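Since D2 stores the shared boundary nodes twice, every worker holds complete elements, so per-element properties need no communication at all; this is the point of the duplicated layout described under Method 1 above. For example, the 16 element lengths (the names lsplit and Elens2 are mine):

lsplit = [@spawnat p diff(localpart(D2)) for p in workers()[1:4]]
Elens2 = DArray(lsplit)   # 16 element lengths, 4 per worker, no communication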