How can I use more than one processor, or give tasks to each processor?

Hello everyone,

I am trying to use parallel computing in Julia.

I have two ideas: one is to start Julia with more processors (I do not know how to do that using Atom), and the other is to assign a task to each processor.

For example, I have three things to calculate: Qa, Qb and Qc.

Qa, Qb and Qc are independent, so one processor could run Qa, another Qb, and another Qc. Is that possible? Should I use @parallel (+) or some other command, and where?

using Interpolations   # provides interpolate, BSpline, scale

function interpQa()
    ag = 0.01:0.1:1.51
    aσ = 0.07:0.05:0.4

    if PrimeiraVez
        mQ = [Q(g, sigma) for g in ag, sigma in aσ]   # matrix of Q values on the grid
        writedlm("Qa_cobre.dat", mQ)
    else
        mQ = readdlm("Qa_cobre.dat")
    end

    iQ = interpolate(mQ, BSpline(Cubic(Line())), OnGrid())    # interpolate on the grid
    sQ = scale(iQ, ag, aσ)
    (x, y) -> sQ[x, y]
end

# How to use:
# Qb = interpQb()
# Then call Qb(g, sigma, α)
function interpQb()
    ag = 0.01:0.1:1.51
    aσ = 0.07:0.05:0.4
    aα = 0:0.2:3

    if PrimeiraVez
        mQ = [Qd(g, sigma, α) for g in ag, sigma in aσ, α in aα]   # values of Q on the grid
        writedlm("Qb_cobre.dat", mQ)
    else
        mQ = readdlm("Qb_cobre.dat")
    end

    iQ = interpolate(mQ, BSpline(Cubic(Line())), OnGrid())    # interpolate on the grid
    sQ = scale(iQ, ag, aσ, aα)
    (x, y, z) -> sQ[x, y, z]
end

# How to use:
# Qc = interpQc()
# Then call Qc(g, sigma)
function interpQc()
    ag = 0.01:0.1:1.51
    aσ = 0.07:0.05:0.4

    if PrimeiraVez
        mQ = [Qi(g, sigma) for g in ag, sigma in aσ]   # values of Q on the grid
        writedlm("Qc_cobre.dat", mQ)
    else
        mQ = readdlm("Qc_cobre.dat")
    end

    iQ = interpolate(mQ, BSpline(Cubic(Line())), OnGrid())    # interpolate on the grid
    sQ = scale(iQ, ag, aσ)
    (x, y) -> sQ[x, y]
end



time0 = time()
println("Generating the function Qa(g,sigma)")
Qa = interpQa()             ## here we calculate Qa
tempod = (time() - time0)/60
println("Elapsed time: $tempod min")

time0 = time()
println("Generating the function Qb(g,α,sigma)")
Qb = interpQb()             ## here we calculate Qb
tempod = (time() - time0)/60
println("Elapsed time: $tempod min")

time0 = time()
println("Generating the function Qc(g,sigma)")
Qc = interpQc()             ## here we calculate Qc
tempod = (time() - time0)/60
println("Elapsed time: $tempod min")

The manual explains this in detail:

https://docs.julialang.org/en/release-0.6/manual/parallel-computing/

remotecall, @parallel, pmap, etc.

For me it is not well explained, because I run my code from Atom, not from the Julia terminal.

So what do I write in Atom to run Julia on more than one processor?

Or should I put @spawn commands into the code to assign each task to a processor?

You can call addprocs(N) to add N worker processes and then use the parallel computing tools. Or use multithreading, where the number of threads is an option in Juno.
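If you go the multithreading route instead, here is a minimal sketch (this assumes the number of threads was set before Julia started, e.g. via the JULIA_NUM_THREADS environment variable or the Juno setting mentioned above):

```julia
ids = zeros(Int, Threads.nthreads())
Threads.@threads for i in 1:Threads.nthreads()
    ids[i] = Threads.threadid()   # record which thread ran this iteration
end
# with more than one thread, ids will contain more than one distinct value
```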

I don’t use Atom myself, but you either need to start Julia with julia -p <number of workers> or else use addprocs(<number of workers>) to add worker processes. Then you can use remotecall or @spawn to call functions on those workers.

For example

julia> addprocs(2)
2-element Array{Int64,1}:
 2
 3

julia> workers()
2-element Array{Int64,1}:
 2
 3

julia> @everywhere function hello_world()
           println("Hello from worker $(myid())")
       end

julia> remotecall_fetch(hello_world, 2)
    From worker 2:    Hello from worker 2

julia> remotecall_fetch(hello_world, 3)
    From worker 3:    Hello from worker 3

In your case, if you want to run interpQa, interpQb and interpQc on three separate workers, then the most basic thing you can do is

future_a = remotecall(interpQa, 2)
future_b = remotecall(interpQb, 3)
future_c = remotecall(interpQc, 4)
Qa = fetch(future_a)
Qb = fetch(future_b)
Qc = fetch(future_c)

in which case interpQa will always run on worker 2, interpQb will always run on worker 3, and interpQc will always run on worker 4.
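One caveat: remotecall only works if the function is already defined on the target worker, so interpQa, interpQb and interpQc (and any packages they use, such as Interpolations) need @everywhere definitions. A self-contained sketch with a stand-in function (fA is a hypothetical name, not from your code):

```julia
# ensure 3 workers exist (nprocs() counts the master process too)
addprocs(max(0, 4 - nprocs()))

# stand-in for interpQa etc.; the real functions (and the packages they use)
# must likewise be made available on every worker with @everywhere
@everywhere fA() = "A computed on worker $(myid())"

ws = workers()
fut = remotecall(fA, ws[1])
result = fetch(fut)   # e.g. "A computed on worker 2"
```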

But then I have to start Julia with more processors; should I copy this and paste it into Atom afterwards?

Again, I don’t use Atom, so I don’t know.

I suggest that you get something simple working before doing something bigger. So start from the Julia REPL, add some worker processes using addprocs, and experiment using remotecall to call simple functions on those processes.

Then go back to Atom and repeat the simple experiments you did in the REPL. My best guess (and it is a guess) is that you might need to put something like

if nworkers() < 3
    addprocs(3 - nworkers())
end

at the top of your file in order to make sure you have enough worker processes. Be aware though that these workers aren’t necessarily going to be numbered 2, 3, and 4 respectively. You can get a list of worker processes with the workers() function.
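To avoid hard-coding the IDs 2, 3 and 4, you can index into whatever workers() returns. A hedged sketch (interpQa etc. are the functions from the original post and would still need @everywhere definitions, so the calls are left as comments):

```julia
if nworkers() < 3
    addprocs(4 - nprocs())   # nprocs() counts the master, so this tops up to 3 workers
end

ws = workers()               # e.g. [2, 3, 4], but the IDs are not guaranteed
# future_a = remotecall(interpQa, ws[1])
# future_b = remotecall(interpQb, ws[2])
# future_c = remotecall(interpQc, ws[3])
```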

In my experience, parallel code works the same in the REPL and Atom. You can verify that it is working correctly by placing a print statement in your function. It should print the worker ID along with the string. For example,

if nworkers() < 2
    addprocs(2 - nworkers() + 1)   # nworkers() is 1 when there are no workers yet
end

@everywhere function myFunction1()
    sleep(1)
    println("Hello from myFunction1")
end

@everywhere function myFunction2()
    sleep(1)
    println("Hello from myFunction2")
end

future_a = remotecall(myFunction1, 2)
future_b = remotecall(myFunction2, 3)

When you run this code from a file in Atom, it should output the following into the console:

	From worker 2:	Hello from myFunction1
	From worker 3:	Hello from myFunction2

Alternatively, you can apply pmap() to a vector of functions and corresponding inputs like so:

if nworkers() < 2
    addprocs(2 - nworkers()+1)
end

@everywhere function myFunction1(x)
    sleep(1)
    println("Hello from myFunction1")
end

@everywhere function myFunction2(x)
    sleep(1)
    println("Hello from myFunction2")
    return x
end

functions = [myFunction1,myFunction2]
inputs = [1,2]
pmap((fun,input)->fun(input),functions,inputs)

The advantage of using pmap() is that you do not have to manage the workers yourself. If you have more jobs (e.g. 10) than workers (e.g. 4), it will automatically schedule the next job once a worker finishes.
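For example, here is a sketch of six jobs being scheduled over however many workers are available (this also works with no workers at all, in which case everything runs on the master process):

```julia
# six jobs, however many workers: pmap hands out the next job as a worker frees up
results = pmap(x -> (sleep(0.1); x^2), 1:6)
# results == [1, 4, 9, 16, 25, 36], in input order regardless of which worker ran what
```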

My apologies. I accidentally posted while trying to add return statements to the code. If your functions have an output like so:

@everywhere function myFunction2(x)
    sleep(1)
    println("Hello from myFunction2")
    return x
end

all of your results will be assigned to a single array, rather than separate variables.

Also, if your functions require a different number of arguments, you can use ... to map an array onto the arguments.

pmap((fun,input)->fun(input...),functions,inputs)
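A self-contained sketch of that splatting pattern, using two stand-in functions with different numbers of arguments (g1 and g2 are hypothetical names, not from the thread):

```julia
@everywhere g1(x) = x + 1          # one argument
@everywhere g2(x, y) = x * y       # two arguments

functions = [g1, g2]
inputs = [(1,), (2, 3)]            # one tuple of arguments per function
results = pmap((fun, input) -> fun(input...), functions, inputs)
# results == [2, 6]
```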