Why is @parallel doing nothing?

Julia 0.5
Why is my @parallel doing nothing? All the other processes are OK.

julia> addprocs(5)

file = jldopen("U_baza_Ski\\S_Ski")
julia> tic()
0x000015af252b402e

julia> @parallel for i = 0:l-1
           eval(parse("S = read(file, \"Sw$i\")"))
           minuty = zeros(minut)
           for j = 1:uu
               minuty += cumsum(S[:, j])
               if mod(j, 1000) == 0 println(j) end
           end
           wynik[:, i+1] = minuty
           println(i)
       end
5-element Array{Future,1}:
Future(2,1,7,#NULL)
Future(3,1,8,#NULL)
Future(4,1,9,#NULL)
Future(5,1,10,#NULL)
Future(6,1,11,#NULL)

julia> toc()
elapsed time: 57.977429403 seconds
57.977429403

Paul

Don’t think of it as a loop: it’s quite different in some respects. You should give it a reducer; otherwise it just returns a Future for each worker’s chunk of iterations (the five Futures you see above). Also, you’ll want to @sync it if you want the values computed before you use them.

Without a “reducer”, @parallel launches the loop chunks on the workers and gets on with the next instructions without waiting for the futures to be fetchable.

To make @parallel wait for all those futures to return, put an @sync in front of it, or supply a reducer function:

@sync @parallel for ...

or

 @parallel (reduce) for ...
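
For instance, a minimal sketch of the two blocking forms (Julia 0.5/0.6 syntax; toy loop bodies, not your code):

# Form 1: @sync blocks until every chunk's future is done, so side
# effects (e.g. writes to a SharedArray) are visible afterwards.
@sync @parallel for i = 1:10
    println(i)    # runs on the workers
end

# Form 2: a reducer combines the workers' partial results into one
# value, so the call necessarily waits for all of them.
total = @parallel (+) for i = 1:10
    i^2
end
# total == 385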

:confused:
This did not work:

minuty = SharedArray(Float64, minut)
@sync @parallel for j = 1:uu
    minuty += cumsum(S[:, j])
    if mod(j, 1000) == 0 println(j) end
end
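
That is expected, unfortunately: minuty += cumsum(S[:, j]) is shorthand for minuty = minuty + cumsum(...), which allocates a new ordinary Array and rebinds the loop-local name minuty on each worker, so the SharedArray's memory is never written (and concurrent += from several workers would race even if it were). A minimal sketch of the reducer route instead, assuming uu is defined and noting that @parallel will serialize the captured S to every worker:

minuty = @parallel (+) for j = 1:uu
    cumsum(S[:, j])    # each worker sums the cumsums of its chunk of columns
end

Adding the per-column cumsums with (+) gives the same minuty as the serial loop below.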

This works:

minuty = SharedArray(Float64, minut)
for j = 1:uu
    minuty += cumsum(S[:, j])
    if mod(j, 1000) == 0 println(j) end
end
Paul

On 2017-11-11 at 23:06, Tom Conlin wrote:

I am not sure a simple sum will show benefits when parallelized.

Consider the example below, where I have inserted a small waiting time:


S = rand(300, 12);

@everywhere function my_cumsum(x)
    sleep(0.5)
    return cumsum(x)
end

function mi(S)
    res = @parallel (+) for j = 1:size(S, 2)
        my_cumsum(S[:, j])
    end
    return res
end

mi(S);              # warm-up call: compile before timing
@time res = mi(S);  # time is 6 seconds on 1 core

addprocs(4)

@everywhere function my_cumsum(x)    # redefine so the new workers get it too
    sleep(0.5)
    return cumsum(x)
end

mi(S);
@time res2 = mi(S);  # time is 1.5 s on 4 cores
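
The arithmetic lines up: 12 columns × 0.5 s of sleep is 6 s serially, while 4 workers take 3 columns each, about 1.5 s. The cumsum itself is negligible here, which is the point: @parallel only pays off when each iteration does enough work to outweigh the scheduling and data-transfer overhead.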



Thanks, that is very useful, but I am also looking to compute the cumsum of each column of S SEPARATELY.

Paul

On 2017-11-12 at 14:15, bernhard wrote:

Continuing with @bernhard's standalone example:


function mi(N)
    S = rand(N, N)
    result = SharedArray(Float64, (N, N))
    @sync @parallel for j = 1:size(S, 2)
        result[:, j] = cumsum(S[:, j])    # indexed assignment writes the shared memory in place
    end
    fetch(result)    # fetch of a non-Future just returns it
end

@time r=mi(300);

size(r)
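
If you want the per-column results kept separate without a SharedArray, pmap is another route. A minimal sketch, assuming S is an ordinary matrix in scope (each task is sent one column, and hcat reassembles the result):

cols = pmap(cumsum, [S[:, j] for j in 1:size(S, 2)])
result = hcat(cols...)    # one dense cumsum column per input column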

Unfortunately it is slower, because the job is moving to SWAP:
my RAM is 8 GB, the job with 2 workers is 13 GB :wink: and everything is
slower :/ What now? :slight_smile:

S = read(file, "S0")
263641×5693 sparse matrix with 1693405 Float64 nonzero entries

function cum(S)
    k, N = size(S)
    result = SharedArray(Float64, (k, N))
    @sync @parallel for j = 1:N
        result[:, j] = cumsum(S[:, j])
    end
    fetch(result)
end

@time cum(S)
114 sec
(Of course that is too long, because I have thousands of arrays like S.)

addprocs(1)

@time cum(S)
400 sec !!!

Paul

On 2017-11-12 at 22:18, Tom Conlin wrote:

Get more RAM is one solution, which is part throwaway joke and part true: parallel work typically requires more resources than serial.

Running this on a machine that is already near capacity for serial will require extra work.
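
Note where the memory goes in this case: the cumsums are dense even though S is sparse, so the SharedArray result alone is 263641 × 5693 × 8 bytes ≈ 12 GB. That matches the 13 GB footprint reported above, and no number of workers shrinks that output.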

Your goal will be to partition S into as many parts as you have processors, with each process receiving only its portion; perhaps DistributedArrays, or perhaps coercing S to a SharedArray, could work.

But I am only a little ahead of where you are, and I have not needed DistributedArrays yet, so hopefully someone who has will chime in.
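
In the meantime, a minimal Base-only sketch of the partitioning idea: process the columns in blocks with pmap, and dispose of each dense block (e.g. write it back to the JLD file) before starting the next round, so only one round of output is ever in RAM. The block width bw is a hypothetical knob; tune it to your memory.

bw = 500                                  # hypothetical block width; tune to your RAM
cols = size(S, 2)
ranges = [j:min(j + bw - 1, cols) for j in 1:bw:cols]

for g in 1:nworkers():length(ranges)      # one group of blocks per round
    rs = ranges[g:min(g + nworkers() - 1, length(ranges))]
    # each task receives only its sparse block; full() densifies one block,
    # and cumsum(..., 1) runs down each column
    dense = pmap(b -> cumsum(full(b), 1), [S[:, r] for r in rs])
    # dense[i] now holds the cumsums for columns rs[i]; write it out here
end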

Thanks, I am still looking :slight_smile:
Paul
On 2017-11-13 at 18:36, Tom Conlin wrote:
