# Parallelization of several independent loops

Hello all,

How could I parallelize the following independent loops?

```
for i = 1:N
    x1 = A1 \ b1
end
for i = N+1:2*N
    x2 = A2 \ b2
end
for i = 2*N+1:3*N
    x3 = A3 \ b3
end
for i = 3*N+1:4*N
    x4 = A4 \ b4
end
```

How does `i` come into play in the loops?

It's something like this:

```
psi = zeros(4*N, M)
for i = 1:N
    x1 = A1 \ b1
    psi[i, :] = x1'
end
for i = N+1:2*N
    x2 = A2 \ b2
    psi[i, :] = x2'
end
for i = 2*N+1:3*N
    x3 = A3 \ b3
    psi[i, :] = x3'
end
for i = 3*N+1:4*N
    x4 = A4 \ b4
    psi[i, :] = x4'
end
```

I suppose something like this might work:

```
while true
    ...
    rref = @spawnat p <kick off some work on process p>
    push!(rrefs, rref)
    if <all work parceled out>
        break
    end
end
for r in rrefs
    results = fetch(r)
    <do something with the results>
end
```
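Filling in the placeholders, a minimal runnable sketch of that pattern with the `Distributed` standard library might look like this. Note that `solve_block` and its data are hypothetical stand-ins for the `A \ b` solves, not definitions from the thread:

```julia
using Distributed
addprocs(4)                      # one worker process per independent solve

# `solve_block` is a hypothetical stand-in for the A \ b solves in the
# original post; replace it with the real systems.
@everywhere function solve_block(seed)
    A = seed .* ones(3, 3) + [1.0 0.0 0.0; 0.0 1.0 0.0; 0.0 0.0 1.0]
    b = fill(Float64(seed), 3)
    A \ b                        # solve the linear system on this worker
end

rrefs = []
for (p, seed) in zip(workers(), 1:4)
    push!(rrefs, @spawnat p solve_block(seed))   # kick off work on process p
end

results = [fetch(r) for r in rrefs]   # one solution vector per worker
```

Here all four solves are parceled out up front, so the `while true` bookkeeping collapses into a single loop over `workers()`.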

Use a certain number of workers; in your case, four.


I'm not very experienced in multithreading, so there is probably a good reason for the channel solution that @PetrKryslUCSD suggested. Could you explain why not just use `@spawn` like below?

```
psi = zeros(4*N, M)
@sync begin
    Threads.@spawn for i = 1:N
        x1 = A1 \ b1
        psi[i, :] = x1'
    end
    Threads.@spawn for i = N+1:2*N
        x2 = A2 \ b2
        psi[i, :] = x2'
    end
    Threads.@spawn for i = 2*N+1:3*N
        x3 = A3 \ b3
        psi[i, :] = x3'
    end
    Threads.@spawn for i = 3*N+1:4*N
        x4 = A4 \ b4
        psi[i, :] = x4'
    end
end
```

I don't know the answer, but I was trying to use code similar to yours. It never worked out for me. Does it work for you?

mrufsvold

I don't have definitions for the `A`s and `b`s, so I couldn't test the code, but I was able to kick off several loops in different threads using this strategy!

Here is a reproducible version

```
N = M = 10
psi = zeros(4*N, M)
A1 = ones(M)
A2 = A1 .+ A1
A3 = A2 + A1
A4 = A3 + A1

@sync begin
    Threads.@spawn for i = 1:N
        psi[i, :] = A1
    end
    Threads.@spawn for i = N+1:2*N
        psi[i, :] = A2
    end
    Threads.@spawn for i = 2*N+1:3*N
        psi[i, :] = A3
    end
    Threads.@spawn for i = 3*N+1:4*N
        psi[i, :] = A4
    end
end
psi
```

Maybe worth noting that you need to start Julia with `julia --threads auto`, or replace `auto` with the number of threads you want to use. Otherwise, Julia starts with one thread by default.
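You can confirm the thread count from inside the session; this small check just prints the values, which will depend on how Julia was launched:

```julia
# Launch with e.g. `julia --threads 4`, or set the JULIA_NUM_THREADS
# environment variable before starting Julia.
println(Threads.nthreads())   # total threads available to this session
println(Threads.threadid())   # id of the thread running the current task
```

If `Threads.nthreads()` prints `1`, the spawned tasks above will all run on the same thread and you won't see any speedup.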

I mean, was it actually about 4 times faster than without multi-threading?

Well, if you actually have several independent workers, then the work on each process can itself run on multiple threads. Double parallelization…


I donâ€™t know. I didnâ€™t benchmark because your question was just about running in parallel, not performance. There is overhead for spawning a new task, so it usually isnâ€™t worth it for small operations. Almost certainly not for my 10x100 array. You will have to benchmark your specific use case to find out if it is worth it.
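For anyone who does want to measure it, here is one way such a comparison could be sketched. The `rows` data is a placeholder standing in for the solved vectors, not the actual `A \ b` solves, and `@time` is used to stay dependency-free (for serious measurements, `BenchmarkTools.@btime` is the usual tool):

```julia
# Compare serial vs. threaded filling of psi. `rows` is hypothetical
# placeholder data for the x1'..x4' results, for illustration only.
N, M = 10, 100
psi = zeros(4N, M)
rows = [fill(Float64(k), M) for k in 1:4]

function fill_serial!(psi, rows, N)
    for k in 1:4, i in (k-1)*N+1:k*N
        psi[i, :] = rows[k]
    end
end

function fill_threaded!(psi, rows, N)
    @sync for k in 1:4
        Threads.@spawn for i in (k-1)*N+1:k*N
            psi[i, :] = rows[k]
        end
    end
end

fill_serial!(psi, rows, N)      # warm up (compile) before timing
fill_threaded!(psi, rows, N)
@time fill_serial!(psi, rows, N)
@time fill_threaded!(psi, rows, N)
```

For work this small, the threaded version may well come out slower because of task-spawning overhead, which is exactly the point made above.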

Thanks!