Hello everyone, I am new to Julia. I am currently converting my tensor-network codes from Python to Julia, and so far it has been great: I could do it without much pain, and just by rewriting it in Julia I have gained a significant speedup. Now I want to parallelize it. To start, I want a simple form of parallelism: I have a function that takes some parameters, does some heavy calculations strictly in serial (DMRG and TDVP), and returns an MPS. I want to run this function for several parameter sets, which are completely independent runs. I saw Julia has the @threads macro for this kind of problem, but on a quick search I also found other options like FLoops.jl, ThreadsX.jl, etc. I am looking forward to suggestions from people who have been through similar cases. Thank you.
Check out Tullio.jl (GitHub: mcabbott/Tullio.jl), "⅀ for easy but performant, threaded, tensor operations."
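To illustrate (a minimal sketch with made-up arrays `A` and `B`), Tullio generates threaded loops from index notation, e.g. a matrix multiply:

```julia
using Tullio

A = rand(100, 50)
B = rand(50, 80)

# := allocates a new output array; Tullio multithreads the loops
# automatically when the problem is large enough
@tullio C[i, j] := A[i, k] * B[k, j]
```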
Another option for running multiple independent executions is the
I also like the ThreadPools package a lot; it allows more control over the parallelization (but it seems the developer is writing something right now, so I will leave it at that).
For what you are describing, running a function with batches of differing parameters,
Threads.@threads should be fine.
```julia
using Base.Threads

params = [(a1, b1), (a2, b2)]
results = Vector{Any}(undef, length(params))  # or Vector{MPS} if the return type is known
@threads for (i, (a, b)) in collect(enumerate(params))
    results[i] = f(a, b)
end
```
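One thing to keep in mind, whichever package you use: @threads only helps if Julia was started with more than one thread, e.g. `julia --threads=4` or via the `JULIA_NUM_THREADS` environment variable. You can check what you got with:

```julia
using Base.Threads

nthreads()  # number of threads available to @threads
```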
If the jobs are nonuniform, you might get a speedup with the queued scheduling in the ThreadPools.jl package.
```julia
using ThreadPools

as = [a1, a2]
bs = [b1, b2]
results = qmap(f, as, bs)
```
Alternatively, if you want to keep the params grouped by run, you can always do:
```julia
using ThreadPools

params = [(a1, b1), (a2, b2)]
results = qmap(x -> f(x...), params)
```
Thank you, my jobs are fairly uniform, so @threads should be fine.