I am trying to use multithreading in Julia and am aware of the data-race issue. I found that FLoops provides macros that seem to guard against the problem. Nevertheless, I encountered a strange behavior with `@reduce`: results from functions using `@reduce` are not consistent when the number of loop iterations increases.
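For context, the basic single-loop use of `@reduce`, as shown in the FLoops README, behaves as I expect (a minimal sketch; the function and variable names are mine):

```julia
using FLoops

function sum_halves(n)
    @floop for i in 1:n
        x = i * 0.5        # per-iteration local value
        @reduce(s += x)    # one reduction over the whole loop
    end
    return s
end

sum_halves(4)  # returns 5.0
```

The inconsistency only shows up when `@reduce` sits inside an inner loop, as in the MWE below.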
The following is an MWE, designed so that 1.1 is the correct answer regardless of the number of loop iterations (`cc` in the example). The problem begins to emerge when `cc >= 8`. If I comment out the line `@reduce(aa += bb)` and use `aa += bb` instead (that is, single-threaded), the function returns correct results.

For my real work, I have to do some nonlinear transformation before adding up the elements and taking the average. The MWE is stripped down only to highlight the problem with `@reduce`.
```julia
using Statistics, FLoops

function demo1(cc)
    e = ones(10)*0.1
    M = ones(5, 10)
    vec1 = zeros(cc)
    @floop for i in 1:cc
        aa = 0.0
        bb = 0.0
        for j in 1:size(M,1)
            bb = e[i] + M[j,i]
            @reduce(aa += bb)  # multi-threading, wrong answer
            # aa += bb         # usual single thread, correct answer
        end
        vec1[i] = aa/size(M,1)
    end
    return mean(vec1)
end
```
Let’s call the function with different numbers of loop iterations:
```julia
julia> for k in 1:10
           display(demo1(k))
       end
1.1
1.1
1.1
1.1
1.1
1.0999999999999999
1.0999999999999999
1.6499999999999997
1.5888888888888886
1.5399999999999996
```
Any advice would be appreciated.