Problem with `@reduce` of `FLoops`; data race?

I am trying to use multithreading in Julia and am aware of the data race issue. I found that FLoops provides macros that seem to guard against the problem. Nevertheless, I encountered strange behavior with @reduce: results from functions using @reduce are not consistent when the number of loop iterations increases.

The following is an MWE, designed so that the correct answer is 1.1 regardless of the number of loop iterations (cc in the example). The problem emerges when cc >= 8. If I comment out the line @reduce(aa += bb) and use aa += bb instead (that is, single-threaded), the function returns the correct result.

For my real work, I have to do some nonlinear transformation before adding up the elements and taking the average. The MWE is only to highlight the problem with @reduce.

using Statistics, FLoops

function demo1(cc)
    e = ones(10)*0.1
    M = ones(5, 10)
    vec1 = zeros(cc)

    @floop for i in 1:cc
        aa = 0.0
        bb = 0.0
        for j in 1:size(M,1)
            bb = e[i] + M[j,i]
            @reduce(aa += bb) # multi-threading, wrong answer
            # aa += bb        # usual single thread, correct answer
        end
        vec1[i] = aa/size(M,1)
    end
    return mean(vec1)
end

Let’s call the function with different numbers of loop iterations.

julia> for k in 1:10
            display(demo1(k))
       end
1.1
1.1
1.1
1.1
1.1
1.0999999999999999
1.0999999999999999
1.6499999999999997
1.5888888888888886
1.5399999999999996

Any advice will be appreciated.

HJ

You’d need to use @floop ThreadedEx() for and then use aa += bb (no @reduce). For this code, you can also just use Threads.@threads for.
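
Here is a minimal sketch of that fix applied to the MWE (the function name demo1_threaded is mine, just for illustration):

using Statistics, FLoops

# Sketch of the suggested fix: pass the ThreadedEx() executor explicitly
# and accumulate with a plain aa += bb, since aa is local to each outer
# iteration and each task writes to its own slot of vec1.
function demo1_threaded(cc)
    e = ones(10)*0.1
    M = ones(5, 10)
    vec1 = zeros(cc)
    @floop ThreadedEx() for i in 1:cc
        aa = 0.0
        for j in 1:size(M,1)
            aa += e[i] + M[j,i]   # no @reduce needed here
        end
        vec1[i] = aa/size(M,1)
    end
    return mean(vec1)
end

# Alternatively, for this code you can use Threads.@threads for i in 1:cc
# instead of @floop ThreadedEx() for.

Passing the executor explicitly makes the outer loop run on threads, and the plain aa += bb is safe because each task only touches its own aa and its own element of vec1.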

This is not the intended pattern for @reduce but it’d be nicer to give less cryptic results. I need to debug this.
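
For comparison, here is a minimal sketch of the pattern @reduce is meant for, where the reduction combines one value per iteration of the @floop loop itself rather than sitting inside a nested sequential loop (the function name is illustrative):

using FLoops

# Sketch of the intended @reduce pattern: the reduction accumulates across
# the (possibly parallel) iterations of the @floop loop itself.
function mean_of_column_means(cc)
    e = ones(10)*0.1
    M = ones(5, 10)
    @floop ThreadedEx() for i in 1:cc
        colmean = sum(e[i] + M[j,i] for j in 1:size(M,1)) / size(M,1)
        @reduce(total += colmean)  # combine across outer iterations
    end
    return total / cc
end

Here total is accumulated across the parallel iterations, so FLoops manages the per-task accumulators and the final merge.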

FYI: @floop for without @reduce defaults to single-threaded execution, for historical API reasons. I probably should stop doing this.


Thanks a lot! That solves my problem. I had tried Threads.@threads on other problems in the past and thought I was bitten by the data race issue. I guess it won’t occur in the current scenario.