I believe I have a problem using an @parallel for loop without a reduction operator/function. I tried the following code, hoping to get pmap-like behaviour:
    a = @parallel for i in 1:100
        i^2
    end;
    a
But this was the result:
    1-element Array{Future,1}:
     Future(1,1,3,#NULL)
Also, fetch(a) gives me the same thing:

    1-element Array{Future,1}:
     Future(1,1,3,#NULL)
And fetch(a[1]) sometimes freezes the Julia session and sometimes returns nothing. Please let me know how to use the parallel for loop in a way that mimics pmap.
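For reference, this is the kind of result I am trying to reproduce with a parallel for loop (assuming Julia 0.6, where addprocs and pmap live in Base):

    addprocs(4)                # start 4 worker processes
    a = pmap(i -> i^2, 1:100)  # returns a 100-element Array{Int64,1}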
Also, is there a similar abstraction for GPUs, i.e. a GPU for loop? I know ArrayFire in C++ offers a gfor loop, but is there something similar that uses CUDAnative.jl?
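The closest thing I can come up with is writing an explicit kernel and launching it with CUDAnative's @cuda macro. Here is a minimal, untested sketch (assuming the older @cuda (blocks, threads) launch syntax and a CuArray from CUDAdrv; the kernel name and buffer are just placeholders), but I am hoping there is a higher-level gfor-style loop that hides this:

    using CUDAdrv, CUDAnative

    # Each GPU thread handles one loop index, squaring it into the output buffer.
    function square_kernel(out)
        i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
        if i <= length(out)
            out[i] = i^2
        end
        return nothing
    end

    d_out = CuArray{Int}(100)            # uninitialised device buffer
    @cuda (1, 100) square_kernel(d_out)  # launch 1 block of 100 threads
    a = Array(d_out)                     # copy the squares back to the host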