Hi,
I think I am facing a similar issue. Here is another MWE:
```julia
using Distributed, SharedArrays

function parallel_compute()
    shared_array = SharedArray{Float64,1}(100_000)
    @sync @distributed for i in 1:100_000
        shared_array[i] = randn()
    end
    return sum(shared_array)
end

addprocs(4)

for t = 1:10
    x = parallel_compute()
    println(Sys.free_memory() / 2^20)  # free RAM in MiB
end

rmprocs(workers())
```
When I run this code, the amount of free RAM (printed in MiB inside the main loop) keeps decreasing:
```
493.28515625
492.31640625
491.46875
478.390625
477.6640625
476.81640625
463.73828125
463.01171875
462.52734375
461.55859375
```
If I push it further (with more iterations and a more complex `parallel_compute`), my code eventually crashes from memory shortage. I would also expect the GC to reclaim some memory after each call to `parallel_compute`, but it doesn't seem to. I tried `shared_array = nothing; GC.gc()` at the end of `parallel_compute`, but it didn't help. Is this the expected behavior? Am I missing something?
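For reference, here is a variant of the MWE sketching the two workarounds I have seen suggested for this kind of leak: explicitly calling `finalize` on the `SharedArray` so the backing segment is released without waiting for GC, and running `GC.gc()` on every process so that worker-side references are also collected. Both are assumptions about where the references linger, not a confirmed fix:

```julia
using Distributed, SharedArrays

addprocs(2)
@everywhere using SharedArrays  # workers need the package loaded too

function parallel_compute()
    shared_array = SharedArray{Float64,1}(100_000)
    @sync @distributed for i in 1:100_000
        shared_array[i] = randn()
    end
    result = sum(shared_array)
    # Assumption: explicitly finalizing releases the mmapped segment on the
    # master process instead of waiting for its finalizer to run at some
    # later GC pass.
    finalize(shared_array)
    return result
end

for t in 1:10
    x = parallel_compute()
    # Assumption: collecting on every process drops the remote references
    # that keep the shared segment alive on the workers.
    @everywhere GC.gc()
    println(Sys.free_memory() / 2^20)  # free RAM in MiB
end

rmprocs(workers())
```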