If I generate data as:
using Distributed
using SharedArrays
using JLD2
n_trials = 10^2;
results = SharedArray{Float64}(n_trials);
@sync @distributed for i in 1:n_trials
    results[i] = i;
end
jldsave("test.jld2"; results)
and then try to load it:
using JLD2
data = jldopen("test.jld2")
results = data["results"]
I get the error:
ERROR: ConcurrencyViolationError("setfield!: atomic field cannot be written non-atomically")
Stacktrace:
[1] macro expansion
@ ~/.julia/packages/JLD2/AilrO/src/data/reconstructing_datatypes.jl:563 [inlined]
[2] jlconvert(#unused#::JLD2.ReadRepresentation{Distributed.Future, JLD2.OnDiskRepresentation{(0, 8, 16, 24, 32), Tuple{Int64, Int64, Int64, ReentrantLock, Union{Nothing, Some{Any}}}, Tuple{Int64, Int64, Int64, JLD2.RelOffset, JLD2.RelOffset}}()}, f::JLD2.JLDFile{JLD2.MmapIO}, ptr::Ptr{Nothing}, header_offset::JLD2.RelOffset)
@ JLD2 ~/.julia/packages/JLD2/AilrO/src/data/reconstructing_datatypes.jl:508
[3] read_scalar
# and it continues for quite a bit
Is this a bug in JLD2, or is there an alternative way to save a SharedArray?
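For what it's worth, the only workaround I can think of is to save the plain Array backing the SharedArray instead of the SharedArray itself, roughly as sketched below (the small toy array and the test_plain.jld2 filename are just for illustration; sdata is from SharedArrays and returns the underlying Array):
using SharedArrays
using JLD2
# Toy data standing in for the results above.
results = SharedArray{Float64}(10);
results .= 1:10;
# Save only the backing Array, so none of the SharedArray's internal
# fields end up in the file; Array(results) would also work.
jldsave("test_plain.jld2"; results = sdata(results))
# Loading then gives back an ordinary Vector{Float64}.
results_loaded = load("test_plain.jld2", "results")
That presumably sidesteps reconstructing the SharedArray's internal Distributed.Future and ReentrantLock fields, but I'd still like to know whether saving a SharedArray directly is supposed to work.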