I wasn’t able to resolve the problem here, but after converting to the following structure (based on “Parallel computing in Julia: Case study from Dept. Automatic Control, Lund University”), I now have a working program.
```julia
using Distributed  # needed for addprocs, @everywhere, @spawnat, Future

pids = addprocs(2)

@everywhere begin
    using DataFrames
    include("loadsDataAndCreates_randomPath_below.jl")
    parameters = 1.0
end

futures = Vector{Future}(undef, num_simulations)
for i in 1:num_simulations
    state0_i = randomInitialState()
    futures[i] = @spawnat :any simulate(state0_i, parameters)  # returns a DataFrame
end
simulations = fetch.(futures)

# Concatenate the per-simulation DataFrames into one.
# (A comprehension over append! would repeatedly mutate simulations[1]
# and yield a vector of references to it, so reduce(vcat, ...) is used instead.)
df = reduce(vcat, simulations)

rmprocs(pids)
```
I am not certain this performs as well as `@sync @distributed` would (I was surprised at how quickly the code ran when doing it inline, as described in the original question), but at least it runs faster than it did before.
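For comparison, the `@distributed` pattern mentioned above could be sketched as follows, assuming the same `simulate`, `randomInitialState`, `parameters`, and `num_simulations` as in the snippet (I have not benchmarked this against the `@spawnat` version). Note that when `@distributed` is given a reducer it already waits for and combines the results, so a separate `@sync` is not needed:

```julia
using Distributed
addprocs(2)

@everywhere begin
    using DataFrames
    include("loadsDataAndCreates_randomPath_below.jl")
    parameters = 1.0
end

# @distributed with the (vcat) reducer runs the loop body on the workers
# and concatenates the DataFrames each iteration returns into one DataFrame.
df = @distributed (vcat) for i in 1:num_simulations
    simulate(randomInitialState(), parameters)
end
```

An equivalent alternative is `pmap`, e.g. `df = reduce(vcat, pmap(_ -> simulate(randomInitialState(), parameters), 1:num_simulations))`, which can balance load better when individual simulations vary a lot in runtime.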