Solving EnsembleProblem efficiently for large systems: memory issues

I have a potential solution that appears to work. The principle is relatively simple:

  1. Create a dictionary (or any ‘map’) that maps a task to an ODEProblem dedicated to that specific task. Each task’s ODEProblem is deepcopyed from the original exactly once.
  2. In the prob_func, get the task handle using current_task(); deepcopy the original ODEProblem if no entry exists for this task yet, otherwise mutate the existing copy.
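
As a minimal, self-contained sketch of the idea (ToyProblem here is just a stand-in for the ODEProblem, not my actual type):

```julia
#/ Toy stand-in for an ODEProblem: a mutable object we want one private copy of per task
mutable struct ToyProblem
    u0::Vector{Float64}
end

const cache = Dict{Task, ToyProblem}()
const cachelock = ReentrantLock()

#/ Return this task's private copy, deepcopying the template on first use.
#/ The lock guards both the lookup and the insertion.
function task_local(template::ToyProblem)
    lock(cachelock) do
        get!(() -> deepcopy(template), cache, current_task())
    end
end

template = ToyProblem([0.0, 0.0])
Threads.@threads for i in 1:8
    p = task_local(template)
    p.u0 .= i    #~ mutate the private copy only
end
```

Each task mutates only its own deepcopy, so the shared template is never touched and memory stays bounded by one copy per task.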

The code looks like this, with a lock in place to prevent multiple tasks/threads from writing to the dictionary at the same time:

#/ Create a dictionary to store task-local problems
tproblems = Dict{Task, ODEProblem}()
tlock = ReentrantLock()

function set_interactions(prob, k, nrepeats)
    #~ Update prob by setting new interactions and initial conditions
    tid = current_task()
    #~ Hold the lock around both the lookup and the insertion: checking
    #~ haskey outside the lock lets two tasks race between the check and
    #~ the write, and Dict is not safe for concurrent mutation
    lprob = lock(tlock) do
        get!(() -> deepcopy(prob), tproblems, tid)
    end
    interactionsetter(lprob, amatrices[k])
    statesetter(lprob, rand(S))
    return lprob
end

with the statesetter and interactionsetter as above. This seems to work, and indeed uses only as much memory as the initial set of deepcopy calls, but nothing more.
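
For completeness, this is how the function plugs into the ensemble interface; a sketch assuming OrdinaryDiffEq, with prob, amatrices, S, and the setters defined as above:

```julia
using OrdinaryDiffEq

#/ prob_func receives (prob, i, repeat); reuse i as the index into amatrices
prob_func = (prob, i, repeat) -> set_interactions(prob, i, nrepeats)

#/ safetycopy = false: each task already gets its own deepcopy via the
#/ dictionary, so the per-trajectory deepcopy that EnsembleProblem would
#/ otherwise make is exactly the allocation we are trying to avoid
eprob = EnsembleProblem(prob; prob_func = prob_func, safetycopy = false)

sol = solve(eprob, Tsit5(), EnsembleThreads(); trajectories = length(amatrices))
```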

If this is not the way to go, I’d be happy to learn how I should implement this instead.