Agents.jl and DifferentialEquations.jl memory usage

Hi All,

I’m pretty new to Julia and have been playing around with a few packages that are available; specifically, Agents.jl and linking it to DifferentialEquations.jl. I’ve been following the example given here:

https://juliadynamics.github.io/Agents.jl/stable/examples/diffeq/#Coupling-DifferentialEquations.jl-to-Agents.jl-1

Running that example as is, I get decent performance with @btime: for 10,000 agents it reports a simulation time of roughly 765 ms, and I see about 3% memory usage on my desktop, which has 32 GB. However, I see a lot of warnings from DifferentialEquations.jl about an interruption because large maxiters was needed, or something to that effect. Nevertheless, I started playing around to see what the performance would be like if I made the DifferentialEquations.jl integrator specific to each agent. In the example above, it is quite a simple change: the :stock and :i values of the model properties in the function initialise_diffeq become dictionaries keyed by agent ID:

model = ABM(
  Fisher;
  properties = Dict(
    :stock => Dict{Int, Float64}(),
    :max_population => max_population,
    :min_threshold => min_threshold,
    :i => Dict{Int, OrdinaryDiffEq.ODEIntegrator}()
  )
)

Then, below the for-loop that adds the agents, I put another loop that assigns an integrator and a stock value to each agent ID:

for id in allids(model)
  prob = OrdinaryDiffEq.ODEProblem(fish_stock!, [stock], (0.0, Inf), [max_population, 0.0])
  integrator = OrdinaryDiffEq.init(prob, OrdinaryDiffEq.Tsit5(); advance_to_tstop = true)

  model.stock[id] = stock
  model.i[id] = integrator
end

Then the function model_diffeq_step! is changed to loop over all the agent IDs:

function model_diffeq_step!(model)
  for id in allids(model)
    # We step 364 days with this call.
    OrdinaryDiffEq.step!(model.i[id], 364.0, true)

    # Only allow fishing if stocks are high enough
    model.i[id].p[2] = model.i[id].u[1] > model.min_threshold ? model.agents[id].yearly_catch : 0.0

    # Notify the integrator that conditions may be altered
    OrdinaryDiffEq.u_modified!(model.i[id], true)

    # Then apply our catch modifier
    OrdinaryDiffEq.step!(model.i[id], 1.0, true)

    # Store yearly stock in the model for plotting
    model.stock[id] = model.i[id].u[1]

    # And reset for the next year
    model.i[id].p[2] = 0.0
    OrdinaryDiffEq.u_modified!(model.i[id], true)
  end
end

There are no other changes to the example given above. Running this through @btime with 10,000 agents removes the DifferentialEquations.jl warnings about large maxiters, but it takes a lot longer to complete; @btime reports about 1 minute 30 seconds, a significant increase on the 765 ms above. The main issue, though, is memory usage: it peaks at about 50% of my RAM (although it does fluctuate significantly). Given that an equivalent piece of C/C++ code uses less than 0.1% of my memory, 50% is huge.

Although this example is incredibly contrived, I’m envisaging a scenario in which each agent may require a copy of the same ODE but with different parameters. I thought the ODEIntegrator would be relatively lightweight, but it doesn’t appear to be; although I suppose that could be related to the choice of algorithm (I did try RK4 and found the same behaviour).

As I’m new to Julia, I’m struggling to understand the performance here and how to improve it. Chances are I can make changes to the model setup, but without understanding the performance issues, it’s a bit difficult to see how to do that.

Thanks in advance.

I’m sure others will have much more insightful comments and intuition about where the performance difference is coming from, but in a situation like this, if I were going in (more or less) blind, I would check some common pitfalls to see if they applied.

The first thing I would check in a case like this is type stability. If the compiler can’t determine the type of something when it wants to compile it, then it defers that work to runtime. It may be that the changes you made introduced some subtle type instability. (Untyped globals are a common newbie pitfall; I did this myself.)
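To make that concrete, here is a minimal sketch of the untyped-global pitfall and how to spot it with @code_warntype (the functions f and g here are hypothetical, just for illustration):

```julia
x = 1.0                 # untyped global: the compiler can only assume `Any`
f() = x + 1             # type-unstable; each call dynamically dispatches and may box

const y = 1.0           # a `const` (or typed) global fixes the instability
g() = y + 1             # type-stable; the compiler knows `y::Float64`

f(), g()                # both return 2.0, but `f` does so much more slowly in a hot loop
```

Running `@code_warntype f()` highlights the `Any` in red, while `@code_warntype g()` comes back clean, so it is a quick way to audit the functions on your hot path.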

Secondly, a tool like PProf can help you visualise where the performance and allocation difference between the two versions of your code is coming from. If the only change you made is what you laid out here, it’s worth profiling both forms of your model and examining the flamegraph output with PProf. You may find that the runtime is dominated by one place, such as an allocation or conversion that could be elided.
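The workflow is roughly this (a sketch; it assumes the run! call and step functions from the example, and that PProf.jl is installed):

```julia
using Profile
using PProf

# Warm up once so compilation isn't what you profile
run!(model, agent_diffeq_step!, model_diffeq_step!, 1)

# Collect a CPU profile of the actual simulation...
Profile.clear()
@profile run!(model, agent_diffeq_step!, model_diffeq_step!, 20)

# ...then open an interactive flamegraph in the browser
pprof()
```

Repeating this for both versions of the model and comparing the two flamegraphs usually makes the hot spot obvious.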

Finally, you could try using different data structures. I tend to use Dicts when I have more dynamic requirements for my keys. If the number of keys is fixed and you’re looping over it many times, it may be worth using a data structure which has better cache locality properties like an array.
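In this case that could look something like the following sketch (it assumes the fish_stock!, stock, and max_population definitions from the example, and that agent IDs are the contiguous range 1:n_agents):

```julia
make_integrator() = OrdinaryDiffEq.init(
    OrdinaryDiffEq.ODEProblem(fish_stock!, [stock], (0.0, Inf), [max_population, 0.0]),
    OrdinaryDiffEq.Tsit5(); advance_to_tstop = true)

# Building with a comprehension lets Julia infer a concrete element type for
# the vector; Vector{OrdinaryDiffEq.ODEIntegrator} would be abstractly typed,
# which itself hurts performance.
integrators = [make_integrator() for _ in 1:n_agents]
stocks = fill(stock, n_agents)
```

Indexing `integrators[id]` in the step function then replaces the Dict lookups, with better cache locality and no hashing.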

I apologise if this is vague or general advice but I hope it can give you a place to start. :slight_smile:

Thanks for your reply and suggestions. In the code, there aren’t any untyped globals (with the exception of the initial number of agents, but that is only used in the setup), not unless Agents.jl and DifferentialEquations.jl have untyped globals. I’ll take a look at PProf and see if it points to anywhere in particular. I’ll also try using Vector instead of Dict to see if it improves things. Thanks again for your help.

You’re using the integrator interface but not changing the saving behavior, so each integrator is accumulating its full solution history. Consider setting save_on = false and/or save_everystep = false so it’s not saving time steps.
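Applied to the per-agent setup above, the init call would become something like this (keyword names are the standard DifferentialEquations.jl common solver options):

```julia
integrator = OrdinaryDiffEq.init(
    prob, OrdinaryDiffEq.Tsit5();
    advance_to_tstop = true,
    save_everystep = false,   # don't record every internal solver step
    save_on = false,          # skip solution saving entirely; read integrator.u directly
)
```

Since the step function above only ever reads `model.i[id].u[1]`, nothing is lost by disabling saving, and 10,000 integrators stop storing 10,000 growing solution arrays.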


Thanks for your reply. This sorted out the memory issue and improved the performance :slight_smile: