Hi, all. I am relatively new to Agents.jl and am noticing that my model is running quite slowly compared to similar models using other platforms. Following the performance advice in the Agents.jl documentation, I’d like to use BenchmarkTools to profile several chunks of code within the agent_step and model_step functions.
However, it’s not immediately clear to me how to do this. The BenchmarkTools documentation seems to focus on benchmarking a single function call, placing the benchmark macro directly in front of the function of interest. My agent_step function, though, doesn’t call out to other functions: it’s procedural code (if x, do a, b, and c; if y, do d, e, and f). I am wondering if there’s a way to benchmark sections of this kind of procedural code. Thanks!
using TimerOutputs

const to = TimerOutput()

function time_test()
    @timeit to "nest 1" begin
        # put your first code segment here
    end
    @timeit to "nest 2" begin
        # put your second code segment here
    end
end

for _ = 1:100 # use a large number of iterations to get a more accurate result
    time_test()
end

show(to)         # display the accumulated timings
reset_timer!(to) # reset the timer before the next experiment
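Since the sections you want to time live inside your agent_step! function, the @timeit blocks can also go directly into that function rather than a separate test harness. A minimal sketch, assuming the globally defined timer from above (the section labels are arbitrary placeholders):

function agent_step!(agent, model)
    @timeit to "section 1" begin
        # first chunk of the step logic
    end
    @timeit to "section 2" begin
        # second chunk of the step logic
    end
end

After a run, show(to) reports the time accumulated in each labelled section across all agents and steps.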
Thanks for this, it’s very helpful and TimerOutputs looks cool! I ended up going with a slightly different approach, because the code I’d like to time is called through another series of functions. Here’s what I have now:
function agent_step!(agent, model)
    t = @elapsed begin
        # step code for agents
    end
    # capture the elapsed time for this step and increment the time tracker
    model.agentStepTotalTime += t
end
I am running similar code for the model_step function. My idea is to get a sense of the overall elapsed time for each, and then dig into subsections of those two functions using @elapsed.
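For the subsection timing, this is roughly what I have in mind; the section names and the per-section counters are placeholders I would define as model properties:

function agent_step!(agent, model)
    t_move = @elapsed begin
        # movement-related code for the agent
    end
    t_interact = @elapsed begin
        # interaction-related code for the agent
    end
    # accumulate per-section totals in model-level counters
    model.moveTotalTime += t_move
    model.interactTotalTime += t_interact
end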
However, I am noticing that my total run time for 1,000 steps of the model is vastly longer than the cumulative elapsed time recorded for agent_step and model_step. This is confusing to me, because those two functions seem to be the only code that runs on each model step, and the only other code that runs at all is model initialization.
Anyone have thoughts on what might be going on here or how to further debug?
It’s hard to say without seeing the code, but that sounds like it might be a type inference/type instability problem. Running @code_warntype on the stepping functions might help check whether this is the case. Cthulhu.jl can help in trickier cases.
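For example, something along these lines, assuming the agent_step! and model from above (random_agent just gives a representative agent to inspect the call with):

using Agents, InteractiveUtils

# look for Any or red-highlighted types in the output; those mark instabilities
agent = random_agent(model)
@code_warntype agent_step!(agent, model)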
Thanks, I will give this a look. The other thing I am noticing is that running the model for additional steps causes a massive increase in run time, e.g. run!(model, 200) takes something like the run time of run!(model, 100) raised to the fifth power. I am watching CPU and RAM usage while this is happening and not seeing anything crazy.
I think you need to create an MWE and present it here, because this doesn’t sound like normal Agents.jl usage. Running the model twice as long should take exactly twice the time.
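As a quick sanity check before putting the MWE together, timing two run lengths directly should show that scaling. A sketch assuming the run!(model, n) call form used above (older Agents.jl versions also take the stepping functions as arguments):

# warm up once so compilation is not counted, then compare wall-clock times;
# the second call should take roughly twice as long as the first
run!(model, 1)
@time run!(model, 100)
@time run!(model, 200)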