I am planning to implement a continuous-time version of a discrete-time agent-based simulation (see https://github.com/mhinsch/RRGraphs). That means there will be some sort of central scheduler that handles the dispatch of actions, with both the types of the objects and the functions to be dispatched varying dynamically. In light of this, I thought it would make sense to invest some time in finding the fastest method.
Here is some test code I wrote: https://gist.github.com/mhinsch/773b4d34012090c6c551e32ff3f2f576. Basically, there are two types of agents and two types of actions, where the combination of the two is determined at run time. I expect the actions to be not entirely trivial, hence the random numbers in the `step*` functions. Part of the consideration also has to be the time it takes to line up agents and actions to be consumed by the scheduler, which is why I am benchmarking the entire process, not just the scheduling itself.
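For reference, here is a minimal sketch of the kind of setup being benchmarked (an illustration only, not the gist's exact code; the type and function names just mirror the ones used below):

```julia
# Two agent types and two "step" actions; which action runs on which
# agent is only known at run time. The rand() call stands in for a
# not-entirely-trivial action body.
struct Agent1
    x::Float64
end

struct Agent2
    x::Float64
end

step1(a) = a.x + rand()
step2(a) = a.x - rand()
```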
The results (Julia 1.1) are interesting, I think, and somewhat unexpected (for me):
```
4.800 ms (49 allocations: 4.00 MiB)
generic function, command array:
6.521 ms (34 allocations: 4.00 MiB)
8.255 ms (89454 allocations: 5.37 MiB)
19.733 ms (189371 allocations: 4.89 MiB)
7.245 ms (189291 allocations: 4.89 MiB)
manual type check:
6.290 ms (34 allocations: 4.00 MiB)
array of functors:
8.842 ms (189463 allocations: 4.89 MiB)
FunctionWrapper with functor:
11.308 ms (300017 allocations: 9.63 MiB)
FunctionWrapper with closure:
6.596 ms (300017 allocations: 9.63 MiB)
```
To sum it up: from what I have read (and tested) before, I would have expected generic function calls to perform a lot worse and FunctionWrappers with functors to perform a lot better.
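For illustration, the "manual type check" variant from the results above amounts to branching on the concrete type yourself instead of letting Julia's run-time dispatch do it (a sketch with hypothetical `Agent1`/`Agent2`/`step*` definitions, not the gist's code):

```julia
struct Agent1 x::Float64 end
struct Agent2 x::Float64 end
step1(a) = a.x + rand()
step2(a) = a.x - rand()

# Branch on the concrete type explicitly; inside each branch the
# compiler knows the type of `a`, so the step call is a static call.
function sched_manual(agents::Vector{Any}, action::Vector{Int})
    for (i, a) in enumerate(agents)
        if a isa Agent1
            action[i] == 1 ? step1(a) : step2(a)
        elseif a isa Agent2
            action[i] == 1 ? step1(a) : step2(a)
        end
    end
end
```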
In any case, any suggestions on how to improve the code, or even entirely different solutions to the problem, will be gratefully received.
My first worry is the code that populates the agents/actions structures. Is that part of the scheduling or not? If it's not, I'm worried that your "dispatch" numbers are being overshadowed by the initialization. So breaking the population of the arrays out into a separate function would probably give you better results.
If it is part of it, you can change sched_gen's first two lines to:
You will probably get better results simply because the multiple allocations needed to grow the arrays turn into exactly two allocations. It appears all the tests except sched_static can benefit from this kind of change; only in sched_static can you not predict how large the arrays need to be.
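The preallocation idea might look roughly like this (a sketch with hypothetical names; `Agent1`/`Agent2` stand in for the gist's types). Allocating the full-size arrays up front means the population loop itself allocates nothing:

```julia
struct Agent1 end
struct Agent2 end

# Hypothetical sketch: two up-front allocations instead of repeated
# push!-driven growth.
function pop_prealloc(a1, a2, n)
    agents = Vector{Union{Agent1, Agent2}}(undef, n)  # allocation 1
    action = Vector{Int}(undef, n)                    # allocation 2
    for i in 1:n
        agents[i] = rand(1:2) == 1 ? a1 : a2
        action[i] = rand(1:2)
    end
    agents, action
end
```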
That’s a good point, but in practice the arrays of actions/agents will grow and shrink dynamically as a result of what’s going on in the simulation (which therefore has to be part of the benchmark), so filling them the way I did seems like a reasonable approximation of what’s going to happen in real life.
Edit: More specifically, the amount of allocation needed to hold the necessary structures (i.e. actions + objects) is part of the problem.
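A grow-and-shrink workload of the kind described might look like this (purely illustrative; the integers stand in for queued (agent, action) entries):

```julia
# The schedule is appended to and consumed as the simulation runs,
# so it cannot be preallocated to a fixed size.
schedule = Int[]
for t in 1:1000
    if rand() < 0.6 || isempty(schedule)
        push!(schedule, rand(1:2))                     # an event queues a new action
    else
        deleteat!(schedule, rand(1:length(schedule)))  # an action is consumed
    end
end
```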
```julia
function pop_static(a1, a2, n)
    agents1 = Agent1[]
    agents2 = Agent2[]
    action = Int[]
    for i in 1:n
        rand(1:2) == 1 ? push!(agents1, a1) : push!(agents2, a2)
        push!(action, rand(1:2))
    end
    (agents1, agents2, action)
end
```
```julia
function sched_static(agents1, agents2, action)
    i = 1
    for a in agents1
        action[i] == 1 ? step1(a) : step2(a)
        i += 1
    end
    for a in agents2
        action[i] == 1 ? step1(a) : step2(a)
        i += 1
    end
end
```
```julia
const ns = 100_000

# warm-up calls so compilation isn't measured
agents1, agents2, action = pop_static(arr1, arr2, 1)
sched_static(agents1, agents2, action)

agents1, agents2, action = @btime pop_static(arr1, arr2, ns)
@btime sched_static(agents1, agents2, action)
```

```
4.367 ms (50 allocations: 4.00 MiB)
1.211 ms (0 allocations: 0 bytes)
```
So from an optimization standpoint there is more room for improvement in populating the data structures than in dispatching the calls, and that is what I would focus on first.
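If exact sizes can't be preallocated (as in pop_static, where the split between the two agent arrays is random), one option is to reserve capacity with `sizehint!` so that `push!` rarely triggers a reallocation. A sketch, assuming the pop_static shape and hypothetical `Agent1`/`Agent2` types:

```julia
struct Agent1 end
struct Agent2 end

# Reserve capacity up front; the arrays still grow logically via
# push!, but the backing storage is (over-)allocated once.
function pop_hinted(a1, a2, n)
    agents1 = Agent1[]; sizehint!(agents1, n)
    agents2 = Agent2[]; sizehint!(agents2, n)
    action  = Int[];    sizehint!(action, n)
    for i in 1:n
        rand(1:2) == 1 ? push!(agents1, a1) : push!(agents2, a2)
        push!(action, rand(1:2))
    end
    (agents1, agents2, action)
end
```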
Yes, that’s expected. My idea was to add a bit of minimal “load” to the step functions so that speed differences can be judged in a somewhat realistic scenario; e.g. a difference of 700 ms doesn’t matter if the whole process (allocation + dispatch + simulation) takes 10 s anyway. The comparison of my results with those of cdsousa’s tests (linked in my OP) shows that this makes a certain amount of sense. That said, I think you are right that a more microscopic approach, identifying the exact costs of all the parts of the algorithm, might be more helpful. That requires, though, that I think a bit more about what is actually going to be needed…
This is kind of what I was getting at. Your examples have a lot of code differences, so it’s hard to say where the slowdowns come from, or really to comment on the relative merits of the strategies, without getting out a profiler and comparing line by line.
It would be better to benchmark the initialization and the actions separately. Particularly with respect to dynamic dispatch, there are caches that help speed it up. Therefore, you should also try to benchmark what happens when methods are not in the cache (by having many more agents and actions).