Identical functions: repeated benchmarks show systematic differences

This doesn’t have anything to do with one function being advantaged over the other; it’s probably the abstractions of the technologies your computer uses leaking up the stack.

When code is compiled, it’s placed somewhere in memory. Depending on where it ends up, it can be accessed faster or slower (e.g. because it sits closer to where the benchmarking code resides). This isn’t something that’s exposed through julia, so the abstraction of the julia runtime/context of execution is leaking that low-level detail of how code is placed in and loaded from memory (and possibly how it’s looked up during benchmarking as well). I’m almost certain that if you place the definition of HopeRollSort1Alg after HopeRollSort2Alg, you could sometimes observe HopeRollSort2Alg being the faster one instead.
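
A minimal sketch of what I mean (not the original HopeRollSort code, just two stand-in functions with identical bodies): benchmarking them back to back can still show a small but systematic gap, and swapping their definition order can flip which one looks faster.

```julia
using BenchmarkTools

# Two byte-for-byte identical method bodies; each gets compiled to its own
# location in memory, so timings can differ even though the work is the same.
sort_a!(v) = sort!(v; alg=InsertionSort)   # stand-in for HopeRollSort1Alg
sort_b!(v) = sort!(v; alg=InsertionSort)   # stand-in for HopeRollSort2Alg

data = rand(1_000)

# Benchmark each on a fresh copy so the input is identical; any systematic
# difference between the two is layout/ordering noise, not the algorithm.
@btime sort_a!(x) setup=(x = copy($data)) evals=1
@btime sort_b!(x) setup=(x = copy($data)) evals=1
```

Try swapping the order in which `sort_a!` and `sort_b!` are defined (or benchmarked) and rerunning; the "winner" may well change.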

At the moment, julia does not try to reorder compiled code based on how often it’s called, as far as I know.

If you know where and how to look, you can find these kinds of leaky abstractions all over the place - or, as Joel Spolsky put it:

All non-trivial abstractions, to some degree, are leaky.
