The benchmark tools typically run your code several times and produce statistics on the timings. The best starting place for documentation is the built-in help system: e.g. type ? @btime at the Julia prompt.
The @btime help only hints at the multiple runs, but it points to @benchmark, which gives the full explanation.
help?> @btime
@btime expression [other parameters...]
Similar to the @time macro included with Julia, this executes an expression, printing the time it took to execute and
the memory allocated before returning the value of the expression.
Unlike @time, it uses the @benchmark macro, and accepts all of the same additional parameters as @benchmark. The
printed time is the minimum elapsed time measured during the benchmark.
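So, as a quick sketch of what that looks like in practice (the timing shown is illustrative and will vary by machine; the returned value is just sin(1)):

julia> using BenchmarkTools

julia> @btime sin(1)
  13.610 ns (0 allocations: 0 bytes)
0.8414709848078965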
help?> @benchmark
@benchmark <expr to benchmark> [setup=<setup expr>]
Run benchmark on a given expression.
Example
≡≡≡≡≡≡≡≡≡
The simplest usage of this macro is to put it in front of what you want to benchmark.
julia> @benchmark sin(1)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     13.610 ns (0.00% GC)
  median time:      13.622 ns (0.00% GC)
  mean time:        13.638 ns (0.00% GC)
  maximum time:     21.084 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     998
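The setup keyword from the signature above lets you build fresh input for each sample without the preparation being counted in the timing. A minimal sketch (the sort call and vector size here are just placeholders):

using BenchmarkTools
# rand(1000) runs before each sample and is excluded from the measured time
@benchmark sort(x) setup=(x=rand(1000))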
There are a few handy keywords as well. If you want to force something to run only once, you can always do:
using BenchmarkTools
@benchmark sin(1) samples=1 evals=1
Or sometimes I’ll have a longer function to test where there aren’t enough trials, so you can set it up to run as many times as it can for a set period of time:
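Something like this, assuming my_long_function stands in for your own code (the time budget is controlled by the seconds keyword, which defaults to 5 in BenchmarkTools):

using BenchmarkTools
# Collect as many samples as will fit in a 30-second time budget
@benchmark my_long_function() seconds=30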