BenchmarkHistograms.jl is a simple package to provide a UnicodePlots.jl-powered show method for BenchmarkTools' @benchmark. This idea was originally discussed in BenchmarkTools.jl#180.
BenchmarkHistograms works by exporting its own @benchmark macro, which outputs a BenchmarkHistogram object holding the results (which in turn are obtained from BenchmarkTools.@benchmark), and it is for this object that a custom show method is defined. In other words, BenchmarkHistograms does not commit type piracy; it simply provides an alternative @benchmark.
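Since the two macros can coexist, one way to see this is to load both packages side by side and call each macro by its qualified name. The following is only an illustrative sketch (output omitted; the benchmarked expression is an arbitrary placeholder):

julia> import BenchmarkTools, BenchmarkHistograms

julia> BenchmarkHistograms.@benchmark sum(x) setup=(x = rand(100))  # histogram-enhanced display

julia> BenchmarkTools.@benchmark sum(x) setup=(x = rand(100))       # BenchmarkTools' original display, untouched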
For ease of use, BenchmarkHistograms re-exports all the other exports of BenchmarkTools. So one can simply call using BenchmarkHistograms instead of using BenchmarkTools, and the only difference will be that @benchmark's show method will plot a histogram in addition to displaying the usual values.
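For example, a typical session might look like the following illustrative sketch (output omitted; sort, sum, and the vector sizes are placeholder workloads, not taken from the package's documentation):

julia> using BenchmarkHistograms  # instead of `using BenchmarkTools`

julia> @benchmark sort(v) setup=(v = rand(1_000))  # shows a histogram plus the usual summary

julia> @btime sum(v) setup=(v = rand(1_000));  # re-exported BenchmarkTools macros work as usual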
Example
The following is a simple example from the README, designed to show that for some benchmarks, looking only at summary statistics can miss important information.
julia> @benchmark 5 ∈ v setup=(v = sort(rand(1:10_000, 10_000)))
samples: 3192; evals/sample: 1000; memory estimate: 0 bytes; allocs estimate: 0
                    ┌                                        ┐
   [   0.0,  500.0) ┤██████████████████████████████████  2036
   [ 500.0, 1000.0) ┤  0
   [1000.0, 1500.0) ┤  0
ns [1500.0, 2000.0) ┤  0
   [2000.0, 2500.0) ┤  0
   [2500.0, 3000.0) ┤  0
   [3000.0, 3500.0) ┤███████████████████  1156
                    └                                        ┘
                                     Counts
min: 1.875 ns (0.00% GC); mean: 1.141 μs (0.00% GC); median: 4.521 ns (0.00% GC); max: 3.315 μs (0.00% GC).
Here we see a bimodal distribution. When 5 is indeed in the vector, we find it very quickly, in the 0-500 ns range (thanks to sort, which places such a small value near the front). When 5 is not present, we need to check every entry to be sure, and we end up in the 3000-3500 ns range. A single summary statistic like the mean (1.141 μs) falls between the two modes and describes neither case well.
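To confirm this reading, one can isolate each mode by making 5 either always present or always absent in the setup. This is an illustrative variation on the example above, not part of the README itself (exact timings and sample counts will differ from machine to machine):

julia> @benchmark 5 ∈ v setup=(v = sort(vcat(5, rand(6:10_000, 9_999))))  # 5 always present: only the fast mode remains

julia> @benchmark 5 ∈ v setup=(v = sort(rand(6:10_000, 10_000)))          # 5 never present: only the slow mode remains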
Happy benchmarking!