[ANN] BenchmarkHistograms.jl

julia> @benchmark bevalpoly!($y, $x)
samples: 10000; evals/sample: 200; memory estimate: 0 bytes; allocs estimate: 0
                     ┌                                        ┐
      [400.0, 410.0) ┤▇▇▇▇▇▇▇▇▇▇▇ 2193
      [410.0, 420.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 6693
      [420.0, 430.0) ┤▇▇▇▇ 826
      [430.0, 440.0) ┤ 91
      [440.0, 450.0) ┤ 75
   ns [450.0, 460.0) ┤ 55
      [460.0, 470.0) ┤ 33
      [470.0, 480.0) ┤ 27
      [480.0, 490.0) ┤ 5
      [490.0, 500.0) ┤ 1
      [500.0, 510.0) ┤ 0
      [510.0, 520.0) ┤ 1
                     └                                        ┘
                                      Counts
min: 400.945 ns (0.00% GC); mean: 416.129 ns (0.00% GC); median: 419.205 ns (0.00% GC); max: 515.420 ns (0.00% GC).

julia> @benchmark evalpoly!($y, $x)
samples: 10000; evals/sample: 200; memory estimate: 0 bytes; allocs estimate: 0
                     ┌                                        ┐
      [400.0, 450.0) ┤▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 9873
      [450.0, 500.0) ┤ 116
      [500.0, 550.0) ┤ 2
   ns [550.0, 600.0) ┤ 8
      [600.0, 650.0) ┤ 0
      [650.0, 700.0) ┤ 0
      [700.0, 750.0) ┤ 0
      [750.0, 800.0) ┤ 1
                     └                                        ┘
                                      Counts
min: 400.950 ns (0.00% GC); mean: 416.991 ns (0.00% GC); median: 419.235 ns (0.00% GC); max: 777.840 ns (0.00% GC).

Based on the min, mean, and median times, these two performed almost exactly the same.
But the second run has a handful of outliers (a few samples in the 550-600 ns range, plus a much higher maximum of 777.8 ns versus 515.4 ns), which puts the two histograms on different scales and makes visual comparison difficult.

If we have a huge number of samples (e.g., 10_000), it may be nice to trim off some of the extremes. That would also let us see more of a breakdown of the 400-450 ns range in the example above.
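As a rough sketch of what that trimming could look like (this is not part of BenchmarkHistograms.jl's API, just an illustration; `trim_extremes` and its `keep` keyword are hypothetical names), one could keep only the central fraction of the sample times before histogramming:

```julia
using Statistics  # for quantile

# Hypothetical helper: keep the central `keep` fraction of sample times,
# dropping extremes symmetrically from both tails before plotting.
function trim_extremes(times::AbstractVector{<:Real}; keep::Real = 0.99)
    tail = (1 - keep) / 2
    lo, hi = quantile(times, [tail, 1 - tail])
    return filter(t -> lo <= t <= hi, times)
end

# e.g. trim_extremes(benchmark_result.times) would drop the slowest
# (and fastest) 0.5% of samples, so the histogram bins cover the bulk
# of the distribution at a finer resolution.
```

Dropping, say, the top 0.5% of samples in the second benchmark above would remove the 777.8 ns outlier and let both histograms share comparable bin widths.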
