Chairmarks.jl

I just wrote a little something here; the gist of it is that tuning is not that hard. All we need to do is figure out roughly how many times we can run the function in 30 microseconds, and we only even need to do that if evals is not specified.

That said, it’s not perfectly consistent. At very low runtime budgets especially, it can sometimes report spurious results.
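For illustration, here’s a minimal sketch of that calibration idea in plain Julia. The name calibrate_evals and the budget_ns keyword are hypothetical; this is not Chairmarks.jl’s internals, just the general shape of the approach:

```julia
# Hypothetical sketch of evals calibration; not Chairmarks.jl's actual code.
function calibrate_evals(f; budget_ns = 30_000)
    evals = 1
    while true
        t = @elapsed for _ in 1:evals
            f()
        end
        t_ns = 1e9 * t
        t_ns >= budget_ns && return evals
        # Grow evals toward the budget, at least doubling so that very
        # fast functions converge in a handful of iterations.
        evals = max(2evals, ceil(Int, evals * budget_ns / max(t_ns, 1.0)))
    end
end

calibrate_evals(() -> sum(rand(100)))  # => evals per ~30 µs sample
```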

You say that RegressionTests.jl is still unstable; what are the odds that future breaking releases will be able to read old formats?

0%. RegressionTests.jl does not (yet) save results. In order to maximize precision and minimize false positives, each comparison run is a randomized controlled trial. If you save results that were run on an idle system and then compare them to results taken with Slack running in the background, it’s plausible you’ll see a false positive. I try very hard to avoid false positives in RegressionTests.jl, so it currently does not support that sort of storage and retrieval. If and when I add performance data storage, though, I’ll try to ensure that kind of compatibility.
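To make the randomized-controlled-trial point concrete, here is a hedged sketch of the general idea (hypothetical code, not RegressionTests.jl’s API): measurements of the two versions are interleaved in random order, so background load hits both about equally instead of biasing one of them:

```julia
using Random, Statistics

# Hypothetical illustration of a randomized benchmark comparison;
# not RegressionTests.jl's actual implementation.
function randomized_comparison(old, new; trials = 1000)
    times = Dict(old => Float64[], new => Float64[])
    # Random interleaving spreads any background noise evenly
    # across both versions rather than contaminating only one.
    for f in shuffle!(repeat([old, new], trials))
        push!(times[f], @elapsed f())
    end
    return median(times[new]) / median(times[old])  # ≈ 1.0 means no change
end
```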

@stevengj

Interpolation is a very useful (and widely used) feature of @btime from BenchmarkTools.jl. Why not just copy it?

Interpolation requires constructing and compiling a new expression at runtime. This makes runtime budgets shorter than about 50 ms impossible and causes a memory leak when the same benchmark is run repeatedly in a loop (Memory leak when repeatedly benchmarking · Issue #339 · JuliaCI/BenchmarkTools.jl · GitHub).
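As a short illustration of the difference (assuming both packages are installed): Chairmarks’ pipeline syntax passes the setup result to the benchmarked function as an argument rather than splicing it into an expression.

```julia
using BenchmarkTools, Chairmarks

x = rand(1000)

# BenchmarkTools: `$x` interpolates the value, which requires building
# and compiling a fresh expression at runtime.
@btime sum($x)

# Chairmarks: the setup (here just `x`) is passed to `sum` as an
# argument, so no new expression is constructed or compiled.
@b x sum
```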

@foobar_lv2

Can you explain why this is so much faster than BenchmarkTools?

Explanations · How-is-this-faster-than-BenchmarkTools?
