Benchmarking tests to ensure PRs don't introduce regressions



Has anyone come up with a clever way to perform benchmarking of specific functions as part of the testing or CI process so that PRs can be checked to make sure no performance regressions are introduced?


You may want to check out PkgBenchmark.jl.


But I don’t think you can run the benchmarks on the CI machines, since you have no control over which exact machine you get or how many other processes are running on it. At least Julia itself has a dedicated machine for this: Nanosoldier. But it should be fine to run those benchmarks locally.


This is a great point. I was hoping to be able to check out a PR locally, run the benchmarks on it, and compare against previous results. I’ve been playing around with PkgBenchmark but am not quite sure how to do this properly.
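For what it’s worth, here is a rough sketch of what I think that workflow looks like with PkgBenchmark’s `judge`, assuming the package has a `benchmark/benchmarks.jl` defining a `SUITE` (the package and branch names here are made-up placeholders):

```julia
using PkgBenchmark

# Run the benchmark suite on the PR branch and on master,
# then compare the two sets of results.
results = judge("MyPackage",     # hypothetical package name
                "my-pr-branch",  # target: the PR being checked
                "master")        # baseline to compare against

# Print a human-readable summary; regressions are flagged
# in the resulting comparison.
export_markdown(stdout, results)
```

`judge` runs the suite on both refs in turn, so the two runs happen on the same machine, which sidesteps some of the CI-variability problem mentioned above.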


Simply time your own tests in your own packages. If they slow down, report it.
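As a minimal sketch of that idea (the function under test and the time budget are made-up examples; BenchmarkTools gives much more stable numbers than a bare `@elapsed`):

```julia
using BenchmarkTools, Test

# Hypothetical function under test.
mysum(xs) = reduce(+, xs)

xs = rand(10^6)

# Median time in seconds over many samples; far less noisy
# than timing a single run.
t = median(@benchmark mysum($xs)).time / 1e9

# Fail the test suite if we blow past a known-good budget.
# The 2 ms threshold here is an arbitrary example value —
# calibrate it against numbers you trust on your own machine.
@test t < 2e-3
```

Thresholds like this are machine-dependent, so they work best on a dedicated box rather than shared CI runners.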


I use this method.

If you know the numbers well, you’ll notice when something is fishy, and that makes it quite easy to bisect to find what happened. Then we have DiffEqBenchmarks.jl, which we try to run after any big change to the integrators to make sure performance stays the same. I am also planning to set up a local computer with a cron job to continually update those.