Chairmarks.jl

There’s no reason to deprecate BenchmarkTools.jl. It works completely fine, and there’s no 1-to-1 replacement for it anyway. Recommending that everyone switch to Chairmarks.jl seems incredibly premature given the stability of BenchmarkTools.jl.

25 Likes

This decision is up to @gdalle as the only maintainer of the package. If he doesn’t want to maintain it anymore, he can let someone take his place or announce the lack of maintainers.

4 Likes

Announcing a lack of maintainers does not need to involve deprecating a package.

18 Likes

Sure. A warning during package load can certainly lead to deprecation, but that is not mandatory. What I want to know as an end user and package developer is that the packages I am depending on are being actively maintained.

Warnings during package load are super annoying. I definitely don’t want this.

Anyway, the topic here is Chairmarks.jl, not when/how to announce a potential lack of maintainers.

6 Likes

To be clear, I don’t see BenchmarkTools.jl going out of business anytime soon, and I don’t want to deprecate it either. I’m just using the opportunity of a new package announcement to highlight the lack of maintainer energy for this cornerstone of the ecosystem.
Thus, even though Chairmarks.jl isn’t meant as a replacement, perhaps the idea isn’t as crazy as it sounds.

18 Likes

Totally understood, @gdalle. Please make it explicit in the README or elsewhere if you decide to step down from your maintenance role.

Update: I’ve released 1.1.0! As requested, it now has interpolation! The interpolation is largely the same as in BenchmarkTools, but it interacts with constant propagation in different (and strange) ways:

julia> @btime ifelse(false, sum(sqrt(i) for i in 1:10000), 0.0);
  2.041 ns (0 allocations: 0 bytes)

julia> @btime ifelse($(false), sum(sqrt(i) for i in 1:10000), 0.0);
  8.569 μs (0 allocations: 0 bytes)

julia> @btime ifelse($(true), sum(sqrt(i) for i in 1:10000), 0.0);
  8.569 μs (0 allocations: 0 bytes)

julia> @b ifelse(false, sum(sqrt(i) for i in 1:10000), 0.0)
0 ns

julia> @b ifelse($(false), sum(sqrt(i) for i in 1:10000), 0.0)
2.731 ns

julia> @b ifelse($(true), sum(sqrt(i) for i in 1:10000), 0.0)
10.083 μs

1.1.0 also has a whole new documentation UI using DocumenterVitepress.jl! Thanks to @asinghvi17 for making that possible by helping me set it up. It should be pretty straightforward to use DocumenterVitepress.jl in most packages, but it’s on version 0.0.10, so definitely not stable yet.
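For anyone curious what that setup involves, here is a rough sketch of a docs/make.jl using DocumenterVitepress.jl. The package and repo names are placeholders, and since DocumenterVitepress.jl is still on 0.0.x the exact keyword arguments may change, so treat this as an assumption and check its docs:

using Documenter, DocumenterVitepress
using MyPackage  # placeholder for the package being documented

makedocs(;
    sitename = "MyPackage.jl",
    modules = [MyPackage],
    # swap Documenter's default HTML writer for the Vitepress-flavored Markdown writer
    format = DocumenterVitepress.MarkdownVitepress(
        repo = "github.com/username/MyPackage.jl",
    ),
    pages = ["Home" => "index.md"],
)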

13 Likes

Thank you @gdalle. Can you please add this information to the top of the README on GitHub so that anyone considering usage can learn about the lack of maintainers?

A short message explaining that there are no active maintainers and that you can assign this role to anyone interested would be super helpful.

Why do I care so much about this? People care about software status, especially in industry. I’ll no longer use BenchmarkTools.jl knowing it doesn’t have a maintainer; this information is crucial.

  1. Am I correct to think that @b can be written much like @time, and the only remaining reason to $-interpolate is to do it for global variables to get around type instability?
  2. Is code being hoisted out of the benchmark loop still a possibility, like in BenchmarkTools? The change from running everything in global scope with heavy interpolation is throwing me off on that point.

First I’d like to see if any volunteers pop up here. BenchmarkTools is widely used; there is no need to cause an uproar by displaying an “unmaintained” status only to roll it back a couple of days later.

24 Likes
  1. Yes (see the sketch below).
  2. Yes. The same “people doing benchmarks want the compiler to emit code that does useless work”/“everyone else wants the compiler not to do that” tension still exists.
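For example, a minimal sketch (timings omitted; x is just a placeholder global):

using Chairmarks

x = rand(1000)   # non-const global, so referencing it directly is type-unstable

@b sum(x)        # written just like @time; the untyped global lookup is part of what is measured
@b sum($x)       # $-interpolation pins the value, sidestepping the unstable global access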
1 Like

BenchmarkTools calls the garbage collector before running a benchmark and between trials. Chairmarks does not seem to do that. Are you saying that that feature is useless?

@ballocated doesn’t make sense in the first place. It just runs @benchmark and grabs the minimum amount of memory. But why would you need to run a function 1000 times to know how much memory it uses? It is not noisy like execution time. Just use @time or @allocated.
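For instance, something along these lines (a rough sketch; alloc_vec is a made-up allocating function, used only for illustration):

using BenchmarkTools

alloc_vec(n) = collect(1:n)    # hypothetical allocating function

alloc_vec(10)                  # warm up once so compilation allocations are not counted
@allocated alloc_vec(1000)     # Base: a single run is enough, allocation counts are not noisy
@ballocated alloc_vec(1000)    # BenchmarkTools: runs many samples and reports the minimum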

4 Likes

Are you saying that that feature is useless?

No.

The new package looks great! However, I’m getting confused when I compare the results to BenchmarkTools.jl for microbenchmarks. Here is an example, inspired by this thread:

f(x, n) = x << n
g(x, n) = x << (n & 63)

On my laptop I get

julia> x = UInt128(1); n = 1;
julia> @btime f($x, $n);
  4.371 ns (0 allocations: 0 bytes)
julia> @b f($x, $n)
1.341 ns
julia> @btime g($x, $n);
  2.736 ns (0 allocations: 0 bytes)
julia> @b g($x, $n)
1.341 ns

It’s not just that the absolute timings are different. According to @btime, g is faster than f, but @b doesn’t see any difference. How come?

1 Like

I do think this feature is worse than useless when comparing versions of code that allocate different amounts.
But if two versions of code have the exact same allocation behavior (which is likely when making minor changes!), it is useful to try to ignore everything other than the difference between them (such as GC noise), so it definitely has merit.

Ideally, folks can make their micro-benchmarks non-allocating, but that’s not always practical.

2 Likes

Could that be hoisting? Though I’m used to hoisting showing <1 ns timings. The differing timings also seem consistent with the earlier comment on 1.1.0 introducing $-interpolation for globals, but I’m not comfortable with @b and @btime showing different results, because I’d prefer no benchmark artifacts or overheads in the result. What happens when you time it in a local scope, like let a=x, b=n; @b f(a, b) end, or compile the benchmark into a function, like timef(a, b) = @b f(a, b)?
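Spelled out, the two variants I mean are the following (assuming the earlier REPL session, with Chairmarks loaded and x, n, and f defined as above):

# variant 1: time in a local scope via a let block
let a = x, b = n
    @b f(a, b)
end

# variant 2: build the benchmark call into a function and call that
timef(a, b) = @b f(a, b)
timef(x, n)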

Both variants lead to exactly the same results as before.