BenchmarkTools setup isn't run between each iteration?

Can someone explain to me what’s going on here? I would not expect the error that happens for the short input vector:

julia> using BenchmarkTools

julia> @btime pop!(v) setup=(v=rand(1000));
  8.295 ns (0 allocations: 0 bytes)

julia> @btime pop!(v) setup=(v=rand(100));
ERROR: ArgumentError: array must be non-empty
 [1] pop! at ./array.jl:1078 [inlined]
 [2] ##core#417(::Array{Float64,1}) at /Users/dnf/.julia/packages/BenchmarkTools/eCEpo/src/execution.jl:371
 [3] ##sample#418(::BenchmarkTools.Parameters) at /Users/dnf/.julia/packages/BenchmarkTools/eCEpo/src/execution.jl:379
 [4] sample at /Users/dnf/.julia/packages/BenchmarkTools/eCEpo/src/execution.jl:394 [inlined]
 [5] #_lineartrial#44(::Int64, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(BenchmarkTools._lineartrial), ::BenchmarkTools.Benchmark{Symbol("##benchmark#416")}, ::BenchmarkTools.Parameters) at /Users/dnf/.julia/packages/BenchmarkTools/eCEpo/src/execution.jl:133
 [6] _lineartrial(::BenchmarkTools.Benchmark{Symbol("##benchmark#416")}, ::BenchmarkTools.Parameters) at /Users/dnf/.julia/packages/BenchmarkTools/eCEpo/src/execution.jl:125
 [7] #invokelatest#1 at ./essentials.jl:709 [inlined]
 [8] invokelatest at ./essentials.jl:708 [inlined]
 [9] #lineartrial#38 at /Users/dnf/.julia/packages/BenchmarkTools/eCEpo/src/execution.jl:33 [inlined]
 [10] lineartrial at /Users/dnf/.julia/packages/BenchmarkTools/eCEpo/src/execution.jl:33 [inlined]
 [11] #tune!#49(::Nothing, ::Float64, ::Float64, ::Bool, ::String, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(tune!), ::BenchmarkTools.Benchmark{Symbol("##benchmark#416")}, ::BenchmarkTools.Parameters) at /Users/dnf/.julia/packages/BenchmarkTools/eCEpo/src/execution.jl:209
 [12] tune! at /Users/dnf/.julia/packages/BenchmarkTools/eCEpo/src/execution.jl:208 [inlined] (repeats 2 times)
 [13] top-level scope at /Users/dnf/.julia/packages/BenchmarkTools/eCEpo/src/execution.jl:482

It seems like the @btime macro runs pop! multiple times without running the setup in between.

Julia version 1.3.0, BenchmarkTools version 0.5.0.


That’s right: it generally runs multiple evaluations for each setup call in order to get more reliable timing data. If you want to avoid that, you can pass evals=1 to the @btime call, which ensures that there is exactly one evaluation of your function per setup.
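To make the structure concrete, here is a minimal sketch of the samples/evals nesting (illustrative pseudocode, not the package's actual internals — the real implementation also handles timing, tuning, and teardown):

```julia
# Rough sketch of how a benchmark run is structured:
# setup runs once per *sample*, but each sample contains
# `evals` evaluations that all share that one setup.
function run_sketch(core, setup; samples = 3, evals = 5)
    for _ in 1:samples
        state = setup()          # runs once per sample...
        for _ in 1:evals         # ...then evals calls share it,
            core(state)          # so mutations accumulate
        end
    end
end

# With evals greater than the vector length, pop! hits an
# empty array, just like the @btime example above:
try
    run_sketch(pop!, () -> rand(3); samples = 1, evals = 5)
catch err
    println(err)   # an ArgumentError, as in the stack trace above
end
```

This is why `rand(1000)` happened to work: the tuned number of evaluations stayed below the vector's length, while for `rand(100)` it did not.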


Ouch, that means that

@btime sort!(x) setup=(x=rand(1000));

will return results from sorting an already sorted vector! Yikes! This is really bad; I had no idea, and thought setup was run before each evaluation. How does this improve reliability? Won’t this invalidate every benchmark involving mutating functions?

BTW, how do I pass evals=1 to @btime?

Check out this section of the manual, which specifically uses sort as an example:


Thanks. It seems, though, that there is no obvious way to pass evals=1 directly to @btime, or I cannot figure it out.

This made the whole thing a lot more complicated than I knew. I have apparently given a lot of bad advice to people, and performed a number of meaningless benchmarks.

evals is just another keyword argument, like setup:

julia> @btime sort!(x) setup=(x = rand(5)) evals=1
  38.000 ns (0 allocations: 0 bytes)
5-element Array{Float64,1}:

or, with more parentheses for clarity:

julia> @btime(sort!(x), setup=(x = rand(5)), evals=1)
  43.000 ns (0 allocations: 0 bytes)
5-element Array{Float64,1}:
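For comparison, here is a sketch of how the flag changes what sort! actually measures (absolute timings will vary by machine; the point is that the evals=1 run avoids timing an already-sorted vector):

```julia
using BenchmarkTools

x = rand(10_000)

# Default tuning: many evaluations share one setup, so after the
# first call sort! mostly re-sorts already-sorted data:
@btime sort!(y) setup=(y = copy($x));

# evals=1: each evaluation gets a fresh unsorted copy, measuring
# the cost of sorting random data:
@btime sort!(y) setup=(y = copy($x)) evals=1;
```

Expect the second timing to be noticeably larger, since sorting already-sorted data is a fast path for the algorithm.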

That link is 404 now; it should be:
