Do you know about BenchmarkTools.jl? Most Julia users use that package for benchmarking. It takes care of all the finicky parts: precompilation, statistics collection & aggregation and, yes, also managing the GC to make the results as deterministic as possible.
There are a number of reasons why the allocations or timing of a benchmark can fluctuate: the first is compilation time, but there’s also how loaded your system otherwise is, how much garbage previous values have left for the GC to deal with…
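For example, a minimal BenchmarkTools.jl session might look like this (xs is just made-up example data, and I’ve omitted the printed results since they vary per machine; the important trick is interpolating globals with $ so the benchmark treats them as locals):

julia> using BenchmarkTools

julia> xs = rand(1000);   # made-up example data

julia> @btime sum($xs);   # $-interpolation: the global is benchmarked as if it were a local

julia> @benchmark sum($xs)  # full statistics: sample distribution, allocations, GC share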
@time isn’t tracking anything actively. It’s an extremely naive macro, literally only reporting the differences in time & GC statistics during its invocation:
julia> @macroexpand @time 1+1
quote
    #= timing.jl:272 =#
    begin
        #= timing.jl:277 =#
        $(Expr(:meta, :force_compile))
        #= timing.jl:278 =#
        local var"#1#stats" = Base.gc_num()
        #= timing.jl:279 =#
        local var"#3#elapsedtime" = Base.time_ns()
        #= timing.jl:280 =#
        Base.cumulative_compile_timing(true)
        #= timing.jl:281 =#
        local var"#4#compile_elapsedtimes" = Base.cumulative_compile_time_ns()
        #= timing.jl:282 =#
        local var"#2#val" = $(Expr(:tryfinally, :(1 + 1), quote
                    var"#3#elapsedtime" = Base.time_ns() - var"#3#elapsedtime"
                    #= timing.jl:284 =#
                    Base.cumulative_compile_timing(false)
                    #= timing.jl:285 =#
                    var"#4#compile_elapsedtimes" = Base.cumulative_compile_time_ns() .- var"#4#compile_elapsedtimes"
                end))
        #= timing.jl:287 =#
        local var"#5#diff" = Base.GC_Diff(Base.gc_num(), var"#1#stats")
        #= timing.jl:288 =#
        local var"#6#_msg" = Base.nothing
        #= timing.jl:289 =#
        Base.time_print(Base.stdout, var"#3#elapsedtime", (var"#5#diff").allocd, (var"#5#diff").total_time, Base.gc_alloc_count(var"#5#diff"), Base.first(var"#4#compile_elapsedtimes"), Base.last(var"#4#compile_elapsedtimes"), true; msg = var"#6#_msg")
        #= timing.jl:290 =#
        var"#2#val"
    end
end
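In fact, you can replicate the gist of that expansion by hand with the very same Base internals the macro calls (outputs omitted; these are internals, so they may change between Julia versions):

julia> stats = Base.gc_num();         # GC counters before

julia> t = Base.time_ns();            # wall clock before

julia> 1 + 1;                         # the expression being "timed"

julia> elapsed = Base.time_ns() - t;  # elapsed nanoseconds, GC pauses and all

julia> diff = Base.GC_Diff(Base.gc_num(), stats);

julia> diff.allocd, Base.gc_alloc_count(diff)   # bytes allocated & number of allocations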
So whatever bookkeeping the GC happens to do during a collection that runs inside the code wrapped by your @time invocation is recorded too.
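You can see that attribution directly by forcing a collection inside the timed block; the gc time @time reports then includes the manual GC.gc() call (output omitted since it depends on your machine):

julia> @time begin
           x = rand(10^6)   # allocate enough for the GC to have work to do
           GC.gc()          # a collection inside the block is charged to this @time
           sum(x)
       end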
That’s also why its docstring recommends BenchmarkTools.jl for benchmarking:
help?> @time
  @time expr
  @time "description" expr

  [...]

  │ Note
  │
  │  For more serious benchmarking, consider the @btime macro from the
  │  BenchmarkTools.jl package which among other things evaluates the function
  │  multiple times in order to reduce noise.