I have found that @benchmark takes an exceptionally long (additional) time to run when the computation it is timing takes more than a second or so. Here is an example:

```
julia> using Combinatorics

julia> function sum_subsets(A)
           return sum(sum(S) for S in powerset(A))
       end;

julia> A = rand(Int64, 24);

julia> @time B = @benchmark sum_subsets(A)
20.740046 seconds (234.92 M allocations: 23.190 GiB, 19.22% gc time, 0.11% compilation time)
BenchmarkTools.Trial: 2 samples with 1 evaluation.
Range (min … max): 2.885 s … 2.907 s ┊ GC (min … max): 17.20% … 16.99%
Time (median): 2.896 s ┊ GC (median): 17.09%
Time (mean ± σ): 2.896 s ± 15.491 ms ┊ GC (mean ± σ): 17.09% ± 0.15%
```

So the function takes about 2.9 seconds per call, and the benchmark ran it twice (which makes sense, since the default time budget is 5 seconds). But the whole thing took 20.7 seconds!
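One sketch of a workaround, assuming the standard BenchmarkTools keyword arguments (`samples`, `evals`, `seconds`), is to cap the measurement loop explicitly so it cannot run many more times than expected:

```julia
using BenchmarkTools, Combinatorics

# Hypothetical reduced version of the function above.
sum_subsets(A) = sum(sum(S) for S in powerset(A))

A = rand(Int64, 24)

# Limit the benchmark to a single sample and evaluation; the
# arguments are interpolated with $ so setup cost is not timed.
@benchmark sum_subsets($A) samples=1 evals=1 seconds=1
```

This does not explain the overhead, but it at least bounds how many times the expensive call can run.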

```
julia> A = rand(Int64, 26);

julia> @time B = @benchmark sum_subsets(A)
48.963094 seconds (536.91 M allocations: 55.002 GiB, 18.00% gc time, 0.05% compilation time)
BenchmarkTools.Trial: 1 sample with 1 evaluation.
Single result which took 12.163 s (17.08% GC) to evaluate,
with a memory estimate of 13.75 GiB, over 134217759 allocations.
```

So that took 48.9 seconds to time 12.2 seconds of computation. Any idea what is going on? Obviously, for longer computations just timing a single run is sufficient, but I sometimes call @benchmark in a loop as I vary the size of the input, and this slows down the longer cases considerably.
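For concreteness, the loop I have in mind looks roughly like this (a sketch; the size range and the reported statistic are arbitrary choices):

```julia
using BenchmarkTools, Combinatorics

sum_subsets(A) = sum(sum(S) for S in powerset(A))

# Benchmark over a range of input sizes; the larger sizes are
# where the extra @benchmark overhead becomes painful.
for n in 16:2:26
    A = rand(Int64, n)
    trial = @benchmark sum_subsets($A)
    # Trial times are in nanoseconds; convert to seconds.
    println(n, " => ", minimum(trial).time / 1e9, " s")
end
```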