Retain computation results with @btime

I’m trying to time a function and keep its return value:
    b = @btime y = foo(x)
but at the end y is not defined. How can I get @btime to also return my result?

Use @belapsed instead.
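
For example (a quick sketch reusing the foo and x placeholders from the question): @belapsed returns the minimum run time in seconds rather than the computed value, so you would keep the result from a separate call:

    y = foo(x)             # keep the result
    t = @belapsed foo($x)  # minimum run time, in seconds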

I’m not clear on why you need this. I use

bar = foo(spam);

when I want results and

@btime foo($spam);

when I want to see how fast my function is. These are different missions for me.

@belapsed did not seem to work, but @elapsed did. Moreover, @btime provides more info than just the time, and isn’t it supposed to be more accurate?
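
For reference (a sketch with the foo and x placeholders from above): Base’s @elapsed evaluates the expression in the surrounding scope, so an assignment inside it survives, though it only times a single run of the expression:

    t = @elapsed y = foo(x)  # y keeps the result, t is the elapsed seconds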

What did not work? Can you show a reproducible example and tell us your expected behaviour?

It sounds like you just want

y = @btime foo(x)

However, I don’t see how this is useful. If this is part of a computation where you use y later, you probably don’t want to run foo(x) multiple times (which is what @btime does).
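
For example (a minimal sketch with a placeholder computation): @btime returns the value of the benchmarked expression while printing the timing, so the result can be kept, at the cost of the repeated evaluations:

    using BenchmarkTools
    A = rand(1000);
    y = @btime sum($A)  # prints time and allocations; y is bound to the sum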

This is what I meant: @belapsed cannot return the computation result, which is what I needed.

What doesn’t work in the link I shared above?

julia> @eval BenchmarkTools macro btimed(args...)
           # separate the benchmark expression from the keyword parameters
           _, params = prunekwargs(args...)
           bench, trial, result = gensym(), gensym(), gensym()
           trialmin, trialallocs = gensym(), gensym()
           # skip the tuning phase when the user passes `evals` explicitly
           tune_phase = hasevals(params) ? :() : :($BenchmarkTools.tune!($bench))
           return esc(quote
               local $bench = $BenchmarkTools.@benchmarkable $(args...)
               $BenchmarkTools.warmup($bench)
               $tune_phase
               # run_result returns both the Trial and the expression's value
               local $trial, $result = $BenchmarkTools.run_result($bench)
               local $trialmin = $BenchmarkTools.minimum($trial)
               $result, $BenchmarkTools.time($trialmin)  # (value, min time in ns)
           end)
       end
@btimed (macro with 1 method)

julia> b, y = BenchmarkTools.@btimed sin(12.3)
(-0.26323179136580094, 1.371)

julia> b
-0.26323179136580094

julia> y
1.371

Isn’t this what you want?

Yes. I replied to the wrong post.

It would be great to have a way to do both, though; otherwise it’s just a waste of compute resources. BenchmarkTools.jl already calculates the result, so why not also return it if needed?

The solution from @giordano is appropriate for cases where the result is deterministic, and it would be nice to see it in BenchmarkTools.jl (plus a variation that also returns the full @benchmark metrics).
However, I am mostly concerned with cases where there is some stochasticity in the results, i.e. they are not always the same. In those cases, it would be great to also return a vector of all computed results (one per sample).
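
Something along these lines would do it (a plain-Julia sketch; the name btimed_all and its interface are hypothetical, not part of BenchmarkTools.jl):

    # Hypothetical helper: time `samples` evaluations of f(args...) and
    # keep each result, for functions whose output varies between runs.
    function btimed_all(f, args...; samples::Int = 100)
        times   = Vector{Float64}(undef, samples)
        results = Vector{Any}(undef, samples)
        for i in 1:samples
            t0 = time_ns()
            results[i] = f(args...)
            times[i] = (time_ns() - t0) / 1e9  # seconds per evaluation
        end
        return (times = times, results = results)
    end

    # usage: bt = btimed_all(rand, 10; samples = 1000)
    # bt.times holds the per-sample timings, bt.results the 1000 vectors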

The BenchmarkTools.jl docs say the user should strive for reproducibility, e.g. by using the same seed. But sometimes the nature of benchmarking itself is exploratory and a variety of results is expected. In that case, it would be nice to have the computed results accompany the performance benchmark.

I can see that the motivation might seem a bit questionable, because now it’s not only about benchmarking the performance of a function but also about its output and functionality, but overall I think it would make a worthwhile addition.