BenchmarkTools.jl and Juno


I am currently trying to compare the performance of two pieces of code. For this purpose, I decided to use BenchmarkTools. However, I cannot fully understand how to obtain the necessary information in Juno. Consider the example from the manual:

using BenchmarkTools
@benchmark sin(1)

If I do this in the terminal, I get what I expect:

  memory estimate:  0.00 bytes
  allocs estimate:  0
  minimum time:     14.605 ns (0.00% GC)
  median time:      15.790 ns (0.00% GC)
  mean time:        16.843 ns (0.00% GC)
  maximum time:     78.158 ns (0.00% GC)
  samples:          10000
  evals/sample:     1000
  time tolerance:   5.00%
  memory tolerance: 1.00%

However, if I do the same thing in Juno, the result is a bit different:

params → 
times → Float64[10000]
gctimes → Float64[10000]
memory → 0
allocs → 0

I believe that, for instance, the array times should correspond to the times of all samples. But where can I find the mean time, minimum time, etc.? I tried expanding every field of this BenchmarkTools.Trial, but no luck. Nor did I find any information about this in the package’s documentation. What should I do?

Thank you in advance.

The various estimates are not stored in BenchmarkTools.Trial, but are rather calculated on the fly as part of the type’s show method.

Documentation on how to compute these estimates can be found in the BenchmarkTools manual as well as in the API reference; long story short, you can just call mean(::Trial), median(::Trial), etc.
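To make that concrete, here is a minimal sketch of pulling the estimates out of a Trial yourself. It assumes a reasonably recent BenchmarkTools release, where the aggregators return a TrialEstimate and accessors like time, memory, and allocs extract the individual numbers (names as given in the BenchmarkTools manual):

```julia
using BenchmarkTools
using Statistics  # brings `mean` into scope on Julia ≥ 0.7

t = @benchmark sin(1)

# minimum/median/mean/maximum each return a TrialEstimate;
# `time` extracts the time estimate in nanoseconds.
println("minimum time: ", time(minimum(t)), " ns")
println("median time:  ", time(median(t)), " ns")
println("mean time:    ", time(mean(t)), " ns")
println("maximum time: ", time(maximum(t)), " ns")

# Memory statistics live directly on the Trial.
println("memory:       ", memory(t), " bytes")
println("allocs:       ", allocs(t))
```

The raw per-sample data is still there in t.times and t.gctimes if you want to compute your own statistics.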


Thank you very much!

I think Juno just displays the internal fields like you see here if there isn’t a dedicated render method for that type of object. However, if I am not mistaken, this has been changed on the master version of Juno.

Since the latest version of Juno.jl (0.2.5), this shouldn’t actually happen:

Have you Pkg.update()d recently?

Yeah, I haven’t updated in some time. Thank you very much for the reply and for reminding me to finally update my packages. :slight_smile:

Julia moves fast. Last month’s packages are very different. For the time being (pre-1.0), I would update early and often. And most of the time when you encounter an error / depwarn, the answer is probably to update the package first (and then, if that doesn’t fix it, you can check out master. Or else… bingo, you found it first! (is that a good thing?)).

Has this changed again? I’m using Juno 0.2.7 and have updated all packages. If I use @benchmark, I still only get a Trial type back.

This has changed with the recently released Juno version. Full results are back.

Thanks. Do I have to do something special to update? A simple Pkg.update() does not seem to go beyond 0.2.7

The latest version of Juno only works on Julia 0.6.

That would explain it. Thanks :slight_smile:

BenchmarkTools in Atom/Juno doesn’t show any output. If I run the example script below via “Juno/Run All”, there is a pause, but nothing happens:

using BenchmarkTools 
@benchmark sin(1)

If, however, I run just that line via “Juno/Run Block”, the results are displayed.
Windows 10, Julia 1.3.0, Atom/Juno recently installed.

@benchmark returns the result; it doesn’t display it. So either try display(@benchmark sin(1)), or run that block in the REPL, or, as you already noted, use inline evaluation.
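In script form, the difference looks like this (a minimal sketch; the explicit display call is the only change from the original script):

```julia
using BenchmarkTools

# In a script run top to bottom, a bare `@benchmark sin(1)` expression's
# return value is silently discarded, so nothing is printed.
t = @benchmark sin(1)

# Explicitly displaying the Trial prints the full summary,
# just as the REPL would for the last value entered.
display(t)
```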


@pfitzseb thank you! display(@benchmark sin(1)) works as expected.