Dear Julia friends,
I am glad to present TimerOutputs, a Julia package I created quite a while ago and to which I recently made significant updates and improvements. You can think of it as the big brother of the `@time` macro in Base. As you know, `@time` prints its timings immediately when the timed section finishes. With TimerOutputs, you instead assign a label to each timed section, so times and allocations are accumulated over the course of multiple calls to that section. When desired, the data for the timed sections can be printed in a nicely formatted table, which can be tweaked to show only the things you are interested in.
My aim is for this package to be useful both for people who want to time sections of their own programs (finite element simulations, optimization routines, etc.) and for package authors who want to present timing and allocation information to their users. For example, an optimization package could use it to easily measure how much time is spent in the user-given function versus in the actual optimization routine.
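For the package-author use case, a convenient pattern (a sketch assuming the default-timer convenience API described in the README; in older versions you may need to pass a timer object explicitly) is to time sections without threading a `TimerOutput` through every call:

```julia
using TimerOutputs

# Without an explicit timer object, @timeit records into a
# package-wide default timer (assuming the default-timer API
# from the README).
reset_timer!()

@timeit "assembly" sleep(0.01)
@timeit "solve"    sleep(0.02)

# Print the accumulated table for the default timer.
print_timer()
```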
A short toy example showing the syntax and output:
```julia
using TimerOutputs

# Create the timer object
to = TimerOutput()

# Time something with an assigned label
@timeit to "sleep" sleep(0.3)

# Data is accumulated for multiple calls
for i in 1:100
    @timeit to "loop" 1+1
end

# Nested sections are possible
@timeit to "nest 1" begin
    @timeit to "nest 2" begin
        @timeit to "nest 3.1" rand(10^3)
        @timeit to "nest 3.2" rand(10^4)
        @timeit to "nest 3.3" rand(10^5)
    end
    rand(10^6)
end
```
Printing the `to` timer now shows:
```
───────────────────────────────────────────────────────────────────────
                                   Time                  Allocations
                           ──────────────────────  ───────────────────────
     Tot / % measured:          6.48s / 5.60%          77.4MiB / 12.0%

 Section           ncalls    time   %tot     avg    alloc   %tot      avg
───────────────────────────────────────────────────────────────────────
 sleep                  1   338ms  93.2%   338ms   804KiB  8.43%   804KiB
 nest 1                 1  24.7ms  6.80%  24.7ms  8.52MiB  91.5%  8.52MiB
   nest 2               1  9.10ms  2.51%  9.10ms   899KiB  9.43%   899KiB
     nest 3.1           1  3.27ms  0.90%  3.27ms  8.67KiB  0.09%  8.67KiB
     nest 3.3           1  3.05ms  0.84%  3.05ms   796KiB  8.34%   796KiB
     nest 3.2           1  2.68ms  0.74%  2.68ms  92.4KiB  0.97%  92.4KiB
 loop                 100  6.97μs  0.00%  69.7ns  6.08KiB  0.06%      62B
───────────────────────────────────────────────────────────────────────
```
The README has full documentation of what the data in the table represents, how to customize the table display, how to access the data programmatically, and much more.
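As a sketch of the programmatic access, using the accessor functions described in the README (exact function names may differ between versions), a timed section can be looked up by its label:

```julia
using TimerOutputs

to = TimerOutput()
@timeit to "sleep" sleep(0.1)

# Look up the accumulated data for a section by its label.
# Times are reported in nanoseconds, allocations in bytes.
sec = to["sleep"]

TimerOutputs.ncalls(sec)     # number of calls to the section
TimerOutputs.time(sec)       # total time spent in the section (ns)
TimerOutputs.allocated(sec)  # total bytes allocated in the section
```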
As a more realistic example, we can use TimerOutputs to "profile" Optim while it optimizes a function. In the example below, we load an example optimization problem, create timed wrappers for the function to be optimized as well as its gradient and Hessian, and also time the complete call to the optimization routine:
```julia
using Optim
using TimerOutputs

const to = TimerOutput()

prob = Optim.UnconstrainedProblems.examples["Rosenbrock"];

f(x)     = @timeit to "f"  prob.f(x)
g!(x, g) = @timeit to "g!" prob.g!(x, g)
h!(x, h) = @timeit to "h!" prob.h!(x, h)

begin
    reset_timer!(to)
    @timeit to "Trust Region" begin
        res = Optim.optimize(f, g!, h!, prob.initial_x,
                             method = NewtonTrustRegion())
    end
    show(to; allocations = false)
end
```
Since allocations are disabled in the `show` options, the displayed output is:
```
─────────────────────────────────────────────
                              Time
                      ──────────────────────
   Tot / % measured:      804μs / 95.7%

 Section      ncalls     time   %tot     avg
─────────────────────────────────────────────
 Trust Region      1    769μs   100%   769μs
   g!             16   5.95μs  0.77%   372ns
   f              16   3.82μs  0.50%   239ns
   h!             15   2.99μs  0.39%   199ns
─────────────────────────────────────────────
```
This shows that for this very simple optimization problem, most of the time is spent not in the functions being optimized but in Optim itself. For a non-toy function the results would of course be very different.
I hope you find the package useful. If you have any issues or requests, just open an issue on the repo.
Thank you for your attention and happy timing!
// Kristoffer Carlsson