# Trouble understanding how time() works

I have a simple program using the `time()` function and can’t understand the results it gives. My guess is that it has something to do with Julia’s optimization. I expected that two calls to `time()` would report the elapsed time taken by two nested loops, but the reported time difference is 0.0. The function is:

``````julia
using Printf  # needed for @sprintf

function shrtTm(cnt)
    t1 = time()
    a = 0
    b = 0
    for i = 1:cnt
        a += 1
        for j = 1:8000
            b += 1
        end
    end
    t2 = time()
    deltT = t2 - t1
    println(@sprintf("t1: %.12f, t2: %.12f, diff: %.12f, a: %u, b: %u", t1, t2, deltT, a, b))
    deltT, b
end
``````

When I run it using BenchmarkTools (without using BenchmarkTools, I get similar results), I get:

``````
julia> b = @benchmarkable shrtTm(1000000000000) samples=10
Benchmark(evals=1, seconds=5.0, samples=10)

julia> run(b)
t1: 1586091074.023055076599, t2:1586091074.023055076599, diff: 0.000000000000, a: 1000000000000, b: 8000000000000000
t1: 1586091074.023112058640, t2:1586091074.023112058640, diff: 0.000000000000, a: 1000000000000, b: 8000000000000000
t1: 1586091074.023137092590, t2:1586091074.023137092590, diff: 0.000000000000, a: 1000000000000, b: 8000000000000000
t1: 1586091074.023149013519, t2:1586091074.023149013519, diff: 0.000000000000, a: 1000000000000, b: 8000000000000000
t1: 1586091074.023161888123, t2:1586091074.023161888123, diff: 0.000000000000, a: 1000000000000, b: 8000000000000000
t1: 1586091074.023174047470, t2:1586091074.023174047470, diff: 0.000000000000, a: 1000000000000, b: 8000000000000000
t1: 1586091074.023190021515, t2:1586091074.023190021515, diff: 0.000000000000, a: 1000000000000, b: 8000000000000000
t1: 1586091074.023212909698, t2:1586091074.023212909698, diff: 0.000000000000, a: 1000000000000, b: 8000000000000000
t1: 1586091074.023232936859, t2:1586091074.023232936859, diff: 0.000000000000, a: 1000000000000, b: 8000000000000000
t1: 1586091074.023252010345, t2:1586091074.023252010345, diff: 0.000000000000, a: 1000000000000, b: 8000000000000000
BenchmarkTools.Trial:
memory estimate:  592 bytes
allocs estimate:  10
--------------
minimum time:     10.797 μs (0.00% GC)
median time:      16.431 μs (0.00% GC)
mean time:        19.046 μs (0.00% GC)
maximum time:     48.915 μs (0.00% GC)
--------------
samples:          10
evals/sample:     1

julia> (1586091074.023252010345 - 1586091074.023055076599)/10
1.969337463378906e-5

julia>
``````

The function takes about 20 µs each run, which agrees with the successive absolute times t1 and t2 (see the last calculation above). What I can’t understand is why deltT, as in:

``````
julia> deltT, b = shrtTm(1000000000000)
t1: 1586091612.786988973618, t2:1586091612.786988973618, diff: 0.000000000000, a: 1000000000000, b: 8000000000000000
(0.0, 8000000000000000)

julia>
``````

is reported as 0.0. I would have expected it to report the time taken by the loops. The loops certainly do run, since changing the argument changes the reported values of a and b.

``````
julia> deltT, b = shrtTm(200)
t1: 1586091753.394443035126, t2:1586091753.394443035126, diff: 0.000000000000, a: 200, b: 1600000
(0.0, 1600000)

julia>
``````

I think the loops are being optimized out, and that those 20 microseconds are spent in the `@sprintf` call and printing. For example, if I remove the `println` statement and call the function with different choices of `cnt`, I get the same timing; if the loops were not being compiled out, the running time would increase linearly with `cnt`. You can see the low-level LLVM code with e.g. `@code_llvm debuginfo=:none shrtTm(10^12)`, though I’m not very practiced at reading it.
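A quick way to check this claim (a sketch; `loopsOnly` is a hypothetical stripped-down variant of the function above, with the timing and printing removed so only the loops themselves are measured):

```julia
# Stripped-down variant of shrtTm: just the nested loops, no printing.
function loopsOnly(cnt)
    a = 0
    b = 0
    for i = 1:cnt
        a += 1
        for j = 1:8000
            b += 1
        end
    end
    a, b
end

loopsOnly(1)  # warm up: compile before timing

# If the loops really executed, 10^9 iterations of an 8000-step inner
# loop could not finish in microseconds; a roughly flat elapsed time
# across these arguments suggests the loops were compiled away.
for cnt in (10^3, 10^6, 10^9)
    t = @elapsed loopsOnly(cnt)
    println("cnt = $cnt  elapsed = $t s")
end
```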

It might be worth explaining what “optimizing away” means. I think I know from reading other discussions here, but I don’t think it’s obvious.

Your loops take effectively zero time because the compiler can figure out the result without looping. LLVM recognizes that your loop counters are arithmetic progressions and replaces the loops with the closed-form result, so no iteration happens at run time; when the argument is a compile-time constant, constant propagation can fold the computation away entirely.
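To illustrate (a minimal sketch; `sumOnes` is a hypothetical example, not from the thread): a loop that adds 1 per iteration compiles down to computing the answer directly from `n`, which you can confirm by inspecting the generated code.

```julia
# A loop whose result the compiler can express in closed form:
# adding 1 per iteration over 1:n is just n (for n >= 0).
function sumOnes(n)
    s = 0
    for i = 1:n
        s += 1
    end
    s
end

sumOnes(10^12)  # returns 1000000000000 essentially instantly
# @code_llvm debuginfo=:none sumOnes(10^12)
# The emitted IR contains no loop; the result is computed directly from n.
```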
