Difference between @time in Julia 1.3.1 and Julia 1.4.0-rc2

Dear all,

While I was testing Julia 1.4.0-rc2 for my package, I noticed very different results from @time compared with Julia 1.3.1.
(I am sorry that I cannot provide an MWE for this.)

The code basically tries to run a “main” function like this one:

@time main(method="SCF")
@time main(method="Emin")
@time main(method="SCF")
@time main(method="Emin")

From my current understanding, the first @time should also measure compilation time, and the rest should exclude the compilation.
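A tiny self-contained illustration of that behaviour (the function f here is an arbitrary stand-in):

```julia
f(x) = x^2 + 1

@time f(2)   # first call: includes JIT compilation of f for Int arguments
@time f(2)   # subsequent calls: run the already-compiled code, much faster
```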

This is the result for Julia-1.3.1:

 22.605433 seconds (68.37 M allocations: 5.628 GiB, 7.37% gc time)
  0.813059 seconds (951.45 k allocations: 2.221 GiB, 7.86% gc time)
  0.997460 seconds (956.30 k allocations: 2.538 GiB, 6.51% gc time)
  0.830678 seconds (951.45 k allocations: 2.221 GiB, 7.38% gc time)

And this is the result for Julia-1.4.0-rc2 (the code didn’t change):

  2.768451 seconds (4.49 M allocations: 1.832 GiB, 4.28% gc time)
  0.637647 seconds (727.59 k allocations: 1.337 GiB, 4.56% gc time)
  0.813684 seconds (731.74 k allocations: 1.653 GiB, 3.57% gc time)
  0.635642 seconds (727.59 k allocations: 1.337 GiB, 3.85% gc time)

From this result, I can conclude that the timing improves in Julia 1.4.0-rc2 (kudos to the developers :100:).
However, the timing for the first call is somewhat surprising to me because it is very different from Julia 1.3.1 (too good to be true?). In fact, using the time command in Linux, I got overall timings that are not too different:

v1.3.1:

real	0m26.518s
user	0m26.247s
sys	0m0.256s

v1.4.0-rc2:

real	0m26.563s
user	0m26.260s
sys	0m0.285s

My question is: were there any changes to @time between v1.3.1 and v1.4.0-rc2? Or is there any explanation for this result?


I believe latency caused by compilation is something the development team has spent a lot of effort on, and it seems to be paying off. When measuring from the terminal, you include the time it takes to start Julia, which may have increased such that it evens out the timings? You could try measuring the time of just starting Julia to see the difference.
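For example, the bare startup time can be isolated with an empty expression (here `julia` stands for whichever binary is being compared):

```shell
# Startup only: run Julia with an empty program.
time julia -e ''

# Script run: startup plus compilation plus execution.
time julia test_timing.jl
```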


Are you saying that the startup time of Julia increased by 20 seconds to offset the 20-second reduction in the time reported by @time? That seems unlikely.


Those last timings look suspiciously identical. Could you try including a sleep(10) in one of them to verify that they actually run in 36 and 26 seconds respectively, instead of both running in 26 seconds?
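A minimal sketch of that check (the workload and the 2-second pad are illustrative stand-ins for the real main calls):

```julia
# Stand-in workload for main(method="SCF") from the post.
work() = sum(rand(10^6))

# Padding the timed block by a known sleep makes the wall time
# unambiguous: the measured time must include the full pad.
t = @elapsed begin
    sleep(2)
    work()
end
println("elapsed: ", t, " s")
```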

Those last timings look suspiciously identical.

Yes, they are actually close. My point is that the result reported by the first call to @time is not accurate (?) in Julia 1.4.0-rc2, as it is drastically different from Julia 1.3.1. This can give the false impression that precompilation in v1.4.0-rc2 is more than 10 times faster (although I am quite convinced that it is faster in v1.4.0-rc2, just not as fast as the @time result suggests).

By the way, I came up with the following code:

function my_func(N::Int)
    x = zeros(N)
    for i in 1:N
        x[i] = rand() + sin(x[i]) + cos(rand())
    end
    C = my_func_02()
    return
end

function my_func_02(; A=rand(100,100), B=rand(100,200))
    return A*B
end

@time my_func(1_000_000)
@time my_func(1_000_000)

and it gives the following results:

# v 1.3.1
 time julia test_timing.jl 
  0.812998 seconds (3.26 M allocations: 163.071 MiB, 3.61% gc time)
  0.027455 seconds (12 allocations: 8.011 MiB)

real	0m0.979s
user	0m0.907s
sys	0m0.052s

# v 1.4.0-rc2
time julia-1.4.0-rc2 02_timing.jl 
  0.024799 seconds (29 allocations: 8.031 MiB)
  0.035043 seconds (8 allocations: 8.011 MiB, 32.60% gc time)

real	0m1.099s
user	0m1.012s
sys	0m0.068s

which also suggests that the result of first call to @time is too optimistic.

Looking at the implementation of the macro, it seems that this line has been changed:

I suspect that could be the reason, based on the comment next to it.


Thank you @WschW, I think this is the answer. What is the consequence of this change? I do not understand this simple code :no_mouth:

It seems that it uses a heuristic to make the compiler compile the block before the actual timing begins, so that the compilation time is not included in the measured time. Apparently, the presence of a while loop triggers the compiler?
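A simplified imitation of that idea (a sketch, not the actual Base implementation): inserting an empty `while false; end` into the quoted block trips the compiler’s heuristic, so the whole block gets compiled rather than interpreted before the clock starts.

```julia
macro mytime(ex)
    quote
        while false; end   # compiler heuristic: forces this block to be compiled
        local t0 = time_ns()
        local val = $(esc(ex))
        local t1 = time_ns()
        println("elapsed: ", (t1 - t0) / 1e9, " seconds")
        val
    end
end

@mytime sum(rand(1000))
```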

It’s a nice idea; it probably reduces the number of posts complaining that someone’s initial tryout of Julia was slower than Python :stuck_out_tongue:


Thank you for the simple explanation, @baggepinnen.
I guess this change only affects the result of the initial call to @time, and for subsequent calls it can still be used for performance measurement (setting aside possible noise while the measurement is being done).

I guess the reason for adding the heuristic is that the initial call to @time should also be a fair estimate of the runtime without compilation. It’s still a good idea to measure a couple of times to reduce noise, though…
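BenchmarkTools.jl (with its @btime and @benchmark macros) is the usual tool for that; without external packages, a rough sketch is to warm up once and then take the minimum of several runs:

```julia
my_func(N) = sum(sin, rand(N))   # stand-in workload

my_func(10)   # warm-up call so compilation is excluded from the runs below

# The minimum over several runs is a common noise-robust estimate.
times = [@elapsed my_func(10^5) for _ in 1:5]
println("minimum: ", minimum(times), " s")
```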


Using the “blame” feature of GitHub, you can find the relevant PR. This is a welcome side effect, but I think the target was something else.
