In this case, for the second call measuring TTFX, yes. I can’t remember where this discussion took place, but the docs for the @time macro are helpful (Essentials · The Julia Language):
In some cases the system will look inside the @time expression and compile some of the called code before execution of the top-level expression begins. When that happens, some compilation time will not be counted. To include this time you can run @time @eval ....
… is the relevant section. So using plain @time for the second call doesn’t include some of the compile time.
julia> @time using Bessels
0.022755 seconds (22.24 k allocations: 2.378 MiB)
# main difference: the first call needs essentially no compile time
julia> @time besselj(1.2, 8.2)
0.000004 seconds
0.21735942469289246
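For completeness, the @eval form from that docs passage would look like this in the same example; just a sketch, with no timings shown since they vary by machine and cache state:

julia> using Bessels

julia> @time besselj(1.2, 8.2)         # compilation may have happened before the timed region

julia> @time @eval besselj(1.2, 8.2)   # @eval makes it a fresh top-level expression, so any compilation is counted inside the timing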
It’s pretty much the perfect example. There’s a bunch of code that was perfectly inferred and has no invalidations. All of the methods are concrete, so there’s no reason at all to recompile anything.
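If you want to verify the “no invalidations” part for a package you care about, SnoopCompile can log what (if anything) gets invalidated when the package loads. A rough sketch, assuming SnoopCompileCore and SnoopCompile are installed:

using SnoopCompileCore
invs = @snoopr using Bessels        # record any method invalidations triggered by loading
using SnoopCompile                  # load the analysis tooling only after recording
trees = invalidation_trees(invs)    # an empty result means nothing needed recompiling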
Yep. You might wonder whether that’s possible only with fairly small examples that are “pure” functions. But we got the same 1000x-scale speedup for ImageFiltering.imfilter, which is a pretty complicated function (basically the entire 1500-line source file src/imfilter.jl, though not all dispatch chains touch every method in that file); see Use SnoopPrecompile for precompilation by timholy · Pull Request #255 · JuliaImages/ImageFiltering.jl · GitHub. And actual computation may set a floor on the apparent benefit: imfilter does take real execution time even once fully compiled…
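For anyone who hasn’t seen it, the SnoopPrecompile pattern from that PR boils down to running a representative workload at precompile time. The sketch below uses a made-up package and workload rather than the actual imfilter calls from the PR:

module MyPackage    # hypothetical package, not ImageFiltering itself

using SnoopPrecompile

@precompile_setup begin
    # Build representative inputs; this runs at precompile time but isn't itself cached.
    img = rand(Float64, 32, 32)
    @precompile_all_calls begin
        # Everything called in here is compiled and cached into the package image.
        sum(img)    # stand-in for a real workload, e.g. an imfilter call with a kernel
    end
end

end # module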
Of course, I stamped out invalidations in much of “my” code base a long time ago, which is why you get such dramatic improvements once you can flip the switch on native compilation.