Hello!
So, I’ve stumbled upon a performance difference between two situations, and I don’t understand the reason behind it. The difference in computation time occurs between 1) assigning a function to another variable and 2) calling a function inside another function.
Here is what I mean:
Scenario 1
I create a function f, and then assign it to another variable g.
f(x) = x^2
g = f
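Note that g is simply another name for the very same function object:
julia> g === f
true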
When I benchmark g at a given value, I get the following timing data:
julia> @benchmark g(2.1)
BenchmarkTools.Trial: 10000 samples with 995 evaluations.
Range (min … max): 23.920 ns … 581.970 ns ┊ GC (min … max): 0.00% … 91.54%
Time (median): 25.078 ns ┊ GC (median): 0.00%
Time (mean ± σ): 26.954 ns ± 9.373 ns ┊ GC (mean ± σ): 0.50% ± 1.57%
(histogram: log(frequency) by time, 23.9 ns … 39.4 ns)
Memory estimate: 16 bytes, allocs estimate: 1.
Scenario 2
I create a function f, and then use it in the definition of a function h.
f(x) = x^2
h(x) = f(x)
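Both definitions compute the same thing, of course:
julia> h(2.1) == g(2.1)
true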
When I benchmark h at the same value, I get the following timing data:
julia> @benchmark h(2.1)
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
Range (min … max): 0.051 ns … 0.097 ns ┊ GC (min … max): 0.00% … 0.00%
Time (median): 0.056 ns ┊ GC (median): 0.00%
Time (mean ± σ): 0.059 ns ± 0.006 ns ┊ GC (mean ± σ): 0.00% ± 0.00%
(histogram: frequency by time, 0.051 ns … 0.067 ns)
Memory estimate: 0 bytes, allocs estimate: 0.
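For completeness, here is a minimal script that reproduces both scenarios in a fresh REPL session (assuming BenchmarkTools.jl is installed; the timings above are from my machine, so yours may differ):

using BenchmarkTools

f(x) = x^2

g = f        # Scenario 1: bind f to another global variable
h(x) = f(x)  # Scenario 2: define a new function that calls f

@benchmark g(2.1)   # ~25 ns median, 1 allocation (on my machine)
@benchmark h(2.1)   # ~0.056 ns median, 0 allocations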
Why is there such a large difference in computation time between these two approaches?
Thank you!