I recently asked a Stack Overflow question about interpolating into benchmarking expressions, but I still have some lingering confusion about the performance impact of passing global variables as arguments to functions. The Performance Tips section of the Julia manual says:
> A global variable might have its value, and therefore its type, change at any point. This makes it difficult for the compiler to optimize code using global variables. Variables should be local, or passed as arguments to functions, whenever possible.
So it seems like I should be fine performance-wise if I'm passing a (non-constant) global variable as an argument to a function. After all, how am I ever supposed to do anything if I can't call functions at the top level? However, the answer posted to the Stack Overflow question linked above includes the following example:
```julia
julia> x = 0.5; # non-constant global

julia> @btime sin(x);
  20.106 ns (1 allocation: 16 bytes)

julia> @btime sin($x);
  5.413 ns (0 allocations: 0 bytes)
```
This seems to indicate that performance is impacted even when passing a global as an argument to a function! Or maybe it’s just an idiosyncrasy of the black-box benchmarking macro that requires the interpolation in order to properly set up the benchmark?
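To make the distinction concrete, here is a minimal sketch (assuming `BenchmarkTools` is loaded) of my understanding of what the interpolation changes: without `$`, the benchmarked expression refers to the global *name* `x`, while with `$` the current *value* of `x` is spliced into the expression before benchmarking.

```julia
using BenchmarkTools

x = 0.5  # non-constant global

# The expression contains the global name `x`; every evaluation must
# look up `x` and its (possibly changed) type dynamically.
@btime sin(x);

# The value 0.5 is spliced in as a constant before benchmarking,
# so the compiler sees a Float64 and can emit a direct call to sin.
@btime sin($x);
```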
In other words, suppose I run the following in the REPL:
```julia
julia> x = 0.5;

julia> sin(x);
```
Will the run-time of the `sin(x)` call be 20 ns or 5 ns?
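For reference, here is the kind of function-barrier version I would compare against — a minimal sketch of my own, not from the linked answer:

```julia
using BenchmarkTools

x = 0.5  # non-constant global

# Function barrier: once g is specialized for Float64, the type of y
# is known inside the body, so sin(y) compiles to a direct call.
g(y) = sin(y)

@btime g($x);  # measures the specialized method, analogous to sin($x)
```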