I was optimizing some code in the hot path and was surprised to see that constants written as 10^6 were being evaluated at run-time. Isn’t evaluating constants like this one of the simplest optimizations? Why wouldn’t Julia simplify it?
julia> function f(x)
           x * 10^6
       end
f (generic function with 1 method)
julia> @code_native f(1)
; ┌ @ REPL:1 within `f'
movq %rdi, %rbx
; │ @ REPL:2 within `f'
; │┌ @ none within `literal_pow'
; ││┌ @ none within `macro expansion'
; │││┌ @ intfuncs.jl:273 within `^'
movabsq $power_by_squaring, %rax
movl $10, %edi
movl $6, %esi
; │┌ @ int.jl:87 within `*'
imulq %rbx, %rax
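(For anyone wanting to confirm where that call comes from: a literal integer exponent is lowered to `Base.literal_pow`, which for `Int` falls back to `power_by_squaring` at run time. A quick check, just a sketch:)

```julia
# `10^6` is not a single literal to the compiler: lowering turns it into a
# call to Base.literal_pow, which for Int bases dispatches to
# power_by_squaring — the call visible in the native code above.
lowered = Meta.lower(Main, :(10^6))
println(occursin("literal_pow", string(lowered)))
```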
I think the assembly is evidence enough, but here you go:
julia> using BenchmarkTools
julia> function f1(x)
           x * 10^6
       end
f1 (generic function with 1 method)
julia> function f2(x)
           x * 1_000_000
       end
f2 (generic function with 1 method)
julia> @btime f1(x) setup=(x=rand()) evals=1;
32.000 ns (0 allocations: 0 bytes)
julia> @btime f2(x) setup=(x=rand()) evals=1;
29.000 ns (0 allocations: 0 bytes)
So using a fixed constant basically saves you ~10% of the function evaluation when the function is a trivial multiply. But of course most people write functions where much more happens than multiplying by a constant. When the function has, say, 11 steps, computing the constant at the beginning will account for maybe 1% instead of 10%.
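For completeness: if you don't want to write the literal out by hand, hoisting the power into a global `const` bakes the same value in (a minimal sketch; the names `C` and `f1c` are mine):

```julia
# Evaluated once, when the definition runs — not on every call.
const C = 10^6

# The multiplier is now a compile-time constant, so this should
# benchmark like f2 above, not f1.
f1c(x) = x * C

println(f1c(2.0))   # 2.0e6
```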
Okay, so what is the upside of losing that 1%? I mean, what does the compiler gain by not converting it to a constant? Granted, that assumes this was done for a reason. Maybe the compiler developers feel that we should multiply our own *** ***** constants?
Yeah, this is sometimes a little annoying, but one option you always have is to write a macro that just `eval`s the expression inside the macro body, and then never use it on expressions involving local variables and/or impure operations:
macro eval_at_parse_time(ex)
    eval(ex)
end

f1(x) = x * 10^6
f2(x) = x * 1_000_000
f3(x) = x * @eval_at_parse_time 10^6
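To make the caveat about local variables concrete (a sketch; the macro definition is repeated so the example is self-contained, and the failing line is left commented out):

```julia
macro eval_at_parse_time(ex)
    # Runs while the surrounding code is being macro-expanded,
    # in global (module) scope — not in the caller's local scope.
    eval(ex)
end

# Fine: 10^6 is a pure expression with no local variables.
ok(x) = x * @eval_at_parse_time 10^6

# bad(y) = y * @eval_at_parse_time y + 1
# ^ throws UndefVarError while the definition is being expanded:
#   `y` is a local of `bad` and does not exist in global scope
#   at macro-expansion time.

println(ok(3))
```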