It does not make sense to examine lowered code to “prove” that “something new is created”. Variables in Julia are just labels with defined semantics. How they appear in lowered or compiled code, be it as seen via Meta.@lower, @code_lowered, @code_typed, @code_warntype, @code_llvm or @code_native, has no bearing on this. Even entire loops can disappear lower down:
function sumN(N)
    local s = 0
    for i in 1:N
        s += i
    end
    s
end
The function sumN(N::Int) compiles to essentially three instructions: add one, multiply, and shift right by one bit (computing N*(N+1) ÷ 2), though with some provisions for overflow and negative N. Based on this, we could claim that neither s, nor i, nor 1:N is “created”. On the other hand, sumN(N::Float64) compiles to an entire vectorized mouthful.
julia> @btime sumN(1_000_000_000)
2.204 ns (0 allocations: 0 bytes)
500000000500000000
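As a quick sanity check (not part of the original timing run), we can confirm that the closed form the optimizer found, N*(N+1) ÷ 2, really does agree with the loop:

```julia
# The loop as written in the source.
function sumN(N)
    local s = 0
    for i in 1:N
        s += i
    end
    s
end

# The closed form the compiler effectively emits for Int arguments.
# N*(N+1) is always even, so integer division ÷ 2 is exact.
closed_form(N) = N * (N + 1) ÷ 2

for N in (0, 1, 10, 1_000_000)
    @assert sumN(N) == closed_form(N)
end
```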
Likewise, in
function fun(N)
    local s = N
    for i in 1:50
        s += i^2
    end
    s
end
fun(N::Int) compiles to N + 42925. No s, no i, no squaring, no loop.
julia> @code_llvm fun(1)
; Function Signature: fun(Int64)
; @ REPL[12]:1 within `fun`
define i64 @julia_fun_6586(i64 signext %"N::Int64") #0 {
top:
%0 = add i64 %"N::Int64", 42925
; @ REPL[12]:5 within `fun`
ret i64 %0
}
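The folded constant is easy to verify by hand: 42925 is just the sum of the first 50 squares, which the closed form n(n+1)(2n+1)/6 also gives (this check is mine, not part of the compiled output):

```julia
# The constant 42925 the compiler folded into `N + 42925` is the
# sum of squares 1^2 + 2^2 + ... + 50^2.
n = 50
@assert sum(i^2 for i in 1:n) == 42925

# Same number via the textbook closed form n(n+1)(2n+1)/6.
@assert n * (n + 1) * (2n + 1) ÷ 6 == 42925
```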
Whereas, for fun(::Float64), the loop is unrolled:
julia> @code_llvm fun(1.0)
; Function Signature: fun(Float64)
; @ REPL[17]:1 within `fun`
define double @julia_fun_6603(double %"N::Float64") #0 {
top:
; @ REPL[17]:4 within `fun`
; ┌ @ promotion.jl:433 within `+` @ float.jl:495
%0 = fadd double %"N::Float64", 1.000000e+00
%1 = fadd double %0, 4.000000e+00
%2 = fadd double %1, 9.000000e+00
%3 = fadd double %2, 1.600000e+01
...
%49 = fadd double %48, 2.500000e+03
ret double %49
}
One could claim that %1, %2, etc. are s1, s2, etc., i.e. that a new s variable is “created” for each iteration of the loop. This is a consequence of the static single assignment (SSA) form that LLVM IR uses. However, when this is further translated to native code, it’s just a single register being updated.
Unless you do @fastmath to loosen up the IEEE float semantics:
function fun(N)
    local s = N
    @fastmath for i in 1:50
        s += i^2
    end
    s
end
julia> @code_llvm fun(1.0)
; Function Signature: fun(Float64)
; @ REPL[21]:1 within `fun`
define double @julia_fun_6628(double %"N::Float64") #0 {
top:
%0 = insertelement <4 x double> <double poison, double 0.000000e+00, double 0.000000e+00, double 0.000000e+00>, double %"N::Float64", i64 0
; @ REPL[21]:4 within `fun`
; ┌ @ fastmath.jl:274 within `add_fast` @ fastmath.jl:167
%1 = fadd fast <4 x double> %0, <double 8.636000e+03, double 9.200000e+03, double 9.788000e+03, double 1.040000e+04>
%2 = call fast double @llvm.vector.reduce.fadd.v4f64(double 4.901000e+03, <4 x double> %1)
; └
; @ REPL[21]:5 within `fun`
ret double %2
}
(Note that if you add up the double constants in there, 8636 + 9200 + 9788 + 10400 + 4901, you get the same 42925 as in the Int version.)
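That arithmetic is easy to check, too (a verification of mine, not part of the compiled output): the four vector-lane constants plus the scalar seed of the reduction sum back to the constant the Int version folded in:

```julia
# Constants read off the @fastmath LLVM IR above:
# four lanes of the <4 x double> add, plus the reduce-fadd seed.
lanes = (8636.0, 9200.0, 9788.0, 10400.0)
seed  = 4901.0

# Together they recover the Int version's folded constant.
@assert sum(lanes) + seed == 42925.0
```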
It doesn’t make sense to appeal to the compiler’s various optimizations and transformations to judge what a piece of source code actually “means” in the language.