It will often not exist at all. Take this example:
function hits_to_kill(hp = 100)
    weapon_damage = 20
    cld(hp, weapon_damage)
end
Yields:
julia> @code_typed hits_to_kill()
CodeInfo(
1 ─ return 5
) => Int64
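For reference, `cld` is ceiling division: `div` truncates toward zero, while `cld` rounds up, so a target with any leftover hit points costs one extra hit. A quick check of the folded result:

```julia
# cld (ceiling division) rounds up, unlike div, which truncates toward zero
@assert div(100, 20) == 5 && cld(100, 20) == 5   # divides evenly: same answer
@assert div(101, 20) == 5 && cld(101, 20) == 6   # remainder left: one more hit
```

With the default `hp = 100` and the constant `weapon_damage = 20`, the whole body is known at compile time, so the compiler folds it down to the literal `5`.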
But what if it can’t find the answer at compile time?
julia> @code_typed hits_to_kill(100)
CodeInfo(
1 ─ %1 = π (20, Core.Compiler.Const(20, false))
│ %2 = Base.checked_sdiv_int(hp, %1)::Int64
│ %3 = Base.slt_int(0, hp)::Bool
│ %4 = (%3 === true)::Bool
│ %5 = Base.mul_int(%2, %1)::Int64
│ %6 = (%5 === hp)::Bool
│ %7 = Base.not_int(%6)::Bool
│ %8 = Base.and_int(%4, %7)::Bool
│ %9 = Core.zext_int(Core.Int64, %8)::Int64
│ %10 = Core.and_int(%9, 1)::Int64
│ %11 = Base.add_int(%2, %10)::Int64
└── return %11
) => Int64
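The IR above is just `cld` spelled out in integer primitives: a truncating division, then a fix-up that adds one when `hp` is positive and there is a nonzero remainder. A hand-written equivalent (a sketch mirroring the `%2`–`%11` steps for this specific divisor, not the actual lowering source; `cld20_by_hand` is a name invented here):

```julia
function cld20_by_hand(hp::Int64)
    q = div(hp, 20)                            # %2: truncating division
    round_up = (hp > 0) & (q * 20 != hp)       # %3–%8: positive, with a remainder?
    return q + Int(round_up)                   # %9–%11: bump the quotient by one
end

# Agrees with cld across positive and negative inputs
@assert all(cld20_by_hand(hp) == cld(hp, 20) for hp in -100:100)
```

Note that the `20` survives in the IR (as the `π` node `%1`): the divisor is a compile-time constant, but `hp` is not, so the division itself must remain.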
We still have a 20. What if we compile further?
julia> @code_native debuginfo=:none syntax=:intel hits_to_kill(100)
.text
movabs rcx, 7378697629483820647
mov rax, rdi
imul rcx
mov rax, rdx
shr rax, 63
sar rdx, 3
add rdx, rax
test rdi, rdi
setg al
lea rcx, [4*rdx]
lea rcx, [rcx + 4*rcx]
cmp rcx, rdi
setne cl
and cl, al
movzx eax, cl
add rax, rdx
ret
nop
Now the 20 and the division have disappeared, replaced by an equivalent sequence of multiplications and bit shifts, which is faster than an actual division instruction.
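That large constant, 7378697629483820647, is a "magic number" for dividing by 20: for a positive `hp`, multiplying by it, keeping only the high 64 bits of the 128-bit product, and shifting right by 3 gives exactly `div(hp, 20)`. We can reproduce the trick with explicit 128-bit arithmetic (a sketch of the idea for non-negative inputs; the real machine code also handles negative values via the sign-bit shuffling around `shr rax, 63`, and `MAGIC`/`div20` are names invented here):

```julia
const MAGIC = 7378697629483820647   # ≈ 2^64 * (2/5), rounded up

# High 64 bits of the full product scale by ~2/5; >> 3 divides by 8,
# for a combined factor of (2/5) / 8 = 1/20.
div20(hp::Int64) = Int64((Int128(hp) * MAGIC) >> 64) >> 3

@assert all(div20(hp) == div(hp, 20) for hp in 0:10_000)
```

Multiplication plus shifts is cheaper than a hardware `idiv`, which is why compilers perform this strength reduction whenever the divisor is a known constant.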
Here is another, simpler example where we see a difference:
julia> @code_typed hits_to_kill(100.0)
CodeInfo(
1 ─ %1 = Base.div_float(hp, 20.0)::Float64
│ %2 = Base.ceil_llvm(%1)::Float64
│ %3 = Base.mul_float(20.0, %2)::Float64
│ %4 = Base.sub_float(hp, %3)::Float64
│ %5 = Base.sub_float(hp, %4)::Float64
│ %6 = Base.div_float(%5, 20.0)::Float64
│ %7 = Base.rint_llvm(%6)::Float64
└── return %7
) => Float64
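The floating-point method is more involved than a plain `ceil(hp / 20.0)` because it guards against rounding error in the division, but the answer is still the expected count, now as a `Float64`:

```julia
# Passing a Float64 argument selects a different method of cld,
# so both the literal and the result are floating point
@assert cld(100.0, 20.0) == 5.0
@assert cld(101.0, 20.0) == 6.0
@assert cld(100.0, 20.0) isa Float64
```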
Now we have 20.0 instead of 20.
For understanding what answers code will produce, you need to know the semantics of the language.
But internally, "under the hood", the compiler has some freedom to do very different things from what you wrote, so long as it produces an identical answer.