Variable binding: (re-) assignment, argument passing, `let`, scope

function f()
    x = 234 * 2342
    y = 908 - x
    y = 234 + 1001
    z = 3 * (y + 23 - 23 - 23 - 23 + 23 + 5) / (3 * 1000 * y)
    return z
end

Now we need to compile that, instruction by instruction. Since you don’t know assembly, I’ll stick with quasi-Julia. First, compound expressions need to be broken up, and we need temporary variables:

julia> @code_lowered f()
CodeInfo(
2 1 ─       x = 234 * 2342                                                  │
3 │         y = 908 - x                                                     │
4 │         y = 234 + 1001                                                  │
5 │   %4  = y + 23                                                          │
  │   %5  = %4 - 23                                                         │
  │   %6  = %5 - 23                                                         │
  │   %7  = %6 - 23                                                         │
  │   %8  = %7 + 23 + 5                                                     │
  │   %9  = 3 * %8                                                          │
  │   %10 = 3 * 1000 * y                                                    │
  │         z = %9 / %10                                                    │
6 └──       return z                                                        │
)

Next we can constant-fold: we have a dependency graph between assignments, and if all inputs of an assignment are known at compile time, we can compute its output at compile time. This depends on assumptions, e.g. that multiplication has no side effects, which in turn depend on the types involved (is it an integer, a floating-point number, a matrix, etc.?). The step that figures out those types is called inference.
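As an illustration of the idea (a toy model I made up for this post, not Julia's actual pass — the names `stmts` and `fold` are hypothetical), constant folding over such a dependency graph might look like this: walk the flattened statements in order, and whenever every input of a statement is already a known constant, compute its output on the spot.

```julia
# Ops the toy folder understands; a real compiler would also have to
# prove these calls are effect-free before folding them.
ops = Dict(:+ => +, :- => -, :* => *, :/ => /)

# SSA-like statements from the lowered code above: target => (op, args),
# where each arg is either a literal or the name of an earlier target.
stmts = [
    :x  => (:*, (234, 2342)),
    :y  => (:+, (234, 1001)),
    :t1 => (:+, (:y, 23)),
    :t2 => (:-, (:t1, 23)),
]

function fold(stmts, ops)
    known = Dict{Symbol,Any}()  # name => already-folded constant
    for (target, (op, args)) in stmts
        # Every input is a known constant here, so compute the output now.
        vals = [a isa Symbol ? known[a] : a for a in args]
        known[target] = ops[op](vals...)
    end
    return known
end

consts = fold(stmts, ops)
# consts[:x] == 548028, consts[:t2] == 1235
```

This toy version assumes every input does become known; the real compiler interleaves this with inference so it can stop folding as soon as an unknown value flows in.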

julia> code_typed(f, (); optimize=false)
1-element Array{Any,1}:
 CodeInfo(
2 1 ─       (x = 234 * 2342)::Const(548028, false)                          │
3 │         (y = 908 - x::Const(548028, false))::Const(-547120, false)      │
4 │         (y = 234 + 1001)::Const(1235, false)                            │
5 │   %4  = (y::Const(1235, false) + 23)::Const(1258, false)                │
  │   %5  = (%4 - 23)::Const(1235, false)                                   │
  │   %6  = (%5 - 23)::Const(1212, false)                                   │
  │   %7  = (%6 - 23)::Const(1189, false)                                   │
  │   %8  = (%7 + 23 + 5)::Const(1217, false)                               │
  │   %9  = (3 * %8)::Const(3651, false)                                    │
  │   %10 = (3 * 1000 * y::Const(1235, false))::Const(3705000, false)       │
  │         (z = %9 / %10)::Const(0.000985425, false)                       │
6 └──       return z::Const(0.000985425, false)                             │
) => Float64

We see that the pure Julia steps of the compiler, pre-optimization, were enough to get the result, even before hitting the powerful LLVM backend: the amount of runtime computation necessary for your example is zero.

julia> f()
0.000985425101214575
julia> @code_native f()
	.text
; Function f {
; Location: REPL[1]:2
	movabsq	$139649067102312, %rax  # imm = 0x7F029509B068
	vmovsd	(%rax), %xmm0           # xmm0 = mem[0],zero
	retq
	nop
;}

I am no big fan of ASTs and prefer to think about code_lowered. Unfortunately I was recently informed that lowering is an implementation detail, and the AST is the spec. So I am doing things wrong, with the real consequence that some compiler updates break my mental model and possibly my code.
I am thankful that Yuyichao and Stefan told me that I’m wrong (I will continue to do it wrong and pay the price for sticking to an unsupported abstraction).
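For concreteness, here are the two abstractions side by side for a single statement, using `Meta.parse` and `Meta.lower` (both are ordinary exported API, so this should work in any recent Julia):

```julia
# One statement, seen at the two levels of abstraction.
ex = Meta.parse("z = 3 * (y + 5) / y")

dump(ex)              # the surface AST: nested Expr nodes, e.g. (= z (call / ...))
Meta.lower(Main, ex)  # the lowered form: flat statements with %N temporaries
```

The AST keeps the nesting of the source; lowering flattens it into the sequence of `%4`, `%5`, … steps shown earlier, which is exactly why I find the lowered form easier to reason about.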
