# Variable binding: (re-) assignment, argument passing, `let`, scope

I don’t think that anyone is posing as an authority in this thread, so framing this discussion as such is not very useful. In case you missed this, @yuyichao is helping you.

Also, in general, while everything can of course be debated, doing this in a meaningful way requires a lot of background when you want to talk about compiler internals.

Finally, as many have pointed out, compiler internals are orthogonal to understanding (not implementing) scope (which was the original question). You can become a very proficient Julia programmer with a perfect understanding of binding and scope without a detailed understanding of what happens under the hood (especially since it keeps evolving).


Gladly so, otherwise I’d be screwed

```julia
function f()
    x = 234 * 2342
    y = 908 - x
    y = 234 + 1001
    z = 3 * (y + 23 - 23 - 23 - 23 + 23 + 5) / (3 * 1000 * y)
    return z
end
```

Now we need to compile that, instruction by instruction. Since you don’t know assembly, I’ll stick with quasi-Julia. First, compound expressions need to be broken up, and we need temporary variables:

```julia
julia> @code_lowered f()
CodeInfo(
2 1 ─       x = 234 * 2342                                                  │
3 │         y = 908 - x                                                     │
4 │         y = 234 + 1001                                                  │
5 │   %4  = y + 23                                                          │
│   %5  = %4 - 23                                                         │
│   %6  = %5 - 23                                                         │
│   %7  = %6 - 23                                                         │
│   %8  = %7 + 23 + 5                                                     │
│   %9  = 3 * %8                                                          │
│   %10 = 3 * 1000 * y                                                    │
│         z = %9 / %10                                                    │
6 └──       return z                                                        │
)
```

Next we can constant-fold: we have a dependency graph between assignments, and if all inputs are known at compile time, we can compute the output at compile time. This depends on assumptions: no side effects of multiplications, etc. This, in turn, depends on types (what is an integer, what is a floating-point number, what is a matrix, etc.). That step is called inference.

```julia
julia> code_typed(f, (); optimize=false)
1-element Array{Any,1}:
CodeInfo(
2 1 ─       (x = 234 * 2342)::Const(548028, false)                          │
3 │         (y = 908 - x::Const(548028, false))::Const(-547120, false)      │
4 │         (y = 234 + 1001)::Const(1235, false)                            │
5 │   %4  = (y::Const(1235, false) + 23)::Const(1258, false)                │
│   %5  = (%4 - 23)::Const(1235, false)                                   │
│   %6  = (%5 - 23)::Const(1212, false)                                   │
│   %7  = (%6 - 23)::Const(1189, false)                                   │
│   %8  = (%7 + 23 + 5)::Const(1217, false)                               │
│   %9  = (3 * %8)::Const(3651, false)                                    │
│   %10 = (3 * 1000 * y::Const(1235, false))::Const(3705000, false)       │
│         (z = %9 / %10)::Const(0.000985425, false)                       │
6 └──       return z::Const(0.000985425, false)                             │
) => Float64
```

We see that the pure-Julia steps of the compiler, pre-optimization, were enough to get the result, even before hitting the powerful LLVM: the amount of runtime computation necessary for your example is zero.

```julia
julia> f()
0.000985425101214575

julia> @code_native f()
	.text
; Function f {
; Location: REPL[1]:2
	movabsq	$139649067102312, %rax  # imm = 0x7F029509B068
	vmovsd	(%rax), %xmm0           # xmm0 = mem[0],zero
	retq
	nop
;}
```

I am no big fan of ASTs and prefer to think about code_lowered. Unfortunately I was recently informed that lowering is an implementation detail, and the AST is the spec. So I am doing things wrong, with the real consequence that some compiler updates break my mind and possibly my code.
I am thankful that Yuyichao and Stefan told me that I’m wrong (I will continue to do it wrong and pay the price for sticking to an unsupported abstraction).


It’s spelled `-fsanitize=undefined`


When you think about `x = 1; y = x; x = 2`, however, you should not interpret `x = 2` as putting `2` in the location that `x` points at. That would amount to mutating the (immutable) value `1` and would result in `y == 2`, which is not what happens.

One question, though (to you or anyone else who wants to answer): are the semantics, and this mental model, of `y = x` above the same as for argument passing, as in

```julia
x = object
f(y) = do_with_object(y)
f(x)
```

where `y` gets bound to the “same” object as `x`? And again, regardless of the mutability of `object`?
I think the answer is yes.
It seems I could summarize the semantics of both by saying that:

• in `x = object`, provided `object` is a literal and not just a name, assignment makes `x` bind to a new object (with a new “reference”, hence identity).
• in `y = x`, assignment makes `y` bind to the same object that `x` is bound to (with the same “reference”, hence identity).
• in `f(x)` above, the same thing happens as in `y = x` (except that the name `y` is local to the function).

If yes, for the sake of ease of learning/understanding (simpler semantics), I wish Julia offered a way to check the “reference” of even immutable objects.
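For what it’s worth, Julia does expose identity comparison through `===` (and `objectid`); it’s just that for immutable values identity is defined by value, not by address. A small sketch (only the variable names are made up):

```julia
x = [1, 2, 3]        # a mutable object (an Array)
y = x                # y binds to the same array as x
@show x === y        # true: same identity ("reference")
@show objectid(x) == objectid(y)   # true: same identity hash

a = 1                # an immutable value
b = a
@show a === b        # true, but compared by value/bits
@show 1 === 1        # immutables have no observable "reference" at all
```

So the “reference” of an immutable object is deliberately not observable: two `1`s are indistinguishable, which is exactly what licenses the compiler to copy or fold them freely.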

You are right – and not just based on the recent replies you wrote: I was simply not making much sense in that particular part of my reply (“Then you misunderstood what I believe true and what not…” here). I’ve struck it out and edited it – please read it if you want.

Perhaps because of that, my concept of a “memory location” for values seemed inconsistent to you. Other than that, I don’t see where I ever changed my definition of it.

You started as `x = 2` in the REPL overwriting a “value-location” with `2`, which is simply not something that ever happens in this case.

True (or… really? ever?), but let’s not dig into what I wrote so many replies ago. I’ve learnt many things by now; I don’t believe everything I wrote there anymore.

I think you’ll be pleased to see how my interpretations read now: Variable binding: (re-) assignment, argument passing, `let`, scope

Believe me, I gave this a lot of consideration (all these replies took me an incredible amount of time). If you noticed, I actually tried to quote and follow official definitions from the documentation whenever I could.
You often did not give references or explain in detail, and I did not know you: why would I take your words as the “official definition”?
(And such situations are exactly when one should question and not believe blindly.)

Indeed. I was actually not thinking at all about compiler optimizations. I had in mind the compiler executing my instructions word for word, as if one at a time in the REPL.
Also, I took an intro C course long ago. There, the teacher told us that re-assignment literally meant re-writing the memory location with a different value, and never said “it depends, it may or may not” (so he lied to me then, by not mentioning possible optimizations).

The only way I can think of (and tried) to help you resolve these issues is to avoid talking about your “value-locations” at all. If you want to keep the concept, I really have no idea how to build a consistent picture with it.

I see how you wanted to protect me against extra confusion (Edit: thanks for that intention). But with a person like me, just “don’t” doesn’t work well… I needed to know why… I think now I understand.


Even executing `x=1` at the REPL doesn’t mean that we now have a box named `x` holding the value `1` .

But – in case it’s just `x = 1` followed by Enter – I think we do have a box holding `1`, linked to “x” through the address of that box (and perhaps one or more intermediate addresses).

Yes, that is technically true, but it’s not really exposed to the user — you can’t use any first-class Julia constructs to find that location.
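One concrete illustration of that (a sketch; the `Box` type is made up): `pointer_from_objref` hands you an address for a mutable object but refuses an immutable value, which is one sense in which the location is not exposed.

```julia
mutable struct Box   # a hypothetical mutable wrapper, for illustration
    v::Int
end

b = Box(1)
@show pointer_from_objref(b)   # mutable objects do have a queryable address

# But for an immutable value like the Int 1, Julia refuses to hand out
# an address at all:
try
    pointer_from_objref(1)
catch err
    @show err
end
```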


I don’t think that anyone is posing as an authority in this thread, so framing this discussion as such is not very useful. In case you missed this, @yuyichao is helping you.

Also, in general, while everything can of course be debated, doing this in a meaningful way requires a lot of background when you want to talk about compiler internals.

The things I “debated” are elementary notions of memory locations, which are taught in almost any intro programming or computer science class. It often takes just a couple of well-formulated sentences or examples to clear up confusion that dozens of reply exchanges may not.

Finally, as many have pointed out, compiler internals are orthogonal to understanding (not implementing) scope (which was the original question).

Scope was not the original question; neither were compiler internals.
(Is “scope” in the title why there are so many readers/viewers in this thread?)

You can become a very proficient Julia programmer with a perfect understanding of binding and scope without having a detailed understanding of what happens under the hood (especially since it keeps evolving ).

I agree

Thank you for your time to write such a detailed reply. I will study it.

We have a dependency graph between assignments, and if all inputs are known at compile time, we can compute the output at compile time.

We see that the pure julia steps of the compiler, pre-optimization, were enough to get the result, even before hitting the powerful llvm: The amount of runtime computation necessary for your example is zero.

Wow. So the historical notion of “execution” after “compilation” has really become a misnomer. It’s just execution divided into two phases, compile time and run time (and often just compile time).

Edit: maybe better called “computation”? So as to include both the computation during compile time and that during run time, the latter also being called “execution” time, from what I read around.

Just FYI, this is known as constant folding and constant propagation, and really isn’t anything new or unique to Julia; gcc does this, as do many, many others.

A quick search finds references to this for multiple languages since at least 1991, so it has been known since before then. It’s even described as a “well known technique” in that paper’s abstract.


If memory serves me correctly, constant folding was common in several vendors’ compilers (e.g., DEC VAX FORTRAN) in the early-to-mid 1980s. It was a pitfall for naïve benchmarking exercises and, along with other optimisations, made single-stepping with a source-code debugger somewhat entertaining. I suspect the optimisation had been around for a while by then.


By looking at the internal compilation steps as exposed in this reply above, especially

```julia
julia> code_typed(f, (); optimize=false)
1-element Array{Any,1}:
CodeInfo(
2 1 ─       (x = 234 * 2342)::Const(548028, false)                          │
3 │         (y = 908 - x::Const(548028, false))::Const(-547120, false)      │
4 │         (y = 234 + 1001)::Const(1235, false)                            │
5 │   %4  = (y::Const(1235, false) + 23)::Const(1258, false)                │
│   %5  = (%4 - 23)::Const(1235, false)                                   │
│   %6  = (%5 - 23)::Const(1212, false)                                   │
│   %7  = (%6 - 23)::Const(1189, false)                                   │
│   %8  = (%7 + 23 + 5)::Const(1217, false)                               │
│   %9  = (3 * %8)::Const(3651, false)                                    │
│   %10 = (3 * 1000 * y::Const(1235, false))::Const(3705000, false)       │
│         (z = %9 / %10)::Const(0.000985425, false)                       │
6 └──       return z::Const(0.000985425, false)                             │
) => Float64
```

I can see that this compile-time computation is still creating and addressing objects in memory locations – even though that’s not run-time.

Even for a source code like:

```julia
julia> function g()
           x = 1
           x = 2
       end
g (generic function with 1 method)

julia> code_typed(g, (); optimize=false)
1-element Array{Any,1}:
CodeInfo(
2 1 ─     (x = 1)::Const(1, false)
3 │       (x = 2)::Const(2, false)
└──     return 2
) => Int32
```

It seems like, during compilation, the value `1` of `x` is stored somewhere in memory: otherwise, how is it looked up and determined to be `Const(1, false)`?

It is not completely ignored, as I suggested would happen to `x` in my original analogy here.

Am I right?

Of course, during compilation, the constants have to be stored somewhere. When you talk about the memory location of objects, you are talking about runtime.
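For contrast, a sketch of the same function with optimizations turned on (the exact printed IR varies by Julia version): the dead store `x = 1` disappears entirely and essentially only the constant result remains.

```julia
function g()
    x = 1
    x = 2
end

# With optimize=true the compiler eliminates the dead assignment x = 1;
# the optimized body collapses to little more than `return 2`:
code_typed(g, (); optimize=true)
```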


No! When I talk about the “memory location of objects”, I mean the memory location of objects, nothing more, nothing less, and nothing else!
It does have a meaning regardless of whether it’s about compile time or run time (both phases are part of the overall computation of a program).

My Goodness… There has been SO much misunderstanding in this WHOLE thread, from the beginning…

From the beginning, by “value location” I simply meant any memory location with a bit pattern corresponding to the value we see on screen – not necessarily at runtime.

Also, I acknowledge that I did not know then that computation can happen at compile time – I had in mind compilation in the oldest sense, as simply translating word for word to assembly and then machine code, without any operation/command in my code being executed at compile time.

Note that “execution”, in general, can also mean the computation of any instruction, not just at run time.

I think @yuyichao always had run time in mind when he was contradicting me, while I had in mind the overall computation (compile plus run time).
And when he would say “execute” I’d just interpret it in a general/loose sense of “compute”, not specifically after compilation. EDIT: like next (which also marked a turning point in the conversation).

I was at least partially right before striking out the text in this reply to him – and not just because everything in a source code (in the REPL or not) gets loaded into memory even before compilation
(in that sense, any value gets a place in memory, obviously – but it’s not the memory location I defined from the beginning, an addressable one, i.e., a cell designated to hold that particular value, not just a string for the whole program).

I wish I stopped this thread here or here

This reply with analogy partly confused things further, but partly led to the realization I have right now…

I’m also a bit frightened now to continue studying the docs, of Julia or of other languages, with this danger of words not meaning exactly what they seem to mean…

From the beginning, by “value location” I simply meant any memory location with a bit pattern corresponding to the value we see on screen – not necessarily at runtime.

Julia happens to represent most literal values as themselves in AST but there’s no inherent reason it has to be like that. The AST representation of `1` could be `Expr(:integer_literal, "1")` and that expression could be dead-code eliminated before the value `1` is ever created anywhere in the computer’s memory. So it’s still not necessarily true that the value `1` has to exist somewhere in memory at some point. The fact that Julia represents the literal `1` in AST in memory with the integer value `1` is a fairly incidental implementation detail, albeit one that comes in handy for metaprogramming.
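A small sketch of what “represents literal values as themselves in the AST” means in practice (the variable name is made up):

```julia
ex = :(x = 1)             # quote an assignment expression into an AST
@show ex.args[2]          # the right-hand side is stored as the actual
@show typeof(ex.args[2])  # integer value 1, not as some
                          # Expr(:integer_literal, "1") wrapper
```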

I think @yuyichao always had run time in mind when he was contradicting me, while I had in mind the overall computation (compile plus run time). And when he would say “execute” I’d often just interpret it in a general/loose sense of “compute”, not specifically after compilation.

I think you’ll find that pretty much all programming language people will interpret things this way. When you talk about “execution” it means at run time; when you talk about parsing, lowering, type inference, inlining, constant propagation and other code transformations and optimizations, it’s at compile time.

I’m also a bit frightened now to continue studying the docs, of Julia or of other languages, with this danger of words not meaning exactly what they seem to mean…

People manage to talk to each other about programming languages well enough all the time so once you’re aware of what people in the field generally mean by things, it’s no less precise than how people talk about most fields and much more so than many. We try very hard to use standard, established terminology when possible.


To add on to this, Julia doesn’t really show this distinction to someone using the language, since it’s compiled JIT (just-in-time, or right before execution). Other languages like C show a much cleaner separation of the two, since (a) there’s more than one way to compile a given file of C source code into a runnable executable (meaning there are multiple independent compilers for C), and (b) running the actual binary requires another step besides compilation, namely the actual call to some system-provided function that bootstraps execution. As far as I know, the second step doesn’t exist in Julia, because of the JIT compilation.

To stay with the example of C, gcc and clang (the two most popular C compilers, as far as I know) both support and actively perform constant propagation and the like, but this is not codified in the C language specification – it’s an optimisation that the spec does not forbid, done to reduce execution time at the cost of a (more or less minute) increase in compilation time. Julia also does this, but because the first call to a function means “hey, compile and then run this”, it includes compilation time and, after that, the execution time. Subsequent calls don’t need to be compiled again, so they include just the execution time. That’s also why it’s wrong to lump compilation in when talking about execution – in general, what happens during compilation is simply not relevant at the time of execution.

EDIT:
There is of course also precompilation of modules in Julia, and there are ongoing efforts to save precompiled functions and modules as binary objects to reduce the overhead of the first call to a function, but those efforts are largely in their infancy as far as I can tell, and they don’t really change the overall point here – rather, they support it.
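The first-call-includes-compilation behaviour is easy to see with `@time` (the function `h` is made up for illustration; exact timings vary by machine and Julia version):

```julia
h(x) = x^2 + 1   # a fresh little function, not yet compiled

@time h(3)       # first call: JIT compilation + execution
@time h(3)       # second call: execution only, typically far faster
```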


As in any field, the words used typically have some specific meaning. For example, talk about “execution” first to a programmer and then to a farmer from the 14th century, and they will interpret it quite differently.

In this thread, a lot of effort has gone into trying to teach you what words mean in a programming context. There are two approaches to take.

1. The accepting approach – “Ah, thats how the word is used, thanks now I know”.
2. The “stubborn” approach – “I have my own definition of words and these will not change”.

If you want to be able to talk to other programmers and make yourself understood (this applies to any field), it is important to learn what terms tend to mean in that field. The only way to do that is by experience, and while gaining that experience it is important to have an open and receptive approach.


Thank you very much for the last 3 replies. It is VERY valuable information about general compilation, for me
(and probably for anyone coming from a different field and who knows something about programming but wants to understand more, not just in terms of using the language).

I also appreciate your personal feedback. (I don’t usually spend time on forums and the like)

I’m not stubborn (EDIT: maybe a bit); but I like precision and clarity. I wish people, in general, had higher standards of clarity when expressing themselves. Sometimes a few extra words, redundant to people in your exact field, would help other people a lot. I think this is especially important for the mixed audience that I think the Julia community, at large, is.

Great to have this cleared up. This would mean that approximately 1/256 of my machine’s memory is the “value location” for any `rand(UInt8)` (I suspect the distribution is not uniform, though, and `UInt8(0)` is somewhat overrepresented).