Lexical scope vs. Dynamic scope

What are the advantages of lexical scope in general compared to dynamic scope? Isn’t inheriting from the parent scope similar to using global variables?
Why did Julia choose this approach, and what are the performance differences between inheriting variables from the parent scope vs. passing them as function arguments?
This is not a rant, nor a call for changing things, just my curiosity.

Modern languages don’t really do dynamic scope. The only vaguely recent language I can think of that has it is Perl 5, which has both, and that’s not really a modern language, and I think using dynamically scoped variables is generally considered a bad idea even in Perl. Perhaps there’s some confusion about what dynamic and lexical scope mean. When you say “parent scope” what do you mean by that? Do you mean the scope of the caller? Or the surrounding scope in the code?


Assuming there isn’t a disconnect about terminology here, I could give some thoughts about why dynamic scope was a historically bad idea and fell out of favor, but choosing lexical scope is about as standard as it gets. It would require some justification to choose dynamic scope as that would be a pretty unusual design choice these days.


Emacs Lisp is still dynamically scoped by default, but also there it’s considered a bad idea nowadays, and there is a big push to write new code with lexical scope (it’s opt-in at the moment) to eventually phase out dynamic scoping completely.

Having written dynamically scoped code for an Emacs plugin, I must say that it was somewhat useful to magically get some variables “out-of-nowhere”. On the other hand, reading code with variables defined nowhere isn’t exactly fun, and makes reasoning about the code harder.


Being created in 1985 probably disqualifies it from being modern.


Julia has a dynamic-scope construct called task_local_storage you can use when you need it.
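For completeness, here’s a small sketch (variable names are made up) of how task_local_storage gives you dynamic-scope-like behavior on demand:

```julia
# `f` looks up :x in the *current task's* storage at call time,
# not at the place where f was written -- dynamic-scope style.
f() = 2 * task_local_storage(:x)

# Run a block with :x bound to 10; the binding is visible to callees
# and the previous state is restored when the block exits.
result = task_local_storage(:x, 10) do
    f()
end
result  # → 20
```

Calling f() outside such a block throws, because :x is no longer set, which mirrors the “undefined variable when called from the wrong place” failure mode of dynamic scope.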


I meant the surrounding scope (I think…). Let’s take this example:

x = 1
f() = 2x

The performance tips suggest passing variables as function arguments, as it clarifies the inputs (among other reasons). Does this mean the example f() = 2x is less performant? If so, why allow this when it’s less performant and makes the code less clear? Is there a larger benefit?

That depends on the context. For instance, this is a fairly typical and very useful syntax:

julia> function inner(f, x)
           f(x)
       end
inner (generic function with 1 method)

julia> function outer(a, x)
           g(x) = a * x
           return inner(g, x)
       end
outer (generic function with 2 methods)

julia> outer(2., 3.)
6.0
Note that inner receives as a parameter a function with a single argument. Thus, a is not an explicit argument of g (it is not g(a, x)); instead, a is “closed over” by g. That allows one to use the inner function without modification, because it expects f to take a single argument.

That is very important, because it is the way one passes parameters, for instance, to optimizers. Optimizers generally expect to receive a function that, given x, returns the function value. Of course, many functions depend on additional parameters. To pass those parameters to the optimizer cleanly, that pattern is used.

And there is no performance problem, because everything occurs within the scope of the outer function. The problem would arise if a were a non-constant global variable.

By the way, normally one would write that simply as:

function outer(a, x)
    return inner(x -> a*x, x)
end

where x -> a*x is the anonymous function that closes over a.
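To make the optimizer use case concrete, here’s a minimal sketch with a toy grid-search minimize standing in for a real optimizer (all names here are made up; a real package works the same way in spirit): it only ever sees a one-argument function, and the parameters ride along in the closure.

```julia
# Toy "optimizer": scans a grid and returns the x with the smallest f(x).
function minimize(f, lo, hi; n = 1001)
    xs = range(lo, hi; length = n)
    return xs[argmin(f.(xs))]
end

# The parameters a and b are closed over; `minimize` never sees them.
function fit(a, b)
    loss(x) = (a * x - b)^2
    return minimize(loss, -10.0, 10.0)
end

fit(2.0, 6.0)  # ≈ 3.0, since 2x - 6 = 0 at x = 3
```

The key design point: minimize stays completely generic, and each caller packages its own parameters into the closure it hands over.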

This blog post was written at the time when the transition from Emacs 23 to 24 introduced the option to use lexical scoping. It is the best resource I know on this topic. IIRC, along with the fact that lexically scoped code is more local and easier to reason about, one of the most convincing arguments made at that time was the possibility to create “real” closures (if you think about it, it does not make a lot of sense to talk about closures in a dynamically scoped language). I think other arguments were made afterwards, involving performance considerations or linting abilities (but that was much more recent and not one of the main driving points).

I think it’s interesting to note here that dynamic scoping is IMO what makes emacs so customizable: you can modify any variable in your init.el configuration file, and it will affect the behaviour of any function referring to a variable of that name. (Of course, this non-locality is also what makes it very hard to reason about code). So even though Emacs 24 introduced the possibility to opt-in for lexical scoping in a given source code file, it also introduced a way to keep the dynamic scoping of so-called “special” variables.

I’m no expert but I’d say that the performance issues caused by global variables in Julia are in some ways akin to what could potentially happen everywhere with dynamic scoping.

Let’s take this code, meant to illustrate the performance issues occurring with global variables:

julia> f() = a+1
f (generic function with 1 method)

julia> a = 1; f()
2

julia> a = 1.0; f()
2.0

The symbol a in the definition of f always unambiguously refers to global variable Main.a. But the value to which Main.a is bound can change at any time, meaning that the compiler can’t know in advance what + will have to operate on (and can therefore not optimize things).

In a hypothetical, dynamically scoped, Julia-like language (let’s call it HDSJL), we’d have things like

hdsjl> f() = a + 1

hdsjl> const a = 1; f() # `a` refers to `Main.a`

hdsjl> let a = 1.0
           f()          # `a` refers to a local variable
       end

hdsjl> f()              # `a` refers again to `Main.a`

In this case, there is again no way for the compiler to know what + operates on. But the reason for that is not exactly the same: it isn’t because Main.a was rebound to another value; it’s because the symbol a occurring in the definition of f does not always refer to the same variable.


Ok, so that clarifies your question, and it also shows that this is not what dynamic scope means. Sorry to be a pedant about this stuff, but you know I’m a programming language nerd and I can’t help myself. This doesn’t really matter since it’s not what you were asking about, but I might as well clarify terms here. Dynamic scope means that the callee inherits variables from the caller. So an example of that using Julia syntax would be this:

f() = 2*x

function g(y)
    x = 3*y
    return f() # f gets called with x from here inside of g
end

function h()
    return f() # f will error because x doesn't have a value
end

This is how dynamic scope would work: local variables from g are visible inside of f but only when f is called from g. If f is called from h which doesn’t define x then there’s an undefined variable error. Yes, this is confusing and super error prone. What fundamentally is wrong with it? Just looking at f you can’t really tell what the heck is going on. And that’s the direction modern languages have evolved: towards making it easier to understand strictly locally what some code does.

Now, here’s lexical scope:

function g(y)
    x = 3*y
    f() = 2*x # refers to x defined in g
    return f()
end

This example looks dumb (why introduce f at all?) but it shows that f inherits x from the lexically, as in syntactically, containing scope. Why is this better? Because you can look at the code and see where x comes from. Compare this with dynamic scope, where you cannot possibly know where the x inside of f comes from, because it could come from anywhere!

These examples are all with local variables. If we delete the g shell around the last example, we get an example of accessing a global variable from inside of a function body:

y = 5
x = 3*y
f() = x*2 # refers to global x

Your question seems to be: why allow this at all if it’s a performance problem? Suppose we didn’t, and required x to be an argument. Then the code would look like this:

y = 5
x = 3*y
f(x) = x*2

Would the code work then? It would not. Why? Because it still refers to a global variable: *. That’s right. In Julia, * is not an operator, it’s just a global variable that happens, by default, to be bound to a generic function that performs multiplication. In a fresh REPL (or module) you can bind it to anything you want:

julia> * = "foo"
"foo"

julia> 1 * 2
ERROR: MethodError: objects of type String are not callable
 [1] top-level scope
   @ REPL[2]:1

The syntax 1 * 2 is equivalent to *(1, 2) and it fails for the same reason that doing "foo"(1, 2) would fail—you can’t call strings like functions.

You can actually change that if you want by defining a call method for strings:

julia> (s::String)(a, b) = "$a $s $b"

julia> "foo"(1, 2)
"1 foo 2"

julia> 1 * 2
"1 foo 2"

Useless, but fun!
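You can also check directly that operator names are just global bindings, and that in Base they are constant ones (a small sketch):

```julia
# In Base, `+` is a const binding to a generic function:
plus_is_const = isconst(Base, :+)         # true

# An ordinary top-level assignment creates a non-const binding:
w = 1
w_is_const = isconst(@__MODULE__, :w)     # false
```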

So that’s why you can’t disallow accessing global variables: because if you did, then you wouldn’t be able to use any functions, not even arithmetic operators. However, functions are given constant bindings, so they aren’t a performance issue like other globals are. What about requiring globals to be constant? We could do that, but that’s not how any other languages work. People expect to be able to do things like this in the REPL or in scripts:

x = 123
x += 5

Even static languages allow you to change the values of globals, they just don’t allow changing the type. Ok, so what about that—disallowing changing types of globals? That’s getting more plausible and is something I even proposed for the language pre-1.0, but ultimately, it’s too far from how people expect their dynamic languages to work. People expect to be able to do x = 123 in the REPL and then later move on to some other work and do x = "Hello!" and have that work.

There are other variations one could try like: you can have non-constant globals but you’re not allowed to use them from functions. Ultimately, that’s just not Julia’s style. The language generally lets you do whatever you want and won’t give you a hard time about it and assumes you know what you’re doing. We try to educate people about what is and isn’t a good idea in different situations, but Julia just isn’t a nanny state kind of language. Sometimes you don’t care about performance and want to use a non-constant, untyped global, and who are we to stop someone from doing that?


One minor addendum to this is that as of 1.8, you can say x::Int=123 in the global scope, and it will then be type stable to use in a function.
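A quick sketch of how that looks in practice (Julia ≥ 1.8):

```julia
x::Int = 123      # typed global: the binding is non-const but its type is fixed
f() = x + 1       # type-stable: the compiler knows x isa Int
f()               # → 124
x = 5             # fine: still an Int
# x = "hi"        # would throw: a String can't be converted to Int
f()               # → 6
```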


Yes, this is a lovely new feature. Really rounds out the language.


Is there any reason to make global variables const instead of type-annotated? I would’ve thought so if const actually prevented a variable from being reassigned, but it doesn’t, which can allow for some weird bugs because inlined values are not updated (or maybe this is going to be amended, too, idk).

When the global variable is truly const (in the julian sense of const)?

A const variable should not be reassigned, and the compiler works based on this assumption; if reassignment happens, it is because someone is doing something wrong.


However, correct me if I’m wrong, but it can, for example, be a mutable struct. So it’s always the same struct, but certain fields in that struct could change over time. Correct?


Yes, correct: being mutable and being constant are two independent concepts in Julia, and they can coexist. That is mostly because they are modifiers for different things: an object cannot be const, only a binding can be const; a binding cannot be mutable, only an object can be mutable. So you can have a mutable object referred to by a const binding.
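A small sketch of that distinction:

```julia
const counter = Ref(0)   # const binding to a mutable object
counter[] += 1           # mutating the object's contents: fine
counter[]                # → 1
# counter = Ref(5)       # reassigning the const binding itself is an error
#                        # (older Julia versions only warn)
```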


Yes, definitely! Reading from a global still has a nonzero cost and if you are writing to the binding, whatever you assign to it needs to be heap-allocated. In the case of constants, the compiler will just inline them into the function’s IR and also use that information to do constant propagation through your program.
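For example (a minimal sketch): with a const binding to an immutable value, uses of the global can be folded away entirely:

```julia
const SCALE = 10         # const binding, immutable value
scaled() = SCALE * 2     # the compiler can fold this to a literal `return 20`
scaled()                 # → 20
```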


Just to be thorough, a mutable object assigned to a const variable won’t be inlined and the read cost will still be there, right?

It won’t be used for constant prop at the Julia level, but it will still be optimized to a constant pointer in codegen:

julia> const x = Ref(1)
Base.RefValue{Int64}(1)

julia> f() = x[]
f (generic function with 1 method)

julia> @code_llvm f()
;  @ REPL[2]:1 within `f`
define i64 @julia_f_243() #0 {
; ┌ @ refvalue.jl:56 within `getindex`
; │┌ @ Base.jl:37 within `getproperty`
    %0 = load i64, i64* inttoptr (i64 140286954152880 to i64*), align 16
; └└
  ret i64 %0
}

Compare that to the typed global case, where it needs to load the pointer to the ref itself first:

julia> y::Base.RefValue{Int} = Ref(1)
Base.RefValue{Int64}(1)

julia> g() = y[]
g (generic function with 1 method)

julia> @code_llvm g()
;  @ REPL[15]:1 within `g`
define i64 @julia_g_3296() #0 {
  %0 = load atomic i64*, i64** inttoptr (i64 139960920160280 to i64**) unordered, align 8
; ┌ @ refvalue.jl:56 within `getindex`
; │┌ @ Base.jl:37 within `getproperty`
    %1 = load i64, i64* %0, align 8
; └└
  ret i64 %1
}

To elaborate on this, there are two conceptual memory locations involved (in the general case there are two actual memory locations, but they might get eliminated in more constrained cases): the reference to the object and the contents of the object itself. If the binding is const, the compiler can assume that the former cannot change—the binding always refers to the same object. If the object is immutable, the compiler can assume that the contents of the object cannot change, but not necessarily that which object is referred to won’t change. In the completely unconstrained case, this is the work that needs to be done to use a global:

  1. Look up the binding and see what object it refers to
  2. Look at the object’s type tag to see what kind of object it is
  3. Look at the object’s actual contents
  4. Pick an operation based on the type

If the binding is const then 1 & 2 can be skipped. If the binding is typed but non-const then 2 can be skipped, but 1 still needs to be done; since the type is known, the operation in 4 can be decided in advance. If the object is immutable then 3 can be skipped, provided you already know you have the right object; that’s not super helpful if you have no idea what kind of object it is, but once you have it and are doing stuff with it, it’s very helpful. If the binding is constant and the value is immutable, then you can do all of it at compile time. Keep in mind that these things need to happen every single time you do something with a global binding. The compiler can’t just look up a global binding once at the start of a function or loop or whatever, because the binding could be changed at any time. You can write the code like that yourself if you want to, but it would behave differently if someone changes the binding mid-execution, so the compiler is not allowed to make that change for you. That’s why passing a global as a function argument makes things faster: it does exactly that optimization, since the local variable won’t change even if someone changes the global once the function starts executing.
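That last point is the rationale behind the “pass the global as an argument” performance tip; a minimal sketch (names are made up):

```julia
a = rand(1000)        # non-constant, untyped global

f_global() = sum(a)   # must look up `a` (and check its type) at run time
f_arg(v) = sum(v)     # `v` is a local: its type is fixed for the whole call

f_arg(a) == f_global()  # same result, but f_arg compiles to specialized code
```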