Remove the soft scope altogether and make it global

I see that the Julia community is currently debating the new scope rules introduced in v1.0, with some proposing a return to the deprecated soft/hard local scopes of v0.6. In my opinion, neither solution is ideal. Let me give my thoughts from the perspective of a teacher and data scientist. In the following, I will

  • illustrate the mechanism of hard/soft local rules,
  • explain why neither solution is ideal, and
  • propose the removal of the soft scope.

Mechanism of hard/soft local rules

The mechanism can be illustrated in the following two figures:

The global scopes view each other through import. The local scope and the nested local scope communicate with each other freely (in the absence of the local keyword). The difference between the hard local and the soft local scope is that the hard one can only view global variables (in the absence of the global keyword), while the soft one can also modify them.
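To make the figures concrete, here is a minimal sketch (the variable and function names are my own) of how a hard local scope interacts with a global under the v1.0 rules:

```julia
s = 0

# A hard local scope (e.g. a function body) can read a global freely...
read_s() = s + 1

# ...but writing to a global requires the explicit `global` keyword:
function bump!()
    global s += 1   # without `global`, `s` would be a new local variable
end

# A soft local scope (e.g. a top-level for loop in v0.6) could both read
# and write `s` without any keyword.
read_s()  # 1
bump!()
s         # 1
```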

Some users complained that the distinction between the hard local and the soft local scope was too complicated, so v1.0 replaced every soft local scope with a hard local one. Now, other users are complaining that the omnipresent global keyword is too awkward, so we are returning to the v0.6 behavior.

Neither solution is ideal

Neither solution is easy to teach. The hard/soft one requires explaining why code in some local scopes cannot modify global variables; the all-hard one requires explaining why the global keyword has to appear in every loop. Both are counter-intuitive.

Neither solution supports copying code between prototyping and development. The global/soft-local duality is not equivalent to the hard-local/nested-local duality, and the all-hard setting requires removing every global keyword when copying code into functions. Both are annoying.

Remove the soft scope

It seems that we have all forgotten another (obvious) solution: instead of replacing the soft local scope with the hard one, we could replace it with the global scope. Then the prototyping environment would be equivalent to the development one.
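For concreteness, here is a sketch (my own toy loop) of what this means. Under v1.0 rules the global annotation is required; under the proposal, the loop body would itself be global scope, so the keyword would simply become unnecessary:

```julia
s = 0
for i in 1:9
    global s += i   # required in v1.0; under the proposal, no keyword needed
end

s  # 45
```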

This simplified version would be easy to teach and convenient for copying code. Indeed, it would make Julia behave much like Python. Everyone would be happy. If this does not cause any performance issues, I believe it would be the best solution.

The only downside I can think of is the emergence of many global variables, which can 1) pollute the global scope and 2) be visible to other modules. The first problem can easily be solved by careful naming of variables with long or short lifespans; the compiler could even rename them internally.

The second problem is only annoying when using an editor’s auto-completion feature after typing the module name as a qualifier. It could be solved via some keyword (analogous to export) that makes a variable invisible through the module qualifier, and this kind of device is much easier to teach than the loop awkwardness taught at the beginning.

we’re not, I think?

Not really. And what about performance and cleanness of code? Being able to infer the locality of a variable just by looking at the few lines above and below is a nice thing to have.

There is a performance issue, as explained by


If you are talking about

then you may be misunderstanding something: that is a proposal for a relatively minor change in a very specific context (the REPL). The rest of the language would be unaffected.


I see that you worry particularly about performance and variable locality inference. Let me analyze these in real-life scenarios and show that the worries do not materialize.


Let us consider the following loop:

s = 0
for i in 1:9
    temp = i*i  # intended to be local
    s += temp   # intended to be global
end

Making this loop part of the global scope would only sacrifice the performance of temp; the global variable s suffers no matter whether the hard or the soft scope is in use. If you really cared about performance, you would probably have wrapped the loop in a function anyway.

The point here is that it is a bad deal to trade prototyping convenience for a slight performance improvement in half-baked, non-production code.
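To illustrate, here is a sketch of the usual workaround (the function name is my own): wrapping the loop in a function makes both variables local, so the compiler can optimize freely.

```julia
function sum_of_squares(n)
    s = 0               # local: fast, type-stable
    for i in 1:n
        temp = i * i    # local to each iteration
        s += temp
    end
    return s
end

sum_of_squares(9)  # 285
```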

Variable locality inference

This inference can happen in three scenarios:

  1. the REPL, IJulia, and notebooks;
  2. functions;
  3. scripts.

In the REPL, IJulia, and notebooks, the kind of “variable locality” you mentioned is nonexistent.

In functions, since everything is local, variable locality holds by default.

Concerning scripts: if the programmer uses them like notebooks via, say, Juno, we are reduced to the first scenario. If they use scripts in the traditional sense and have difficulties debugging, they are probably doing neither OOP nor functional programming. Long scripts are always difficult to debug no matter which scope rule is used. We had better not hijack the scope decision in favor of some bad programming practice.

Then how about making an overhaul in v2.0?

I think that will be decided when 2.0 comes around, based on a cost/benefit analysis. My understanding is that the benefit has to be really large to justify breaking changes to things you cannot simply deprecate, which includes scoping rules.

FWIW, I hope that scoping will remain unchanged from now on; we are slowly reaching the point where the design space is pretty well-explored and suggested solutions to the problem that some people perceive turn out to have disadvantages that are worse than the original problem.

But please note that I am one of the people who is very happy with the 1.0 scoping semantics. Others may have a very different opinion.


I’m increasingly convinced that the key source of dissatisfaction with the current rules is that they differ from Python’s, so they trouble (a) people whose programming habits are tied to Python workflows (or those of similar languages in this regard), and (b) teachers whose syllabi are adapted to Python’s rules, such that they can teach for loops without taking scope-related aspects into account, etc.

Therefore, since any significant change will draw complaints from those who are happy with the current rules, perhaps the only change to the status quo that would increase the satisfaction of a sufficiently large user population to be worth the trouble would be making Julia behave exactly like Python.

If this perspective is agreeable, the debate could be simplified to: “are Python scoping rules acceptable for Julia?”, and development decisions would be simpler to make.


You may find this relevant:

In short, Python “solves” this issue by having loops not introduce scope. Loops don’t introduce scope anywhere in Python, however: not just at global scope but also inside functions. This makes loops totally different in scope behavior from comprehensions and introduces its own host of controversial scope problems, which Python has struggled with throughout the years.

The discussion in is converging on a warning when you implicitly shadow a global in a top-level loop (which is a bad idea anyway, since it is at best confusing and at worst a bug), with the REPL being a bit more lenient and assuming that, rather than shadowing the global, you want to assign to it (i.e. soft scope).


As much as I appreciate scoped loops, I’m curious what the practical issues are without them. Isn’t it just when reusing variable names in two ways inside a function?

which is a bad idea anyway, since it is at best confusing and at worst a bug


What do you want this to do?

fns = [(x = i^2; ()->x) for i = 1:10]

# later

x = 123

# later still

for f in fns
    println(f())
end


  1. Print 1, 4, 9, 16, 25, etc.
  2. Print 123 ten times.

1, obviously. Is it bad to have a new scope just for comprehensions?

What do you want this to do?

fns = Function[]
for i = 1:10
    x = i^2
    push!(fns, ()->x)
end

# later

x = 123

# later still

for f in fns
    println(f())
end


  1. Print 1, 4, 9, 16, 25, etc.
  2. Print 123 ten times.

I do think it’s nice that for, let, comprehensions, and higher-order functions all work the same way. It makes it much easier to parallelize and refactor code. If nothing else, we wanted to do what’s better for parallelism by default anywhere we could.
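As a sketch of that consistency (a toy example of my own), a for loop and a comprehension capture per-iteration bindings in exactly the same way:

```julia
# Each iteration of a for loop introduces a fresh binding of x,
# so every closure captures its own value...
fns_loop = Function[]
for i in 1:3
    x = i^2
    push!(fns_loop, () -> x)
end

# ...exactly as a comprehension does.
fns_comp = [(x = i^2; () -> x) for i in 1:3]

[f() for f in fns_loop]  # [1, 4, 9]
[f() for f in fns_comp]  # [1, 4, 9]
```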


Just in case it helps: you can view Julia code as a tree of data (because of its Lisp inheritance: quote, Expr), and the mechanism to resolve a variable consists of ascending the tree until you find where the variable is defined. If I’m not mistaken, this is not hard to teach.

On Tue, Jan 14, 2020 at 21:53, jeff.bezanson via JuliaLang wrote:


There was a very enlightening (for me) post arguing that such a loop is not one “piece of code” but two. If you are not aware of this post, then… here it is.


I don’t mean to imply anything; I just want to share my mindset when I was learning comprehensions: when I saw one in Python, I instinctively regarded the comprehension as a function that returns an array (or a generator).
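That mindset can be made literal in Julia (a toy sketch of my own):

```julia
# A comprehension behaves like an anonymous function mapped over the range:
squares_comprehension = [i^2 for i in 1:5]
squares_mapped        = map(i -> i^2, 1:5)

squares_comprehension == squares_mapped  # true
```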

With that example, I wanted to convince jling that it’s a scope issue, not a performance one.

But thank you all the same for bringing up Jeff’s argument, which casts more light on this “piece of code”. I understand why he considered the following as “two” pieces of code:

x = 0
for i = 1:n
    x += i
end

However, I don’t even consider them “pieces of code”. All I see is two agents (in the global scope) interacting with main memory. Maybe that’s because I no longer see programs as “linear”, after being fully trained in functional programming and being accustomed to notebooks.

The parallelism you mentioned could be a decisive argument (one that defeats my proposal). Could you elaborate on it?

Edit: BTW, Stefan’s argument:

This makes loops totally different in scope behavior from comprehensions and introduces its own host of controversial scope problems which Python has struggled with throughout the years

is not decisive. I can argue the point if desired.

Regardless of their merits or disadvantages, since Python-style scopes are definitely breaking, my understanding is that they would not even be considered until 2.0. So I am not sure this is the best time for this discussion.