I am not sure why you think so. I am merely disagreeing with you about modifying variables in closures being dangerous, or Julia’s implementation being different from other languages that actually have closures.
In fact, it is quite standard, and has been around since the 1970s (early Scheme had it, then various Lisps).
AFAIK Python did not have closures until recently, so this is understandable.
Rust? If you define a nested function in the “natural” way it’s not a closure; if you define it as an anonymous function with `||`, it is a closure.
This doesn’t really affect your point at all, just provides an example of one such language where there’s a difference between inner functions and closures.
The global keyword allows changing the value of global variables from within a function.
X = 1
def f():
    global X
    X = 2
f()
assert X == 2
global works only on module-scoped variables, not on variables defined in an enclosing function.
PEP 3104, created in 2006 and implemented in Python 3.0, introduced the nonlocal keyword, which allows changing the value of a variable defined in enclosing scope from within an inner function.
def f():
    x = 1
    def g():
        nonlocal x
        x = 2
    g()
    return x
assert f() == 2
Before that, an inner function could refer to variables in an enclosing function
def f():
    x = 1
    y = 10
    def g():
        x = 2
        return x + y
    z = g()
    return x, y, z
assert f() == (1, 10, 12)
but there was no way to change them.
def f():
    x = 1
    def g():
        x += 1
    g()
    return x
f()  # UnboundLocalError: local variable 'x' referenced before assignment
Thanks for the concise review of Python’s closure behavior. It seems like Python is the main language that people are coming from when they find Julia’s closure behavior to be unexpected, especially since Julia’s closure behavior is, as @Tamas_Papp and I have both noted, completely standard for Lisp and most other programming languages that allow inner functions that capture local variables — it is the standard notion of what a “closure” is. When people over the decades have complained that functional programming in Python is hard because it doesn’t have closures, this is what they’re complaining about.
It’s interesting that Rust makes a distinction between an inner function and a closure based on syntax. I can see the logic behind that, but it adds complexity to the language: in Julia there is only one kind of inner function / closure and the do syntax is a purely syntactic convenience; in Rust there are at least two different kinds of inner function that behave differently and that programmers have to understand.
I find this kind of behavior really helpful when working with optimization packages and I have a really complicated object to be optimized. I just pass that object to other packages’ optimizers using an anonymous function (closing over the many extra arguments of the original objective function) and get my optimized object back.
Note that although MATLAB has both closures and non-closing inner functions, they’re the opposite way around from Rust: the thing that looks like a regular function definition is non-closing in Rust but closing in MATLAB.
It can be annoying that Julia closures make it easy to accidentally use an outer name, especially when combining threads and closures (where you can get all sorts of nonsense). I think cases without parallelism are mostly harmless, though.
Indeed! That’s one of the few things I liked when programming in C++…
Here’s a simple example for people unfamiliar with C++:
#include <iostream>
#include <functional>

void call_f(std::function<void()> f) {
    // Just call f
    f();
}

int main() {
    int x = 5;
    // This lambda can read `x`
    call_f([x]{ std::cout << x+10 << "\n"; });
    // This lambda can change `x`
    call_f([&x]{ x = 4; });
    std::cout << "x: " << x << "\n";
    return 0;
}
// Test:
$ g++ a.cc && ./a.out
15
x: 4
It makes it clear which variable might be modified by the lambda.
We can also write [&]{ ... } to capture all necessary variables by reference (approximately equivalent to the Julia behavior), and [=]{ ... } to capture them by copy.
While I don’t want to echo Dijkstra’s comment about BASIC in relation to Python 2 (it deserves much better), a lot of its design choices make transitioning to languages that mesh better with a functional style difficult.
I mean, I get the design philosophy of Python: take anything that might be confusing to someone and just disallow it. Reassigning an outer local from an inner function body? Nope, you have to use nonlocal. Scopes smaller than functions? No, if a local variable is visible somewhere in a function, it’s visible everywhere. That causes issues with closures capturing variables in loops, you say? Oh well. We didn’t really want people using closures anyway. It’s a point of view. But it makes for what feels to me like a very fussy, unsmooth programming experience.

Moving the same code between a for loop and a comprehension? Might change what it does. Moving code in or out of an inner function? Also might change what it does. That’s just the way Python is: the meaning of code is very context dependent, and you can’t just freely move things around and expect them to behave the same. This is true of imports as well: order matters very much in Python, whereas in Julia, if you change the order of your imports, as long as there are no warnings, it does the same thing both ways. The Python behavior is simpler but more brittle; the Julia behavior is more subtle but carefully designed so you can rearrange your code without problems.
Being able to smoothly move code around and have it mean the same thing is a pretty important principle in the design of Julia. One not-entirely-obvious reason that’s important is macros. Why? Well, when you write @foo x = f() you don’t know exactly how x = f() is going to be evaluated. Maybe @foo just adds a few expressions before and after x = f() like the @time macro does. Or maybe it makes x = f() the body of a closure and then calls that closure. You can’t tell. And it mostly doesn’t matter since x = f() behaves the same either way. If Julia behaved like Python, you’d have to know how @foo is implemented since you’d have to change it to be @foo nonlocal x = f() instead if it was implemented with a closure. Maybe there could be some macro tool to rewrite the expression to add nonlocal to any modified captures. But that’s hard to do correctly in all cases. (How does the macro know if something is global or nonlocal, for example?) And it’s another gotcha that macro writers need to watch out for and are likely to get wrong. The closure behavior in Julia (like Lisp, from which it was copied) means that you don’t have to worry about this. (Python’s solution: no closures or macros. Except it’s got closures now, they’re just awkward to use, and it’s edging towards macros and this is a gotcha that makes metaprogramming harder than necessary.)
julia> function f1()
           x = 1
           g() = (x = 2; x)
           @show g()
           @show x
       end;

julia> f1();
g() = 2
x = 2

julia> g() = (x = 2; x);

julia> function f2()
           x = 1
           @show g()
           @show x
       end;

julia> f2();
g() = 2
x = 1
Note that the explicit annotations required by Python’s nonlocal, Rust’s mut, and C++’s & can prevent this type of “bug.”
I wouldn’t argue that Python’s design of inner functions is the best approach, but I’d stress that it has nice properties (which could just be a coincidence) that are worth analyzing, especially when it comes to concurrent and parallel programming. I believe there is something to it beyond the beginner-friendliness you concluded it was.
I also don’t think it’s a good idea to look at Scheme etc. and conclude that environment mutation is the defining property of inner functions. For example, Clojure obviously emphasizes closures, but it also focuses on immutability. You have to use something like Julia’s Ref to communicate mutations inside a closure back to the outer function. I can understand the perspective that this works only because Clojure is a functional language that focuses on immutability, as you mentioned in the comment above. But Rust shows that you can learn lessons from purely functional languages even when designing an imperative language (if it emphasizes concurrency and parallelism).
I think I see how these two sentences are throwing OP off.
Explicit declaration works in Julia too: in any local scope, writing local x declares a new local variable in that scope, regardless of whether there is already a variable named x in an outer scope or not.
This part is pretty straightforward. local x declares a new variable in its scope, different from other variables named x in outer scopes.
Declaring each new local like this is somewhat verbose and tedious, however, so Julia, like many other languages, considers assignment to a new variable in a local scope to implicitly declare that variable as a new local.
The second sentence, though, seems to suggest that local x is an optionally explicit keyword for something Julia already does by default…if the reader doesn’t pick up on a subtle detail. Elsewhere in these two sentences, “new variable”/“variable as a new local” meant the variable was new to the declaration or assignment’s scope, but “new variable in a local scope” meant the variable was also new to all local scopes containing the assignment’s scope.
I think it’s pretty understandable for someone unfamiliar with all this terminology to read “this variable is local to its scope” and think “local variable” implies a context specific to the variable’s home scope. Maybe it’s simpler to tell people:
The global scope contains local scopes that can contain other local scopes
we call a variable in the global scope a global variable
we call a variable in any local scope a local variable
inner scope can access a variable that it does not declare or assign from an outer scope
inner local scope assigns to existing variables from outer local scopes by default
local scope does not assign to global variables except in interactive contexts e.g. the REPL, because unlike local scopes, the global scope is not confined to one file.
It’s certainly not the defining characteristic of an inner function — that’s just that the function definition is syntactically inside the body of another function, there’s no semantic implication (to me). It could be completely separate from the outer function or it could be a closure. The defining characteristic of a closure is that it “closes over” outer local variables. That means that if you read an outer local from a closure, it will have the binding it had in the outer scope. In a language that allows reassignment of locals at all — which Clojure does not — if you reassign an outer local from a closure, it changes the outer binding. Since Clojure doesn’t allow reassignment of locals at all, its inner functions are closures in this sense. If it allowed local assignment but assignment from an inner function was disallowed or did something different like creating a new local, that would be a different story, but it’s not allowed at all.
Again, in purely functional languages that don’t allow reassignment of locals, there’s no way to distinguish capture by reference from capture by value, so they cannot possibly argue for one or the other. The logical structure is that some people are saying “if I do A then B should happen” while others are saying “if I do A then C should happen”. The pure functional languages are not saying either of these things: they are saying “we don’t allow A in the first place”. You cannot say that this position is in favor of B or C — it’s equally in favor of and against both, since it’s against A being possible at all.
Yeah, you are right. The word “inner” is purely a syntactic property. It does not specify the concept I wanted to bring up. I probably should’ve said value-or-variable-capturing-maybe-stateful-inner-function or something. I don’t really know a good word for this…
I think I understand this and I thought I could bring up a different point.
I was bringing up Clojure as an example language that defaults to “capture by value” where “capture by reference” is optionally possible (i.e., it can do Scheme-like impure “functional” programming), even though it’s a functional language focusing on immutability. I thought it was interesting because Rust came to a somewhat similar design when approaching from the side of imperative programming; i.e., you need to annotate the closure with mut. It’s a convergence of the design from both directions.
Using your analysis of the logical structure, I think the question is more about “which one should be the default, A => B or A => C?” Julia, Clojure, Rust, Python, etc. all let you write (roughly speaking) two types of inner functions: (1) variable/object updates in it are observable by the outer function and (2) the values defined in the outer function are “copied” to the inner function. I think the main difference is the default behavior, rather than which one is possible (unless you are using Haskell).
Theoretically we could have our cake and eat it: capture-by-value could be considered a subset of capture-by-Core.Box Julia. The idea would be that some people opt into a strict mode – either by linter or by core language support – that turns capture-by-Core.Box into an error. People could opt into strict mode either on a per-file basis or on a per-begin-end-block basis. (One would like to support both ways, for modules that include other files, with varying strictness.)
I’m not sure a linter would be able to do this well, because one needs to evaluate code (because of macros and eval and @generated); naively looking at source-code is not enough.
For this, we would need to identify and communicate a reasonable set of sufficient conditions that prevent Core.Box emission, commit to guaranteeing that these won’t emit Boxes, and then check that during lowering (of course, some patterns that are errors in strict-mode might still be compiled into box-free code).
A simple rule could be: a closure / inner function / anonymous function is valid if and only if every closed-over variable is in static single assignment form (lexically! We don’t have dom-trees during lowering, so x = 1; x = 2; foo() = x; would be invalid, as would be @label jump_here; x = 1; foo() = x;) and this single static assignment “lexically dominates” the closure creation. “Lexically dominates” e.g. means that there is no @label / @goto jumptarget between the assignment and its use; we would need to look at the current implementation (in femtolisp) and identify a communicable rule operating on macro-expanded expression trees (pre-lowered code) that is sufficient.
This strict mode would only restrict valid julia code; it would be guaranteed that code that lowers under strict-mode without errors has identical behavior and emitted machine-code as non-strict code.
A bit off topic, but it feels like there are many places where one would like to change the default semantics of julia code (similar to C/Fortran compiler flags). Not only in this proposal but also in @fastmath, @simd, @inbounds, @changeprecision, @optlevel, @views… Currently this is mostly done with macros that are somewhat inconsistent in where they need to be applied (loop, function, module…), some require additional levels of begin/end, and chaining them feels dangerous. It would be nice if there was a more consistent way of doing it. Something like
module MyModule
@compiler_opts simd=true opt_level=2 views=true precision=Float32
...
end