Could performance of globals be improved by a #265-like approach?

As I understand it, the fundamental problem with the performance of variables in global scope is that dynamic type checks and dispatch have to be done at every access, since the types of globals can change after definition. What would happen if, instead, the compiler assumed that globals never change type, recorded which globals each function references, and recompiled the affected functions whenever that assumption was violated, as is done to solve #265?
To me, this seems like it would solve the problem in the common case where globals don’t have their type redefined, and it would give a better story for why breaking the rule is slow.
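For context, the cost being described here is easy to demonstrate (a minimal sketch; the names are made up for illustration):

```julia
# A non-const global: the compiler can't assume its type, because any later
# top-level code may rebind it to a value of a different type.
counter = 0

bump() = counter + 1   # `counter` is inferred as ::Any inside this function

# Running `@code_warntype bump()` highlights the Any-typed access.

# A `const` binding fixes the type, so the same code becomes type-stable:
const STEP = 1
bump_const() = STEP + 1
```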

P.S. I’m sure this is a stupid idea, because it seems like a really obvious solution and yet it hasn’t been done. I’m mainly wondering why it is stupid.


I think of bad performance of globals as a feature, not a bug :smile:



Because it encourages you to avoid globals. Which coincidentally makes code more readable, more testable, easier to reason about etc.
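To illustrate that point with a sketch (hypothetical names): the same computation that is type-unstable when it reads a global becomes type-stable when the value is passed in as an argument.

```julia
scale = 2.0   # non-const global

# Reads the global inside the loop; `scale` is ::Any to the compiler.
total_global(n) = sum(scale * i for i in 1:n)

# Same logic, but the value arrives as an argument with a concrete type.
total_arg(scale, n) = sum(scale * i for i in 1:n)
```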


To some extent, this can work. You can simulate this behavior by defining your global as a function rather than a value:

julia> my_global() = 100
my_global (generic function with 1 method)

julia> function do_stuff_with_global()
         return my_global() + 1
       end
do_stuff_with_global (generic function with 1 method)

julia> do_stuff_with_global()
101

julia> my_global() = 2
my_global (generic function with 1 method)

julia> do_stuff_with_global()
3
But there’s a problem here: in order to change the “value” of my_global, you need to redefine that function at the top level. If you want to do that inside another function, that means you need to use eval(), and once you do that you need invokelatest to get the re-defined value within that function:

julia> function modify_global()
         x = my_global()
         @eval my_global() = 200
         return Base.invokelatest(my_global) + x
       end
modify_global (generic function with 1 method)

julia> modify_global()
202
But invokelatest is an escape hatch: it tells the compiler that it’s okay to give up on figuring out the return type of invokelatest(my_global), so the resulting function is type-unstable:

julia> @code_warntype modify_global()
Variables
  #self#::Core.Compiler.Const(modify_global, false)
  x::Core.Compiler.Const(200, false)

Body::Any
1 ─      (x = Main.my_global())
│   %2 = Core.eval::Core.Compiler.Const(eval, false)
│   %3 = $(Expr(:copyast, :($(QuoteNode(:(my_global() = begin
          #= REPL[18]:3 =#
          200
      end))))))
│        (%2)(Main, %3)
│   %5 = Base.invokelatest::Core.Compiler.Const(Base.invokelatest, false)
│   %6 = (%5)(Main.my_global)::Any
│   %7 = (%6 + x::Core.Compiler.Const(200, false))::Any
└──      return %7

That means that any advantage of these function-like globals is lost as soon as you try to modify them, which kind of sinks the whole approach.

The point here is that the solution to issue #265 relied on the observation that you rarely need to redefine a method and then call the redefined method inside the same function. But you might actually want to change the value of a global variable and then access that value within the same function, so that approach doesn’t make as much sense for global variables.


I guess a different task might also change the type or value of a global during a function call.

AFAIK the plan is to eventually get typed globals that are performant, e.g. my_global::Int = 10. But since you can already have exactly that with const my_global = Ref(10), at the minor cost of writing my_global[] everywhere, I doubt we will even see that in 2.0.
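The const-Ref workaround mentioned above looks like this in practice (a sketch; the names are illustrative):

```julia
# The binding itself is constant, so the compiler knows my_global[] isa Int,
# but the value stored inside the Ref can still be mutated.
const my_global = Ref(10)

read_it() = my_global[] + 1   # type-stable access

my_global[] = 42              # change the value without touching the binding
# my_global[] = "hi"          # would error: a Ref{Int64} can't hold a String
```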


The solution to #265 was actually to flip it around and make that assumption true. It defined that function definitions can’t change out from under already-running code, then provided an escape hatch (eval and invokelatest) to allow running in a new context with a different set of constant definitions. This could be applied to global constants by applying essentially the transform @rdeits described above, though I’m worried it could be rather confusing and regress performance in some cases.

This might actually be an approachable first issue for someone wanting to dive into the core and implement it. It’s not trivial, but it’s also not unreasonably coupled to other systems. It should be possible to incrementally teach various parts of the compiler to observe the type information and optimize with it. Thus, no need to wait for 2.0 (it’s not breaking).


@jameson are there cases where this would cause regressions other than when a global was assigned a value of a different type?

It depends on what assumptions you’re willing to make about the user, and how complex a behavior you’re willing to potentially accept.