How to Bypass the World Age Problem

I finally managed to move some of my code to v0.6. The World Age problem was a shock, as it clashes with one of the main design principles of my code. I’ve read the documentation and all the Discourse topics I could find, but I still don’t know how to get around it. Here is a typical example (an MWE; the real use case is of course much more complex):

genfun() = eval( :(r -> exp(r)) )    # generate a julia function (model) from an expression
workwithfun(f) = f(1.0)              # do some work with the function (model)
f = genfun()
workwithfun(f)

This works fine, but as soon as I try to execute the same code from inside a function instead of as a script, I run into the World Age Problem:

genfun() = eval( :(r -> exp(r)) )    # generate a julia function (model) from an expression
workwithfun(f) = f(1.0)              # do some work with the function (model)
function test()
   f = genfun()
   workwithfun(f)
end 
test()   
# ERROR: MethodError: no method matching (::##3#4)(::Float64)
# The applicable method may be too new: running in world age 21827, while current world is 21828.
# Closest candidates are:
#   #3(::Any) at REPL[12]:1 (method too new to be called from this world context.)
# Stacktrace:
#  [1] test() at ./REPL[16]:3

This is in fact exactly the problem described at the beginning of Redefining Methods.

To explain my application: In genfun I would normally generate a function describing a model symbolically, then differentiate it symbolically, wrap it into a type that specifies other details of the model. This is returned to the caller where the function (model) is then used in various ways. (if anybody is curious, they can look at this issue)

What really struck me is that if I use FunctionWrapper, the problem seems to go away:

using FunctionWrappers: FunctionWrapper
genfun() = eval( :(r -> exp(r)) ) |> FunctionWrapper{Float64, Tuple{Float64}}
workwithfun(f) = f(1.0)
function test()
   f = genfun()
   workwithfun(f)
end 
test()   

This is probably my best workaround for now, but it comes with (1) non-generic code (though maybe this could be fixed); (2) some performance penalty, e.g. no inlining(?); (3) it just feels like a hack.

Is there no better way to achieve what I want?

CC @ettersi


It’s been pointed out to me that I could use a macro instead of eval in genfun. The main issue with this is that in practice I will also have parameters, e.g.,

genfun(a) = eval( :( r -> exp($a * r) ) )

and this doesn’t seem to work with macros?

The problem is that the world in which your new method is defined (with eval) is younger than the world in which the rest of the code is running, and hence the method is uncallable. I believe that’s a fundamental restriction without which you couldn’t have both a) well-optimized code and b) coherent redefinition semantics. Older Julia versions sacrificed b) to have a).

In any case, using @eval tends to solve these issues, since whatever it evaluates runs in the “latest world age”:

genfun() = eval( :(r -> exp(r)) )    # generate a julia function (model) from an expression
workwithfun(f) = f(1.0)              # do some work with the function (model)
function test()
   f = genfun()
   @eval workwithfun($f)
end 
test()

If you’re relying on this a lot, you might want to reconsider how your code works.

Why not genfun(a) = r -> exp(a * r)? Or have your macro expand into that code. What kind of symbolic manipulation do you do?

Interesting, thank you. I’ll need to think about whether this will help.

Only symbolic differentiation, really: I use Calculus.jl to differentiate twice, then eval the original expression and its derivatives to create three functions that I store in a type. The “model” is this type or a composition of several of these types.
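Roughly, the pattern looks like this (a simplified sketch: AnalyticModel and genmodel are stand-in names for my actual code; Calculus.differentiate takes an expression and a symbol and returns the derivative expression):

using Calculus

# hypothetical container for a model and its first two derivatives
struct AnalyticModel{F0,F1,F2}
    f::F0     # the model itself
    df::F1    # first derivative
    ddf::F2   # second derivative
end

function genmodel(ex::Expr)
   dex  = Calculus.differentiate(ex, :r)    # symbolic first derivative
   ddex = Calculus.differentiate(dex, :r)   # symbolic second derivative
   # each eval defines a new method, so calling these from inside a
   # function runs into the same world-age restriction as above
   AnalyticModel(eval(:(r -> $ex)),
                 eval(:(r -> $dex)),
                 eval(:(r -> $ddex)))
end

m = genmodel(:(exp(-2r)))    # at the top level, m.f(1.0) etc. work fine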

Base.invokelatest() lets you explicitly invoke a method as it’s defined in the current world age (a minimal sketch follows the list below). Drawbacks:

  • Base.invokelatest doesn’t accept keyword parameters, so all parameters should be positional
  • a little bit slower call (although it should still be faster than eval)
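Applied to the MWE from the first post, a minimal sketch would be:

genfun() = eval( :(r -> exp(r)) )
workwithfun(f) = Base.invokelatest(f, 1.0)   # dynamic call in the latest world age
function test()
   f = genfun()
   workwithfun(f)
end
test()   # works, at the cost of one dynamic dispatch per call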

In XGrad.jl, where I also generate a symbolic derivative and then make a function from it, I also used a wrapper function that caches generated functions and calls Base.invokelatest() under the hood. This way, users don’t have to bother about world ages and simply call a higher-level function.
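The caching idea, roughly sketched (made-up names, not XGrad’s actual API):

# evaluate each expression only once, then always call through invokelatest
const FN_CACHE = Dict{Expr,Function}()

function callgenerated(ex::Expr, x)
   f = get!(FN_CACHE, ex) do
       eval(:(r -> $ex))
   end
   Base.invokelatest(f, x)
end

callgenerated(:(exp(r)), 1.0)   # first call evals; later calls hit the cache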

I didn’t consider invokelatest to be a solution :). Do you use it for function derivatives (as opposed to expression derivatives)? Thanks for the suggestion.

What is the overhead for very simple functions? (I will benchmark it, of course, but it would be good to hear about your experience.)

Incidentally, I’ve been thinking of switching to XDiff when I have time. Tests are failing, so would you say I should wait a bit?

I only found one occurrence of invokelatest in XDiff, which is in runtests.jl, so how are you using it exactly?

I did some initial benchmarks in this gist; invokelatest is indeed very slow. I can probably temporarily at least live with FunctionWrappers, but they also incur a performance penalty.

Results

Reference Timing
  18.198 ms (0 allocations: 0 bytes)
AnalyticPotential( diff(Expr) ) from REPL
  20.093 ms (0 allocations: 0 bytes)
AnalyticPotential, slightly optimised
  17.876 ms (0 allocations: 0 bytes)
Function Wrappers
  33.105 ms (0 allocations: 0 bytes)
Invokelatest variant
  194.572 ms (8000000 allocations: 122.07 MiB)

MORE QUESTIONS:

  1. I still don’t understand why the world age problem is not a problem for FunctionWrappers. Is the reason that I am specifying the input and output types, so the compiler doesn’t need to infer them? Would it be possible to do this without FunctionWrappers, so that the function-call overhead is removed?

  2. The more I think about it the less I understand why my situation creates a problem. I understand the issue with redefining functions, but I am not doing that here. All I am doing is defining a function for the first time using some very basic metaprogramming, i.e. generating a function from its symbolic expression, and I am not allowed to use it immediately. It seems to me this is a significant restriction of the current implementation?

  3. Is it possible to define the new function via eval of an expression in the current scope? And would that get around the world age problem?

The xdiff function currently has two methods: one that takes an expression and outputs an expression, and another that takes a function and outputs a function (the second one uses the first one under the hood). Only the second one generates new callables, so Base.invokelatest() is applicable only to it.

I only found one occurrence of invokelatest in XDiff, which is in runtests.jl, so how are you using it exactly?

It’s in XGrad.jl, not XDiff.jl. I haven’t announced it yet, but in short, XGrad.jl is a simpler and more stable version of the previous package. It also provides a wrapper function xgrad which uses Base.invokelatest() to simplify things for end user.

What is the overhead for very simple functions?

From my tests, the overhead of Base.invokelatest() is constant, at roughly 40–50 nanoseconds. Since my primary use case is tensor algebra, with a single function call taking milliseconds, the cost of using invokelatest() is negligibly small for me. But after seeing your results with FunctionWrappers, I’m considering switching to it to improve run time for simple functions like yours.
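A quick way to measure this on your own functions (a hedged sketch using BenchmarkTools):

using BenchmarkTools

f = eval( :(r -> exp(r)) )
@btime ($f)(1.0)                     # direct call
@btime Base.invokelatest($f, 1.0)    # adds a roughly constant lookup cost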

To use XGrad, check out the latest master of Espresso.
If you want to use the older XDiff, check out version v3.0.0 of Espresso.

I can probably temporarily at least live with FunctionWrappers, but they also incur a performance penalty.

One nice thing about symbolic derivatives is that you can actually copy and paste the generated code into a “manually” written function. For example, I’m currently working on a package that will, among other things, provide a set of loss functions like mean squared error, cross entropy, etc. There’s no reason to calculate the derivatives of such functions every time the package is loaded. Instead, I’m going to get the derivative expression, manually optimize it if needed, and put it right into the file with the source code. If you have a more or less known set of functions, you can do the same thing and get maximum performance without any overhead at all.
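As a rough illustration of that workflow (dmodel is a made-up name):

using Calculus

# one-off, at the REPL: obtain the derivative expression, e.g.
Calculus.differentiate(:(exp(a * r)), :r)
# returns an expression equivalent to :(a * exp(a * r)); then paste it
# into the source file as an ordinary method, compiled like any
# hand-written function (no eval, no world-age issue):
dmodel(a, r) = a * exp(a * r)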

The more I think about it the less I understand why my situation creates a problem.

As far as I understand, world age solves the infamous #265 issue. In short, to compile a function f() that calls another function g() into efficient code, you need to link to a specific implementation of g(). If g() is later redefined, f() should be recompiled. But how do you know that one of f()'s dependencies has changed? World age assigns a “version” to each implementation of g() (or rather to each compilation cycle), making it easy to tell when f() is outdated. Defining a new function g() is not much different from redefining an existing one in this regard.
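A tiny illustration of the dependency problem that world ages track:

g() = 1
f() = g()   # f is compiled against the current definition of g
f()         # 1
g() = 2     # redefining g bumps the world age, marking f as outdated
f()         # 2 (before world ages, a cached f() could still return 1)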


Maybe I should ask question 2 differently: why exactly does my code execute OK as a script but not as a function? I guess the answer is that one is interpreted and the other compiled? OK, but it is very counter-intuitive!

Maybe I should ask question 2 differently: why exactly does my code execute OK as a script but not as a function? I guess the answer is that one is interpreted and the other compiled?

Yes. It’s necessary. If you run:

foo(x)
total = 0
for i in 1:100000
    total += sin(i)
end

If foo(x) calls eval, it can redefine anything: iteration over ranges, the definition of sin(::Int), and the definition of addition. It’s impossible to generate efficient code for the loop unless the compiler assumes that foo(x) does no such thing. The world age counter is just a mechanism to achieve that.

In Julia, eval is to be used only in very rare circumstances. Macros and generated functions are better ways of generating code on the fly.
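For example, a minimal macro version of the parameterized genfun from earlier might look like this (@genfun is a made-up name):

macro genfun(a)
    # expands at parse time into an ordinary closure; no eval, no new world age
    :(r -> exp($(esc(a)) * r))
end

workwithfun(f) = f(1.0)
function test(a)
   f = @genfun(a)
   workwithfun(f)
end
test(2.0)   # exp(2.0)

Note that this only helps when the shape of the expression is fixed at parse time; a macro cannot symbolically manipulate an expression that is first constructed at run time.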

I don’t know, but its author is a core contributor to the compiler, and he undoubtedly possesses knowledge beyond our mortal comprehension.

Conceptually you are correct, but the way the compiler works, it’s the same thing. In genfun(foo)(2), the compiler wants to know what object comes out of genfun in order to reason about this call, and the compiler cannot reason about the output of eval. In an alternate universe, it could be done differently.


I can probably temporarily at least live with FunctionWrappers, but they also incur a performance penalty.

No worse than before.

I still don’t understand why the world age problem is not a problem for FunctionWrappers.

It uses cfunction.

All I am doing is defining a function for the first time using some very basic metaprogramming, i.e. generating a function from its symbolic expression.

You are defining a new method.

Is it possible to define the new function via eval of an expression in the current scope?

Only if the “current scope” is global scope.

Code in global scope is allowed to have unknown side effects that are visible immediately. Not inside a function, since otherwise inference couldn’t infer anything about a function.

Thank you @yuyichao for chiming in.

Why exactly does my code execute OK as a script but not as a function?

Code in global scope is allowed to have unknown side effects that are visible immediately. Not inside a function, since otherwise inference couldn’t infer anything about a function.

Suppose I don’t care about inference in my function: is there a way I can tell it to execute as if in global scope? Maybe actually in global scope? Would @eval do that?

OK, I will need to revisit macros. Thank you for your comments.

That’s exactly what invokelatest is for.

If you look at the first post: I don’t care about performance in test, but I do care about performance in workwithfun.

Invokelatest does not help me there.

If you look at my previous post: the performance is “No worse than before.”

Invokelatest is exactly what you were implicitly using before, and it is what you should use now.