d = Dict(:a => :(y+1), :b => :(y+1))
y = 100
NamedTuple{eval(Expr(:tuple, :(keys(d)...)))}((eval(x) for x in values(d)))
I tried @generated:
d = Dict(:a => :(y+1), :n => :(y+10))
@generated function gf(y)
    t = Expr(:tuple, :(keys(d)...))
    v = Expr(:tuple, :(values(d)...))
    return :(NamedTuple{$t}(eval.($v)))
end
So I can’t understand how exactly it should work. Is there any way to do something like this?
eval and all related functions and macros execute expressions in the global scope; this restriction exists because evaluating arbitrary code in a local scope would stop any chance of optimization and would likely break the method depending on the expression. The error is thus easy to explain: your expressions look for a global variable y, which is completely unrelated to the local argument y. If you want to use that argument, you could interpolate its value and :(y+1) into a bigger expression, preferably a let block so you don’t have to assign global variables, which can never be removed.
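For example, here’s a rough sketch of that let-block approach, reusing the Dict d from your first snippet (the helper name nt_from_dict is made up; the argument’s value gets spliced into the expression before eval, so the global y never matters):
function nt_from_dict(d, y)
    names = Tuple(keys(d))
    vals = Tuple(eval(:(let y = $y; $ex end)) for ex in values(d))  # eval runs in global scope, but y is bound inside the let
    NamedTuple{names}(vals)
end

nt_from_dict(d, 100)  # (a = 101, b = 101)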
It’s also worth noting that you’re not doing generated functions in the Julia sense until the @generated example, and there’s no need for that to be a generated function. It’s not clear what you actually want to do, but we’d probably call it something different, and I’ll withhold further suggestions in case my guesses are confusingly wrong.
function modelfunc(mexpr, pexpr, args, params)
    pex = Expr(:tuple, params...)
    aex = Expr(:tuple, args...)
    x = quote
        function $aex   # anonymous function taking the names in args
            $(mexpr)
            $(pexpr)
            $pex
        end
    end
    eval(x)
end
Okay, it’s clearer now: you want to make callables to call later. There are several layers to this that I don’t fully know, but I’ll try to stick to the bigger points and make it digestible. Right now, you’re evaluating new anonymous functions, with one method each, into the global scope. This will perform like named functions, which comes with the typical downsides:
Every method needs to be compiled anew, even if other anonymous functions’ methods share its body. The more methods you make, the more time you spend on compilation, and the fewer times each method is called, the less time that compilation saves. In the worst cases, interpreting the code (JuliaInterpreter.jl) actually saves time.
While you’re still inside a local scope after the eval call, you can’t call the method yet, even though the function exists, because the method definition only exists in a newer, updated world age. You need to reach the global scope’s updated world age before calls can use that method (a small sketch follows after these points). World age adds a bizarre lag to dynamic evaluation (you can opt out with @invokelatest), but it’s what allows method calls to be interactively optimized at all (invokelatest is unoptimizable).
Functions (or rather their types) live forever in global scopes, even anonymous ones or ones defined from local scopes. This is done for compiler optimizations (basically, the compiler needs const names). Making an arbitrarily large number of functions will keep depleting your memory, unlike some other dynamic languages where functions can be garbage-collected like any other object. A method’s compiled code can be made obsolete by overwriting the method, but it lives on in previous world ages for any still-running task that may rely on it.
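Here’s the promised sketch of the world-age lag from the second point (the function name is made up; Base.@invokelatest needs Julia 1.7 or newer):
function build_and_call(ex)
    f = eval(:(() -> $ex))    # defines a new anonymous function and its method
    # f()                     # MethodError: the method is too new for this scope's world age
    Base.@invokelatest f()    # call in the latest world age instead
end

build_and_call(:(1 + 1))  # 2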
To summarize, interactive and optimizable functions come with drawbacks that make evaluating methods scale poorly. There have been many ways to address these, but I’m only somewhat familiar with two:
RuntimeGeneratedFunctions.jl evades the world-age issue (2). Instead of eval-uating new methods for possibly new functions, it caches the method body and splices it into one @generated function’s method for compilation. Same performance, but it sacrifices many features of normal functions, like calling each other by name or having multiple methods, though it’s intended for cases where those aren’t needed.
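A rough sketch of your modelfunc using that package, based on its README usage (treat the details as approximate; the name rgf_modelfunc is made up):
using RuntimeGeneratedFunctions
RuntimeGeneratedFunctions.init(@__MODULE__)

function rgf_modelfunc(mexpr, pexpr, args, params)
    body = quote
        $mexpr
        $pexpr
        $(Expr(:tuple, params...))
    end
    ex = Expr(:function, Expr(:call, :_model, args...), body)  # build a function-definition Expr
    @RuntimeGeneratedFunction(ex)                               # no eval of a new method
end

f = rgf_modelfunc(:(z = x + y), :(w = 2z), (:x, :y), (:z, :w))
f(1, 2)  # (3, 6), callable right away with no world-age wait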
DynamicExpressions.jl builds expressions without any corresponding functions, methods, or even compilations of such, evading the excessive compilation time (1), the world-age issue (2), and most importantly the memory buildup (3). Only the manually specified, individual operators are compiled, which sacrifices optimizations across whole expressions, but it can still get close to top speed for simple expressions (like your example) where there’s not much to optimize (note that the comparative timings in the package’s README are misleading due to a non-equivalent function and poor use of benchmarking). These Expressions take more effort to write, are awkward to call on inputs, and also lack normal function features, but it’s well worth evading all the above drawbacks when it matters.
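And a minimal sketch in the DynamicExpressions.jl style, adapted from its README (the exact API may differ between versions):
using DynamicExpressions

operators = OperatorEnum(; binary_operators=[+, -, *], unary_operators=[cos])
x1 = Node{Float64}(feature=1)   # stands in for the 1st input feature
tree = x1 * cos(x1 - 3.2)       # an expression object; no method or per-expression compilation

X = randn(Float64, 1, 100)      # 1 feature x 100 samples
tree(X, operators)              # evaluates the expression over the whole batch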