Even if you got this to work, the performance would be terrible, the maintainability would be worse (the external caller would have to know the implementation details of the function they are calling), and in general this sounds like a classic XY problem.
What is your ultimate goal here? What kind of functionality do you want to achieve?
My ultimate goal is quite complex and well beyond the scope of a single question
I’m searching, using Evolutionary Algorithm, for different implementations for a function to achieve a certain objective.
My search candidates are pieces of code represented as Expr objects, which work on a predefined set of variables of known types and names. So the “external caller” can be assumed to be fully aware of the “internal” implementation, that’s not a problem.
In the simple code example above, the variables are [v::Integer], but in the real use-case there are many variables of different names and types.
And the example candidate Expr is v=v+1, and the objective is for v to be 100.
Again, of course in the real case, there are more variables and a more complex and realistic optimization problem.
I want to evaluate ex_v() with different hook candidates, and get the “score” (return value) for each of them.
Please let me know if this explanation makes things more clear. Thanks for the help!
In other words, your Expr objects are the body of a function (v1,v2,...) -> body, where v1,v2,... are some set of known variables, and the function body returns some value, or maybe the new values of the variables?
One way is to let f = eval(:((v1,v2) -> $body)) to get a function f(v1,v2) and then work with that subsequently. Or f = eval(:((v1,v2) -> ($body; return v1,v2))) if you want to return the new values of the variables.
This way you get a function object that can be compiled and run quickly if you want to run it many times for different values of the parameters if needed.
(Conversely, if you only want to evaluate each expression a few times, it might not be worth paying the cost of compiling the expression and you might want to write a little interpreter instead, or use something like JuliaInterpreter.jl.)
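To illustrate the interpreter route without pulling in a package, here is a minimal hand-rolled Expr walker. This is only a sketch: it handles literals, variable lookups, calls to Base functions, assignments, and blocks, and all names here are illustrative:

```julia
# Minimal Expr interpreter: walks the AST with an environment Dict,
# supporting only literals, symbols, calls, assignment, and blocks.
function interp(ex, env::Dict{Symbol,Any})
    ex isa Symbol && return env[ex]
    ex isa Expr || return ex  # numeric and other literals evaluate to themselves
    if ex.head === :call
        f = getfield(Base, ex.args[1])  # assume operators/functions live in Base (+, *, ...)
        return f((interp(a, env) for a in ex.args[2:end])...)
    elseif ex.head === :(=)
        val = interp(ex.args[2], env)
        env[ex.args[1]] = val
        return val
    elseif ex.head === :block
        val = nothing
        for a in ex.args
            a isa LineNumberNode && continue
            val = interp(a, env)
        end
        return val
    else
        error("unsupported expression head: $(ex.head)")
    end
end

env = Dict{Symbol,Any}(:v => 0)
interp(:(v = v + 1), env)
env[:v]  # now 1, with no eval, no compilation, and no world-age issues
```

Each evaluation is slow compared to compiled code, but there is zero per-candidate compilation cost, which wins when every candidate is only run a handful of times.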
julia> function ex_v(v_init::Integer, hook::Expr)
           f = eval(:(v -> ($hook; v)))
           return Base.invokelatest(f, v_init)
       end
ex_v (generic function with 1 method)

julia> ex_v(0, :(v = v+1))
1
Again, however, this is a pretty slow way to evaluate an expression unless you are calling f many times and not just once as is done here, and you’d probably be better off with an interpreter if you only need a few evaluations.
More generally, the Expr data structure is not really designed for rapid evaluation and manipulation. In a GA situation like you describe, you’d likely be better off devising a custom byte-code representation for the specific types of expression you want to search, and work with that.
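For example, a custom representation could be as simple as a vector of instruction tuples over a small register file. A toy sketch (the opcode set here is invented purely for illustration):

```julia
# Toy "byte-code": each instruction is (op, dest, src1, src2) over integer registers.
# :load puts a constant into a register; :add and :mul read two registers.
function run_prog(prog, regs::Vector{Int})
    for (op, d, a, b) in prog
        if op === :load
            regs[d] = a
        elseif op === :add
            regs[d] = regs[a] + regs[b]
        elseif op === :mul
            regs[d] = regs[a] * regs[b]
        end
    end
    return regs
end

# Encodes z = v + x*y, with v, x, y, z in registers 1..4:
prog = [(:load, 2, 10, 0), (:load, 3, 5, 0), (:mul, 4, 2, 3), (:add, 4, 1, 4)]
regs = run_prog(prog, [1, 0, 0, 0])
regs[4]  # 1 + 10*5 == 51
```

A flat representation like this is also much cheaper to mutate and cross over in a GA than rewriting Expr trees, and evaluating it never touches eval or the compiler.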
Important note about Genetic Algorithms (GA): while defining a new function (a GA ‘solution’) should be fast, each function will be executed many times, so execution/evaluation speed is far more important.
Well, yes, I could do something like writing LLVM directly. However, in my use-case, I would like to use Julia’s compilation and optimization for the evaluated code – it will start from an Expr so no expensive parsing is needed, but any further processing is actually welcome.
TL;DR: I’ve progressed a lot further, but still need to get dynamic execution much faster. See all details below.
The most simple code that accomplishes what I was looking for:
function get_ex_v(v_init::Integer, hook::Expr)
    return quote
        v = $v_init
        x = 10
        y = 5
        $hook
        z
    end
end

hook1 = :(z = v + x*y)
hook2 = :(z = v + x/y)

@assert eval(get_ex_v(1, hook1)) == (1 + 10*5)
@assert eval(get_ex_v(2, hook2)) == (2 + 10/5)
However, as @stevengj mentioned very correctly, the performance is horrible when using eval() many times to run the code:
v = v_init
x = 10
y = 5
z = v + x*y

BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):  118.164 μs … 593.284 μs  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     123.804 μs               ┊ GC (median):    0.00%
 Time  (mean ± σ):   126.664 μs ±  25.473 μs  ┊ GC (mean ± σ):  0.00% ± 0.00%

  118 μs          Histogram: frequency by time          168 μs <

 Memory estimate: 3.22 KiB, allocs estimate: 61.
So actually I should call eval() only once (for each GA candidate):
# hooks that also read z, so the result depends on the initial z = 400
hook1 = :(z = z + v + x*y)
hook2 = :(z = z + v + x/y)

function define_ex_v_func(hook::Expr, funcname::Symbol)
    eval(quote
        function $funcname(v_init::Integer)
            v = v_init
            x = 10
            y = 5
            z = 400
            $hook
            return z
        end
    end)
end

define_ex_v_func(hook1, :ex_v1)
define_ex_v_func(hook2, :ex_v2)

@assert ex_v1(1) == (401 + 10*5)
@assert ex_v2(2) == (402 + 10/5)
I want to create many dynamic functions at runtime.
So, let’s create a random function name and refer to it by its symbol.
So I got to this:
using Random

function create_ex_v_func(hook::Expr)
    new_func_name = Symbol("ex_v_" * randstring(12))
    eval(quote
        function $new_func_name(v_init::Integer)
            v = v_init
            x = 10
            y = 5
            z = 400
            $hook
            return z
        end
    end)
    return getfield(Main, new_func_name)
end

# assumes hooks that read z, e.g. hook1 = :(z = z + v + x*y)
ex_v_f1 = create_ex_v_func(hook1)
ex_v_f2 = create_ex_v_func(hook2)

@assert ex_v_f1(1) == (401 + 10*5)
@assert ex_v_f2(2) == (402 + 10/5)
Finally resolved? Not quite…
There’s a big difference between the performance of calling the function directly by name, and invoking it through a pointer:
Those timings are fictitious and an effect of the compiler having constant folded the function and immediately returning the pre-computed result. Sanity check: look up your processor’s clock frequency and compute how many clock cycles it has time for in the reported time. If this seems unreasonably low compared to the computations you are doing, something is likely off with the benchmark.
You are probably better off just using anonymous functions.
That’s an effect of using non-const globals and in particular that it inhibits the constant folding. This is maybe a better illustration (and also a demonstration of using anonymous functions):
julia> using BenchmarkTools

julia> f1 = x -> x^2
#1 (generic function with 1 method)

julia> const f2 = x -> x^2
#3 (generic function with 1 method)

julia> x = 5
5

julia> @btime f1($x)
  13.077 ns (0 allocations: 0 bytes)
25

julia> @btime f2($x)
  1.123 ns (0 allocations: 0 bytes)
25
However, I want to search for candidate functions, so let’s just put the dynamic function creation into a loop, shall we?
mutable struct state
    x::Int
end

function generate_candidate_func(param)
    f = eval(:(state -> begin
        for t in 1:$param
            state.x = state.x + 1
        end
    end))
    return f
end

desired_score = 50
for i in 1:100
    candidate_func = generate_candidate_func(i)
    s = state(0)
    # ---> error here
    # MethodError: no method matching (::var"#1#2")(::state)
    # The applicable method may be too new: running in world age 31336, while current world is 31337.
    # Closest candidates are:
    #   (::var"#1#2")(::Any) at In:10 (method too new to be called from this world context.)
    candidate_func(s) # mutates s
    score = abs(desired_score - s.x)
end
The applicable method may be too new?
Everything works fine outside the for loop, so probably some magic parallel optimization is happening behind the scenes…
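It is not a parallelism issue but Julia’s world-age mechanism: the eval’ed method is created in a newer “world” than the one the running loop was compiled in, so a direct call cannot see it. One common fix is Base.invokelatest (already used in the ex_v example earlier). A self-contained sketch of the loop, with names changed slightly to avoid clashing with the snippet above:

```julia
mutable struct State
    x::Int
end

# eval returns a closure created in a *newer* world than the running loop,
# so calling it directly raises the "method too new" MethodError;
# Base.invokelatest performs the call in the latest world instead.
generate_candidate(param) = eval(:(st -> begin
    for t in 1:$param
        st.x += 1
    end
end))

function search(desired_score)
    best = typemax(Int)
    for i in 1:100
        candidate = generate_candidate(i)
        s = State(0)
        Base.invokelatest(candidate, s)   # instead of candidate(s)
        best = min(best, abs(desired_score - s.x))
    end
    return best
end

search(50)  # 0: the candidate with param == 50 hits the target exactly
```

invokelatest adds dynamic-dispatch overhead on every call, so for the GA inner loop it is worth batching: create the candidate, then run all of its evaluations from a fresh top-level call (or a function entered after the eval), where the method is no longer “too new”.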