Pass expression to manipulate local variable

Hi! I want to pass an Expr to a function, so it will be able to interact with local variables inside the function.

What is the correct way to construct such an expression?

Here’s the basic structure of my function that accepts the expression “hook”:

function ex_v(v_init::Integer, hook::Expr)
    v = v_init
    eval(hook)
    return v
end

I tried the following, but I must be missing something basic, as none of them work:

ex_v(0, :(v = v+1)) # UndefVarError: v not defined
ex_v(0, :(:v = :v+1)) # syntax: invalid assignment location ":v"
ex_v(0, :($v = $v+1)) # UndefVarError: v not defined

According to an answer in another thread

eval always happens at global scope

So the question remains: how to pass an Expr to work on local scope?
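To make the quoted rule concrete, here is a minimal sketch (the names `global_v`, `local_v`, and `demo` are made up for illustration) showing that `eval` inside a function resolves names in the module's global scope, never in the function's local scope:

```julia
global_v = 10  # a top-level (global) binding, visible to eval

function demo()
    local_v = 1                   # a local variable, invisible to eval
    return eval(:(global_v + 1))  # works: global_v resolves in global scope
end

demo()  # returns 11
# eval(:(local_v + 1)) inside demo() would throw UndefVarError
```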

Some other similar questions:

You basically can’t do that but depending on what you need to accomplish, the suggestions here might be useful: Passing a string into function as a expression - #14 by GunnarFarneback.


Trying to build upon the solution suggested in Declaring top level variables in Julia using metaprogramming - Stack Overflow, but not getting the expected result: the Expr inside the function inside the macro doesn’t seem to be executed. What am I doing wrong?

@eval function f(t, hook)
    x = 0
    for i in 1:t
        esc(hook)
    end
    return x
end

f(100, :(:x = :x+1))

This returns 0 and not 100, meaning the hook isn’t being called.
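A note on why: `esc` only has meaning during macro expansion; inside an ordinary function it just constructs (and here discards) an `Expr(:escape, ...)` value at run time, so the loop body does nothing. A hedged sketch of an actual macro version of the same idea (the name `@repeat_hook` is made up) does splice the hook in, but only works when the hook is a literal expression at the call site, not a runtime `Expr` value:

```julia
macro repeat_hook(t, hook)
    # esc the whole expansion so that `x` here and `x` inside the
    # hook refer to the same variable at the call site
    esc(quote
        let x = 0
            for i in 1:$t
                $hook
            end
            x
        end
    end)
end

@repeat_hook 100 (x = x + 1)  # returns 100
```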

Trying to adapt it to my use-case, and can’t get it to work:

function f(t, expr_str)
    ex = Meta.parse(expr_str)
    return @eval begin
        x = 0
        for i in 1:$t
            $ex
        end
        return x
    end 
end

f(100, ":(:x = :x+1)")

It returns 0 rather than 100, which means the expression doesn’t manipulate the local x variable.

Can you please help me adjust the code so the Expr accesses the variables in the local scope?

Even if you got this to work, the performance would be terrible, the maintainability would be worse (the external caller would have to know the implementation details of the function they are calling), and in general this sounds like a classic XY problem.

What is your ultimate goal here? What kind of functionality do you want to achieve?


My ultimate goal is quite complex and well beyond the scope of a single question :sweat_smile:

I’m searching, using Evolutionary Algorithm, for different implementations for a function to achieve a certain objective.

My search candidates are pieces of code represented as Expr objects, which work on a predefined set of variables of known types and names. So the “external caller” can be assumed to be fully aware of the “internal” implementation, that’s not a problem.

In the simple code example above, the variables are [v::Integer], but in the real use-case there are many variables of different names and types.

And the example candidate Expr is v=v+1, and the objective is for v to be 100.

Again, of course in the real case, there are more variables and a more complex and realistic optimization problem.

I want to evaluate ex_v() with different hook candidates, and get the “score” (return value) for each of them.

Please let me know if this explanation makes things more clear. Thanks for the help!


In other words, your Expr objects are the body of a function (v1,v2,...) -> body, where v1,v2,... are some set of known variables, and the function body returns some value (or maybe the new values of the variables)?

One way is to let f = eval(:((v1,v2) -> $body)) to get a function f(v1,v2) and then work with that subsequently. Or f = eval(:((v1,v2) -> ($body; return v1,v2))) if you want to return the new values of the variables.

This way you get a function object that can be compiled and run quickly, which is useful if you need to run it many times for different values of the parameters.

(Conversely, if you only want to evaluate each expression a few times, it might not be worth paying the cost of compiling the expression and you might want to write a little interpreter instead, or use something like JuliaInterpreter.jl.)
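To illustrate the interpreter route (this is a hand-rolled sketch, not JuliaInterpreter.jl's API): walking simple `Expr` trees like `:(v = v + 1)` against a `Dict` of variable bindings avoids `eval` and its per-call compilation cost entirely:

```julia
# Minimal interpreter for assignments, arithmetic calls, and blocks.
function interpret!(vars::Dict{Symbol,Any}, ex)
    if ex isa Symbol
        return vars[ex]                          # variable lookup
    elseif ex isa Expr && ex.head == :(=)
        lhs, rhs = ex.args
        return vars[lhs] = interpret!(vars, rhs) # assignment
    elseif ex isa Expr && ex.head == :call
        f = getfield(Base, ex.args[1])           # look up +, *, etc.
        return f((interpret!(vars, a) for a in ex.args[2:end])...)
    elseif ex isa Expr && ex.head == :block
        local r
        for a in ex.args
            a isa LineNumberNode && continue
            r = interpret!(vars, a)
        end
        return r
    else
        return ex                                # literals
    end
end

vars = Dict{Symbol,Any}(:v => 0)
interpret!(vars, :(v = v + 1))
vars[:v]  # 1
```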


For example:

julia> function ex_v(v_init::Integer, hook::Expr)
           f = eval(:(v -> ($hook; v)))
           return Base.invokelatest(f, v_init)
       end
ex_v (generic function with 1 method)

julia> ex_v(0, :(v = v+1))
1

Again, however, this is a pretty slow way to evaluate an expression unless you are calling f many times, not just once as is done here; you’d probably be better off with an interpreter if you only need a few evaluations.

More generally, the Expr data structure is not really designed for rapid evaluation and manipulation. In a GA situation like you describe, you’d likely be better off devising a custom byte-code representation for the specific types of expression you want to search, and work with that.
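As a sketch of what such a byte-code representation might look like (everything here, the `Op` enum and the `execute` function, is invented for illustration, not an existing library): a flat instruction list over a small stack machine is cheap to mutate in a GA and cheap to evaluate, with no `eval` or world-age issues:

```julia
@enum Op LOAD_CONST LOAD_VAR STORE_VAR ADD MUL

# Execute a list of (opcode, operand) instructions against a
# vector of variables, using an explicit operand stack.
function execute(code::Vector{Tuple{Op,Int}}, vars::Vector{Float64})
    stack = Float64[]
    for (op, arg) in code
        if op == LOAD_CONST
            push!(stack, arg)
        elseif op == LOAD_VAR
            push!(stack, vars[arg])
        elseif op == STORE_VAR
            vars[arg] = pop!(stack)
        elseif op == ADD
            push!(stack, pop!(stack) + pop!(stack))
        elseif op == MUL
            push!(stack, pop!(stack) * pop!(stack))
        end
    end
    return vars
end

# v = v + x*y, with vars laid out as [v, x, y]:
code = [(LOAD_VAR, 1), (LOAD_VAR, 2), (LOAD_VAR, 3),
        (MUL, 0), (ADD, 0), (STORE_VAR, 1)]
execute(code, [1.0, 10.0, 5.0])  # vars[1] becomes 51.0
```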


Thanks for all the helpful comments!

Important note about Genetic Algorithms (GA):
While defining a new function (a GA ‘solution’) should be fast, it will be executed many times, so execution/evaluation speed is far more important.

Well, yes, I could do something like writing LLVM IR directly.
However, in my use-case I would like to use Julia’s compilation and optimization for the evaluated code – it starts from an Expr, so no expensive parsing is needed, and any further processing is actually welcome.


TL;DR: I’ve progressed a lot further, but still need to get dynamic execution much faster. See all details below.


The simplest code that accomplishes what I was looking for:

function get_ex_v(v_init::Integer, hook::Expr)
    quote
        v = $v_init
        x = 10
        y = 5
        $hook
        return z
    end
end

hook1 = :(z = v + x*y)
hook2 = :(z = v + x/y)

@assert eval(get_ex_v(1, hook1)) == (1 + 10*5)
@assert eval(get_ex_v(2, hook2)) == (2 + 10/5)

However, as @stevengj correctly mentioned, the performance is horrible when using eval() many times to run the code:

using BenchmarkTools

function f1(v_init)
    v = v_init
    x = 10
    y = 5
    z = v + x*y
    return z
end

@benchmark f1(1)
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  0.010 ns … 0.031 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     0.020 ns             ┊ GC (median):    0.00%
 Time  (mean ± σ):   0.020 ns ± 0.002 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  ▁                           █  ▃                        ▁ ▁
  █▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ █
  0.01 ns     Histogram: log(frequency) by time     0.03 ns <

but

ex1 = get_ex_v(1, hook1)

@benchmark eval(ex1)

gives

BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):  118.164 μs … 593.284 μs  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     123.804 μs               ┊ GC (median):    0.00%
 Time  (mean ± σ):   126.664 μs ±  25.473 μs  ┊ GC (mean ± σ):  0.00% ± 0.00%

   ▁█▃  ▁▃▄▁                                                     
  ▃███▅▅████▆▄▄▃▃▂▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▂▂▂▂ ▃
  118 μs           Histogram: frequency by time          168 μs <

 Memory estimate: 3.22 KiB, allocs estimate: 61.

So actually I should call eval() only once per GA candidate:

function define_ex_v_func(hook::Expr, funcname)
    quote
        function $funcname(v_init::Integer)
            v = v_init
            x = 10
            y = 5
            z = 400
            $hook
            return z
        end
    end
end

eval(define_ex_v_func(hook1, :ex_v1))
eval(define_ex_v_func(hook2, :ex_v2))

@assert ex_v1(1) == (1 + 10*5)
@assert ex_v2(2) == (2 + 10/5)

and then execute it directly, which is fast:

@benchmark ex_v1(1)
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  0.010 ns … 0.291 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     0.020 ns             ┊ GC (median):    0.00%
 Time  (mean ± σ):   0.021 ns ± 0.004 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

                              █  ▃                        ▃ ▁
  ▇▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ █
  0.01 ns     Histogram: log(frequency) by time     0.03 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

But, wait!
I want to create many dynamic functions at runtime.

So, let’s create a random function name and refer to it by its symbol.

I got to this:

using Random

function create_ex_v_func(hook::Expr)
    new_func_name = Symbol("ex_v_" * randstring(12))

    eval(quote
        function $new_func_name(v_init::Integer)
            v = v_init
            x = 10
            y = 5
            z = 400
            $hook
            return z
        end
    end)

    return getfield(Main, new_func_name)
end

ex_v_f1 = create_ex_v_func(hook1)
ex_v_f2 = create_ex_v_func(hook2)

@assert ex_v_f1(1) == (1 + 10*5)
@assert ex_v_f2(2) == (2 + 10/5)

Finally resolved? Not quite…

There’s a big difference between the performance of calling the function directly by name and invoking it through a variable that references it:

@benchmark ex_v_f1(5)
BenchmarkTools.Trial: 10000 samples with 999 evaluations.
 Range (min … max):  11.182 ns … 18.785 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     12.085 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   12.035 ns ±  0.415 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  ▂▁     ▁   ▁▁ ▂▅  ▃▃                             ▂▁ ▅██  ▇▆ ▂
  ██▃▄▁▁▁██▄███▃███▃██▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▁▁▁▁▁▁▁▁▁▁▃██▄███▇███ █
  11.2 ns      Histogram: log(frequency) by time      12.2 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

While calling it directly is much faster:

@benchmark ex_v_Vc1fDWJYMnSR(5)
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  0.020 ns … 0.031 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     0.020 ns             ┊ GC (median):    0.00%
 Time  (mean ± σ):   0.021 ns ± 0.003 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  █                                                          
  █▁▁▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃ ▂
  0.02 ns        Histogram: frequency by time       0.03 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

Complete and Unfinished… :melting_face:

The following code works for me

using BenchmarkTools
using Random

function create_ex_v_func(hook::Expr)
    new_func_name = Symbol("ex_v_" * randstring(12))

    eval(quote
        function $new_func_name(v_init::Integer)
            v = v_init
            x = 10
            y = 5
            z = 400
            $hook
            return z
        end
    end)

    return getfield(Main, new_func_name)
end

const hook1 = :(z += v + x*y)
const hook2 = :(z += v + x÷y)

const ex_v_f1 = create_ex_v_func(hook1)
const ex_v_f2 = create_ex_v_func(hook2)

@assert ex_v_f1(1) == (401 + 10*5)
@assert ex_v_f2(2) == (402 + 10÷5)

@benchmark ex_v_f1(5)

yielding

BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  1.300 ns … 30.900 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     1.300 ns              ┊ GC (median):    0.00%        
 Time  (mean ± σ):   1.346 ns ±  0.394 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  █                           
  █▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▆ ▂
  1.3 ns         Histogram: frequency by time         1.4 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

Edit: benchmarking with non-const globals?

Perhaps there are relevant insights to learn from the paper behind

They are performing a similar search over functions to fit data.


Thanks for the great resource!
It looks interesting, I’ll dive into it.

However, I’m building something more generic that can search for more unstructured code, so I want to be able to evaluate a dynamic function efficiently.

Those timings are fictitious and an effect of the compiler having constant folded the function and immediately returning the pre-computed result. Sanity check: look up your processor’s clock frequency and compute how many clock cycles it has time for in the reported time. If this seems unreasonably low compared to the computations you are doing, something is likely off with the benchmark.

You are probably better off just using anonymous functions.

That’s an effect of using non-const globals, and in particular of the fact that they inhibit constant folding. This is maybe a better illustration (and also a demonstration of using anonymous functions):

julia> using BenchmarkTools

julia> f1 = x -> x^2
#1 (generic function with 1 method)

julia> const f2 = x -> x^2
#3 (generic function with 1 method)

julia> x = 5
5

julia> @btime f1($x)
  13.077 ns (0 allocations: 0 bytes)
25

julia> @btime f2($x)
  1.123 ns (0 allocations: 0 bytes)
25

Thanks @GunnarFarneback for the heads up about constant folding, I do realize that my example was so far from being realistic that the compiler optimized away relevant details.

Per my initial question, the goal of the injected code is to modify a variable in scope external to it.

So let’s look at this:

using BenchmarkTools

mutable struct state
    x::Int64
end

f1 = eval(
    quote
        state -> begin
            state.x = state.x + 1
        end
    end
)

const f2 = eval(
    quote
        state -> begin
            state.x = state.x + 1
        end
    end
)

And then the timing starts to look very similar:

s1 = state(0)
@btime f1(s1)
#   13.870 ns (1 allocation: 16 bytes)

s2 = state(0)
@btime f2(s2)
#   13.388 ns (1 allocation: 16 bytes)

However, I want to search for candidate functions, so let’s just put the dynamic function creation into a loop, shall we?

using BenchmarkTools

mutable struct state
    x::Int64
end

function generate_candidate_func(param)::Function
    f = eval(
        quote
            state -> begin
                for t in 1:$param
                    state.x = state.x + 1
                end
            end
        end
    )
    return f
end

function search_candidate_func()
    desired_score = 50
    for i in 1:100
        candidate_func = generate_candidate_func(i)
        s = state(0)

        # ---> error here
        #
        # MethodError: no method matching (::var"#1#2")(::state)
        # The applicable method may be too new: running in world age 31336, while current world is 31337.
        # Closest candidates are:
        # (::var"#1#2")(::Any) at In[1]:10 (method too new to be called from this world context.)
        #
        candidate_func(s) # mutates s
        
        score = abs(desired_score - s.x)
        println(score)
    end
end

search_candidate_func()

The applicable method may be too new?
Everything works fine outside the for loop, so probably some magic parallel optimization is happening behind the scenes… :grimacing:

Base.invokelatest it.
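That is, replace the direct call with `Base.invokelatest`, which sidesteps the world-age restriction by looking the method up in the latest world. A minimal sketch (the function name `search` is made up for illustration):

```julia
function search()
    f = eval(:(x -> x + 1))           # method defined in a newer world
    # calling f(41) directly here would throw the "method too new" error;
    # invokelatest dispatches in the current (latest) world instead
    return Base.invokelatest(f, 41)
end

search()  # returns 42
```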


That’s again a non-const global in your benchmark.

julia> const s3 = state(0)
state(0)

julia> @btime f2(s3);
  1.830 ns (0 allocations: 0 bytes)

or you can use BenchmarkTools interpolation feature to isolate your benchmark from that effect.

julia> @btime f2($s2);
  1.850 ns (0 allocations: 0 bytes)

Time to revisit Passing a string into function as a expression - #14 by GunnarFarneback. Notice the call to invokelatest.