Thank you. I’ll try the package next time I do symbolic calculations.

This is really good: one more allocation and almost comparable time.

```
using SymPy
using SyntaxTree
@vars x
expr = integrate(sin(x))
f = eval(parse("x -> $(repr(expr))"))
h = SyntaxTree.genfun(parse("$(repr(expr))"), [:x])
@time f(1.0)
@time h(1.0)
# 0.000003 seconds (5 allocations: 176 bytes)
# 0.000005 seconds (6 allocations: 192 bytes)
```

Note that the input to `SyntaxTree.genfun` doesn’t need to be a function expression; it only needs to be the expression for the function body, so you might want to use `parse("$(repr(expr))")` instead, since `genfun` automatically turns it into a function using the arguments from the list.

Any consequences for `invokelatest`?

My fault, sorry. I noticed this and forgot to modify it. Changed.

It’s slow and can’t be inlined because it has to do a full dynamic dispatch.
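A minimal sketch of why that dispatch is unavoidable here (the function names are illustrative): a method defined by `eval` at run time lives in a newer "world age" than the code currently executing, so a direct call fails, and `Base.invokelatest` has to resolve the method dynamically on every call.

```julia
# Sketch: a method defined by eval at runtime lives in a newer "world age"
# than the currently running code, so a direct call from already-compiled
# code throws a MethodError; Base.invokelatest works, but it performs a
# full dynamic dispatch on every call and its result is inferred as Any.
function call_fresh()
    g = eval(:(x -> x + 1))   # method defined in a newer world
    # g(1.0) here would throw a MethodError ("method too new")
    return Base.invokelatest(g, 1.0)
end
```

Here `call_fresh()` returns `2.0`, but the compiler cannot infer anything about the result, which is why such calls cannot be inlined.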

Just as a note, those functions are fast enough that `@time` is not going to give you a reliable or informative benchmark (you’re only running the function once, possibly also measuring compilation time, and you’re operating at global scope, which is slow). BenchmarkTools.jl solves all of those problems for you by running the function multiple times and gathering statistics. In general, `@time` is probably good enough for something which takes a few seconds, but for a function that takes less than a second you really want to use BenchmarkTools.

Using `@btime` from BenchmarkTools:

```
julia> using BenchmarkTools
julia> @btime subs($expr, $x => 1.0)
139.558 μs (79 allocations: 2.34 KiB)
-0.540302305868140
julia> @btime $f(1.0)
11.564 ns (0 allocations: 0 bytes)
-0.5403023058681398
julia> @btime $h(1.0)
115.019 ns (2 allocations: 32 bytes)
-0.5403023058681398
```

Besides, I thought building something from a string shouldn’t be a very uncommon thing…

Actually it’s not common in Julia, precisely because it’s much easier to manipulate expressions directly in Julia than in other programming languages. That means we can treat Julia code as objects in Julia, manipulating or modifying them in code, without ever having to treat them as strings. For example, I can construct the expression `x + y` by quoting it:

```
julia> :(x + y)
:(x + y)
```

Or I can construct an `Expr` object by hand:

```
julia> Expr(:call, :+, :x, :y)
:(x + y)
```

and get the same result.

I can also modify an expression to get a new expression:

```
julia> ex = Expr(:call, :+, :x, :y)
:(x + y)
julia> ex.args[1] = :- # change that + to a -
:-
julia> ex
:(x - y)
```

In the above case I modified a Julia expression without doing any string manipulation or invoking `parse()`.
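A handy way to see exactly what you are manipulating is `dump`, which prints the tree of `head` and `args` fields that make up an `Expr`:

```julia
# Every Expr is a head Symbol plus an args array; dump shows the tree.
ex = :(x + y)
dump(ex)
# Expr
#   head: Symbol call
#   args: Array{Any}((3,))
#     1: Symbol +
#     2: Symbol x
#     3: Symbol y
```

Once you see that structure, mutations like `ex.args[1] = :-` above are just ordinary array assignments.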

But you have to `eval` these expressions to generate a function. Here my main concern is how to construct a function from expressions without calling `eval`. The string is not the most difficult part, since `parse` handles it perfectly.

I can’t tell from the discussion what exactly you are trying to accomplish (besides not wanting to use `eval`). But here is a stab in a somewhat different direction.

```
macro foo(partial_expr)
    expr = :(x -> x)
    expr.args[2] = partial_expr
    expr
end

function bar()
    (@foo(x + 2)(3), @foo(x * 2)(3), @foo(x ^ 2)(3))
end
```

```
julia> bar()
(5, 6, 9)
```

I created a separate thread here summarizing generally what I want:

I believe you posted in the wrong thread. To avoid confusion, I suggest you delete this post and post again in the correct thread: How do I create a function from an expression - #6 by Chong_Wang

The type instability is because of `Base.invokelatest`; if you want to avoid it, you need to specify the output type, like this: `Base.invokelatest(...)::Float64`. But then you already need to know your return type.

Would be great if `Base.invokelatest` could be type stable.

The reason why `invokelatest` works at all is also what prevents it from being type stable: it looks up the method in the latest world at run time, so the compiler cannot know in advance which method will be called or what it will return.

Alright, so I have fixed this issue with a new implementation of `genfun` in the `SyntaxTree` package:
```
using SyntaxTree
f = SyntaxTree.genfun(:(-cos(x)), [:(x::Float64)], Float64)
julia> @btime $f(1.0)
148.596 ns (2 allocations: 32 bytes)
-0.5403023058681398
```

while with the old version I get

```
julia> @btime $f(1.0)
199.548 ns (2 allocations: 32 bytes)
-0.5403023058681398
```

So there is an improvement from adding the type stability option to `SyntaxTree.genfun`.

```
"""
genfun(expr, args::Array, typ=Any)
Returns an anonymous function based on the given `expr` and `args`.
"""
function genfun(expr,args::Array,typ=Any)
gs = gensym()
eval(Expr(:function,Expr(:call,gs,args...),expr))
list = Symbol[]
for arg ∈ args
push!(list,typeof(arg) == Expr ? arg.args[1] : arg)
end
eval(:($(Expr(:tuple,list...))->Base.invokelatest($gs,$(list...))::$typ))
end
```

This now appends the given `DataType` as a `::typ` assertion at the end of the `Base.invokelatest` call.

In my new version of `SyntaxTree.genfun` described in the previous post,

```
julia> @code_warntype f(1.0)
Variables:
  #self# <optimized out>
  x::Float64

Body:
  begin
      return (Core.typeassert)((Core._apply_latest)(SyntaxTree.##713, (Core.tuple)(x::Float64)::Tuple{Float64})::Any, Float64)::Float64
  end::Float64
```

So what is the difference between `Base.invokelatest` and `Core._apply_latest`?

Let’s rewrite that as

```
x = (Core._apply_latest)(SyntaxTree.##713, (Core.tuple)(x::Float64)::Tuple{Float64})::Any
y = (Core.typeassert)(x, Float64)::Float64
```

So `Core._apply_latest` infers as `Any`, and then you have a type assert that errors if the result is not a `Float64`; from there on we can assume that `y` is a `Float64`.
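The behavior of the type assert itself is easy to check directly: `Core.typeassert` returns its argument when the type matches and throws a `TypeError` otherwise.

```julia
# typeassert passes the value through when the type matches...
y = Core.typeassert(-0.5403023058681398, Float64)

# ...and throws a TypeError when it does not (1 is an Int, not a Float64).
assert_failed = try
    Core.typeassert(1, Float64)
    false
catch err
    err isa TypeError
end
```

This is exactly the check the compiler emits for the `::Float64` annotation in the lowered code above.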

Hmmm, after restarting my REPL (I was using Revise) the performance for both versions is the same:

```
julia> g = SyntaxTree.genfun(:x, [:x])

julia> @btime $g(1.0)
  106.034 ns (2 allocations: 32 bytes)
1.0

julia> f = SyntaxTree.genfun(:x, [:(x::Float64)], Float64)
(::#15) (generic function with 1 method)

julia> @btime $f(1.0)
  109.066 ns (2 allocations: 32 bytes)
1.0
```

So it seems that the type assertion for `Any` vs. `Float64` actually does not make a difference, but the new version of `genfun` seems to be faster than the original version anyway…

The type assert will only be beneficial if you use the result in the same function where the call was made. Just tacking on a type assert will not make the call itself faster (in fact, a bit slower, since the type has to be checked).
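A minimal sketch of that distinction, using illustrative function names: the assert does not speed up the opaque call itself, but it lets everything after it compile against a concrete type.

```julia
# opaque() is deliberately not inferable: invokelatest returns Any.
opaque() = Base.invokelatest(sin, 1.0)

use_plain()    = opaque() + 1.0             # `+` must be dispatched dynamically
use_asserted() = (opaque()::Float64) + 1.0  # `+` compiles for Float64
```

Both functions return the same value; the difference only shows up in how the code following the call is compiled (compare the two with `@code_warntype`).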

Ah, that makes sense, thanks for clarifying. With `SyntaxTree.genfun` you will have both options.

As discussed in the other thread, the `SyntaxTree` package now has an updated `genfun` and `@genfun` for creating functions; it no longer requires `Base.invokelatest`, so it is faster.