Performance of `Meta.parse` and `eval`

Hi all,

I’m a relatively new and now enthusiastic Julia user. While exploring the metaprogramming capabilities I ran into the question in this topic’s title. I’m probably missing something or misinterpreting the intended use of Meta.parse and eval, but it looks like there is a big penalty for using them:

julia> @btime Meta.parse("1+1")
  36.095 μs (10 allocations: 416 bytes)
:(1 + 1)

julia> @btime eval(:(1+1))
  66.499 μs (35 allocations: 2.34 KiB)

julia> @btime eval(Meta.parse("1+1"))
  113.640 μs (43 allocations: 2.61 KiB)

Why are these calls so expensive? Is there a better way to evaluate an expression represented as a string?

The most basic answer is that you should almost never represent expressions as strings. Unless you are trying to write an interpreter, there is almost always a better way to solve the problem.


If you want to represent a piece of code, you would be better off with Exprs. That removes the need to ensure correct syntax and avoids a lot of string processing (which would probably use regexes, so not very performant anyway). Of course, if you are receiving string expressions from user input or files you can use Meta.parse, and from then on work only with expressions.
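
For instance, a minimal sketch of working with Exprs directly instead of strings:

```julia
# Build :(1 + 1) programmatically; no string parsing involved.
ex = Expr(:call, :+, 1, 1)

# Interpolating values into a quoted expression gives the same result.
a, b = 1, 1
ex2 = :($a + $b)

# Both match what Meta.parse would produce from the string "1+1".
ex == ex2 == Meta.parse("1+1")  # true
```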

As for the performance of eval and Meta.parse, I would say you typically don’t need to use those in a performance-critical part of the program and so the penalty is almost invisible. If you do, you should probably rethink your design.


Thanks for the answers!

I thought that both eval and Meta.parse were pretty common in metaprogramming techniques. I understand now that Meta.parse could be expensive due to string processing (although "1+1" doesn’t seem too demanding). Still, intuitively (and with no expertise whatsoever) I find it surprising that the eval operation is so expensive. Does this mean that metaprogramming techniques are typically avoided in performance-critical programs?

To the contrary, eval and parse are some of the least-used tools in Julia’s metaprogramming arsenal. When eval is used, it’s typically at top level, for example to define a whole slew of methods and functions over a loop of types and names. This is done up-front, where a few hundred microseconds or even milliseconds is nothing.
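
A sketch of that pattern (the function name `describe` is made up purely for illustration):

```julia
# Top-level eval in a loop: define one method per type, once, up front.
# The one-time cost of eval is paid here, not in any hot loop later.
for T in (Int, Float64, Rational{Int})
    @eval describe(x::$T) = string($(string(T)), ": ", x)
end

describe(0.5)  # "Float64: 0.5"
```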

Far more common are macros, and macros are invaluable for eking out the very best performance.


Hi again.

After a break I’d like to come back to this topic with a couple more questions.
If I define this simple macro:

macro parse(s)
    Meta.parse(s)
end

then my naive understanding of macros would lead me to believe that running @parse "1+1" would be equivalent to eval(Meta.parse("1+1")). However, I observe a dramatic performance improvement:

julia> @btime @parse "1+1"
  0.021 ns (0 allocations: 0 bytes)

julia> @btime eval(Meta.parse("1+1"))
  113.640 μs (43 allocations: 2.61 KiB)

I’m having trouble understanding this from the metaprogramming section of the docs. Can someone please explain it to me?

I’m also trying to create a version of this simple @parse macro that accepts variables and not just string literals, i.e. a macro that could make something like this work in a performant way:

x = "1+1"
@parse x

I’ve made several attempts to interpolate x within the macro, but I didn’t manage to make it work. Does anyone have a suggestion?

I agree with @Oscar_Smith that this isn’t the right approach to the problem. But at this point these questions are just my way of trying to better understand metaprogramming and macros.

It’s not so much about performance as about when this happens. The macro gets expanded during parsing, exactly once, and thereafter it’s as if you’d written its result yourself, which is fast:

julia> @macroexpand @parse "1+1"
:(1 + 1)

julia> @btime macroexpand(Main, :(@parse "1+1"))
  49.511 μs (16 allocations: 976 bytes)
:(1 + 1)

julia> @btime 1+1
  0.040 ns (0 allocations: 0 bytes)

In @parse x, the macro only sees the symbol :x, which it can’t usefully transform into anything.
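
You can make this visible with a small sketch (the `@show_arg` macro here is hypothetical, written only to inspect what a macro receives):

```julia
macro show_arg(x)
    # This println runs at expansion time, so it shows exactly what
    # the macro sees: the unevaluated argument, not its runtime value.
    println("macro received: ", repr(x))
    return esc(x)
end

@show_arg "1+1"   # prints: macro received: "1+1"

s = "1+1"
@show_arg s       # prints: macro received: :s  (just the symbol)
```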


I see. So, do I understand correctly that I was misinterpreting the measurement of @btime @parse "1+1"?
If I understand it correctly now, the actual parsing happens only once, during macro expansion, and the result of @btime is in effect like directly calling @btime 1+1. Is that correct?


Yeah. That’s why macros are the recommended way of doing metaprogramming. They take the expensive parsing stuff and move it to compile time.


Benchmark timings of less than a nanosecond (like the one above) are a sign that whatever is being benchmarked performs no computation at all at runtime. It usually happens because the compiler can deduce the result right away from the expression you’re giving it and fold it into a constant.

Even if you’re not familiar with LLVM and assembly, @code_llvm and @code_native may give you some insight about the amount of work being done.
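
A quick sketch of what that looks like for a trivially constant-foldable function:

```julia
using InteractiveUtils  # provides code_llvm

# A function whose result the compiler folds at compile time.
f() = 1 + 1

# Capture the LLVM IR as a string. For this function there is no
# arithmetic left at runtime: the IR is essentially a single `ret`
# of the constant 2 (exact text varies by Julia version).
code = sprint(code_llvm, f, Tuple{})
print(code)
```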


Metaprogramming is something you do at parse/compile-time, not at runtime for time-critical code. If you are thinking of parse and eval as runtime techniques, you are doing it wrong.

Probably you are reaching for metaprogramming when you should be using another tool, like higher-order functions. (Don’t use expressions or strings to represent functions. Use functions!)
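
For example, a minimal sketch contrasting the two approaches:

```julia
# Indirect: represent the operation as a string, then parse and eval it.
# This works at top level but is slow and world-age-sensitive.
f_slow = eval(Meta.parse("x -> x + 1"))

# Idiomatic: pass the function itself; the compiler can specialize on it.
apply_twice(f, x) = f(f(x))

apply_twice(x -> x + 1, 10)  # 12
```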

We could give you more concrete advice if you described a specific application (not a programming technique — tell us the end, not the means) of interest to you.


Thank you all for your answers!

I wasn’t really trying to solve a real problem. If you are curious about what triggered the question: I was solving Advent of Code to get more familiar with Julia, and I came up with an interesting solution to Day 18 that involved eval and Meta.parse. Then I was a bit disappointed that it was slower than my initial, uglier solution and tried to understand why. But really, the end wasn’t important to me; it just prompted me to explore Julia’s metaprogramming techniques, which in turn triggered the questions in this post.