I would like to write a Julia program that does the following:
Build up a big, complicated Expr object using metaprogramming
Compile that expression into a Julia function
Call that function repeatedly with different arguments
My main reason for doing things this way is performance, so I really want the compilation step, not just repeated calls to eval. It seemed like the following should do this, but it gives an error (below) - what is the correct way to achieve what I want?
Here’s the minimal test code:
function make_expression()
    return :(x+2)
end

function main()
    expr = make_expression()
    @eval f(x) = $expr
    println(f(10))
end

main()
and here’s the error:
$ julia test.jl
ERROR: LoadError: MethodError: no method matching f(::Int64)
The applicable method may be too new: running in world age 25574, while current world is 25575.
Closest candidates are:
f(::Any) at [...]/test.jl:11 (method too new to be called from this world context.)
Stacktrace:
[1] main() at [...]/test.jl:13
[2] top-level scope at none:0
[3] include at ./boot.jl:326 [inlined]
[4] include_relative(::Module, ::String) at ./loading.jl:1038
[5] include(::Module, ::String) at ./sysimg.jl:29
[6] exec_options(::Base.JLOptions) at ./client.jl:267
[7] _start() at ./client.jl:436
in expression starting at [...]/test.jl:17
Note that the error goes away if I put the contents of the main function in a global scope instead. But I don’t really want to do that.
Thank you @kristoffer.carlsson, that’s most helpful indeed. So it seems that the following is a solution:
function make_expression()
    return :(x+2)
end

function main()
    expr = make_expression()
    @eval f(x) = $expr
    println(Base.invokelatest(f, 10))
end

main()
But it seems also that this is not necessarily the best way to do it, and it might be better to think about whether I can do the same task using macros instead. (It’s not immediately obvious how in my case, but I’ll think about it.)
From the code you have posted so far, the best way to do it would be:
function make_expression()
    return :(x+2)
end

expr = make_expression()
@eval f(x) = $expr

function main()
    println(f(10))
end

main()
Unless you need to JIT a function based on runtime values inside your “main”, just do it outside so that the world age can update. If you do need to do it dynamically, then perhaps you don’t need full-fledged Julia functions, but could instead write a small “interpreter” for the type of functions you need. And if you do need the full Julia language, then you are going to need an optimization barrier like `invokelatest`.
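For illustration, here is a minimal sketch of what such an “interpreter” could look like for arithmetic expressions built at runtime; the types and names are made up for the example, but it shows the idea of keeping the expression as plain data, so there is no eval and no world-age problem:

# Sketch of a tiny "interpreter": the expression is an ordinary data structure.
abstract type Node end
struct Var   <: Node end                    # the single variable x
struct Const <: Node; value::Float64; end
struct Add   <: Node; a::Node; b::Node; end
struct Mul   <: Node; a::Node; b::Node; end

evaluate(::Var, x)    = x
evaluate(n::Const, x) = n.value
evaluate(n::Add, x)   = evaluate(n.a, x) + evaluate(n.b, x)
evaluate(n::Mul, x)   = evaluate(n.a, x) * evaluate(n.b, x)

# Build "x + 2" as data at runtime and evaluate it repeatedly:
tree = Add(Var(), Const(2.0))
println(evaluate(tree, 10.0))   # 12.0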
`invokelatest` will call your function correctly, but you should also know that the call will look like a black box to the compiler, so it will not be inferred or inlined into the surrounding code.
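One way to limit that cost, assuming the generated function is applied to many points, could be to cross the barrier once per batch rather than once per evaluation. A rough sketch (the `kernel` name is just illustrative):

function make_expression()
    return :(x + 2)
end

function main()
    expr = make_expression()
    # The generated function maps over a whole batch of inputs, so the
    # world-age barrier is crossed only once per batch.
    @eval kernel(xs) = map(x -> $expr, xs)
    results = Base.invokelatest(kernel, 1:5)
    println(results)   # [3, 4, 5, 6, 7]
end

main()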
A macro is a very different thing from a function. A macro is basically just a tool that lets you write a piece of syntax which then generates another piece of syntax. In some sense it is purely a convenience tool, and it doesn’t give you any extra “powers”: in the end it just produces some syntax that will be evaluated, and you could have written that syntax yourself in the first place.
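A minimal illustration of that point (hypothetical macro name):

# The macro only rewrites syntax at parse time; the result is ordinary code
# you could have typed yourself.
macro plustwo(ex)
    return :($(esc(ex)) + 2)
end

g(x) = @plustwo x    # expands to g(x) = x + 2 before compilation
println(g(10))       # 12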
Unless you need to JIT a function based on runtime values of your “main”
I do need to do that. Or at least, it was the plan. (The real make_expression function is much more complicated and takes parameters at runtime.)
A macro is a very different thing from a function. […]
Sure, I understand that. I imagine the idea (referred to in the thread you linked) is to use macros to build up the function f, instead of constructing it as an Expr explicitly.
The other thing I got from that thread is that something like the following might actually work fine, and be fast in Julia. I need to think about whether it will work, and profile it, but if it works out it will be much simpler:
function make_func()
    return x -> x+2
end

function main()
    f = make_func()
    println(f(10))
end

main()
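One way to check whether chains of closures like this actually get collapsed (just a sketch of how to look, not a guarantee of what the compiler will do) is to inspect the typed or native code:

using InteractiveUtils   # for @code_typed / @code_llvm outside the REPL

make_func() = x -> x + 2

const inner = make_func()
chain(x) = inner(inner(x))        # conceptually x + 4

display(@code_typed chain(10))    # check whether the two "+ 2" calls were inlined
@code_llvm chain(10)              # or look at the LLVM IR directly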
My previous implementation is a horrible mess of Python code that generates C++, so I guess I might just have been a bit stuck in the mindset of thinking I need to generate code and then compile it, instead of just constructing a function.
Perhaps you can give a bit more info about what problem you are actually trying to solve. It feels like the original small example right now is a bit too simple to be representative of what you want to do.
functions generated at runtime (with `eval`, for example) are not available for use until control flow returns to the module's top-level scope because of the compiler's optimization strategies.
Ok, as requested, here’s some more details on what I’m actually trying to do.
Basically I’m implementing a sort of CAD-like system, using implicit functions. This is purely for a hobby project. The user (i.e. me) will write code to specify geometrical objects, along the lines of intersection(sphere([0,0,0],1),cube([1,1,1]),rounding=0.1). This will be used to construct a function that’s positive inside the object and negative outside. Then a mesh will be generated from that function using marching cubes or some other algorithm.
The input to make_expression is generated by the user code, and takes the form of a data structure representing the object the user has specified. Its output should be a function of n real numbers, representing spatial dimensions. The function it outputs has to run fast, because it will be called many times in generating the mesh. Actual user code may create many thousands of geometrical objects, and hence generate very complicated functions, and I want that to work as well as possible.
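To make that more concrete, here is a stripped-down sketch of the kind of function the user code would end up building; the helper names here are just illustrative, not my actual API:

# Illustrative only: implicit functions that are positive inside, negative outside.
sphere_fn(center, r) = p -> r - sqrt(sum((p .- center) .^ 2))
box_fn(halfsize)     = p -> minimum(halfsize .- abs.(p))
intersect_fn(f, g)   = p -> min(f(p), g(p))

obj = intersect_fn(sphere_fn([0.0, 0.0, 0.0], 1.0), box_fn([1.0, 1.0, 1.0]))
println(obj([0.5, 0.0, 0.0]))   # > 0, so the point is inside the intersection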
The only slight complication is that it may sometimes be necessary to differentiate the resulting function, either for the purpose of mesh generation or to generate certain features such as offsets or fillets. In my Python/C++ implementation I was doing this symbolically using sympy, but I don’t actually need full symbolic differentiation - it’s enough to be able to evaluate the derivative at a given point - so it might be possible to do it using normal Julia functions and Flux.jl or similar. (Of course I will have to profile that.)
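If I go the closure route, a minimal sketch of the pointwise differentiation, assuming ForwardDiff.jl as one possible AD package (I would still have to profile this):

using ForwardDiff

f = p -> 1.0 - sqrt(sum(p .^ 2))              # sphere of radius 1 at the origin
g = ForwardDiff.gradient(f, [0.5, 0.0, 0.0])  # derivative evaluated at one point
println(g)                                    # ≈ [-1.0, 0.0, 0.0]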
So in my mind there are a few more questions at this point:
How good is the compiler at optimising chains of anonymous functions, as opposed to functions generated from expressions in the ‘normal’ way? For example, can I trust it to turn x->(x->x+2)((x->x+2)(x)) into x->x+4? If so, I might not need to generate expressions at all.
Is it feasible to use Flux.jl (or some other existing automatic differentiation package) on huge chains of anonymous functions like this?
If no to either of the above, then I’m probably better off generating Expr objects as I initially planned. In that case it seems I’m advised to have the main part of my code run in a global scope (!), but I’m having trouble seeing how I can arrange for that to happen after the user runs their code. As far as I know, code in a global scope normally runs as soon as the module is imported, but in this case it needs to happen at a later point, in response to something invoked by the user. Is there some way to achieve that, and what would be the sensible way to structure the project, if that’s the way it has to work?
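One pattern I am wondering about (just a sketch with made-up names) is to let the package do the eval, but have the call happen from the user's next top-level statement, so that the world age has already advanced by the time the generated function is used:

module MyCAD

export build_field

# Compile a user-generated expression (in the variable `p`) into a function.
function build_field(expr::Expr)
    return @eval p -> $expr
end

end # module

using .MyCAD

obj_expr = :(1.0 - sqrt(sum(p .^ 2)))   # stands in for make_expression(user_object)
field = build_field(obj_expr)
println(field([0.5, 0.0, 0.0]))         # called from a later top-level statement,
                                        # so the new method is visible here

The catch, as far as I understand it, is that calling `field` from inside a function that was already running when `build_field` ran would still need `invokelatest`.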