[ANN] New package: CallableExpressions v1.0.0

The CallableExpressions package is being registered: New package: CallableExpressions v1.0.0 by JuliaRegistrator · Pull Request #107598 · JuliaRegistries/General · GitHub

A Julia package for representing, manipulating and evaluating trees of expressions.

The design is quite different from that of DynamicExpressions.jl and SimpleExpressions.jl.

Once the package is registered, it will have a JuliaHub package page at CallableExpressions.jl.

Git repository on GitLab:

Suggestions are welcome

The package isn’t registered yet, so any kind of suggestion is welcome.


I wonder if you’d be interested in FixArgs.jl (it needs some love).

julia> using FixArgs

julia> f = @xquote (x, y) -> y*sind(x-5)
FixNew(FixArgs.Arity{2, Nothing}(),Some(*),FrankenTuple((arg_pos(2, 0), Call(Some(sind), FrankenTuple((Call(Some(-), FrankenTuple((arg_pos(1, 0), Some(5)), NamedTuple())),), NamedTuple()))), NamedTuple()))

julia> f(17.9, 2)
0.4465002320219027

# instead of storing 5 "dynamically", store it statically

julia> f2 = @xquote (x, y) -> y*sind(x-5::::S)
FixNew(FixArgs.Arity{2, Nothing}(),Some(*),FrankenTuple((arg_pos(2, 0), Call(Some(sind), FrankenTuple((Call(Some(-), FrankenTuple((arg_pos(1, 0), Val{5}()), NamedTuple())),), NamedTuple()))), NamedTuple()))

julia> f2(17.9, 2)
0.4465002320219027
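The dynamic-vs-static distinction above can be sketched in plain Julia. This is not FixArgs’ internals, just the underlying `Val` trick: a constant can either be carried as a runtime value or baked into the type domain, letting the compiler specialize on it.

```julia
# Plain-Julia sketch of the dynamic/static distinction (not FixArgs internals):
g_dyn(x, c) = sind(x - c)                    # c is a runtime value
g_stat(x, ::Val{c}) where {c} = sind(x - c)  # c is part of the type

g_dyn(17.9, 5) == g_stat(17.9, Val(5))       # same result, different specialization
```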

Nice! How does this compare in terms of type stability and performance of evaluation to a normal function? I am just curious because I’ve cooked up some implementations of this myself and rely heavily on packages like ModelingToolkit.jl and/or DynamicExpressions.jl.


The package is very much designed for enabling maximum performance of evaluation and for preserving type stability. EDIT: AD (at least with Zygote.jl) is currently not at all performant, and it’s not inferred as type-stable either. I’m currently looking into this.
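For reference, both properties can be checked with standard tooling: BenchmarkTools.jl for evaluation overhead and `@code_warntype` from InteractiveUtils for inference. The callable below is a hypothetical stand-in for a compiled expression, not the package’s API.

```julia
using BenchmarkTools, InteractiveUtils

# Stand-in for a callable expression; compare against a hand-written function.
f = x -> 2 * sind(x - 5)

@btime ($f)(17.9)       # interpolate with $ so benchmarking sees a concrete callable
@code_warntype f(17.9)  # any `Any` in the output signals a type-stability problem
```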

Preliminary fix released with CallableExpressions.jl v1.1.0 (registered). The AD performance is only good on Julia v1.11 or later (so the current beta release of Julia is necessary for good AD performance). Currently only reverse mode is well-tested. The AD performance fixes currently only apply to expressions with fewer than two variables.

@aramend can you test this out, see what’s missing/broken, xref Performance optimizations with runtime dispatch


It seems you’re using DifferentiationInterface.jl for your AD tests? Very nice to see it pop up in the wild! Do reach out if you run into any trouble.
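For anyone curious, a minimal DifferentiationInterface.jl call looks roughly like this, assuming its documented `gradient(f, backend, x)` entry point with an `AutoZygote()` backend; the function here is a plain stand-in, not a CallableExpressions object.

```julia
using DifferentiationInterface
import Zygote  # loads the backend that AutoZygote() selects

f(x) = sum(abs2, x)  # plain stand-in function; ∇f(x) = 2x

g = gradient(f, AutoZygote(), [1.0, 2.0, 3.0])
```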


The AD performance (using Zygote again) definitely improved on 1.11.0-beta2, but is still in the 10s of microseconds for even simple expressions.

But to apply this in symbolic AI, we have to be aware that hundreds of thousands or even millions of different expressions will be created and evaluated. The static approach means compile-time overhead for every new expression, which more than offsets the improved evaluation speed. Not to mention that since the expressions live at the type level, method tables gradually explode as well.
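The dynamic alternative can be sketched as a small runtime tree interpreter: every expression is a value built from the same few concrete node types, so no new code is compiled per expression, at the cost of dynamic dispatch during evaluation. This is a generic sketch, not any particular package’s API.

```julia
# Generic sketch of the dynamic approach: expressions as runtime values.
# No compilation per new expression; evaluation pays for dispatch instead.
abstract type Node end
struct Var   <: Node end                  # the single input variable
struct Const <: Node; v::Float64 end
struct Add   <: Node; l::Node; r::Node end
struct Mul   <: Node; l::Node; r::Node end

evaluate(::Var, x)    = x
evaluate(c::Const, _) = c.v
evaluate(n::Add, x)   = evaluate(n.l, x) + evaluate(n.r, x)
evaluate(n::Mul, x)   = evaluate(n.l, x) * evaluate(n.r, x)

expr = Add(Mul(Const(2.0), Var()), Const(1.0))  # represents 2x + 1
evaluate(expr, 3.0)                             # 7.0
```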

Yeah, CallableExpressions.jl probably isn’t a good choice in that case.

But if you are expecting to run only a few expressions (that you don’t know before runtime) many many times, this is the way to do it 🙂

The minimal LLVM code is very nice and pretty much human-readable.
