Nice! How does this compare to a normal function in terms of type stability and evaluation performance? I am just curious because I've cooked up some implementations of this myself and rely heavily on packages like ModelingToolkit.jl and/or DynamicExpressions.jl.
The package is very much designed to enable maximum evaluation performance and to preserve type stability. EDIT: AD (at least with Zygote.jl) is currently not at all performant, and it's not inferred as type-stable either. Looking into this currently.
Preliminary fix released with CallableExpressions.jl v1.1.0 (registered). Good AD performance requires Julia v1.11 or later, so it's necessary to use the current beta release of Julia. Currently only reverse mode is well-tested, and the AD performance fixes currently only apply to expressions with fewer than two variables.
The AD performance (using Zygote again) definitely improved on 1.11.0-beta2, but is still in the tens of microseconds even for simple expressions.
But to apply this in symbolic AI, we have to be aware that hundreds of thousands or even millions of different expressions will be created and evaluated. The static approach means compile-time overhead for every new expression, which more than offsets the improved evaluation speed. Not to mention that, since the expressions live at the type level, method tables gradually explode as well.
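To make the trade-off concrete, here is a minimal, hypothetical sketch (not the CallableExpressions.jl API) of the type-level encoding being discussed: the whole expression structure is carried in the type parameters, so `evaluate` fully specializes and is type-stable, but every new expression shape is a new type that triggers a fresh compilation and a new entry in the method specialization tables.

```julia
# Hypothetical type-level expression tree (illustrative only, not the
# CallableExpressions.jl implementation).
struct Var{I} end                # the I-th input variable, encoded in the type
struct Const{T}                  # a literal constant
    value::T
end
struct Node{Op,Args<:Tuple}      # an operation applied to sub-expressions;
    op::Op                       # the full tree shape is visible in Args
    args::Args
end

evaluate(::Var{I}, xs) where {I} = xs[I]
evaluate(c::Const, xs) = c.value
evaluate(n::Node, xs) = n.op(map(a -> evaluate(a, xs), n.args)...)

# x1 * x2 + 3.0 — the compiler sees the whole structure and specializes
# `evaluate` for this exact tree (fast, type-stable), but a symbolic-AI
# search generating millions of distinct trees pays this compile cost
# for every new shape.
ex = Node(+, (Node(*, (Var{1}(), Var{2}())), Const(3.0)))
evaluate(ex, (2.0, 4.0))  # 11.0
```

A dynamic representation (one concrete node type holding runtime tags, as in DynamicExpressions.jl) inverts the trade-off: slower per evaluation, but no per-expression compilation and a bounded method table.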