Pre-evaluating Symbolics / ModelingToolkit expressions for a package

I have some very large expressions (many pages of latex) that I code up manually as functions. But these functions also get processed by Symbolics and/or ModelingToolkit to create other functions and ODE problems that I want to build into a package. Because these expressions are quite large, constructing the functions and problems takes several minutes, even a second time around (so that all the “constructing” code has been compiled). What’s worse: this will likely get much longer soon as I add more complicated expressions. Even if done as part of a pre-compile stage, this seems like it would be needlessly slow.

But the expressions don’t change (often) once I’ve coded them up, so in principle it seems like I should be able to just evaluate them myself and distribute the results as part of the package. My only ideas have been tending toward ugly GitHub Actions hacks, JLD2 artifacts, or things like that, which would obviously not be fun. Is there some nice provision for this?


I’ve run into a similar issue with robot manipulator kinematics and astrodynamics equations. I haven’t tried this solution yet, but…

Have you tried “caching” the generated expressions as functions? If you generate a ModelingToolkit.ODESystem or ModelingToolkit.ODEFunction, I think you can pass it to string to get a Julia-code representation of the object.

In other words, I think the MTK folks defined a string method for their types! The output is a Julia function which re-generates the object.

You can add this to your package as a “build” step. This is called once when your package is installed. Maybe you write the Julia functions to your package’s scratch space, and then in “init” you can load the files you cached in scratch space.

@ChrisRackauckas is this the recommended approach to this kind of problem?

Edit: to be clear…

# sys is an ODESystem
open("/path/to/scratchspace/sys.jl", "w") do file
    write(file, string(sys))
end
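On the loading side, the package’s __init__ might then look something like this. This is a sketch, not tested: it assumes you use Scratch.jl for the cache directory, and the scratch key "generated", the file name, and the idea that the cached code evaluates back to the system object are all placeholders/assumptions.

```julia
using Scratch  # assumption: Scratch.jl manages the per-package cache directory

function __init__()
    cache = @get_scratch!("generated")      # hypothetical scratch-space key
    sysfile = joinpath(cache, "sys.jl")
    if isfile(sysfile)
        # assumes string(sys) emitted code that reconstructs the ODESystem
        global sys = include(sysfile)
    end
end
```

Note that anything loaded this way is compiled at load time, not at precompile time, so this only saves the symbolic construction cost, not codegen.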

JLD2 and other serializations won’t work. The reason is that the slowest part here is almost always the LLVM codegen and passes by a wide margin, which means caching everything on the Julia side still won’t make a dent in the timing (which is probably what you’ve seen). The only ways we have to avoid spending more LLVM time are:

  1. Build a system image. A heavy-handed way to do it, but it should work.
  2. Potentially, precompilation in the nearish future will be able to cache the generated native code, in which case using precompilation via generating the expressions and putting them into a module would be a solution. But that doesn’t quite exist right now.
  3. Generate the function to a binary via StaticCompiler.jl and then link to it.
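Option 1 could look like the following with PackageCompiler.jl. This is a sketch: MyModelPackage, the sysimage path, and precompile_script.jl are all placeholder names you would substitute.

```julia
using PackageCompiler

# Build a custom sysimage that bakes the package's compiled code in.
# `precompile_execution_file` should exercise the expensive paths
# (e.g. actually construct the ODESystem/ODEFunction and solve once)
# so their native code gets captured in the image.
create_sysimage(
    [:MyModelPackage];                        # placeholder package name
    sysimage_path = "mymodel_sysimage.so",
    precompile_execution_file = "precompile_script.jl",
)
```

You would then start Julia with `julia -J mymodel_sysimage.so` to use it.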

I haven’t actually tried 3, but in theory for in-place ODEs the generated code should just work with StaticCompiler.jl. It might be interesting for someone to take a quick look at seeing if a StaticCompilerTarget is possible (done like CTarget with gcc).
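For option 2, the idea would be to paste the generated code into a module in your package source so that precompilation can cache at least the Julia-side lowering (and, once native-code caching lands, the machine code too). A minimal sketch, where the body of f! is just a stand-in for the real generated RHS:

```julia
# Hypothetical module holding pasted generated code, so it participates
# in the package's normal precompilation.
module CachedRHS

# In practice this definition would be the output of the code generation
# (e.g. what `string(sys)` or the generated in-place function prints).
function f!(du, u, p, t)
    du[1] = p[1] * u[1]  # stand-in for the real right-hand side
    return nothing
end

end # module
```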
