Problem Statement: I am reporting a critical bottleneck in the compilation and loading of functions generated by ModelingToolkit.jl. In my experience, the complexity of the generated symbolic Jacobian can lead to an effectively "infinite" compilation hang, where compile time is no longer correlated with the system's dimensionality.
Evidence of Performance Instability: I compared two different system configurations, and the results demonstrate that MTK’s symbolic compilation path is highly unpredictable:
- Case 1 (Sparse, ~2000 dims): A high-dimensional system with a sparse structure. MTK handles this reasonably well; the `ODEProblem` construction and the first `solve` call (which triggers Jacobian compilation) finish within an acceptable timeframe.
- Case 2 (Dense, ~200 dims): A much smaller system with a dense, highly coupled structure. Despite having 10x fewer variables, the compilation of `prob.f.jac` (triggered during the first `solve`) runs for over 30 minutes without completing and consumes excessive memory.
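For concreteness, here is a minimal sketch (not the actual model from Case 2) that reproduces the dense-coupling pattern: all-to-all nonlinear coupling forces an n×n symbolic Jacobian with no structural zeros. It assumes the ModelingToolkit v9 API (`t_nounits`/`D_nounits`) and the `jac = true` keyword, which requests symbolic Jacobian generation.

```julia
using ModelingToolkit, OrdinaryDiffEq
using ModelingToolkit: t_nounits as t, D_nounits as D

n = 20  # scale toward ~200 to observe the compilation wall described above
@variables u(t)[1:n]
u = collect(u)

# All-to-all nonlinear coupling => fully dense n×n symbolic Jacobian.
eqs = [D(u[i]) ~ -u[i] + sum(sin(u[j]) for j in 1:n if j != i) for i in 1:n]
@named sys = ODESystem(eqs, t)
sys = structural_simplify(sys)

u0 = [u[i] => 0.1 for i in 1:n]
prob = ODEProblem(sys, u0, (0.0, 1.0); jac = true)  # request symbolic Jacobian

# The first solve pays the full Jacobian compile cost; time it separately.
@time sol = solve(prob, Rodas5())
```

Timing the first `solve` at increasing `n` makes the superlinear growth of the compile cost visible well before it becomes a hang.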
The Argument: This inconsistency suggests that the symbolic expression swell MTK generates for densely coupled systems produces an AST large enough to overwhelm the LLVM backend. A dense n-dimensional system has an n×n symbolic Jacobian with O(n²) nonzero entries, so a 200-dim dense model can emit far more generated code than a 2000-dim sparse one.
- Compilation vs. Runtime: While MTK aims for fast runtime, the "compilation wall" for dense systems makes the workflow unusable. The time saved during integration is irrelevant if the function takes 30+ minutes just to load.
- Dimensionality is a Poor Metric: Users currently cannot predict from a model's size whether it will compile. A smaller, densely coupled model is significantly harder for MTK to "load" than a larger, sparse one.
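One crude way to estimate compile cost before committing to it: `ModelingToolkit.calculate_jacobian` returns the symbolic Jacobian without generating or compiling any Julia code, so its size can serve as a rough proxy. The sketch below uses a tiny stand-in system, the printed expression length as the complexity metric, and an entirely arbitrary cutoff; none of this is an official MTK heuristic.

```julia
using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as D

# Small stand-in system; replace with the real model.
@variables x(t) y(t) z(t)
eqs = [D(x) ~ -x + sin(y) * z,
       D(y) ~ x * z - y,
       D(z) ~ x * y - z]
@named sys = ODESystem(eqs, t)
sys = structural_simplify(sys)

# Builds the symbolic Jacobian WITHOUT code generation or LLVM compilation,
# so this step is cheap relative to compiling the generated function.
J = ModelingToolkit.calculate_jacobian(sys)

# Crude complexity proxy: total printed length of all entries.
approx_size = sum(length(string(expr)) for expr in J)
nonzeros = count(!iszero, J)
println("Jacobian $(size(J)), $nonzeros structural nonzeros, ~$approx_size chars")

# Hypothetical threshold, purely for illustration.
use_symbolic_jac = approx_size < 100_000
```

A check like this, run before `ODEProblem(...; jac = true)`, would at least let a user decide up front whether symbolic Jacobian generation is worth attempting.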
Conclusion: The performance of the symbolic compilation path is currently a “black box.” We need a more robust way to handle or detect when symbolic expansion becomes a liability. Is there a plan to make the generated function overhead more predictable, or to provide a stable fallback when the symbolic Jacobian size exceeds a reasonable complexity threshold?
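Until such a fallback exists, the workaround I am aware of is to skip symbolic Jacobian generation entirely: leave `jac` at its default of `false` so the stiff solver differentiates the RHS numerically (via ForwardDiff) at runtime instead of compiling a symbolic Jacobian. A minimal sketch, again assuming the MTK v9 API and a stand-in densely coupled system:

```julia
using ModelingToolkit, OrdinaryDiffEq
using ModelingToolkit: t_nounits as t, D_nounits as D

n = 20
@variables u(t)[1:n]
u = collect(u)
eqs = [D(u[i]) ~ -u[i] + sum(sin(u[j]) for j in 1:n if j != i) for i in 1:n]
@named sys = ODESystem(eqs, t)
sys = structural_simplify(sys)
u0 = [u[i] => 0.1 for i in 1:n]

# No `jac = true`: no symbolic Jacobian is generated or compiled. The stiff
# solver instead builds the Jacobian with automatic differentiation of the
# (much smaller) RHS function at each step.
prob = ODEProblem(sys, u0, (0.0, 1.0))
sol = solve(prob, Rodas5(autodiff = true))
```

This trades some per-step runtime for a bounded compile time, which is exactly the trade-off a built-in complexity threshold could make automatically.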