ModelingToolkit.jl performance for large models with similar components

I cannot make sense of the conflicting information about the performance of ModelingToolkit.jl for very large models. On the one hand, it appears to be performant, for example according to the workshop at JuliaCon. On the other hand, there seem to be memory issues that lower performance (worse than, for example, OpenModelica).
I have also heard that it might be better to simply use the plain function interface of DifferentialEquations.jl directly and call the same function many times when there are a large number of similar model components.
Can somebody bring me up to date? Is there even a general answer to this, or does it depend strongly on the model structure? My application is a large system of sparse DAEs. I'd like to know whether it is worth trying to implement it in MTK.
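For reference, by the "plain function" approach I mean something like the following hand-rolled sketch, where the component model, names, and sizes are purely illustrative:

```julia
# Hand-rolled alternative to MTK: write the RHS directly for
# DifferentialEquations.jl and loop over N identical components,
# so the same code is reused instead of being unrolled symbolically.
using DifferentialEquations

function components_rhs!(du, u, p, t)
    k = p[1]
    for i in eachindex(u)        # the same component model, applied N times
        du[i] = -k * u[i]        # illustrative first-order component dynamics
    end
end

u0 = ones(1000)                  # N = 1000 similar components
prob = ODEProblem(components_rhs!, u0, (0.0, 1.0), [0.5])
sol = solve(prob, FBDF())        # stiff solver suited to large systems
```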
Thanks :slight_smile:

The largest system we’ve done so far has been around 50,000 DAEs. The runtime is pretty good; compile time is a bit stuck in LLVM, but we have some pretty major overhauls in progress.

Into the thousands of equations, we do extremely well. That was shown in OpenModelica.jl: A modular and extensible Modelica compiler framework in Julia targeting ModelingToolkit.jl | Modelica Conferences

It’s once you get above tens of thousands that registers are overloaded, etc., and Yingbo, Keno, Shuhei, and others are working on that.

The tests in those papers aren’t great right now. For example, they use Rodas5 for large systems, even though the docs say not to use it above roughly 100 ODEs and to use FBDF instead. The issue they see with the auto-switching methods is just that it was generating a dense Jacobian by default instead of a sparse one, and generating it eagerly for the potential switch instead of lazily. Etc. I’ll be flying out to talk to them later tonight though, so it’s in motion.
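To make that concrete, here is a minimal sketch of the setup being described: asking MTK to symbolically generate a sparse Jacobian and choosing FBDF explicitly rather than relying on an auto-switching dense default (the toy system is just for illustration):

```julia
using ModelingToolkit, DifferentialEquations

@variables t x(t) y(t)
D = Differential(t)
@named sys = ODESystem([D(x) ~ -1000x + y,
                        D(y) ~ x - y], t)
sys = structural_simplify(sys)

# jac = true, sparse = true makes MTK build a symbolic *sparse* Jacobian
# instead of the dense default that the auto-switch methods were hitting.
prob = ODEProblem(sys, [x => 1.0, y => 0.0], (0.0, 1.0);
                  jac = true, sparse = true)
sol = solve(prob, FBDF())   # recommended over Rodas5 for large stiff systems
```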

Until we get loop rerolling, one thing to note is that MTK works with serialization, so you can serialize the compiled code and just reuse it between sessions. We use that quite a bit.
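The pattern is just the standard library Serialization applied to the generated problem; a rough sketch (the file name and two-session workflow are illustrative):

```julia
using ModelingToolkit, DifferentialEquations, Serialization

# First session: pay the symbolic codegen cost once...
@variables t x(t)
D = Differential(t)
@named sys = ODESystem([D(x) ~ -x], t)
sys = structural_simplify(sys)
prob = ODEProblem(sys, [x => 1.0], (0.0, 10.0))
serialize("prob.jls", prob)        # save the generated problem to disk

# Later session: reload instead of regenerating the code.
prob2 = deserialize("prob.jls")
sol = solve(prob2, FBDF())
```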

Kind of; it depends on how much structural_simplify will do. If it’s eliminating equations to improve numerical stability, then MTK would do a lot better at runtime. You may also need ODAEProblem, etc. At this point it depends on what you’re doing: manual loop rerolling can help scaling, but MTK does a lot of other things that are helpful.
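As a sketch of what structural_simplify and ODAEProblem do in that workflow (a toy DAE with illustrative names):

```julia
using ModelingToolkit, DifferentialEquations

@variables t x(t) y(t)
@parameters k
D = Differential(t)

# One differential equation plus one algebraic constraint: a small DAE.
eqs = [D(x) ~ -k*x + y,
       0 ~ x + y - 1]
@named sys = ODESystem(eqs, t)
simplified = structural_simplify(sys)   # eliminates the algebraic equation

# ODAEProblem solves the reduced ODE form of the simplified DAE.
prob = ODAEProblem(simplified, [x => 0.0], (0.0, 1.0), [k => 2.0])
sol = solve(prob, Tsit5())
```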

But the main thing is, the rerolling is coming, and we are using a 1-million-equation set as our benchmark; it’s just taking a bit to get the new tooling together.


Thanks for the prompt reply. That sounds promising.
Can you give a rough estimate of when rerolling will be available in MTK (6, 12, 18 months)? I couldn’t find related open issues. The background is that I’m the advocate for Julia in my group, which is otherwise dominated by OpenModelica, and I need to know when I can “deliver” competitive results for very large systems.
The other issues you mention (register overloading, etc.) are not directly connected to MTK but are general Julia compiler issues, as I understand it. But I think they become less critical once rerolling is used.

It’s on the 6-12 months timeframe.


One other thing to note is that if you want optimal performance for big functions in Julia at the moment, you will be served much better by the nightly releases than by the stable releases. There has been a lot of work recently to improve compile times for large code.