What are the advantages of using ModelingToolkit to make and solve ODEProblems?

For context, I come from an R background using the deSolve package.

The first examples for ModelingToolkit use a specific model-definition structure built around the @parameters, @variables, and @named macros. The general workflow seems to be:

  1. define @parameters and @variables
  2. define the time differential D = Differential(t)
  3. define a @named ODESystem of equations involving [D(x) ~ ...]
  4. convert the ODESystem into an ODEProblem with the non-symbolic parameters and state variables included
  5. solve the problem

My question is: what are the advantages of this workflow? Below are two versions of the same code (adapted from the ODE problem library).

Firstly the macro-y approach:

using ModelingToolkit, DifferentialEquations

function sim_lotkavolterra1()

    # symbolic parameters, state variables, and time differential
    @parameters t p[1:4]
    @variables x(t) y(t)
    D = Differential(t)

    # Lotka-Volterra equations in symbolic form
    lv = [
        D(x) ~ p[1] * x - p[2] * x * y,
        D(y) ~ p[4] * x * y - p[3] * y
        ]

    @named sys = ODESystem(lv)

    u0 = [1.0, 1.0]
    tspan = (0.0, 1.0)
    p = [1.5, 1.0, 3.0, 1.0]

    # codegen: the symbolic system is turned into a numerical ODEProblem here
    prob = ODEProblem(sys, u0, tspan, p)

    return solve(prob)
end;

The benchmark yields:

 Range (min … max):  1.392 ms …  11.593 ms  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     1.703 ms               ┊ GC (median):    0.00%
 Time  (mean ± σ):   2.038 ms ± 984.789 μs  ┊ GC (mean ± σ):  2.33% ± 6.36%

  █▆▅▄▄▄▄▄▄▃▃▂▂▁▂▁▁▁▁                                         ▁
  █████████████████████████▇▆█▇█▆▆▇▆▆▄▆▅▄▁▃▄▄▃▁▃▃▁▅▁▁▁▃▃▁▃▁▁▄ █
  1.39 ms      Histogram: log(frequency) by time       6.5 ms <

 Memory estimate: 371.75 KiB, allocs estimate: 6197.

If I use the more “deSolve-y” approach:

function sim_lotkavolterra2()

    # in-place derivative function, analogous to a deSolve model function
    function lv(du, u, p, t)
        x, y = u
        du[1] = p[1] * x - p[2] * x * y
        du[2] = p[4] * x * y - p[3] * y
    end

    u0 = [1.0, 1.0]
    tspan = (0.0, 1.0)
    p = [1.5, 1.0, 3.0, 1.0]

    prob = ODEProblem(lv, u0, tspan, p)

    return solve(prob)
end;

The @benchmark yields:

 Range (min … max):  25.800 μs …   6.690 ms  ┊ GC (min … max): 0.00% … 97.68%
 Time  (median):     28.800 μs               ┊ GC (median):    0.00%
 Time  (mean ± σ):   33.048 μs ± 103.203 μs  ┊ GC (mean ± σ):  5.28% ±  1.70%

  ▇█▆█▇▅▄▃▃▂▁▁            ▁▁                                   ▂
  █████████████▇▆▆▇▇█▇█▇▇█████▇▇▇▆▅▇▆▆▆▆▅▆▆▆▅▆▆▆▆▅▅▄▄▅▅▄▅▅▅▄▄▄ █
  25.8 μs       Histogram: log(frequency) by time      79.9 μs <

 Memory estimate: 18.19 KiB, allocs estimate: 261.

The second approach is about 60x faster and 20x more memory-efficient. Maybe I made a mistake, or perhaps there is some advantage to the former approach that I'm not understanding? It also seems to take time to make things symbolic, so maybe there are analytical reasons for it, e.g. computing something exactly rather than numerically?

In short: Maybe the first slide of this talk (and the rest if you like) answers your question best: Causal vs Acausal Modeling By Example: Why Julia ModelingToolkit.jl Scales (Chris Rackauckas)


To answer your question directly: if you want to solve a particular (not too complicated) ODE, then using DifferentialEquations.jl directly is usually the fastest approach, provided you use it right. (For people who don't know much Julia, ModelingToolkit might be a convenient way to implement an ODE without risking some of the typical allocation/global-variable mistakes.)

However, one should maybe differentiate between the total runtime and the time needed for the solve command. I would bet that both are equally fast for the solve call in your example, since both will end up with essentially the same ODEProblem.
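To illustrate what I mean, here is a rough sketch of how one could time only the solve calls, with both problems constructed up front (illustrative code, not your original benchmark; names like prob_mtk, prob_hand, and lv! are just for this sketch):

    using ModelingToolkit, DifferentialEquations, BenchmarkTools

    @parameters t p[1:4]
    @variables x(t) y(t)
    D = Differential(t)
    @named sys = ODESystem([D(x) ~ p[1] * x - p[2] * x * y,
                            D(y) ~ p[4] * x * y - p[3] * y])

    # hand-written in-place RHS
    function lv!(du, u, p, t)
        du[1] = p[1] * u[1] - p[2] * u[1] * u[2]
        du[2] = p[4] * u[1] * u[2] - p[3] * u[2]
    end

    u0 = [1.0, 1.0]
    tspan = (0.0, 1.0)
    ps = [1.5, 1.0, 3.0, 1.0]

    prob_mtk  = ODEProblem(sys, u0, tspan, ps)   # symbolic codegen happens here, once
    prob_hand = ODEProblem(lv!, u0, tspan, ps)

    @btime solve($prob_mtk);   # should be comparable to ...
    @btime solve($prob_hand);  # ... this, since both are now plain ODEProblems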

The idea behind ModelingToolkit is to construct models on a different level, not as plain functions and values, but as mathematical objects that can be further manipulated, combined, and optimized. The extra freedom of the symbolic representation can be worth it if the generated code is optimized both mathematically (e.g. index reduction) and in terms of implementation details (parallelism, SIMD, …).
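As a small illustration of what that symbolic manipulation can buy you, here is a minimal sketch (the exact output of structural_simplify may vary by ModelingToolkit version) in which an algebraic bookkeeping equation is moved into the observed equations instead of being solved:

    using ModelingToolkit

    @parameters t k1 k2
    @variables A(t) B(t) C(t) total(t)
    D = Differential(t)

    eqs = [D(A) ~ -k1 * A,
           D(B) ~ k1 * A - k2 * B,
           D(C) ~ k2 * B,
           total ~ A + B + C]   # algebraic relation, not a "real" unknown

    @named sys = ODESystem(eqs)
    simp = structural_simplify(sys)

    equations(simp)   # only the differential equations are actually solved
    observed(simp)    # total(t) ~ A(t) + B(t) + C(t) is reconstructed on demand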

Essentially it depends on your application: if you can use the symbolic approach at some point down the road, the initial computational cost might pay off in the end (either in runtime or in less time spent coding repetitive things).

7 Likes

@SteffenPL's answer is a good one, but I'll expand on it.

There are all sorts of reasons, but they essentially boil down to this: the symbolic approach (ModelingToolkit) is a model-building approach, and it always generates DifferentialEquations.jl code in the end. ModelingToolkit, though, has many ways to manipulate the model before generating that code, including:

  1. Code optimizations (tearing of nonlinear systems, alias elimination)
  2. Building models acausally using high-level connector semantics (https://www.youtube.com/watch?v=ZYkojUozeC4) and pre-built physical component libraries (GitHub - SciML/ModelingToolkitStandardLibrary.jl: A standard library of components to model the world and beyond)
  3. Transformations that improve the numerical stability of solvers, such as automated index reduction of DAEs (Automated Index Reduction of DAEs · ModelingToolkit.jl); a sketch is shown after this list
  4. Transformations into simpler problem types (GitHub - augustinas1/MomentClosure.jl: Tools to generate and study moment equations for any chemical reaction network using various moment closure approximations)
  5. Automatic removal and handling of equations that don't necessarily need to be solved (observed equations). As mentioned at JuliaCon, this is also getting better in the near future with GitHub - x3042/ExactODEReduction.jl: Exact reduction of ODE models via linear transformations.
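To make point 3 concrete, here is a minimal sketch of index reduction on the classic Cartesian pendulum (illustrative; dae_index_lowering and structural_simplify are the relevant transformations, and the exact API can vary between ModelingToolkit versions):

    using ModelingToolkit

    @parameters t L g
    @variables x(t) y(t) vx(t) vy(t) T(t)
    D = Differential(t)

    # Cartesian pendulum: an index-3 DAE as written
    eqs = [D(x) ~ vx,
           D(y) ~ vy,
           D(vx) ~ T * x,
           D(vy) ~ T * y - g,
           0 ~ x^2 + y^2 - L^2]

    @named pendulum = ODESystem(eqs, t, [x, y, vx, vy, T], [L, g])

    # Pantelides index reduction plus tearing turn it into a form that a
    # standard ODE or mass-matrix solver can handle well
    pendulum_sys = structural_simplify(dae_index_lowering(pendulum))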

It also allows you to calculate symbolic quantities like sensitivity matrices, compute structural identifiability to know whether your data is sufficient for parameter estimation (Parameter Identifiability in ODE Models · ModelingToolkit.jl), and the list keeps growing. So basically, if you want to do a lot more with your model, ModelingToolkit is a way to do that.

But okay, are there any reasons to do this for simple differential equations? Well, there's the parameter identifiability stuff etc., but if you're just talking about simulation, the DAE benchmarks are dominated by approaches which use MTK to transform the model before solving.

You can see from those benchmarks (and we do need to add more) that the MTK-modified versions, which then use ODAEProblem or mass matrices, dominate the fully implicit formulations even though IDA is a good solver. We see this time and time again: MTK for DAEs is pretty much the way to go.

But okay, what if you have a simple, straightforward ODE, you don't want to transform your model, and you just want to simulate? Well, in many cases ModelingToolkit may still be an easier way to get performance. I mentioned this in the first video on ModelingToolkit:

In that video you'll find examples of how it's used as a code optimizer. In basic terms:

If you write the code yourself, you have to worry about code performance issues (allocations, type instabilities). If ModelingToolkit writes the code, you only have to worry about the model being created, since it performs the codegen.

Yes, you can probably beat its codegen in some cases. I know how to do it, but many people who use Julia don't, so we've found most use cases perform better by using it as the codegen step.

[One of the ways I fix people's code is sometimes simply to run modelingtoolkitize(prob) to transform their manual code into a symbolic system, and then regenerate the ODEProblem from that.]
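Roughly, that looks like the following sketch (illustrative code; the jac = true keyword and the empty u0 rely on defaults captured by modelingtoolkitize and may vary by version):

    using ModelingToolkit, DifferentialEquations

    # the hand-written problem from above
    function lv(du, u, p, t)
        x, y = u
        du[1] = p[1] * x - p[2] * x * y
        du[2] = p[4] * x * y - p[3] * y
    end
    prob = ODEProblem(lv, [1.0, 1.0], (0.0, 1.0), [1.5, 1.0, 3.0, 1.0])

    sys = modelingtoolkitize(prob)      # trace the numeric code into a symbolic ODESystem
    prob2 = ODEProblem(sys, [], (0.0, 1.0), jac = true)   # regenerate via MTK codegen
    sol = solve(prob2)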

As a fun side note, GPU-accelerating R ODE solves is done by translating the R code to ModelingToolkit.

Now, your benchmark above.

Note that almost all of what you're measuring there is the compilation process. ModelingToolkit itself is a compiler run at runtime: it takes symbolic expressions and generates code from them. The codegen happens at the prob = ODEProblem(sys, u0, tspan, p) expression. If you think of that as the act of generating the code:

    function lv(du, u, p, t)
        x, y = u
        du[1] = p[1] * x - p[2] * x * y
        du[2] = p[4] * x * y - p[3] * y
    end

    u0 = [1.0, 1.0]
    tspan = (0.0, 1.0)
    p = [1.5, 1.0, 3.0, 1.0]

    prob = ODEProblem(lv, u0, tspan, p)

This step is non-trivial and will cause overhead. We tell people that if they really need top performance, they should move any codegen out of the loop and only generate the model's code once. More specifically, it's covered here in the FAQ:
https://docs.sciml.ai/ModelingToolkit/stable/basics/FAQ/#Using-ModelingToolkit-with-Optimization-/-Automatic-Differentiation

We do have some caching so it's not the worst thing in the world, and we're continually improving this time, but as always, not recompiling is going to be better than recompiling.
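As a sketch of what "moving the codegen out of the loop" can look like (illustrative; remake with a plain parameter vector assumes a version where parameters are stored as such, and newer versions prefer symbolic parameter maps):

    using ModelingToolkit, DifferentialEquations

    @parameters t p[1:4]
    @variables x(t) y(t)
    D = Differential(t)
    @named sys = ODESystem([D(x) ~ p[1] * x - p[2] * x * y,
                            D(y) ~ p[4] * x * y - p[3] * y])

    # codegen happens once, outside the loop
    prob = ODEProblem(sys, [1.0, 1.0], (0.0, 1.0), [1.5, 1.0, 3.0, 1.0])

    # inside the loop only cheap numerical updates remain
    sols = map(1.0:0.5:3.0) do a
        solve(remake(prob, p = [a, 1.0, 3.0, 1.0]))
    end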

Now what if you prefer to just write the model code? That's great, do it. We maintain DifferentialEquations.jl as a set of fast ODE solvers that take in Julia functions; it's well-documented for that purpose, so use it like that. But we are also putting a lot of effort into the symbolic interfaces for the reasons documented above, and whether a use case will do better with one interface or the other is a big "it depends".

12 Likes

Thanks for all the info Chris and Steffen!

I understand this a bit better now and I think my general usage of the package is just so simple as to not require all of the extra functionality that it has to offer.

It allows you to calculate symbolic things like sensitivity matrices

Is the symbolic approach required for computing the Jacobian matrix?

I would bet that both are equally fast for the solve call in your example, since both will end up with essentially the same ODEProblem.

Note that almost all of what you’re measuring there is the compilation process

Yes, this was intentional - I am generally running lots of versions of basically the same problem, but with different parameters and numbers of state variables. The slowest part by far is making the ODEProblem when there are different numbers of states. On that note, remake() was an absolute game-changer to learn about, so thank you so much for that. So many wasted simulation hours before that…

More specifically, it’s here in the FAQ:
https://docs.sciml.ai/ModelingToolkit/stable/basics/FAQ/#Using-ModelingToolkit-with-Optimization-/-Automatic-Differentiation

I mentioned this in the first video on ModelingToolkit:

Very stupid, I know, but I have actually only ever looked at the Documentation and Examples sections. I should've thought of the FAQs or YouTube videos. My search for answers tends to begin with ?function and end with StackOverflow…

As a fun side note, GPU accelerating R ODE solves is done by translating the R code to ModelingToolkit

The interconnectedness of R, Python, Julia, etc. is very cool - everything seems to be cross-executable and capable of drawing on each other's functions now.

If you can think of a good place for us to mention this in the ?ODEFunction or something, please let us know. Discoverability can always be improved, and those who already know something are the worst at knowing where people will look for it.

It's not required, but it can be helpful. Symbolic sparse Jacobians can be faster than using automatic differentiation approaches, but also just being able to look at the Jacobian (ModelingToolkit automatically generates LaTeX expressions) can be helpful for analyzing a model.
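For example, a minimal sketch (assuming Latexify is loaded for the LaTeX output; exact keyword support may vary by version):

    using ModelingToolkit, Latexify

    @parameters t a b c d
    @variables x(t) y(t)
    D = Differential(t)
    @named sys = ODESystem([D(x) ~ a * x - b * x * y,
                            D(y) ~ d * x * y - c * y])

    J = calculate_jacobian(sys)   # symbolic 2x2 Jacobian of the right-hand side
    latexify(J)                   # LaTeX form for inspection

    # jac = true attaches the symbolically generated Jacobian to the problem,
    # which implicit solvers can use instead of finite differences / AD
    prob = ODEProblem(sys, [x => 1.0, y => 1.0], (0.0, 1.0),
                      [a => 1.5, b => 1.0, c => 3.0, d => 1.0], jac = true)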

3 Likes

Is the preferred method of doing this to go to the Docs GitHub and make a new documentation-type issue there?

Anywhere is fine. We can always triage it and move issues around as needed. The ModelingToolkit.jl repository (GitHub - SciML/ModelingToolkit.jl) would be the right spot for this.

Well, one big advantage for me when using ModelingToolkit is that you can linearise your model from any input to any output around any operating point… Not sure if that is possible without ModelingToolkit…
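For reference, a minimal sketch of that using ModelingToolkit's linearize function (the input annotation syntax may differ slightly between versions):

    using ModelingToolkit

    @variables t x(t)=0 y(t)=0 u(t)=0 [input = true]
    D = Differential(t)

    eqs = [D(x) ~ -x + u,   # first-order system: time constant 1, gain 2
           y ~ 2x]
    @named sys = ODESystem(eqs, t)

    # state-space matrices of the linearization from input u to output y
    matrices, simplified_sys = linearize(sys, [u], [y])
    matrices.A, matrices.B, matrices.C, matrices.D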