DiffEqs: Hybrid (continuous+discrete) system / periodic callback

I want to model the combination of a physical (and thus continuous-time) system with a (discrete-time) controller. The controller algorithm is executed, say, every second. The output of the controller (which thus changes every second) is the input to the physical system.

I tried the straightforward way: model the physical system as an ODE, then run a loop that evaluates the controller, changes the parameters of that model (which represent the controller output), and runs the simulation for one second. The problem is that the initialisation phase of the solve command takes about 5 ms, whereas simulating the physical system over 30 minutes of simulated time takes around 60 ms in total. So if I call solve 1800 times (once per simulated second), the initialisation alone already takes 9 s.
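
For reference, a minimal sketch of that loop-of-solves approach – the plant f and the controller law my_controller here are hypothetical stand-ins, not the original model:

using DifferentialEquations

f(x, p, t) = p .- x          # stand-in plant: unit first-order lag
my_controller(x) = 1 - x[1]  # hypothetical controller law

function simulate_naive(x0, n_seconds)
    x, p = x0, 0.0
    for k in 0.0:(n_seconds - 1)
        p = my_controller(x)                      # evaluate the controller
        prob = ODEProblem(f, x, (k, k + 1.0), p)  # rebuild the problem...
        sol = solve(prob)                         # ...and pay the init cost again
        x = sol.u[end]                            # carry the state into the next second
    end
    return x
end

simulate_naive([0.0], 1800)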

What would be a good way to implement this? Basically, it would have to be something like tstops every second, but then pausing the solver and letting the discrete function (the controller algorithm) execute and change parameters and/or some state variables with zero autonomous dynamics. Is a DiscreteCallback the way to go?

Yup, tstops + DiscreteCallback is a good way to do it. Or just use the integrator interface.
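
For instance, a sketch of the integrator-interface variant (stand-in plant and hypothetical controller as in the loop sketch above; init and step! are the relevant entry points):

using DifferentialEquations

f(x, p, t) = p .- x  # stand-in plant
prob = ODEProblem(f, [0.0], (0.0, 1800.0), 0.0)
integrator = init(prob, Tsit5())  # initialize once, not once per second

for _ in 1:1800
    integrator.p = 1 - integrator.u[1]  # hypothetical controller update
    step!(integrator, 1.0, true)        # advance exactly 1 s, then return control
end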


Just found out about PeriodicCallback – seems to be exactly what I’m looking for…?
Edit: yes, that’s exactly it. Can even modify the parameters; awesome.


Hrm, one more thing though: CVODE_BDF works exceptionally well for my problem – it’s like 10x faster than any other solver. But it does not implement the callback interface, right?

It does implement the callback interface.

(If you’d like to donate this as a test problem in the future, to help us keep improving benchmarks, we would appreciate it!)


Here’s a “minimal-as-possible” example:

using DifferentialEquations
using Plots

f(x, p, t) = p .- x # the system ("unit first-order lag")
ctrl_f(e, x) = 0.85(e + 1.2x) # the controller (proportional-integral)
ctrl_x = [0.0] # array to record (discrete) state of controller

function ctrl_cb(int) # the callback function
    e = 1 - int.u[1] # control error
    push!(ctrl_x, ctrl_x[end] + e) # integrate the control error
    int.p = ctrl_f(e, ctrl_x[end]) # update controller output
end

pcb  = PeriodicCallback(ctrl_cb, 1.0)
prob = ODEProblem(f, [0.0], (0.0, 8.0), 0.0, callback=pcb)
sol  = solve(prob)

plot(sol, label="system")

ctrl_t = 0:(length(ctrl_x)-2)
plot!(ctrl_t, ctrl_f.(1 .- sol(ctrl_t, idxs=1).u, ctrl_x[2:end]),
    color=:black, linetype=:steppost, label="controller")

(Plot: the system response, with the stepped controller output overlaid.)

When using CVODE_BDF() as the solver, the callback is never invoked (and thus ctrl_x stays at [0.0]), and the system stays flat at 0.0…


I cannot donate the original problem, sorry – it comprises quite a lot of critical know-how from my job. I can try to come up with a generic example that has similar “numerical features” to the original one, but that will take some time. Basically, the majority of the problem’s state variables stem from 1D approximations of flows, both by “sequence of volumes” and by spectral methods. Right now autodiff fails with the built-in stiff solvers, but that’s another point I have to look at.

BTW, today I played around with pretty much every solver. TRBDF2 for low accuracy, and KenCarp4 or Cash4 for higher accuracy, work well – all within a factor of 2 of CVODE_BDF.
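
For reference, trying these is just a matter of passing the solver to solve – a sketch with illustrative tolerances, using the prob from the example above:

sol = solve(prob, TRBDF2(); abstol=1e-4, reltol=1e-4)   # low accuracy
sol = solve(prob, KenCarp4(); abstol=1e-6, reltol=1e-6) # higher accuracy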

1 Like

Sundials doesn’t initialize its callbacks. That should get changed.

Callback initialization was added to Sundials master, @asprionj. I’m just going to finish up its progress bar support and then tag a release.

Awesome! :+1:

Tested the use case here with CVODE_BDF() from the latest Sundials release, 3.5.0, and it works. Thanks again, Chris! I’ll apply it to the original problem tomorrow and see how the performance compares to the native solvers. (Larger, computationally intensive model, and a much longer simulation.)
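
For the toy example above, that just means swapping in the Sundials solver (assuming Sundials.jl ≥ 3.5.0 is installed):

using Sundials

sol = solve(prob, CVODE_BDF()) # the PeriodicCallback now fires as expected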

I’m curious: did you consider Modia, and if so, what made you decide against using it for this application? It sounds like your application is exactly the kind of thing Modia aims to handle, but I have not tried it myself yet, and I would be interested to know whether you looked elsewhere due to a lack of features or documentation, etc.

Yes, I did consider it, but did not invest that much time. Frankly, I never could warm to Modelica – I don’t know the exact reason. For this specific task/project, having yet another language/environment besides the main language for everyday technical computing was something I did not want, and Modelica is quite different from a “normal” programming language, so not that quick to learn.

OK, Modia is written in Julia, but it still adheres to Modelica’s philosophy (which, again, is perfectly fine in itself) and provides a single way of doing things (e.g. always using the IDA solver, or PyPlot for plotting). Also, it was (and still is) in a somewhat “experimental” state and does not provide any systematic documentation (AFAIK). All of this is no problem for a new, mostly community-driven project trying out new ideas… but it’s not something you’d use when you need to solve a real-world problem as fast as possible and have to stay flexible to adapt to whatever may get in your way. So I decided to just use DiffEqs as the base simulation framework and build a minimal “framework” around it, providing exactly (and only) what I needed (see this post).

In that post I also mention that ModelingToolkit.jl seems to be a very promising approach to “unify” such modeling frameworks by providing a “core engine”.


Did this now. CVODE_BDF is about 5x slower than Cash4 and KenCarp4 on the original hybrid problem at medium accuracies (tolerances 1e-4 to 1e-6). It seems that either

  • enforcing a stop every second (thus limiting the max. step size) and/or
  • introducing a discontinuity (the step change of the control signal) each second, thus exciting the system’s stiffness,

does CVODE_BDF no good. It is, however, rather insensitive to increasing the accuracy, i.e. lowering the tolerance, whereas the other two take around 10x longer when going from 1e-6 to 1e-8, and thus end up around 2x slower than CVODE_BDF. So the limiting of the step size seems to be the main showstopper for CVODE_BDF.

Since 1e-4 is already sufficient for my application/model/use case, I’ll probably stick with said native solvers for hybrid-system simulation.

Yes, that is definitely the case. BDF methods are multistep methods, so a k-th order method needs the k previous steps. If you keep introducing discontinuities, then it keeps going back to implicit Euler and shrinking the time steps to accommodate the low order. This is demonstrated as an example in one of our papers as to why no single integrator is perfect for all cases.
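
A quick way to observe this restart behaviour is to inspect the accepted step sizes around the discontinuities – a sketch reusing sol from the toy example above (any adaptive solver shows the effect; multistep methods suffer from it the most):

dts = diff(sol.t)  # accepted step sizes (duplicate save points at callbacks show up as 0.0)
for (t, dt) in zip(sol.t, dts)
    println("t = ", round(t, digits=3), "   dt = ", round(dt, sigdigits=3))
end
# expect the steps to shrink right after each 1-second parameter jump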

Makes sense… nice paper – I could only skim it so far, but Fig. 2 illustrates exactly the problem at hand. I’ll give the Rodas* solvers a shot as well :wink:

BTW, would there be a way to enforce some fixed (low) number of steps per interval between discrete events? (Once it has been experimentally determined how many usually suffice.) How large could the performance benefits be?

Tstops. It shouldn’t make it perform better, just worse.
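
For completeness, a sketch of what forcing extra stops would look like on the toy problem above (four forced stops per second; per the answer above, expect this to cost rather than save time):

sol = solve(prob, Tsit5(); tstops=0:0.25:8.0) # solver must hit every quarter second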