DiffEqs: Hybrid (continuous+discrete) system / periodic callback

Yeah, no worries. I thought I’d lay it all out there because you seem fairly interested in DEDataArrays, so it’s probably good for me to write down somewhere why I think they may not be a long-term solution. But I don’t have a better long-term solution, and maybe a simple change is all that’s needed to fix 99.99% of cases, so I’ll probably play around with those ideas I proposed tonight. I really wanted this text somewhere because after I do this PR, I assume I’ll need a post to point to in order to explain to future contributors why it’s so crucial that interpolation uses similar on the right end point and not the left, even though it overwrites all of the values. Writing generic code is fun :wink:


You’re too strict with your typing. Always be careful with types if you want AD to work. Here, AD only occurs on the continuous variables, so the types of the continuous and discrete variables should be separated:

mutable struct CtrlSimTypeScalar{T,T2} <: DEDataVector{T}
    x::Vector{T}
    ctrl_x::T2 # controller state
    ctrl_u::T2 # controller output
end
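As a hypothetical sketch of why this separation matters, assuming ForwardDiff as the AD backend: during differentiation, only the continuous state gets promoted to Dual numbers, while the discrete controller fields can stay Float64.

using ForwardDiff

# Hypothetical sketch: with {T,T2}, only the continuous state is promoted.
d  = ForwardDiff.Dual(0.0, 1.0)
xd = CtrlSimTypeScalar([d], 0.0, 0.0) # T = Dual{...}, T2 = Float64: works
# With a single type parameter {T} shared by all fields, this constructor
# call would throw a MethodError, because ctrl_x and ctrl_u would be forced
# to be Duals as well.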

With this two-type-parameter struct, your test works now, and it also works with out-of-place and implicit methods:

function f(x,p,t)
    x.ctrl_u .- x
end
function f(dx,x,p,t)
    dx[1] = x.ctrl_u - x[1]
end

ctrl_f(e, x) = 0.85(e + 1.2x)

x0 = CtrlSimTypeScalar([0.0], 0.0, 0.0)
prob = ODEProblem{false}(f, x0, (0.0, 8.0))

function ctrl_fun(int)
    e = 1 - int.u[1] # control error

    # pre-calculate values to avoid overhead in iteration over cache
    x_new = int.u.ctrl_x + e
    u_new = ctrl_f(e, x_new)

    # iterate over cache...
    if DiffEqBase.isinplace(int.sol.prob)
        for c in full_cache(int)
            c.ctrl_x = x_new
            c.ctrl_u = u_new
        end
    end
end

integrator = init(prob, Tsit5())

# take 8 steps with a sampling time of 1s
Ts = 1.0
for i in 1:8
    ctrl_fun(integrator)
    DiffEqBase.step!(integrator, Ts, true)
end

sol = integrator.sol
@test [u.ctrl_u for u in sol.u[2:end]] == [sol(t).ctrl_u for t in sol.t[2:end]]

prob = ODEProblem{true}(f, x0, (0.0, 8.0))

integrator = init(prob, Rosenbrock23())

# take 8 steps with a sampling time of 1s
Ts = 1.0
for i in 1:8
    ctrl_fun(integrator)
    DiffEqBase.step!(integrator, Ts, true)
end

sol = integrator.sol
@test [u.ctrl_u for u in sol.u[2:end]] == [sol(t).ctrl_u for t in sol.t[2:end]]

with PRs:

https://github.com/SciML/DiffEqBase.jl/pull/481
https://github.com/SciML/OrdinaryDiffEq.jl/pull/1110


@asprionj your issues are now fixed on the latest release.


Awesome(ly quick), thanks! :thumbsup:

The out-of-place example doesn’t actually do anything: everything stays at 0.0, and ctrl_fun has no effect on the integrator.

which example?

The out-of-place test in the post to which I replied:

mutable struct CtrlSimTypeScalar{T,T2} <: DEDataVector{T}
    x::Vector{T}
    ctrl_x::T2 # controller state
    ctrl_u::T2 # controller output
end

function f(x,p,t)
    x.ctrl_u .- x
end

ctrl_f(e, x) = 0.85(e + 1.2x)

x0 = CtrlSimTypeScalar([0.0], 0.0, 0.0)
prob = ODEProblem{false}(f, x0, (0.0, 8.0))

function ctrl_fun(int)
    e = 1 - int.u[1] # control error

    # pre-calculate values to avoid overhead in iteration over cache
    x_new = int.u.ctrl_x + e
    u_new = ctrl_f(e, x_new)

    # iterate over cache...
    if DiffEqBase.isinplace(int.sol.prob)
        for c in full_cache(int)
            c.ctrl_x = x_new
            c.ctrl_u = u_new
        end
    end
end

integrator = init(prob, Tsit5())

# take 8 steps with a sampling time of 1s
Ts = 1.0
for i in 1:8
    ctrl_fun(integrator)
    DiffEqBase.step!(integrator,Ts,true)
end

sol = integrator.sol

# nothing happened
getfield.(sol.u,:x) == [[0.0] for i in 1:length(sol)]

I know it’s a test for the interpolation of the discrete variables, but what would make this a fully functional example?

Adding an else branch to ctrl_fun almost makes it work:

function ctrl_fun(int)
  e = 1 - int.u[1] # control error

  # pre-calculate values to avoid overhead in iteration over cache
  x_new = int.u.ctrl_x + e
  u_new = ctrl_f(e, x_new)

  # iterate over cache...
  if DiffEqBase.isinplace(int.sol.prob)
    for c in full_cache(int)
        c.ctrl_x = x_new
        c.ctrl_u = u_new
    end
  else
    int.u.ctrl_x = x_new
    int.u.ctrl_u = u_new
    u_modified!(int,true)
  end
end

but the effect is delayed by one second:
[plot “outofplace”: simulated out-of-place response, showing the delayed control effect]

mutable struct CtrlSimTypeScalar{T,T2} <: DEDataVector{T}
    x::Vector{T}
    ctrl_x::T2 # controller state
    ctrl_u::T2 # controller output
end

function f(x,p,t)
    x.ctrl_u .- x
end
function f(dx,x,p,t)
    dx[1] = x.ctrl_u - x[1]
end

ctrl_f(e, x) = 0.85(e + 1.2x)

x0 = CtrlSimTypeScalar([0.0], 0.0, 0.0)
prob = ODEProblem{false}(f, x0, (0.0, 8.0))

function ctrl_fun(int)
    e = 1 - int.u[1] # control error

    # pre-calculate values to avoid overhead in iteration over cache
    x_new = int.u.ctrl_x + e
    u_new = ctrl_f(e, x_new)

    # iterate over cache...
    if DiffEqBase.isinplace(int.sol.prob)
        for c in full_cache(int)
            c.ctrl_x = x_new
            c.ctrl_u = u_new
        end
    end
end

integrator = init(prob, Tsit5())

# take 8 steps with a sampling time of 1s
Ts = 1.0
for i in 1:8
    ctrl_fun(integrator)
    DiffEqBase.step!(integrator,Ts,true)
end

sol = integrator.sol
@test [u.ctrl_u for u in sol.u[2:end]] == [sol(t).ctrl_u for t in sol.t[2:end]]

passes

The test of the interpolation of ctrl_u passes, but the integration itself is incorrect, i.e.

@test all(u[1] == 0 for u in sol.u)

also passes, which is obviously not what we want. The function ctrl_fun in its current form does not update the integrator when the problem is out-of-place; what is the correct way to do that?

using OrdinaryDiffEq, Test

mutable struct CtrlSimTypeScalar{T,T2} <: DEDataVector{T}
    x::Vector{T}
    ctrl_x::T2 # controller state
    ctrl_u::T2 # controller output
end

function f(x,p,t)
    x.ctrl_u .- x
end
function f(dx,x,p,t)
    dx[1] = x.ctrl_u - x[1]
end

ctrl_f(e, x) = 0.85(e + 1.2x)

e = 1 - 0.0
x_new = 0.0 + e
x0 = CtrlSimTypeScalar([0.0], x_new, ctrl_f(e, x_new))
prob = ODEProblem{false}(f, x0, (0.0, 8.0))

function ctrl_fun(int)
    e = 1 - int.u[1] # control error

    # pre-calculate values to avoid overhead in iteration over cache
    x_new = int.u.ctrl_x + e
    u_new = ctrl_f(e, x_new)

    # iterate over cache...
    if DiffEqBase.isinplace(int.sol.prob)
        for c in full_cache(int)
            c.ctrl_x = x_new
            c.ctrl_u = u_new
        end
    else
        int.u = CtrlSimTypeScalar(int.u.x,x_new,u_new)
    end
    u_modified!(int,true)
end

integrator = init(prob, Tsit5())

# take 8 steps with a sampling time of 1s
Ts = 1.0
for i in 1:8
    ctrl_fun(integrator)
    DiffEqBase.step!(integrator,Ts,true)
end

sol = integrator.sol
@test [u.ctrl_u for u in sol.u[2:end]] == [sol(t).ctrl_u for t in sol.t[2:end]]
@test any(u[1] != 0 for u in sol.u)

using Plots
# extract solution, plot...
sol = integrator.sol
plot(sol, label="system")

# actual full steps, with steppre line-style ("correct")
plot!(sol.t, [u.ctrl_u for u in sol.u], linetype=:steppre, color=:black, marker=true, label="u[.].ctrl_u")

# interpolated control signal ("wrong")
t_ctrl = 0:0.01:sol.t[end]
plot!(t_ctrl, [sol(t).ctrl_u for t in t_ctrl], color=:red, label="u(t).ctrl_u")

is what you want, and at the same time, further highlights why you don’t want to do this.

Hey @ChrisRackauckas, are the methods discussed here the most efficient ones?

I benchmarked two code snippets from this thread in the code below, and they seem to have many more allocations than expected (more than 2000 for only 34 time steps), even though the integration and callback functions themselves have no allocations.

using DifferentialEquations
using BenchmarkTools

function normal_integration()
    f(x, p, t) = p .- x # the system ("unit first-order lag")
    ctrl_f(e, x) = 0.85(e + 1.2x) # the controller (proportional-integral)
    ctrl_x = [0.0] # array to record (discrete) state of controller

    function ctrl_cb(int) # the callback function
        e = 1 - int.u[1] # control error
        push!(ctrl_x, ctrl_x[end] + e) # integrate the control error
        int.p = ctrl_f(e, ctrl_x[end]) # update controller output
    end

    pcb  = PeriodicCallback(ctrl_cb, 1.0)
    prob = ODEProblem(f, [0.0], (0.0, 8.0), 0.0, callback=pcb)
    sol  = solve(prob)
end


## With DEDataArray

mutable struct CtrlSimTypeScalar{T} <: DEDataVector{T}
    x::Array{T,1}
    ctrl_x::T # controller state
    ctrl_u::T # controller output
end


function dedata_integration()
    function f(dx,x,p,t)
        dx[1] = x.ctrl_u - x[1]
    end

    ctrl_f(e, x) = 0.85(e + 1.2x)

    x0 = CtrlSimTypeScalar([0.0], 0.0, 0.0)
    prob = ODEProblem(f, x0, (0.0, 8.0))

    function ctrl_fun(int)
        e = 1 - int.u[1] # control error

        # pre-calculate values to avoid overhead in iteration over cache
        x_new = int.u.ctrl_x + e
        u_new = ctrl_f(e, x_new)

        # iterate over cache...
        for c in full_cache(int)
            c.ctrl_x = x_new
            c.ctrl_u = u_new
        end
    end

    integrator = init(prob, Tsit5()) # only Tsit5 seems to work

    # take 8 steps with a sampling time of 1s
    Ts = 1.0
    for i in 1:8
        ctrl_fun(integrator)
        step!(integrator,Ts,true)
    end
    integrator.sol
end


@btime normal_integration() #108.900 μs (2026 allocations: 185.73 KiB)
@btime dedata_integration() #160.600 μs (2140 allocations: 186.73 KiB)

It’s not going to be optimal because it’s using a mutable struct in a way that will allocate. ModelingToolkit.jl, with its removal of observables, will do better.
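(A generic Julia illustration, not specific to DEDataArrays: every fresh instance of a mutable struct is heap-allocated, and the solver’s save steps and cache updates create such instances.)

# Hypothetical illustration: mutable structs live on the heap, so every
# saved step that copies a CtrlSimTypeScalar-like object allocates.
mutable struct Counter
    n::Int
end
@allocated Counter(0) # nonzero: one heap allocation per instance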


The logging of the signals does not seem to be the time-consuming part. Modifying normal_integration() to just use a scalar for the controller state:

function no_log_integration()
...
    ctrl_x = 0.0 # state of controller (error integrator)

    function ctrl_cb(int) # the callback function
        e = 1 - int.u[1] # control error
        ctrl_x += e
        int.p = ctrl_f(e, ctrl_x) # update controller output
    end
...
end

On my machine, this gives:

@btime normal_integration() #137.600 μs (2044 allocations: 185.88 KiB)
@btime dedata_integration() #187.400 μs (2305 allocations: 205.55 KiB)
@btime no_log_integration() #140.499 μs (2077 allocations: 186.22 KiB)

Running the solve with save_on = false does not drastically reduce the runtime and allocations:

132.301 μs (1858 allocations: 164.41 KiB)

But overall, DEDataArray indeed seems to add quite some overhead.
How would the example here be implemented using ModelingToolkit.jl? It would be interesting to compare that as well (both the implementation itself and the performance).

Edit: @ChrisRackauckas, AFAIK there is no integration of such discrete events / systems in ModelingToolkit, correct? So MTK would just be used to model the “physical” / continuous part of the system, and then one of the above approaches would be used to “buckle” the discrete-time controller onto that model?

I think maybe a more natural way of mixing discrete and continuous systems than the above methods is to add the discrete state as an extra state equation to the physical system, like dctrl_x = 0, and then update ctrl_x and int.p using a PeriodicCallback. Then at least ctrl_x is also stored in sol.u, and you can use EnsembleProblems etc. with this system more easily.

This isn’t very modular though, as changing the number of states in the controller requires you to change the equations of the physical system.
But I guess ModelingToolkit might let you describe difference equations for the discrete systems at some point and do something like this automatically.

Adding additional states: wouldn’t this just result in the same overhead and the same rather cumbersome “update” as with DEDataArrays, i.e. having to loop over some cache?

@zekeriya.sari: How are discrete (sub-)systems integrated into the overall simulation in Causal.jl? You also rely on DiffEqs.jl for the solution, and allow multiple discrete subsystems with different sampling times, right?

ModelingToolkit.jl: Yes, I’d also really like to know whether there is…

  • Already a way of doing this,
  • Something planned, or
  • An idea of how this could / should be realized, as a starting point?

Yeah, you are right, it is indeed slower (see code below). But that might also partly be because, in this case, it changes a scalar ODE into a non-scalar ODE and requires indexing for the system state where it didn’t before.

And thank you @asprionj for making me aware of Causal.jl. I didn’t know it existed.

using DifferentialEquations
using BenchmarkTools

function normal_integration()
    f(x, p, t) = p .- x # the system ("unit first-order lag")
    ctrl_f(e, x) = 0.85(e + 1.2x) # the controller (proportional-integral)
    ctrl_x = [0.0] # array to record (discrete) state of controller

    function ctrl_cb(int) # the callback function
        e = 1 - int.u[1] # control error
        push!(ctrl_x, ctrl_x[end] + e) # integrate the control error
        int.p = ctrl_f(e, ctrl_x[end]) # update controller output
    end

    pcb  = PeriodicCallback(ctrl_cb, 1.0)
    prob = ODEProblem(f, [0.0], (0.0, 8.0), 0.0, callback=pcb)
    sol  = solve(prob)
end


## With DEDataArray

mutable struct CtrlSimTypeScalar{T} <: DEDataVector{T}
    x::Array{T,1}
    ctrl_x::T # controller state
    ctrl_u::T # controller output
end


function dedata_integration()
    function f(dx,x,p,t)
        dx[1] = x.ctrl_u - x[1]
    end

    ctrl_f(e, x) = 0.85(e + 1.2x)

    x0 = CtrlSimTypeScalar([0.0], 0.0, 0.0)
    prob = ODEProblem(f, x0, (0.0, 8.0))

    function ctrl_fun(int)
        e = 1 - int.u[1] # control error

        # pre-calculate values to avoid overhead in iteration over cache
        x_new = int.u.ctrl_x + e
        u_new = ctrl_f(e, x_new)

        # iterate over cache...
        for c in full_cache(int)
            c.ctrl_x = x_new
            c.ctrl_u = u_new
        end
    end

    integrator = init(prob, Tsit5()) # only Tsit5 seems to work

    # take 8 steps with a sampling time of 1s
    Ts = 1.0
    for i in 1:8
        ctrl_fun(integrator)
        DifferentialEquations.step!(integrator,Ts,true)
    end
    integrator.sol
end

## Augmenting the state

function augmented_integration()
    function f(dx, x, p, t) # the system ("unit first-order lag")
        dx[1] = p - x[1]
        dx[2] = 0.0 # not really needed if x0[2] = 0
        dx
    end

    ctrl_f(e, x) = 0.85(e + 1.2x) # the controller (proportional-integral)

    function ctrl_cb(int) # the callback function
        e = 1 - int.u[1] # control error
        next_ctrl_x = int.u[2] + e # integrate the control error
        int.p = ctrl_f(e, next_ctrl_x) # update controller output
        for c in full_cache(int)
            c[2] = next_ctrl_x
        end
    end


    pcb  = PeriodicCallback(ctrl_cb, 1.0)
    prob = ODEProblem(f, [0.0,0.0], (0.0, 8.0), 0.0, callback=pcb)
    sol  = solve(prob)
end

@btime normal_integration() #108.900 μs (2026 allocations: 185.73 KiB)
@btime dedata_integration() #160.600 μs (2140 allocations: 186.73 KiB)
@btime augmented_integration() #170.900 μs (1343 allocations: 120.23 KiB)

Yes @asprionj, Causal.jl depends on DifferentialEquations.jl for diffeq solving. But the thing is that Causal.jl is not ready for hybrid systems yet. In its current version, it expects the components of the models to be completely continuous-time or discrete-time. The main obstacle to the simulation of hybrid systems in Causal.jl is that the components of the model share a single clock while evolving. In the case of fully discrete-time or fully continuous-time systems, this is what we want: all the components evolve with respect to a unique time reference, i.e. the model clock. However, in hybrid systems, the discrete events (modeled by discrete-time systems) need a separate time reference, which implies that a single clock cannot be used and the discrete-time and continuous-time components need their own separate clocks. Simulation of hybrid systems is definitely a feature that must be included in Causal.jl, and I’m working on it.

That can be handled via the callback systems, syncing the clock by having the discrete systems add tstops to the others.
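As a hypothetical sketch of that idea: PeriodicCallback already registers its sample instants as tstops, so several discrete subsystems with different sampling times can hang off one continuous integrator, which is then forced to step exactly onto every sample instant.

using OrdinaryDiffEq, DiffEqCallbacks

# Hypothetical sketch: two discrete subsystems with different sampling times.
# Each PeriodicCallback adds its own tstops, keeping the single solver clock
# in sync with both discrete time bases.
fast = PeriodicCallback(int -> nothing, 0.1) # e.g. a 10 Hz controller update
slow = PeriodicCallback(int -> nothing, 1.0) # e.g. a 1 Hz supervisor
prob = ODEProblem((du, u, p, t) -> (du .= .-u), [1.0], (0.0, 8.0),
                  callback = CallbackSet(fast, slow))
sol = solve(prob, Tsit5())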

I did a full write-up here:

Basically, there are ways to work around it right now. We will be fully automating it, but it will take a little bit. And I put a suggestion in there for how to hand-code it in a way that’s the most performant.


Sorry to resurrect an old topic. I just want to register my findings.

I use DEDataVector a lot. In my case, solutions like extending the state with dummy variables would not be feasible, and here is why:

The main project using this framework here is a satellite simulator. In this case, we use a DiscreteCallback to simulate the full attitude control software. We have more than 2,000 parameters, including variables that are booleans, integers, etc. The idea is to model all the embedded software functionality as precisely as we can. Thus, we have a workspace, which is <: DEDataVector, containing the continuous variables and a lot of other structures (sensors, actuators, controller, etc.) that model the embedded software.

Since DEDataVector was marked as deprecated, I have been searching for a possible replacement. Right now, ModelingToolkit seems to have some issues that will not allow me to migrate everything to it [1-3]. DEDataVector is working fine as long as you follow the instructions in the documentation and perform a full cache update. I tried with many solvers.

Thinking about the future, if DEDataArrays really is removed from the ecosystem (I might have problems internally way before this happens, since the package is marked as deprecated), then I have found one solution that seems to work for me. In the parameters, I pass two variables (w::Workspace, vw::Vector{Workspace}). The first one is the latest set of variables updated by the onboard software, and the latter is a vector containing the history of the workspace. Hence, instead of updating the cache at the end of the callback, I push a copy of w into vw, leading to a value for every t in my tstops. I did not see any change in the solution, and it is way faster, probably due to reduced allocation. The only downside is that the plotting of discrete variables is not as nice as what we get with DEDataArrays, which really shows the discrete behavior.

[1] https://github.com/SciML/ModelingToolkit.jl/issues/1194
[2] https://github.com/SciML/ModelingToolkit.jl/issues/1196
[3] https://github.com/SciML/ModelingToolkit.jl/issues/1203
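For reference, a minimal sketch of the (w, vw) pattern described above; the Workspace type and field names here are made up for illustration:

using OrdinaryDiffEq, DiffEqCallbacks

# Minimal sketch (hypothetical names) of the (w, vw) parameter pattern:
# w holds the latest discrete state, vw records one copy per sample instant.
mutable struct Workspace
    ctrl_x::Float64 # controller state
    ctrl_u::Float64 # controller output
end

f(dx, x, p, t) = (dx[1] = p[1].ctrl_u - x[1])

function ctrl_cb(int)
    w, vw = int.p
    e = 1 - int.u[1]       # control error
    w.ctrl_x += e          # integrate the error
    w.ctrl_u = 0.85(e + 1.2w.ctrl_x)
    push!(vw, deepcopy(w)) # history entry for every tstop
end

p = (Workspace(0.0, 0.0), Workspace[])
prob = ODEProblem(f, [0.0], (0.0, 8.0), p,
                  callback = PeriodicCallback(ctrl_cb, 1.0))
sol = solve(prob, Tsit5())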

The issue with DEDataArrays is more about correctness. They have extra data embedded in there, so what should happen with x .= y .+ z? This is well-defined on the iterated values, but not well-defined on the extra data. So what it does is just keep the data of x. There are many integrators for which this setup works (basically all of the non-stiff ones), which is why it “works”. But it’s not a good general solution, because it’s essentially letting the correctness of the program be decided by undefined behavior. For example, what should A*x be for A a matrix and x a DEDataArray? Maybe it should propagate the same metadata? But if A is a DEDataMatrix and x is a DEDataVector, then which metadata do you choose? There are so many of these corner cases that this design is simply flawed. By “deprecated” I mean that I’ll continue to keep the repo working and the current tests passing, but there won’t be any more development of them. ModelingToolkit has better-defined semantics, which will give more optimizations and more correctness guarantees; that’s just overall better, and thus that’s the direction this should all be going. That said, anyone is free to continue using DEDataArrays via the DEDataArrays.jl package, but I don’t plan on tracking down hacks for every combination and reordering ODE solver evaluations to make the undefined behavior work in every novel case (in fact, I think there are some cases where no ordering always gives the results someone wants, so it’s just not doable in general).
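To make the x .= y .+ z point concrete, a small sketch using the CtrlSimTypeScalar type from earlier in the thread (assuming DEDataArrays’ broadcast machinery):

# Sketch: in-place broadcast is well-defined on the iterated values but says
# nothing about the extra fields, so x simply keeps its own metadata.
x = CtrlSimTypeScalar([0.0], 0.0, 0.0)
y = CtrlSimTypeScalar([1.0], 0.1, 0.2)
z = CtrlSimTypeScalar([2.0], 0.3, 0.4)
x .= y .+ z
x.x      # [3.0]: the array part is updated
x.ctrl_x # 0.0: untouched; should it have been y's 0.1 or z's 0.3?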
