An interactive notebook for a simple SEIR model

Hi All,

Was helping Alan and David teach the Julia class on Covid modeling.

Thought the one thing missing from online discussions was a simple interactive notebook – so nothing gets hidden from the user.

Therefore without further ado, here is JuliaSEIR.com.

// the widget is at the bottom. you can play with it in-browser

Hope you enjoy!
- djsegal

13 Likes

Really nice! I didn’t know how to use those error intervals, and this was a great example :smiley:

Cool demo :slight_smile: Simulating ODEs like this one does, however, gives very misleading error bars when used with linear uncertainty propagation (UP), both qualitatively and quantitatively. Compare the results for β=1 below, computed with linear uncertainty propagation:

  • The error bars extend above 100% and below 0%.
  • There are strange dips and bumps in the uncertainty, especially at the peak of exposed cases.
  • The mean is wrong; in particular, look at the different shapes of the “exposed” curves.
  • The error bars are dramatically underestimated over some intervals.

The “nonlinear” version is produced by sampling from the distributions of the uncertain parameters 100 times.
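For anyone who wants to reproduce the comparison, here is a minimal sketch of that sampling approach. The parameter values (σ, γ, the initial condition, and the Normal spread on β) are illustrative assumptions of mine, not taken from the notebook:

using OrdinaryDiffEq, Statistics, Plots

# SEIR dynamics with normalized compartments (S + E + I + R = 1).
function seir!(du, u, p, t)
    S, E, I, R = u
    β, σ, γ = p
    du[1] = -β * S * I
    du[2] = β * S * I - σ * E
    du[3] = σ * E - γ * I
    du[4] = γ * I
end

u0 = [0.99, 0.01, 0.0, 0.0]
tspan = (0.0, 100.0)
ts = range(tspan[1], tspan[2]; length = 201)

# Nonlinear (Monte Carlo) propagation: draw β ~ Normal(1.0, 0.1)
# 100 times and solve the ODE for each draw.
samples = map(1:100) do _
    β = 1.0 + 0.1 * randn()
    prob = ODEProblem(seir!, u0, tspan, (β, 0.2, 0.1))
    Array(solve(prob, Tsit5(); saveat = ts))
end

# Empirical mean and spread across the sampled trajectories.
A = cat(samples...; dims = 3)                 # 4 states × 201 times × 100 samples
μ = dropdims(mean(A; dims = 3); dims = 3)
s = dropdims(std(A; dims = 3); dims = 3)
plot(ts, μ[2, :]; ribbon = s[2, :], label = "Exposed")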

I understand this is a simple demo, and you might not care much about the result, but it’s nevertheless important to highlight the problem so that people (like your students) do not learn to trust the result of linear uncertainty propagation just because it’s convenient to perform. Here’s a similar example where the error goes to infinity the longer the simulation is carried out.

(Plots: up_linear and up_nonlinear – the same SEIR simulation with linear-UP error bars vs. Monte Carlo sampling.)

9 Likes

I spent some time yesterday looking at the error bounds computed by Measurements.jl in the notebook. I don’t think there is anything wrong here, at least not in the code of the package :sweat_smile: It’s the linear error propagation method that, being proportional to the first derivative with respect to the uncertain values, sometimes gives unintuitive results – especially when the derivative becomes zero.
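For reference, for a single uncertain input x = \mu \pm \sigma, first-order (linear) propagation approximates f(x) \approx f(\mu) + f'(\mu)(x - \mu), so the propagated uncertainty is \sigma_f \approx |f'(\mu)| \, \sigma – which vanishes wherever f'(\mu) = 0.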

As a simpler example, consider the function f(t, \alpha) = \sin(\alpha t), where \alpha is a quantity with a measurement error. The plot below shows the mean value of f(t, \alpha) with error bars, with superimposed the lines for the upper and lower error computed analytically (f_{\alpha}(t, \alpha) = t \cos(\alpha t)).

This is the code to generate the plot:

using Measurements, Plots

f(t, α) = sin(α * t)
# Analytic linear-propagation error band: |∂f/∂α| σ_α = |t cos(αt)| σ_α.
# Named `err` to avoid shadowing Base.error.
err(t, α) = abs(t * cos(Measurements.value(α) * t)) * Measurements.uncertainty(α)
t = -pi:0.05:pi
α = 2.3 ± 0.45
plot(t, f.(t, α); linewidth = 3, label = "Value", alpha = 0.8, fillalpha = 0.1, size = (600, 400))
plot!(t, @.(Measurements.value(f(t, α) + err(t, α))); linewidth = 3, label = "Upper error")
plot!(t, @.(Measurements.value(f(t, α) - err(t, α))); linewidth = 3, label = "Lower error")

The “exposed” curve in the notebook shows a similar pattern, with the error becoming very small around the peak: probably the derivative with respect to the parameters \alpha, \beta and \gamma becomes very small there.

1 Like

I agree, I think Measurements.jl is working exactly as advertised. I think it is difficult to have a feeling not only for “how nonlinear” a function is, but also for the effect of iterating a nonlinear function several times, which might be why linear uncertainty propagation can sometimes produce surprising results when solving ODEs.
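As a toy illustration of the compounding (my own example with made-up numbers, not from the notebook): iterate the chaotic logistic map with a Measurement and compare with sampled trajectories, which stay bounded in [0, 1]:

using Measurements

logistic(x) = 3.9 * x * (1 - x)

function iterate_map(f, x, n)
    for _ in 1:n
        x = f(x)
    end
    return x
end

# Linear propagation: the uncertainty is multiplied by |f'(x)| at each
# step, so for a chaotic map it grows without bound.
@show iterate_map(logistic, 0.3 ± 0.01, 25)

# Monte Carlo: every sampled trajectory stays bounded in [0, 1].
xs = [iterate_map(logistic, 0.3 + 0.01 * randn(), 25) for _ in 1:1000]
@show extrema(xs)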

Thanks for looking into this.

This graph is very worrying to me – it implies that the linear propagation of errors method is missing something (i.e., missing a lot, actually). Is this something that people have remarked on before?

1 Like

I completely agree with you.

The course was actually mostly about simulating stochastic (individual-based) models, where you clearly see the effect you’re describing.

At the end we talked about ODE models and I showed the method using Measurements.jl.

The problem with linear error propagation is that it makes the assumption that 2nd- and higher-order error terms won’t affect the output. This is only wrong in cases where you are doing numerically unstable computations, but these disease models are incredibly unstable.
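Spelled out: for x = \mu \pm \sigma the expansion is f(\mu + \delta) = f(\mu) + f'(\mu)\,\delta + \tfrac{1}{2} f''(\mu)\,\delta^2 + \dots, and linear propagation keeps only the f'(\mu)\,\delta term; everything from \tfrac{1}{2} f''(\mu)\,\sigma^2 onwards is dropped.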

The documentation of the Python package uncertainties (which does linear error propagation, like Measurements.jl) has some words about this.

3 Likes

I’m not sure iteration is relevant here: whenever I’ve been able to compare the results of an ODE solved by Measurements + DifferentialEquations with an analytical solution using only Measurements, I’ve always found perfect agreement. But non-linearity can definitely be an issue.

Do you have second-order propagation implemented? Should we all be using that instead?

No, I’ve never looked into second-order propagation. Honestly, I’ve never seen it used in practice as much as linear propagation (when applicable) or Monte Carlo, but maybe there are fields where it’s more common.

1 Like

Nice, clean presentation of the notebook. Two questions:

  • Where is the documentation of SimplePlots? (I find a package SimplePlot in Julia Observer, but that is probably a different package…)
  • The notebook says that \beta is influenced by vaccination. Is that correct? I thought vaccination instead reduced the number of available susceptibles.

I think these epidemiology models are nice examples of dynamic systems. Based on the simpler SIR model, one can easily compute the maximal number of infected (“herd immunity”):
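With s and i the susceptible and infected fractions, \mathcal{R}_0 = \beta \tau_\mathrm{r}, and the peak reached when s drops to the herd-immunity threshold 1/\mathcal{R}_0:

i_\mathrm{max} = i_0 + s_0 - \frac{1}{\mathcal{R}_0}\left(1 + \ln(\mathcal{R}_0 s_0)\right)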

Time until peak infection can be computed for the SIR model using numerical quadrature:
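t_\mathrm{peak} = \tau_\mathrm{r} \int_{1/\mathcal{R}_0}^{s_0} \frac{\mathrm{d}s}{\mathcal{R}_0\, s\, i(s)}, \qquad i(s) = i_0 + s_0 - s + \frac{1}{\mathcal{R}_0} \ln\frac{s}{s_0}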


(In my notation, \tau_\mathrm{r} is the recovery time constant, i.e., 1/\gamma in your model?)

It would be instructive to extend your SEIR model to have two compartments (“states”, “countries”) with different mitigation regimes (one slack, another strict) and see how “opening up” = allowing travel between the compartments affects the infection in the compartment with “strict” regime.

This, I’m afraid, is much too simplistic: any function involving a discontinuity, such as sign(x), is perfectly numerically robust, but linear uncertainty propagation of a distribution with mass on both sides of the discontinuity will produce a large error.
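A quick plain-Julia check of the sign(x) case (numbers made up for illustration): linear propagation sees only the local derivative, which is zero away from the jump, while sampling sees both modes:

using Statistics

# x ~ Normal(0, 0.5), straddling the discontinuity of sign at 0.
xs = 0.5 .* randn(10_000)

# Monte Carlo: sign(x) is ±1 with roughly equal probability,
# so the spread is large even though each branch is exact.
mean(sign.(xs)), std(sign.(xs))   # ≈ (0.0, 1.0)

# Linear propagation, using the almost-everywhere-zero derivative
# of sign, would instead report σ_f ≈ 0.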

An iterated nonlinear map is problematic in the sense that it’s very hard to have an intuitive feeling for how errors compound. The pendulum example
https://baggepinnen.github.io/MonteCarloMeasurements.jl/stable/examples/#Differential-Equations-1
is a good example of a quite benign function that gives a very strange result over a long horizon. It shows the same characteristics as your example above, but if you compare it to the Monte Carlo result, you see that the linear result is very different in character. This difference only starts appearing after iterating the map for quite a long time.

2 Likes

Would you be willing to share the code for your simulation? Is that using MonteCarloMeasurements.jl? I think I just understood for the first time what that package does! :wink:

Indeed, it’s because error isn’t necessarily “additive”. One way to think about it is to imagine a process with very little error and a discontinuity where you are uncertain whether you go up by 1 or down by 1. How much uncertainty do you have? If you run a bunch of trajectories, you’ll see two possible trajectories that split, but both are quite certain, so the error is contained. If you do linear error propagation, you move along and then BOOM! Variance explodes.

This gets even worse if things get out of phase. Look at this numerical UQ plot from https://diffeq.sciml.ai/latest/analysis/uncertainty_quantification/#Example-1:-FitzHugh-Nagumo-1

Each of the trajectories is pretty clear: you have that distinct curve. If you try to boil it down to a mean and variance, you see exploding errors. However, trajectories can reconverge to a mode, like in

(plot of trajectories reconverging to a mode)

This is the kind of feature that a purely weak form like linear error propagation will have issues with, because it measures probability distributions and not trajectories, and probability distributions can miss information that the trajectories carry. This is similar to weak vs. strong convergence in stochastic differential equations, and to how measuring probability distributions is different from measuring the solution of SDEs, though the two are closely related.

If you know you don’t have these kinds of effects, then pushing moments makes sense, but distributional forms are not robust to these behaviors in general, at least not without mixture models, a ton of higher-order moments, etc. Even then, they can lose a lot of information: are you constantly switching between two peaks, or do you have two distinct sets of trajectories? A probability distribution over the values won’t answer that hypothesis.

7 Likes

If you have code that uses Measurements’ ± operator, you can try to run the same code but switch Measurements.:± to MonteCarloMeasurements.:∓ and see how the results compare. In practice, MCM might not be compatible with all the same functions as Measurements, but for the ODE it should work.
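Something like this (a sketch, qualifying the operators since both packages export ±):

using Measurements, MonteCarloMeasurements

# Linear uncertainty propagation:
α_lin = Measurements.:±(2.3, 0.45)

# Monte Carlo propagation via MCM's ∓, which creates StaticParticles:
α_mc = MonteCarloMeasurements.:∓(2.3, 0.45)

sin(α_lin), sin(α_mc)   # same code path, two propagation methods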

2 Likes

The documentation is a little undeveloped right now.

But as mentioned in this Discourse post, the syntax basically matches Plots.jl and Interact.jl.