Turing package(s) + parameter estimation for DiffEq models?

Are there any illustrative notebooks around showing how to do proper model fitting of dynamic models, with parameter statistics, using the Turing tools? Can it be done?

To be more concrete… take the SIR epidemiology model below with boarding school data for influenza in North England in 1978 – data and model parameters are taken from Maia Martcheva’s Springer book An Introduction to Mathematical Epidemiology from 2015.

# Packages used
using Plots; pyplot()
using LaTeXStrings
using DifferentialEquations
N = 763   # Number of pupils in boarding school
tm = 3:14   # days of infection data
# Infection data
Id = [25, 75, 227, 296, 258, 236, 192, 126, 71, 28, 11, 7]

SIR model function – S, I, R, N are numbers of persons (i.e., not per capita). “My” parameters are \tau_\mathrm{i} for the infection time (equal to \tau_\mathrm{i} = 1/(N\cdot \beta) in Martcheva’s book) and \tau_\mathrm{r} for the recovery time (equal to \tau_\mathrm{r} = 1/\alpha in Martcheva’s book, or 1/\mu in some other books). See pp. 126–129 in Martcheva’s book for data and parameters. The function is:

# Function for SIR model (in-place form)
function sir!(dx,x,p,t)
    S,I,R = x
    tau_i,tau_r,N = p
    dx[1] = -I*S/tau_i/N           # dS/dt
    dx[2] = I*S/tau_i/N - I/tau_r  # dI/dt
    dx[3] = I/tau_r                # dR/dt
end

Problem set-up:

I3 = Float64(Id[1])
S3 = N - I3
R3 = 0.0
x3 = [S3,I3,R3]
tau_i = 0.553
tau_r = 2.15
RR0 = round(tau_r/tau_i,digits=1) # The "R0" parameter
p = [tau_i, tau_r, N]
tspan = (3.0, 14.0)               # covers the data window tm = 3:14
prob = ODEProblem(sir!,x3,tspan,p)
sol = solve(prob)

The model parameters above are those suggested after some model fitting in Martcheva’s book. The resulting fit looks as follows: [plot of model solution vs. infection data]

Question: Suppose I want to redo the model-fit, and also find some description of model uncertainty for the model parameters… Can this be done using Turing? Any notebook examples?

In this particular case, the initial states are probably well known. But in general, one would also want to estimate those.

Any suggestions and help are appreciated.



Thanks for the link – I’m trying to work my way through it. I have now installed Julia v1.5.0, and just updated all packages.

First problem:

There is a problem with package Turing, or perhaps with package MCMCChains: Turing doesn’t find MCMCChains.AbstractChains.

Also, from the linked document, it is not 100% clear which packages I use where. Do I need to import Turing in order to run DiffEqBayes, etc.?

In addition to the problems with importing Turing, using DiffEqBayes, etc. due to error messages:

  1. In section “Fitting Lotka-Volterra with DiffEqBayes”, a parameter suddenly shows up which has not been declared: \sigma. It is not clear from the context what parameter \sigma represents.
  2. If I “cheat” and look at the next section, “Direct Handling of Bayesian Estimation with Turing”, parameter \sigma is defined explicitly there. It is proposed as a prior for the unknown variance (or covariance, if you will) of the normally distributed noise in the data (the second parameter of MvNormal is a variance/covariance, while, strangely, the second parameter of Normal is a standard deviation).
  3. According to Wikipedia, it is common in Bayesian estimation to assume an InverseGamma distribution for a parameter which represents a variance. (It is not clear to me why the parameters (2, 3) are chosen.)

Anyway, it makes sense that \sigma is the variance of the (normally distributed) observation noise.

Question: is my assumption correct? In other words, when I use DiffEqBayes, parameter \sigma is automatically added?

Yes, @Vaibhavdixit02 we should make sure this is a bit more clearly explained in the tutorial. It is measurement noise on the outputs, it’s common to use InverseGamma, those parameters are just “a good prior” (tends to work, just fairly positive with non-zero variance and moves easily), but it’s also just a choice.
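For concreteness, here is a minimal sketch of how \sigma typically enters a hand-written Turing model for the SIR example above. The priors and helper names (`fit_sir`, `predicted`) are illustrative choices, not the tutorial’s exact code:

```julia
using Turing, DifferentialEquations

@model function fit_sir(data, prob, ts)
    # Measurement-noise parameter; InverseGamma(2, 3) is the weakly
    # informative prior conventionally used in the Turing tutorials
    σ ~ InverseGamma(2, 3)
    tau_i ~ truncated(Normal(0.5, 0.5), 0, Inf)
    tau_r ~ truncated(Normal(2.0, 1.0), 0, Inf)

    predicted = solve(remake(prob, p = [tau_i, tau_r, 763.0]), Tsit5(), saveat = ts)

    # Each observed infection count = model prediction + Gaussian noise;
    # Normal takes a standard deviation, so use sqrt(σ) if σ is a variance
    for i in 1:length(data)
        data[i] ~ Normal(predicted[2, i], sqrt(σ))
    end
end
```

Whether \sigma is interpreted as a variance or a standard deviation then depends only on how it is passed to the observation distribution.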


In section “Fitting Lotka-Volterra with DiffEqBayes”, plotting bayesian_result_turing leads to a 5\times 2 matrix of plots.

  1. In the first plot column, there is one plot per parameter showing how it varies with “iteration” (which probably refers to num_samples in the specification of the turing_inference problem).
    Does “iteration” here refer to the iteration of the estimate towards a minimum (as in least-squares parameter fitting)? Or does it (more likely…) refer to the index of the parameter samples, much like the number of Monte Carlo estimates in bootstrapping?

  2. I assume that the plots in column 2 – the parameter estimate distributions – are simply “histograms” of the sample values vs. iteration in column 1?

It’s the Monte Carlo chain – the MCMC samples. It’s the chain that, over time, should be well mixed and thereby give the distribution of the parameters.



Excellent. If I now just could get Turing and the other packages working…

@BLI we have been discussing updating this tutorial on Slack.

@Vaibhavdixit02 we should make sure this is a bit more clearly explained in the tutorial.

Yes this should be updated.

If I now just could get Turing and the other packages working…

I think you might be experiencing the same issue as the one reported on Slack; you might want to try with Turing#master.


Thanks! Now it works. (On my laptop.)

I ran the “Fitting Lotka-Volterra with DiffEqBayes” example, leading to bayesian_result_turing, which I then plotted.

Question: Is there a way to manually get the data out of this structure bayesian_result_turing? Specifically, can I extract the parameter distributions? And the sample values? If I can get out the sample values, I can plot them as scatter plots (samples of parameter 1 vs. samples of parameter 2) and get an idea of correlations in the parameter estimates, I’d assume… Or produce 2D histograms/fit surfaces to the 2D histograms…

In Turing (and other languages), the \sim symbol appears…, e.g.

However, the intro to Turing fails to explain which symbol this is. I have tried \sim followed by TAB, which gives a similar-looking symbol, but Turing doesn’t understand that symbol.

So… which symbol should I use?

Hi, you can access the values with bayesian_result_turing.value.
To get the accessible field names of a variable, you can type: fieldnames(typeof(bayesian_result_turing))
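For example, via the MCMCChains API (a sketch; the parameter names α and β are just the Lotka-Volterra ones from the tutorial, so adjust to your model):

```julia
using MCMCChains, StatsPlots

# chain = bayesian_result_turing (an MCMCChains.Chains object)
samples = Array(chain)      # iterations × parameters matrix of raw samples
a = vec(chain[:α])          # samples of one named parameter
describe(chain)             # summary statistics and quantiles per parameter

# Scatter one parameter against another to eyeball correlations
scatter(vec(chain[:α]), vec(chain[:β]), xlabel = "α", ylabel = "β")
```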

I am in the process of updating the tutorial, with the help of Vaibhav, Chris and others, if you want to take a peek: https://github.com/arnaudmgh/TuringTutorials/blob/master/10_diffeq.ipynb


Do you also know which symbol I should use for assigning distributions? A symbol similar to \sim, but different. Tutorials are not really readable if it is presumed that the reader knows how to type such special symbols.

It would also be nice if you include 1-2 examples where:

  • only a subset of the states is observed (can save_idxs be used?), and
  • a nonlinear mapping of the states is observed, e.g., y=g(x) where x\in \mathbb{R}^n, y\in \mathbb{R}^m, and normally (but not always?) n\ge m. (Would you have to use callbacks for this?)
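For the first bullet, a sketch of what I have in mind – observing only state 2 (I) of the SIR model, with hypothetical priors; a nonlinear observation y = g(x) would simply replace the indexed state in the loop:

```julia
using Turing, DifferentialEquations

@model function fit_partial(Iobs, prob, ts)
    σ ~ InverseGamma(2, 3)
    tau_i ~ truncated(Normal(0.5, 0.5), 0, Inf)
    tau_r ~ truncated(Normal(2.0, 1.0), 0, Inf)
    sol = solve(remake(prob, p = [tau_i, tau_r, 763.0]), Tsit5(), saveat = ts)
    for i in 1:length(Iobs)
        # Only the second state is observed; for y = g(x), use g(sol[:, i])
        Iobs[i] ~ Normal(sol[2, i], sqrt(σ))
    end
end
```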


Is there a simple way to fit a surface to this histogram, and produce a contour plot?

Alternative, with scatter plot:
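One route I have found is a 2-D kernel density estimate, e.g. with KernelDensity.jl; a sketch with stand-in data in place of the two parameter sample vectors:

```julia
using KernelDensity, Plots

# Stand-ins for two vectors of parameter samples
xs = randn(1_000)
ys = 0.5 .* xs .+ randn(1_000)

k = kde((xs, ys))               # fit a 2-D kernel density surface
contour(k.x, k.y, k.density')   # contour plot of the fitted surface
```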

I think it’s a simple tilde, ~ (shift+` on a US keyboard)

I don’t quite understand what you mean. There is a LaTeX symbol \tilde, but if I type this + TAB, I get a different symbol, and it doesn’t work with Turing.

OK – I found it on the keyboard. But in general, it is quite “dangerous” to rely on very special keyboard symbols, because they may or may not exist on non-English keyboards.

If I remember well, it is Alt+N on several non-QWERTY keyboards. Yes, keyboard layouts are a problem in programming; I am using US QWERTY now, much easier than AZERTY for programming in my experience.
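In code, the distinction matters: Turing’s `~` is the plain ASCII tilde (U+007E), not the Unicode ∼ that \sim + TAB produces. A trivial demo model:

```julia
using Turing

@model function demo(y)
    μ ~ Normal(0, 1)   # plain ASCII ~ (shift+` on a US layout)
    y ~ Normal(μ, 1)
end
```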

Here is something that puzzles me… I ran both the “Fitting Lotka-Volterra with DiffEqBayes” procedure (solid red + solid blue) and the direct Turing model formulation (dotted green + dotted orange), each twice. To save time, I only ran 1_000 samples for both methods. The resulting parameter distributions are very different:

I have also done this with 10_000 samples – the results may still differ substantially each time I run the Bayesian fit.


  1. Am I doing something wrong?
  2. If I’m not doing anything wrong, I don’t see how one can trust the results?

Here is my code… Method 1:

for i in 1:2
    bayes_est = turing_inference(prob,Tsit5(),tvec,data_lv,priors,num_samples=1_000)
    # ... plot/compare the resulting distributions ...
end

…and Method 2:

for i in 1:2
    chain = sample(model, NUTS(0.65), 1_000)
    # ... plot/compare the resulting distributions ...
end

Looks like it had a bad initial chain in one of the two runs.
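Running several chains side by side makes a bad chain easy to spot; a sketch, assuming a Turing version with multi-chain sampling support:

```julia
# Sample 4 independent chains; MCMCThreads() parallelizes them
# if Julia was started with multiple threads
chains = sample(model, NUTS(0.65), MCMCThreads(), 1_000, 4)

plot(chains)   # overlaid trace plots reveal a stuck or divergent chain
```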
