Propagating uncertainty of estimated ODE parameters in Turing.jl

Hello all,

I have been using Turing.jl to estimate the parameters of logistic growth curve ODEs, using data from bacterial growth experiments. Since I have replicate growth curves for each treatment, I am interested in the posterior distribution of the “population” mean of parameter x, i.e., what the mean value of x would be if I ran the experiment thousands of times. My approach has been to build an array in which each column holds all the posterior samples of x from one replicate. I then loop over the rows, feeding one row at a time into a second Turing model that estimates the mean and variance of the normal distribution these “data” came from. I collect the estimated mean from every iteration of the loop and assume that this gives me the posterior distribution of the mean of parameter x, including all of my uncertainty about each replicate.
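For concreteness, the two-stage loop described above might look something like the sketch below (the model name `stage2`, the priors, and the variable names are all placeholders, not the original code):

```julia
using Turing

# Hypothetical second-stage model: the "data" are one row of posterior
# samples of x, one value per replicate.
@model function stage2(x_row)
    μ ~ Normal(0, 1)                          # placeholder prior on the mean
    σ ~ truncated(Normal(0, 1); lower = 0)    # placeholder prior on the sd
    for i in eachindex(x_row)
        x_row[i] ~ Normal(μ, σ)
    end
end

# samples[i, j] = i-th posterior draw of x for replicate j
μ_draws = Float64[]
for i in 1:size(samples, 1)
    chn = sample(stage2(samples[i, :]), NUTS(), 200; progress = false)
    append!(μ_draws, vec(chn[:μ]))            # collect the estimated means
end
```

One concern with this scheme is that each row mixes draws across replicates arbitrarily, and the second stage treats posterior samples as exact observations, so it is not obvious that `μ_draws` is a proper posterior for the population mean.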

Firstly, is my logic even correct that this gives the posterior distribution I’m looking for?
Secondly, even if it works, I’m sure there must be a better way of doing this! Bear in mind that it’s useful for me to store the samples from each replicate, so that I can generate plots with them and check that they give sensible results.

Thanks in advance

Maybe you would find Turing2MonteCarloMeasurements.jl (github.com/baggepinnen/Turing2MonteCarloMeasurements.jl), an interface between Turing.jl and MonteCarloMeasurements.jl, helpful — I built it to solve more or less this problem. I haven’t updated it in a while, so I’m not sure whether it works with the latest versions of Turing, but it should be fairly easy to update if needed.


If you’re only looking for the moments of the solution, you may want to look into the Koopman operator methods we’ve recently demonstrated. There is a tutorial showing a workflow in which Bayesian posteriors from Turing are used in a downstream UQ analysis:

https://tutorials.sciml.ai/html/DiffEqUncertainty/03-GPU_Bayesian_Koopman.html

Thank you Chris, this looks very cool. Keep up the great work with DiffEqs!

Thanks a lot. It works as far as generating particles from the Turing-generated chain, but I’m not sure how to then use particles within a Turing model. Sorry if I’m being dim!
Should `cp.my_param ~ Normal(μ, σ)` work? Obviously, looping over the values in the particles would drastically underestimate the uncertainty about the posterior.

In the end, I just built a multi-level model, as I should have done in the first place. I used generated quantities to pull out the data I needed for plotting purposes.
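For anyone landing here later, a multi-level model for this problem might be sketched roughly as follows (the priors, the closed-form logistic solution, and all names are illustrative assumptions, not the original poster's code):

```julia
using Turing

# Hypothetical hierarchical model: replicate growth rates drawn from a
# population-level normal distribution.
@model function hierarchical_growth(y, times; K = 1.0, N0 = 0.01)
    n_reps = size(y, 2)

    # Population-level parameters — μ answers the "population mean" question
    μ ~ Normal(0.5, 0.5)                          # population mean growth rate
    σ ~ truncated(Normal(0, 0.2); lower = 0)      # between-replicate sd
    σ_obs ~ truncated(Normal(0, 0.1); lower = 0)  # measurement noise

    # Replicate-level growth rates, partially pooled toward μ
    r ~ filldist(Normal(μ, σ), n_reps)

    for j in 1:n_reps, (i, t) in enumerate(times)
        # Closed-form logistic solution N(t) = K / (1 + (K/N0 - 1) exp(-r t))
        pred = K / (1 + (K / N0 - 1) * exp(-r[j] * t))
        y[i, j] ~ Normal(pred, σ_obs)
    end

    # Returned values can be recovered per posterior sample afterwards
    return (; r)
end
```

With this setup, the marginal posterior of `μ` is the population-mean distribution directly, and `generated_quantities(model, chain)` recovers the replicate-level rates `r` for plotting and sanity checks, so everything is estimated jointly in one pass rather than in a two-stage loop.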