I have been using Turing to estimate the parameters of logistic growth curve ODEs, using data from bacterial growth experiments. Since I have replicate growth curves for each treatment, I am interested in the posterior distribution of the “population” mean of parameter x, i.e., what the mean value of x would be if I ran the experiment thousands of times. My approach has been to build an array in which each column holds all of the posterior samples of x from one replicate. I then loop through it a row at a time, feeding each row into a second Turing model that estimates the mean and variance of the normal distribution that these “data” came from. I collect the estimated means from all samples across every iteration of the loop and assume that this gives me the posterior distribution of the mean of parameter x, incorporating all of my uncertainty about each replicate.
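For concreteness, the inner step of the loop described above could be sketched like this (a minimal sketch; `row_mean`, the priors, and the variable names are mine, not from the original code):

```julia
using Turing

# Hypothetical inner model: treat one row of per-replicate posterior
# samples as "data" and infer the mean and sd of the Normal they
# are assumed to have come from.
@model function row_mean(x)
    μ ~ Normal(0, 10)                      # weakly informative prior on the mean
    σ ~ truncated(Normal(0, 5); lower=0)   # half-normal prior on the sd
    for i in eachindex(x)
        x[i] ~ Normal(μ, σ)
    end
end

# One iteration of the loop: `row` holds the i-th posterior draw of
# parameter x from each replicate.
# chain = sample(row_mean(row), NUTS(), 1_000)
```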

Firstly, is my logic even correct that this gives the posterior distribution I’m looking for?
Secondly, even if it works, I’m sure there must be a better way of doing this! Bear in mind that it’s useful for me to store the samples from each replicate, so that I can generate plots with them and check that they give sensible results.

If you’re only looking for the moments of the solution, you may want to look into the Koopman operator methods we’ve recently demonstrated. There is a tutorial which shows a workflow where Bayesian posteriors from Turing are then used in a UQ analysis.
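A sketch of what that workflow might look like, based on the DiffEqUncertainty.jl Koopman-expectation interface (the distributions below are stand-ins for fitted Turing posterior marginals, and the observable `g` is illustrative):

```julia
using OrdinaryDiffEq, DiffEqUncertainty, Distributions

# Logistic growth: du/dt = r*u*(1 - u/K), parameters p = [r, K]
logistic!(du, u, p, t) = (du[1] = p[1] * u[1] * (1 - u[1] / p[2]))
prob = ODEProblem(logistic!, [0.01], (0.0, 10.0), [1.0, 1.0])

# Stand-ins for posterior distributions of r and K (e.g. fitted to
# the Turing chain); in practice these would come from your samples.
r_dist = Normal(1.0, 0.1)
K_dist = Normal(1.0, 0.05)

g(sol) = sol[1, end]  # observable of interest: final population size

# Koopman-operator expectation of g over the parameter uncertainty,
# without Monte Carlo resampling of trajectories:
m = expectation(g, prob, [0.01], [r_dist, K_dist], Koopman(), Tsit5())
```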

Thanks a lot. It works as far as generating particles from the Turing-generated chain, but I’m not sure how to then use particles within a Turing model. Sorry if I’m being dim!
Should `cp.my_param ~ Normal(μ, σ)` work? Obviously, looping over the values in the particle would drastically underestimate the uncertainty about the posterior.
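For reference, getting from a chain to a `Particles` object looks something like this (a sketch; `some_model`, `data`, and `:my_param` are placeholder names):

```julia
using Turing, MonteCarloMeasurements

# Fit any Turing model, then wrap the posterior draws of one
# parameter in a Particles object.
# chn = sample(some_model(data), NUTS(), 2_000)
# p = Particles(vec(chn[:my_param]))  # all posterior draws of :my_param

# `p` then behaves like a number in downstream arithmetic, carrying
# the whole sample cloud: p^2, exp(p), etc. each return new Particles.
```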

In the end, I just built a multi-level model, as I should have done in the first place. I used generated quantities to pull out the data I needed for plotting purposes.
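A multi-level model of the kind described might be sketched as follows (all names and priors here are illustrative, assuming `y` is a matrix with one column of observations per replicate; this is not the original poster's code):

```julia
using Turing

# Partial pooling: each replicate gets its own x[j], drawn from a
# population-level Normal(μ, τ), so the posterior of μ directly
# answers the "population mean of x" question in one model.
@model function hierarchical_x(y)
    n_obs, n_rep = size(y)
    μ ~ Normal(0, 10)                      # population mean of x
    τ ~ truncated(Normal(0, 5); lower=0)   # between-replicate sd
    σ ~ truncated(Normal(0, 5); lower=0)   # within-replicate noise
    x ~ filldist(Normal(μ, τ), n_rep)      # per-replicate parameter
    for j in 1:n_rep, i in 1:n_obs
        y[i, j] ~ Normal(x[j], σ)
    end
    return (; x)  # exposed so generated_quantities can recover it
end

# chain = sample(hierarchical_x(y), NUTS(), 1_000)
# Per-replicate draws for plotting:
# gq = generated_quantities(hierarchical_x(y), chain)
```

Keeping the replicate-level `x[j]` in the return value means the per-replicate samples are still available for diagnostic plots, without a separate second-stage model.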