Parameter inference with Turing with an observed distribution

Let's say I have a model:

using Turing, StatsPlots

@model function model(a)
    p ~ Normal(1, 1) # prior belief about p
    a ~ Normal(p, 1) # observation model
end

and I do an experiment in which I observe a to be equal to 1.5.

Then I can update my prior belief regarding p using

obs = 1.5
chain = sample(model(obs), NUTS(), 1000)

which gives me the posterior distribution of p.
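For completeness, here is the above as a single runnable script (the ≈1.25 comparison comes from the analytic Normal–Normal conjugate update, (1·1 + 1·1.5)/2 = 1.25, not from anything Turing-specific):

```julia
using Turing, Statistics, Random

Random.seed!(42) # for reproducibility of the MCMC run

@model function model(a)
    p ~ Normal(1, 1) # prior belief about p
    a ~ Normal(p, 1) # observation model
end

# Condition on the observed value a = 1.5 and sample the posterior of p
chain = sample(model(1.5), NUTS(), 1000)

mean(chain[:p]) # should land close to the analytic posterior mean of 1.25
```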

However, in reality we never observe single values; there is almost always some uncertainty associated with them. In effect, we observe distributions.
So instead of observing a = 1.5, I would observe obs = Normal(1.5, 0.1).

How can I model this to find the parameters using Turing or any other Julia package?

I am new to Turing, but this seems quite exciting.

Turing.jl doesn't really do Bayesian updating in the way you have described.
If you observe new data you will have to re-fit the model using the entire dataset to get the posterior distributions of your parameters.

If you really need online updating, there are a few options depending on your model. Maybe something simple like ConjugatePriors.jl could do the trick:

using Distributions
using ConjugatePriors

prior = Normal(1, 1)
sigma = 1.0

obs = [1.5]  # has to be an array and can include multiple values
posterior((prior, sigma), Normal, obs)

# Normal{Float64}(μ=1.25, σ=0.7071067811865475)

You can then use the result from posterior as the prior when you observe new data.
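To illustrate that sequential use without depending on the ConjugatePriors.jl API, here is a hand-rolled Normal–Normal conjugate update (update_normal is a name chosen here, not a library function); the first call reproduces the result above, and the second feeds the posterior back in as the new prior:

```julia
using Distributions

# Conjugate Normal–Normal update with known observation sigma:
# posterior precision = prior precision + n / sigma^2,
# posterior mean = precision-weighted average of prior mean and data.
function update_normal(prior::Normal, sigma, xs)
    τ0 = 1 / var(prior)
    τ  = τ0 + length(xs) / sigma^2
    μ  = (τ0 * mean(prior) + sum(xs) / sigma^2) / τ
    return Normal(μ, sqrt(1 / τ))
end

post1 = update_normal(Normal(1, 1), 1.0, [1.5])
# Normal(1.25, 0.7071...) — same as the ConjugatePriors.jl result above

post2 = update_normal(post1, 1.0, [1.2]) # posterior becomes the next prior
```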

For more complicated models, RxInfer.jl seems to be optimized for updating on streaming data, so that might be worth looking into.
It features a similar DSL to Turing.jl.

Hi @Fourier!

So instead of observing a=1.5 I would observe obs = Normal(1.5,0.1)

Could you do this then?

using Turing

@model function model(obs)
    p ~ Normal(1, 1)
    a ~ Normal(p, 1) # the 'true' underlying value
    obs ~ Normal(a, 0.1) # the physical measurement
end
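A quick usage sketch for that model (my own addition, with a seed chosen here for reproducibility): because the measurement noise (0.1) is much smaller than the prior spread on a, the posterior for a concentrates near the observed 1.5.

```julia
using Turing, Statistics, Random

Random.seed!(0)

@model function model(obs)
    p ~ Normal(1, 1)
    a ~ Normal(p, 1)     # the 'true' underlying value
    obs ~ Normal(a, 0.1) # the physical measurement
end

chain = sample(model(1.5), NUTS(), 1000)

mean(chain[:a]) # close to 1.5, since the measurement is precise
mean(chain[:p]) # pulled part-way from the prior mean 1 toward 1.5
```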