# Simple model not working in Turing

I’ve been exploring Turing, but so far haven’t had much luck. I’ve written the following very simple model

``````
using Distributions, Turing
import Mamba: describe

# Simulated data: 1000 draws from Normal(3.4, 1.0)
xs = rand(Normal(3.4, 1.0), 1000)

@model gaussian(xs) = begin
    mu ~ Normal(0, 2)

    for x in xs
        x ~ Normal(mu, 1.0)
    end

    return mu
end

chn = sample(gaussian(xs), NUTS(1000, 0.65))
describe(chn)
``````

But the model takes about 2 minutes to run and estimates a posterior mean for `mu` of -0.72, with a large SD and a low ESS (14). The model also seems to treat `x` as a parameter (depending on the sampling scheme I choose), and no sampling scheme converges to the correct value for `mu`. Any idea what I’m doing wrong?

Did you ever get this resolved? Just curious; I’ve been wanting to try out `Turing`, but I’m not sure what state the package is in.

Hi @joshualeond, I’ve made some progress:

``````
using Distributions, Turing
import Mamba: describe

xs = rand(Normal(3.4, 1.0), 1000)

@model gaussian(xs) = begin
    mu ~ Normal(0, 2)

    for i in 1:length(xs)
        xs[i] ~ Normal(mu, 1.0)
    end

    return mu
end

chn1 = sample(gaussian(xs), SMC(1000))
chn2 = sample(gaussian(xs), NUTS(1000, 0.65))
describe(chn1)
describe(chn2)
``````

Changing the loop to index into the vector, rather than iterating over its elements, improved the posterior estimates. Here are the results:

``````julia> describe(chn1)
Iterations = 1:1000
Thinning interval = 1
Chains = 1
Samples per chain = 1000

Empirical Posterior Estimates:
              Mean               SD                 Naive SE                 MCSE              ESS
mu     3.408908 0.0521018024787957695 0.00164760366033201355 0.01324662929046057334  15.470157
le -1414.123078 0.0000000000018198996 0.00000000000005755028 0.00000000000007579123 576.576577
lp     0.000000 0.0000000000000000000 0.00000000000000000000 0.00000000000000000000        NaN

Quantiles:
              2.5%         25.0%         50.0%         75.0%         97.5%
mu     3.3076814     3.3632279     3.4025507     3.4490372     3.4784503
le -1414.1230781 -1414.1230781 -1414.1230781 -1414.1230781 -1414.1230781
lp     0.0000000     0.0000000     0.0000000     0.0000000     0.0000000

julia> describe(chn2)
Iterations = 1:1000
Thinning interval = 1
Chains = 1
Samples per chain = 1000

Empirical Posterior Estimates:
                  Mean              SD            Naive SE          MCSE         ESS
lf_num     7.358000000    1.10792864×10¹    0.3503577980    1.8537210626  35.72192
mu     3.292585690  4.406610973×10⁻¹    0.0139349274    0.1157008162  14.50563
elapsed     0.032053929  2.440134579×10⁻¹    0.0077163831    0.0108565669 505.17579
lp -7703.078442664    1.31850594×10⁵ 4169.4818711527 6006.8402507160 481.80579
lf_eps     0.023993031 2.6679142717×10⁻²    0.0008436686    0.0031844537  70.18978

Quantiles:
                  2.5%             25.0%           50.0%          75.0%          97.5%
lf_num     1.0000000000     3.0000000000     3.00000000     7.000000000    47.0000000
mu     2.2687571970     3.3299950598     3.40077973     3.448397104     3.8250589
elapsed     0.0021599250     0.0062511508     0.00850430     0.019544800     0.1496765
lp -3384.4951580265 -1549.2305191448 -1416.33487980 -1412.430003388 -1411.2213349
lf_eps     0.0015783486     0.0051715866     0.02609497     0.026094970     0.0902517

``````

Both samplers now estimate a posterior mean close to the empirical mean of the data (`3.41373`). The SMC sampler's credible interval is close to what I would expect, but the NUTS interval is far too wide, and both samplers report a suspiciously small ESS for `mu` (around 15).
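For what it's worth, here is my guess at why the original loop failed (a hedged reading of the `@model` macro, not something I've confirmed in Turing's source): the left-hand side of `~` names a model variable, and `for x in xs` binds a fresh local `x`, so each `x ~ Normal(mu, 1.0)` appears to declare a new latent variable rather than condition on the data. Indexing with `xs[i]` refers back to the model argument, which Turing treats as observed:

``````
using Distributions, Turing

# Broken: `x` is a loop-local, so the macro seems to treat it
# as a new random variable on each iteration; the data in `xs`
# are never conditioned on.
@model broken(xs) = begin
    mu ~ Normal(0, 2)
    for x in xs
        x ~ Normal(mu, 1.0)   # samples a latent `x`, ignores the datum
    end
end

# Working: `xs[i]` refers to the model argument `xs`, so each
# element is treated as an observation of Normal(mu, 1.0).
@model working(xs) = begin
    mu ~ Normal(0, 2)
    for i in 1:length(xs)
        xs[i] ~ Normal(mu, 1.0)
    end
end
``````

This would also explain why `x` showed up as a parameter under some sampling schemes in my first attempt.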

It’s possible that I am still doing something wrong. If you try out Turing and get any further, please let me know!
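One more idea I haven't fully verified: if your Turing/Distributions versions support multivariate observations, the per-element loop can be collapsed into a single `MvNormal` observation, which should cut the overhead of a thousand separate `~` statements. The `@model function` syntax and the `MvNormal(μ, I)` constructor are assumptions about a newer API than the one used above:

``````
using Distributions, Turing, LinearAlgebra

@model function gaussian_vec(xs)
    mu ~ Normal(0, 2)
    # Single vectorized observation: all of `xs` is conditioned
    # on at once, with identity covariance (unit variance).
    xs ~ MvNormal(fill(mu, length(xs)), I)
end

# chn = sample(gaussian_vec(xs), NUTS(), 1000)
``````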


Thanks for the update!