# R equivalent Turing model from Statistical Rethinking (model 8.3)

I am trying to convert the following example from the book Statistical Rethinking to a Turing model:

```r
m8.3 <- quap(
    alist(
        log_gdp_std ~ dnorm( mu , sigma ) ,
        mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
        a[cid] ~ dnorm( 1 , 0.1 ) ,
        b[cid] ~ dnorm( 0 , 0.3 ) ,
        sigma ~ dexp( 1 )
    ) , data=dd )
```


My attempt is as follows:

```julia
@model function m8_3(log_gdp, cid, rugged_st)
    for i = 1:length(log_gdp)
        a1 ~ Normal(1.0, 0.1)
        b1 ~ Normal(0.0, 0.3)
        a2 ~ Normal(1.0, 0.1)
        b2 ~ Normal(0.0, 0.3)
        σ ~ Exponential(1.0)
        if cid[i] == 1
            μ = a1 + b1 * (rugged_st[i] - 0.215)
        else
            μ = a2 + b2 * (rugged_st[i] - 0.215)
        end
        log_gdp[i] ~ Normal(μ, σ)
    end
end
```

```julia
chain = sample(
    m8_3(dd.log_gdp_std, dd.cont_africa, dd.rugged_std),
    NUTS(), 10000)
```


The expected output in the book is:

```text
        mean   sd  5.5% 94.5%
a[1]    0.89 0.02  0.86  0.91
a[2]    1.05 0.01  1.03  1.07
b[1]    0.13 0.07  0.01  0.25
b[2]   -0.14 0.05 -0.23 -0.06
sigma   0.11 0.01  0.10  0.12
```


However, my output is somewhat different:

```text
Param      a1     a2      b1     b2  sigma
Mean   0.9954 1.0041 -0.0113 0.0064 0.2646
```


Is my conversion correct?

Is there a better way to do it?

My original version used a vector for a and b, but it ran into an error, and I do not know how to solve it.

I’d do a couple of things:

(1) Lift your alphas, betas, and sigmas out of the loop. I assume sampling them inside the loop is slower, but depending on how Turing translates the model to an actual logpdf, it might also be incorrect (e.g., weighting the prior N times instead of once). I'm not sure, but I'd be worried.

(2) Since you assign values to μ within the loop, use μ[i] instead of just μ. What you are doing now effectively creates a single scalar μ from whatever your last observation i is. Note that this also requires that, outside of the loop, you define an empty vector μ that you then assign into inside the loop.

I’m sure a better Turing user than me might have better advice, but that’s what jumps out to me.
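To make these suggestions concrete, here is a hedged sketch (my own, not from the book) that lifts the priors out of the loop and uses length-2 vector parameters indexed by `cid`, mirroring the book's `a[cid]`/`b[cid]`. It assumes `filldist` (exported by Turing) and that `cid` is coded 1/2 as in the book; note that `dd.cont_africa` is 0/1, so it would need recoding first, e.g. `cid = 2 .- dd.cont_africa`:

```julia
using Turing

# Sketch: priors sampled once, outside the loop; a and b are length-2
# vectors so a[cid[i]] selects the intercept/slope for each continent.
# Assumes cid holds 1s and 2s (recode cont_africa before passing it in).
@model function m8_3_vec(log_gdp, cid, rugged_std)
    a ~ filldist(Normal(1.0, 0.1), 2)
    b ~ filldist(Normal(0.0, 0.3), 2)
    σ ~ Exponential(1.0)
    for i in eachindex(log_gdp)
        μ = a[cid[i]] + b[cid[i]] * (rugged_std[i] - 0.215)
        log_gdp[i] ~ Normal(μ, σ)
    end
end
```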


Your suggestion (1) did indeed make the results match the book:

```text
Params        a1         b1       a2          b2          σ
Mean    0.865713  0.0273194  1.07482  -0.0310709   0.111679
```


I used a scalar μ, as that's what was in the R code, instead of a vector. Besides, if I use a vector inside the Turing model I get the same error

```text
TypeError: in typeassert, expected Float64, got a value of type ForwardDiff.Dual{Nothing, Float64, 5}
```

as I linked in my original post.

My attempted model:

```julia
@model function m8_3(log_gdp, cid, rugged_st)
    a1 ~ Normal(1.0, 0.1)
    b1 ~ Normal(0.0, 0.3)
    a2 ~ Normal(1.0, 0.1)
    b2 ~ Normal(0.0, 0.3)
    μ = zero(log_gdp)
    σ ~ Exponential(1.0)
    for i = 1:length(log_gdp)
        if cid[i] == 1
            μ[i] = a1 + b1 * (rugged_st[i] - 0.215)
        else
            μ[i] = a2 + b2 * (rugged_st[i] - 0.215)
        end
        log_gdp[i] ~ Normal(μ[i], σ)
    end
end
```
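For what it's worth, my reading of why the error occurs (an assumption, not stated in the thread): `zero(log_gdp)` allocates a `Vector{Float64}`, but under NUTS the parameters are `ForwardDiff.Dual` numbers, and storing a Dual in a Float64 slot forces a conversion that fails. A minimal sketch:

```julia
using ForwardDiff

# zeros(2) is a Vector{Float64}; assigning a Dual into it requires
# convert(Float64, Dual), which throws. The hypothetical f below mimics
# the model: a Float64 buffer written to with a differentiated parameter.
f(a) = (μ = zeros(2); μ[1] = a; μ[1] + μ[2])
f(1.0)                           # fine with a plain Float64 input
# ForwardDiff.derivative(f, 1.0) # errors: Dual cannot be stored as Float64
```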


Ah, yeah. Since it's within the for loop, μ doesn't need to be a vector, sorry.

But just for future reference, this part of the docs shows how best to initialize an empty array of parameters.
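Roughly, the documented pattern looks like this (a sketch; the `::Type{T}=Float64` argument lets Turing substitute the sampler's number type, so the μ buffer can hold `ForwardDiff.Dual` values during NUTS):

```julia
using Turing

# Sketch of the type-parameter idiom from the Turing performance docs:
# the eltype T is a model argument, so Vector{T}(undef, n) matches
# whatever number type (Float64 or Dual) the sampler is using.
@model function m8_3_typed(log_gdp, cid, rugged_st, ::Type{T}=Float64) where {T}
    a1 ~ Normal(1.0, 0.1)
    b1 ~ Normal(0.0, 0.3)
    a2 ~ Normal(1.0, 0.1)
    b2 ~ Normal(0.0, 0.3)
    σ ~ Exponential(1.0)
    μ = Vector{T}(undef, length(log_gdp))
    for i in eachindex(log_gdp)
        μ[i] = cid[i] == 1 ? a1 + b1 * (rugged_st[i] - 0.215) :
                             a2 + b2 * (rugged_st[i] - 0.215)
        log_gdp[i] ~ Normal(μ[i], σ)
    end
end
```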