# Turing equivalent of Stan/R model from StatisticalRethinking

Consider the following Stan model from Chapter 12 of the book *Statistical Rethinking*:

```r
library(rethinking)
data(UCBadmit)
d <- UCBadmit
d$gid <- ifelse( d$applicant.gender=="male" , 1L , 2L )
dat <- list( A=d$admit , N=d$applications , gid=d$gid )
m12.1 <- ulam(
    alist(
        A ~ dbetabinom( N , pbar , theta ),
        logit(pbar) <- a[gid],
        a[gid] ~ dnorm( 0 , 1.5 ),
        transpars> theta <<- phi + 2.0,
        phi ~ dexp(1)
    ), data=dat , chains=4 )
```


Below is my equivalent Julia/Turing code:

```julia
using Turing, StatisticalRethinking

d.gid = map(gender -> gender == "male" ? 1 : 2, d.gender)

@model function m121(admit, applications, gid)
    a = Vector{Real}(undef, 2)
    ϕ ~ Exponential(1)
    θ = ϕ + 2.0
    a .~ Normal(0, 1.5)
    for i in 1:length(gid)
        p̄ = logistic(a[gid[i]])
        admit[i] ~ BetaBinomial(applications[i], p̄, θ)
    end
end

sample121 = sample(m121(d.admit, d.applications, d.gid), NUTS(), 2000)
describe(sample121)
```


However, my parameter means are all off:

```text
  Parameter          Mean           logit(mean)
1 :ϕ                 0.211097
2 Symbol("a[1]")     1.84039        -
3 Symbol("a[2]")     1.57485        -
```

As opposed to the values given in the book:

```text
ulam posterior: 2000 samples from m12.1
       mean   sd 5.5% 94.5%  histogram
a[1]  -0.45 0.41 -1.1  0.21  ▁▁▇▇▂▁
a[2]  -0.34 0.40 -1.0  0.27  ▁▁▃▇▂▁
phi    1.05 0.78  0.1  2.44 ▇▇▅▃▂▁▁▁▁▁▁
theta  3.05 0.78  2.1  4.44 ▇▇▅▃▂▁▁▁▁▁▁
```

1. What is wrong in my translation?
2. What is the Turing equivalent of `transpars>` in Stan?

From the book:

> `transpars>` (transformed parameters) so that Stan will return it in the samples.

Have you looked at

In particular here.


While I admit I did miss the beta-binomial.jl file at first, it is not the exact model: model 12.1 uses two alpha parameters, one each for males and females. The rest of the models use `BinomialLogit`, so I presume they are from the previous chapter.

Also, none of them shows a way to implement a `transpars` equivalent.

For that question, see this discussion (tl;dr: there is a `generated_quantities()` function):
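A minimal sketch of how `generated_quantities` can stand in for Stan's `transpars>`, assuming the model `return`s the derived value (the toy model `demo` below is illustrative, not the thread's model):

```julia
using Turing

# Toy model: θ is a deterministic transform of ϕ, as in m12.1.
# Whatever the model body returns is what generated_quantities
# collects for every posterior sample.
@model function demo()
    ϕ ~ Exponential(1)
    θ = ϕ + 2.0
    return (; θ)
end

chain = sample(demo(), NUTS(), 500; progress=false)

# One NamedTuple per posterior sample, each carrying the derived θ.
gq = generated_quantities(demo(), chain)
```

Since ϕ is drawn from an `Exponential(1)`, every recovered θ should exceed 2.0.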


Solved it. R uses a somewhat different parameterization. In R, the `dbetabinom` function is parameterized as

$$p(x) = \frac{\binom{N}{x}\,\mathrm{Beta}\!\left(x+\theta p,\; N-x+\theta(1-p)\right)}{\mathrm{Beta}\!\left(\theta p,\; \theta(1-p)\right)}$$

where $\theta$ is given explicitly as a "dispersion parameter" and multiplies $p$ implicitly. In PyMC3 and Turing, the shape parameters $\alpha$ and $\beta$ are required instead, i.e. $\alpha = \theta p$ and $\beta = \theta(1-p)$.

Therefore my implementation should be:

```julia
@model function m121(admit, applications, gid)
    a = Vector{Real}(undef, 2)
    ϕ ~ Exponential(1)
    θ = ϕ + 2.0
    a[1] ~ Normal(0, 1.5)
    a[2] ~ Normal(0, 1.5)
    for i in 1:length(gid)
        p̄ = logistic(a[gid[i]])
        # rethinking's (p, θ) mapped to BetaBinomial's (α, β) shape parameters
        admit[i] ~ BetaBinomial(applications[i], p̄ * θ, (1 - p̄) * θ)
    end
end
```


which does agree with the book example.
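The mapping between the two parameterizations can be sanity-checked with plain arithmetic; the helper name `shape_params` is my own, not from any package:

```julia
# Convert rethinking's (p, θ) parameterization to the (α, β)
# shape parameters that Turing's BetaBinomial expects.
shape_params(p, θ) = (p * θ, (1 - p) * θ)

α, β = shape_params(0.4, 3.0)
# Round trip: the implied mean probability α / (α + β) recovers p,
# and the total concentration α + β recovers the dispersion θ.
```

This makes it clear why passing `p̄` and `θ` directly to `BetaBinomial` shifts every posterior mean: the distribution interprets them as α and β rather than as a probability and a dispersion.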
