Hello everyone,
I’m thinking about translating a course that currently focuses on WinBUGS over to Turing.jl, so that students can optionally use something a bit more modern. I’m new to both Julia and Turing.
Here is the WinBUGS model:
model {
  for (i in 1:N) {
    Y[i] ~ dnorm(mu[i], tau)        # dnorm takes a mean and a precision
    mu[i] <- alpha + beta * (x[i] - x.bar)
  }
  x.bar <- mean(x[])
  alpha ~ dnorm(0, 0.0001)
  beta ~ dnorm(0, 0.0001)
  tau ~ dgamma(0.001, 0.001)        # dgamma takes a shape and a rate
  sigma <- 1.0 / sqrt(tau)
}
#-----------------------------
DATA
list(N = 5, x = c(1, 2, 3, 4, 5), Y = c(1, 3, 3, 3, 5))
#-----------------------------
INITS
list(alpha = 0.1, beta = 0.6, tau = 1)
And here is my attempt using Turing:
using Turing, Distributions
using DataFrames
using Random
using Statistics
Random.seed!(0)
data = DataFrame(x = [1, 2, 3, 4, 5], y = [1, 3, 3, 3, 5])
@model function simplereg(x, y)
    # intercept prior: Normal takes a mean and a standard deviation
    α ~ Normal(0, 10e-4)
    # coefficient prior
    β ~ Normal(0, 10e-4)
    # precision prior: Gamma takes a shape and a scale
    τ ~ Gamma(10e-3, 100)
    σ = 1 / sqrt(τ)
    # center the predictor once instead of recomputing mean(x) in every iteration
    x̄ = mean(x)
    for i in eachindex(x)
        y[i] ~ Normal(α + β * (x[i] - x̄), σ)
    end
end
iterations = 10000
model = simplereg(data.x, data.y)
chain = sample(model, MH(), iterations)
This runs, but the results are wildly different: Turing gives posterior means of roughly 0.0001 for alpha, 0.00003 for beta, and 0.1 for tau, while BUGS gives roughly 3 for alpha, 0.8 for beta, and 1.9 for tau.
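My best guess is a parameterization mismatch: if I understand the docs, BUGS’s dnorm takes a precision and dgamma takes a rate, while Distributions.jl’s Normal takes a standard deviation and Gamma takes a shape and a scale. If that’s right, I think the faithful translation of the priors would look something like this (just a sketch, with my conversions in the comments):

@model function simplereg_bugs(x, y)
    # dnorm(0, 0.0001) has precision 0.0001, i.e. sd = 1/sqrt(0.0001) = 100
    α ~ Normal(0, 100)
    β ~ Normal(0, 100)
    # dgamma(0.001, 0.001) has shape 0.001 and rate 0.001, i.e. scale = 1/0.001 = 1000
    τ ~ Gamma(0.001, 1000)
    σ = 1 / sqrt(τ)
    x̄ = mean(x)
    for i in eachindex(x)
        y[i] ~ Normal(α + β * (x[i] - x̄), σ)
    end
end

Is that the right reading of the two parameterizations?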
Also, I didn’t see a simple way to provide initial values. How is that handled in Turing?
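The closest thing I found is a keyword argument to sample. Here is a sketch of what I tried, assuming the keyword is called initial_params (I believe older Turing releases called it init_params) and that the values are matched to parameters in the order they appear in the model:

# Reproduce the BUGS INITS list(alpha = 0.1, beta = 0.6, tau = 1).
# Assumption: values are given in model order (α, β, τ), and the keyword
# is `initial_params` (older Turing versions used `init_params`).
chain = sample(model, MH(), iterations; initial_params = [0.1, 0.6, 1.0])

Is that the intended way to do it?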
Finally, I saw that burn-in is handled differently depending on which sampler is selected, and MH doesn’t seem to use it at all. Why is that?
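For what it’s worth, the workaround I landed on is dropping early samples generically; a sketch, assuming sample accepts the AbstractMCMC keyword discard_initial (otherwise one can just subset the chain afterwards):

# Assumption: `discard_initial` is the generic AbstractMCMC keyword for
# discarding the first n draws as manual burn-in, independent of sampler.
chain = sample(model, MH(), iterations; discard_initial = 1000)
# Alternatively, subset the chain after sampling
# (if your MCMCChains version supports range indexing):
burned = chain[1001:end, :, :]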