Hi everyone,
I’m new to Bayesian inference and Turing, so I realize this is likely user error more than anything. I’m trying to fit a simple model; I’ve tried the two versions shown below.
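For context, here is roughly how I generate test data (the specific values below are placeholders; my real data has the same shape, i.e. positive counts driven by a latent V per sample):

```julia
using Turing, Distributions, Random

Random.seed!(1)
V0 = 100
σ_true, C0_true = 5.0, 100.0   # placeholder "true" values
N = 50
V_true = rand(truncated(Normal(V0, σ_true), 0, Inf), N)  # latent V per sample
samples = rand.(Poisson.(C0_true .* V_true))             # integer counts, as model1 expects
```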
```julia
@model function model1(samples)
    V0 = 100  # initial mean for V
    # Priors
    σ ~ truncated(Normal(0.01 * V0, 0.15 * V0), 0, Inf)  # standard deviation parameter
    C0 ~ truncated(Normal(100, 60), 0, Inf)              # mean parameter
    V ~ filldist(truncated(Normal(V0, σ), 0, Inf), length(samples))  # intermediate variable
    # Likelihood
    for i in eachindex(samples)
        samples[i] ~ Poisson(C0 * V[i])
    end
end
```
```julia
@model function model2(samples)
    V0 = 100  # initial mean for V
    # Priors
    σ ~ truncated(Normal(0.01 * V0, 0.15 * V0), 0, Inf)  # standard deviation parameter
    C0 ~ truncated(Normal(100, 60), 0, Inf)              # mean parameter
    V ~ filldist(truncated(Normal(V0, σ), 0, Inf), length(samples))  # intermediate variable
    # Gaussian approximation of the Poisson likelihood
    for i in eachindex(samples)
        samples[i] ~ truncated(Normal(C0 * V[i], sqrt(C0 * V[i])), 0, Inf)
    end
end
```
I can barely get Turing to sample the posterior. NUTS complains about failing to find valid initial parameters, and seeding it with the MLE estimates instead eventually results in a DomainError. PG does run, but sampling barely progresses and it is very slow.
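Concretely, the calls I’ve been trying look roughly like this (reconstructed from memory; the sampler settings are just what I happened to use, and the initialization keyword is `initial_params` in recent Turing releases, `init_params` in older ones):

```julia
using Turing, Optim

model = model1(samples)

# Plain NUTS: "failed to find valid initial parameters"
chain = sample(model, NUTS(), 1000)

# Seeding NUTS with the MLE eventually throws a DomainError
mle = optimize(model, MLE())
chain = sample(model, NUTS(), 1000; initial_params = mle.values.array)

# PG runs, but progress is glacial
chain = sample(model, PG(20), 1000)
```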
On the other hand, Stan seems to work right out of the box with the following translated code:
```stan
data {
  int<lower=1> N;            // number of samples
  array[N] real samples;     // observed samples
}
parameters {
  real<lower=0> sigma;       // standard deviation parameter, constrained to be non-negative
  real<lower=0> C0;          // mean parameter, constrained to be non-negative
  array[N] real<lower=0> V;  // intermediate variable, constrained to be non-negative
}
model {
  real V0 = 100;             // initial mean for V
  // Priors
  sigma ~ normal(0.01 * V0, 0.15 * V0);
  C0 ~ normal(100, 60);
  V ~ normal(V0, sigma);     // implicit truncation to positive values due to <lower=0> constraint
  // Likelihood
  for (i in 1:N) {
    samples[i] ~ normal(C0 * V[i], sqrt(C0 * V[i]));
  }
}
```
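In case it helps with reproducing, this is roughly how I ran the Stan version (via StanSample.jl; the model name is arbitrary):

```julia
using StanSample

stan_code = "..."  # the Stan program above
sm = SampleModel("model2_stan", stan_code)
rc = stan_sample(sm; data = Dict("N" => length(samples), "samples" => samples))
if success(rc)
    df = read_samples(sm, :dataframe)
end
```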
I was hoping someone could help me figure out how to get the Turing code to work.