Hi there!

I’m new to Julia and Turing, but I’m trying to translate some of the ecological models from Kéry and Royle (2016) from JAGS to Turing. The specific model here is a binomial N-mixture model (a Poisson-Binomial mixture). I have experimented with various samplers to deal with the discrete latent parameters, but unfortunately I can’t seem to get results close to those from the JAGS implementation. The Python code for simulating the data and fitting the model in JAGS is as follows:
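(For context, the per-site likelihood this Poisson-Binomial mixture implies marginalizes the latent abundance N. A quick SciPy sketch — the function name and the truncation `K` are my own choices, not part of the model code below:)

```
import numpy as np
from scipy.stats import binom, poisson

def site_likelihood(c, lam, p, K=50):
    """P(C_i = c | lam, p): sum the latent abundance N out, truncated at K.

    c is the vector of counts for one site across surveys.
    """
    Ns = np.arange(max(c), K + 1)        # N must be at least the largest count
    prior = poisson.pmf(Ns, lam)         # P(N | lam)
    # P(c | N, p): independent binomial counts across surveys, shared N
    lik = np.prod(binom.pmf(np.asarray(c)[:, None], Ns, p), axis=0)
    return np.sum(prior * lik)

print(site_likelihood([1, 2], 2.5, 0.4))
```

As a sanity check, summing this over all plausible count vectors comes out to (essentially) 1.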

```
import numpy as np
import pyjags

n_sites = 150
n_surveys = 2
counts = np.zeros((n_sites, n_surveys), dtype=int)

expected_abundance = 2.5
p_detection = 0.4

# Production process
N = np.random.poisson(expected_abundance, size=n_sites)
# Observation process
for survey in range(n_surveys):
    counts[:, survey] = np.random.binomial(size=n_sites, n=N, p=p_detection)

code = """
model {
  lambda ~ dgamma(0.001, 0.001)
  p ~ dbeta(1, 1)
  for (i in 1:M) {
    N[i] ~ dpois(lambda)
    for (j in 1:J) {
      C[i,j] ~ dbin(p, N[i])
    }
  }
}
"""

M, J = counts.shape
model = pyjags.Model(
    code=code,
    data={"C": counts, "M": M, "J": J},
    # N must be initialized at or above the largest count per site
    init={"N": counts.max(1), "p": 0.5, "lambda": 1},
)
# Burn-in, then keep 1000 thinned draws of lambda and p
_ = model.sample(5000, thin=25, vars=["lambda", "p"])
samples = model.sample(25000, thin=25, vars=["lambda", "p"])
```

which produces the following estimates:

```
         mean     sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_mean  ess_sd  ess_bulk  ess_tail  r_hat
lambda  2.228  0.329   1.653    2.804      0.007    0.005    1978.0  1900.0    2221.0    2870.0    1.0
p       0.465  0.061   0.350    0.578      0.001    0.001    1916.0  1916.0    1992.0    2630.0    1.0
```

In Julia with Turing, I use the following code:

```
using Random, Distributions, StatsBase, Turing

n_sites = 150
λ = 2.5
N = rand(Poisson(λ), n_sites)

p = 0.4
n_surveys = 2
C = Array{Int64, 2}(undef, n_sites, n_surveys)
for j = 1:n_surveys
    for i = 1:n_sites
        C[i, j] = rand(Binomial(N[i], p))
    end
end

@model function binomial_nmixture(C)
    n_sites, n_surveys = size(C)
    # NOTE: Turing's Gamma uses shape k and scale θ, so JAGS's
    # dgamma(0.001, 0.001) (shape, rate) becomes Gamma(0.001, 1000)
    λ ~ Gamma(0.001, 1000)
    p ~ Beta(1, 1)
    N = tzeros(Int, n_sites)
    for i = 1:n_sites
        N[i] ~ Poisson(λ)
        for j = 1:n_surveys
            C[i, j] ~ Binomial(N[i], p)
        end
    end
end

iterations = 10000
# Row-wise maxima of C; intended as an initial value for N (currently unused)
proposal, _ = findmax(C, dims=2)
chain = sample(
    binomial_nmixture(C),
    SMC(),
    iterations,
    progress=true)
```

But the results don’t come close to the true parameters. Increasing the number of iterations doesn’t seem to do much, and I have also experimented with combining HMC with PG, but without much improvement. Does anyone have an idea of how to proceed here? Many thanks!