Hello,
I’m currently trying to sample from a posterior with a mixture of differentiable and discrete parameters, some of which I can’t marginalize. I marginalized the ones I could using acclogp!, and sample the remaining discrete parameters with PG.
However, I realized that when doing this, PG fails to sample from the posterior and samples from the prior instead.
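For concreteness, the kind of model I have in mind looks roughly like this (a hypothetical sketch, not my actual model): a discrete parameter z is left for PG to sample, while a per-observation mixture indicator is marginalized out by hand via acclogp!:

```julia
@model mixedmodel(Y) = begin
    μ ~ Normal(0, 1)
    z ~ Categorical([0.5, 0.5])  # discrete parameter I can’t marginalize; PG samples it
    # per-observation mixture indicator marginalized out by hand:
    # p(y | μ, z) = 0.5 * N(y; μ, 1) + 0.5 * N(y; μ + z, 1)
    for y in Y
        Turing.acclogp!(__varinfo__,
            log(0.5 * pdf(Normal(μ, 1), y) + 0.5 * pdf(Normal(μ + z, 1), y)))
    end
end
```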
Here is an MWE:
```julia
using Turing

@model minimodel(Y) = begin
    μ ~ Normal(0, 1)
    σ ~ InverseGamma(2, 1)
    # add the log-likelihood of Y to the model’s log joint by hand
    Turing.acclogp!(__varinfo__, logpdf(filldist(Normal(μ, σ), length(Y)), Y))
end

Y = 2 .+ 0.3 .* randn(1000)
chn = sample(minimodel(Y), PG(20), 1000)
```
This returns:
```
Summary Statistics
  parameters      mean      std  naive_se     mcse       ess    rhat  ess_per_sec
      Symbol   Float64  Float64   Float64  Float64   Float64 Float64      Float64

           μ   -0.0106   0.9972    0.0315   0.0313  996.6409  0.9990     231.9928
           σ    1.0773   2.1186    0.0670   0.0670  970.7092  0.9991     225.9565
```
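These means match the priors: Normal(0, 1) has mean 0 and InverseGamma(2, 1) has mean 1/(2 − 1) = 1, whereas the true posterior should concentrate near μ ≈ 2 and σ ≈ 0.3, the values used to generate Y. One quick check (assuming the Prior() sampler is available in your Turing version):

```julia
using Statistics

mean(chn[:μ]), mean(chn[:σ])  # ≈ 0 and ≈ 1, i.e. the prior means

# draws from the prior look essentially the same as the PG chain:
prior_chn = sample(minimodel(Y), Prior(), 1000)
```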
By plotting the result, you can easily check that it just sampled the prior. I can work around this issue with a trick along the lines of:
```julia
L = logpdf(filldist(Normal(μ, σ), length(Y)), Y)
1 ~ Bernoulli(exp(L / length(Y)))
```
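In context, the full workaround model would look something like this (a sketch; the name minimodel_workaround is mine). Scaling by length(Y) keeps exp from underflowing, at the cost of adding only L / length(Y), a tempered likelihood, to the log joint, since logpdf(Bernoulli(p), 1) = log(p):

```julia
@model minimodel_workaround(Y) = begin
    μ ~ Normal(0, 1)
    σ ~ InverseGamma(2, 1)
    L = logpdf(filldist(Normal(μ, σ), length(Y)), Y)
    # observing the literal 1 adds log(exp(L / length(Y))) = L / length(Y)
    # to the log joint, so PG now actually sees the (tempered) likelihood;
    # this assumes exp(L / length(Y)) ≤ 1, i.e. a nonpositive average log-likelihood
    1 ~ Bernoulli(exp(L / length(Y)))
end
```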
But I’d like to know whether this is intended behavior, and if so, what am I missing?
Thanks