Is there a way to get more information about when Turing is unable to sample from a model?

For example, I tried a simple (univariate) mixture-of-Gaussians model, and when I try to sample from it I get this error:

```
ERROR: ArgumentError: Right-hand side of a ~ must be subtype of Distribution or a vector of Distributions.
```

It would be helpful to know which `~` has the offending right-hand side. Maybe something in DynamicPPL could report a line number?

Here’s the model:

```
@model mm(y) = begin
    N = length(y)
    μ1 ~ Normal()
    μ2 ~ Normal()
    μ3 ~ Normal()
    μ ~ [μ1, μ2, μ3]
    ps ~ Dirichlet(ones(3))
    k = Vector{Int}(undef, N)
    for i in 1:N
        k[i] ~ Categorical(ps)
        y[i] ~ Normal(μ[k[i]])
    end
    return k
end
```

[fce5fe82] Turing v0.13.0

[31c24e10] Distributions v0.23.2

I found the problem myself, but any debugging info or tooling would still be helpful. This line:

```
μ ~ [μ1, μ2, μ3]
```

Should be:

```
μ = [μ1, μ2, μ3]
```


Could you please open an issue for this on DynamicPPL? Thanks!

Just as a remark: in case you aim to sample `k` using particle Gibbs, you will need to use `TArray`s.

Here is a slightly modified version using `tzeros` to construct a `TArray`.

```
@model mm(y) = begin
    N = length(y)
    μ ~ filldist(Normal(), 3)
    ps ~ Dirichlet(3, 1.0)
    k = tzeros(Int, N)
    for i in 1:N
        k[i] ~ Categorical(ps)
        y[i] ~ Normal(μ[k[i]])
    end
    return k
end
```

Thanks Martin. I stumbled across the Turing Guide document. That helped quite a bit. I should have read that first.

However, using your model with a dataset drawn from a mixture of 3 Gaussians, generated like this:

```
y = Vector{Float32}()
λ = [Normal(2), Normal(8), Normal(-1)]
G = Categorical([.3, .5, .2])
for i in 1:500
    a = rand(G)
    push!(y, rand(λ[a]))
end
```

and using this sample statement (with what seems like very relaxed parameters):

```
s = Gibbs(PG(10, :k), NUTS(20, .65, :μ, :ps))
@time chn = sample(mm(y), s, 500);
```

it took ~45 minutes to complete. Does that seem expected for 500 data points and 500 iterations?

Hi, sorry for the late reply.

The current implementation of particle Gibbs doesn’t exploit parallelism. Ideally you would use only HMC or NUTS in this model.

Meaning you would marginalise out the discrete latent assignments `k`. Does that answer your question?
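To illustrate, here is a hedged sketch (untested, and the exact `NUTS` constructor and `MixtureModel` support may differ across Turing/Distributions versions) of what marginalising `k` out could look like; the model name `mm_marginalised` is made up for this example:

```
# Sketch only: summing over mixture components inside MixtureModel
# removes the discrete k, so NUTS alone can sample μ and ps.
@model mm_marginalised(y) = begin
    μ ~ filldist(Normal(), 3)
    ps ~ Dirichlet(3, 1.0)
    for i in 1:length(y)
        # Likelihood is Σ_j ps[j] * pdf(Normal(μ[j]), y[i])
        y[i] ~ MixtureModel([Normal(μ[j]) for j in 1:3], ps)
    end
end

# All remaining parameters are continuous, so no Gibbs/PG step is needed.
chn = sample(mm_marginalised(y), NUTS(0.65), 500)
```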

Yes, that makes sense. Thanks for the follow up!