Trouble coding a continuous / discrete 'mixture model' in Turing

Hi!

I have a generative process of the following kind: it generates a number of zeros followed by Normal draws, for a total of 1000 values, as follows:

μ, σ,  α, N = 0, 1, 0.3, 1000 # Parameters of the generative process
logistic(x) = exp(x) / (1 + exp(x)) # The logistic function
f(x,N) = round(Int, N* logistic(x)) # A function to compute the desired number of zeros

y₀ = [zeros(f(α,N)); rand(Normal(μ,σ), N - f(α,N))] # The synthetic data

This works fine. My question is about how to code my generative model inside Turing. I start with:

@model function outcome(Y₀)

    # Assumptions
    μ ~ Uniform(0,1)
    σ ~ Uniform(0,1)
    α ~ Uniform(0,1)
    
    # Observations
    Y₀ .~ filldist(Normal(μ,σ),length(Y₀))
    
    return Y₀ 

end

My question is about the last step. Obviously, the last line does not do what I want: it generates length(Y₀) Normal draws, whereas inside the Turing model I need Y₀ to have f(α, length(Y₀)) zeros, given the simulated α, followed by length(Y₀) - f(α, length(Y₀)) Normal draws. I need to replace that last line with something that does what I just described, but I don’t know how to do this.

Any help will be greatly appreciated.

I think you should rethink your model. Basically the observed number of zeros in the data allows you to solve for the range of alpha that is feasible ahead of time. So just fit the normal component.
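For example, something along these lines (a rough, untested sketch; the model name nonzero_only and the variable y_nz are names I'm making up, and the priors are just placeholders):

using Turing, Distributions

@model function nonzero_only(y_nz)
    μ ~ Uniform(0, 1)
    σ ~ Uniform(0, 1)
    # only the nonzero part of the data enters the likelihood
    y_nz ~ filldist(Normal(μ, σ), length(y_nz))
end

y_nz  = filter(!iszero, y₀)   # the Normal draws; the zero count length(y₀) - length(y_nz) is known from the data
chain = sample(nonzero_only(y_nz), NUTS(0.65), 1000)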

Thank you for your answer. I need to think a bit more about it, but in the meantime please note that I stripped some features and parameters out of the actual model because they were inessential to the computational problem I was having; those features make the inference about alpha less trivial. My main issue is that I need to generate these Y₀ with a fixed length, where each Y₀ may have a number of zeros that depends on the realization of one of the ‘hyperparameters.’

I’ll simplify the problem further, as follows:

μ, σ, N₀ = 0, 1, 750 # Parameters of the generative process
y₀ = [zeros(N₀); rand(Normal(μ,σ),1000 - N₀)] # The synthetic data

# Turing 
@model function outcome(Y₀)

    # Assumptions
    μ ~ Uniform(0,1)
    σ ~ Uniform(0,1)
    N₀ ~ DiscreteUniform(1,1000)
    
    # Observations
    Y₀ .~ filldist(Normal(μ,σ),length(Y₀))
    
    return Y₀ 

end

Again, the goal is to replace the

    Y₀ .~ filldist(Normal(μ,σ),length(Y₀))

line with something that creates Y₀ as having N₀ ‘zeros’ and (1000 - N₀) ‘Normal(μ,σ)’ rv’s.

Last, I want my Y₀ to always have 1000 elements so that I don’t create an array mismatch when I run

chain_outcome = sample(outcome(y₀), NUTS(0.65), 10000);

down the road.

Again, thank you for your help.

First off, I think you want ~ instead of .~, since filldist creates a multivariate distribution of independent components.

You can try something like:

Y₀ ~ arraydist([n < N₀ ? Dirac(0.0) : Normal(μ, σ) for n in 1:length(Y₀)])

edit: probably best to use Dirac(0.0) rather than Dirac(0), so you get a Float64 instead of an Int.

I don’t think Dirac will sample properly with NUTS though.

Thank you. I tried that and it did not work for me. I get this:

ERROR: MethodError: no method matching Product(::Vector{UnivariateDistribution})

I also went back to your original suggestion of just modeling the Normal component. That does not work for me either, because I need to generate the Normal component as a vector whose size varies across MCMC trials, and I then get ERROR: DimensionMismatch: inconsistent array dimensions.

I don’t see why this would be. In any given inferential run you have fixed data which has a fixed number of zeros in front and then a fixed number of Normals after.

Trial 1 may have 300 zeros and 700 normals.
Trial 2 may have 400 zeros and 600 normals.
And so on.

The data arrive as a set number of zeros followed by the remaining normals, up to 1000 values in total.
My Bayesian model generates the parameters for the normal, and a parameter that determines the number of zeros.

I would love to hear what you all think, and thank you in advance.

Yes, of course, but for any given run the number of zeros is fixed, so any given MCMC chain only ever has to generate a fixed number of Normals during that run.

Yes, I think we’re on the same page about what I need the code to do. I just can’t get the code to do it.

μ, σ, N₀ = 0, 1, 750 # Parameters of the generative process
y₀ = [zeros(N₀); rand(Normal(μ,σ),1000 - N₀)] # The synthetic data

# Turing 
@model function outcome(Y₀)

    # Assumptions
    μ ~ Uniform(0,1)
    σ ~ Uniform(0,1)
    N₀ ~ DiscreteUniform(1,1000)
    
    # Observations
    Y₀ ~ filldist(Normal(μ,σ),length(Y₀))
    
    return Y₀ 

end

I think you just need to pass a view of the data into the model:

Nzeros = findfirst(x -> x != 0.0, Y0) - 1      # number of leading zeros in the data
mymodel = outcome(@views Y0[Nzeros+1:end])     # pass only the Normal tail to the model

I tried that and it does not work either. Thank you for trying.

Wait, what do you mean by “does not work”?

Did you get rid of .~? You need just ~, I believe.

With .~ it means "each entry of this vector is distributed according to the multivariate filldist(Normal…) distribution" (i.e. it would be a vector of vectors).

While ~ means "this vector is distributed according to filldist(Normal…)", i.e. it is a vector of numbers.
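To make the distinction concrete, a minimal sketch (assuming Y is a plain Vector{Float64} of observations):

@model function demo(Y)
    μ ~ Uniform(0, 1)
    σ ~ Uniform(0, 1)
    Y ~ filldist(Normal(μ, σ), length(Y))   # one product distribution over the whole vector
    # equivalent likelihood: Y .~ Normal(μ, σ)   # broadcast the univariate distribution over the elements
end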

Here is the entire code:

using Turing, Random, Distributions
Random.seed!(746);

μ, σ, n₀ = 0, 1, 750 # Parameters of the generative process
y₀ = [zeros(n₀);rand(LogNormal(μ,σ),1000 - n₀)] # The synthetic data

# Turing 
@model function outcome(Y₀)

    # Assumptions
    μ ~ Uniform(0,1)
    σ ~ Uniform(0,1)
    N₀ ~ DiscreteUniform(1,1000)
    
    # Observations
    Y₀ ~ arraydist([ n < N₀ ? Dirac(0.0) : Normal(μ,σ) for n in 1:length(Y₀)])
    # Y₀ .~ filldist(Normal(μ,σ),N₀)

    return Y₀ 
end

# New Idea
Nzeros = findfirst(x-> x != 0.0,y₀)-1
mymodel = outcome(@views y₀[Nzeros+1:end])

# MCMC sampling
chain_outcome = sample(mymodel, NUTS(0.65), 1000);

# MCMC Results
summarystats(chain_outcome)

Here’s the error message:

MethodError: no method matching Product(::Vector{UnivariateDistribution})
Closest candidates are:
  Product(::V) where {S<:ValueSupport, T<:UnivariateDistribution{S}, V<:AbstractVector{T}} at ~/.julia/packages/Distributions/tFdHM/src/multivariate/product.jl:25
Stacktrace:
  [1] arraydist(dists::Vector{UnivariateDistribution})
    @ DistributionsAD ~/.julia/packages/DistributionsAD/bxCGB/src/arraydist.jl:6
  [2] outcome(__model__::DynamicPPL.Model{typeof(outcome), (:Y₀,), (), (), Tuple{SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}}, Tuple{}, DynamicPPL.DefaultContext}, __varinfo__::DynamicPPL.TypedVarInfo{NamedTuple{(:μ, :σ, :N₀), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:μ, Setfield.IdentityLens}, Int64}, Vector{Uniform{Float64}}, Vector{AbstractPPL.VarName{:μ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Uniform{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:N₀, Setfield.IdentityLens}, Int64}, Vector{DiscreteUniform}, Vector{AbstractPPL.VarName{:N₀, Setfield.IdentityLens}}, Vector{Int64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, __context__::DynamicPPL.SamplingContext{DynamicPPL.SampleFromUniform, DynamicPPL.DefaultContext, Random._GLOBAL_RNG}, Y₀::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true})
    @ Main ~/Dropbox/University Life/Academic/Research/Working Papers/egalitarian equivalent treatment effects/Julia_prototypes/test_of_Turing.jl:16
  [3] macro expansion
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:493 [inlined]
  [4] _evaluate!!
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:476 [inlined]
  [5] evaluate_threadunsafe!!
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:451 [inlined]
  [6] evaluate!!
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:404 [inlined]
  [7] evaluate!! (repeats 2 times)
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:415 [inlined]
  [8] initialstep(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{typeof(outcome), (:Y₀,), (), (), Tuple{SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}}, Tuple{}, DynamicPPL.DefaultContext}, spl::DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, vi::DynamicPPL.TypedVarInfo{NamedTuple{(:μ, :σ, :N₀), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:μ, Setfield.IdentityLens}, Int64}, Vector{Uniform{Float64}}, Vector{AbstractPPL.VarName{:μ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Uniform{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:N₀, Setfield.IdentityLens}, Int64}, Vector{DiscreteUniform}, Vector{AbstractPPL.VarName{:N₀, Setfield.IdentityLens}}, Vector{Int64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}; init_params::Nothing, nadapts::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Turing.Inference ~/.julia/packages/Turing/S4Y4B/src/inference/hmc.jl:173
  [9] step(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{typeof(outcome), (:Y₀,), (), (), Tuple{SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}}, Tuple{}, DynamicPPL.DefaultContext}, spl::DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}; resume_from::Nothing, init_params::Nothing, kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}})
    @ DynamicPPL ~/.julia/packages/DynamicPPL/R7VK9/src/sampler.jl:104
 [10] macro expansion
    @ ~/.julia/packages/AbstractMCMC/fnRmh/src/sample.jl:120 [inlined]
 [11] macro expansion
    @ ~/.julia/packages/ProgressLogging/6KXlp/src/ProgressLogging.jl:328 [inlined]
 [12] macro expansion
    @ ~/.julia/packages/AbstractMCMC/fnRmh/src/logging.jl:9 [inlined]
 [13] mcmcsample(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{typeof(outcome), (:Y₀,), (), (), Tuple{SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}}, Tuple{}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, N::Int64; progress::Bool, progressname::String, callback::Nothing, discard_initial::Int64, thinning::Int64, chain_type::Type, kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}})
    @ AbstractMCMC ~/.julia/packages/AbstractMCMC/fnRmh/src/sample.jl:111
 [14] sample(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{typeof(outcome), (:Y₀,), (), (), Tuple{SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}}, Tuple{}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, N::Int64; chain_type::Type, resume_from::Nothing, progress::Bool, nadapts::Int64, discard_adapt::Bool, discard_initial::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Turing.Inference ~/.julia/packages/Turing/S4Y4B/src/inference/hmc.jl:133
 [15] sample
    @ ~/.julia/packages/Turing/S4Y4B/src/inference/hmc.jl:103 [inlined]
 [16] #sample#2
    @ ~/.julia/packages/Turing/S4Y4B/src/inference/Inference.jl:145 [inlined]
 [17] sample
    @ ~/.julia/packages/Turing/S4Y4B/src/inference/Inference.jl:138 [inlined]
 [18] #sample#1
    @ ~/.julia/packages/Turing/S4Y4B/src/inference/Inference.jl:135 [inlined]
 [19] sample(model::DynamicPPL.Model{typeof(outcome), (:Y₀,), (), (), Tuple{SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}}, Tuple{}, DynamicPPL.DefaultContext}, alg::NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}, N::Int64)
    @ Turing.Inference ~/.julia/packages/Turing/S4Y4B/src/inference/Inference.jl:129
 [20] top-level scope
    @ ~/Dropbox/University Life/Academic/Research/Working Papers/egalitarian equivalent treatment effects/Julia_prototypes/test_of_Turing.jl:28
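That MethodError comes from mixing a discrete Dirac with continuous Normals: the comprehension then has element type UnivariateDistribution (abstract, with no common value support), and the Product constructor that arraydist calls (see the "Closest candidates" line above) needs a vector of distributions sharing a single value support. A quick way to reproduce it in isolation (sketch):

using Turing, Distributions

dists = [Dirac(0.0), Normal(0.0, 1.0)]   # mixed discrete/continuous elements
eltype(dists)                            # abstract UnivariateDistribution
arraydist(dists)                         # same MethodError as above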

In this case you don’t want Dirac anymore, you want

   Y0 ~ filldist(Normal(...),...)
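i.e., with the trimmed vector from the earlier suggestion, something like (sketch):

    Y₀ ~ filldist(Normal(μ, σ), length(Y₀))   # length of the vector actually passed in, not the sampled N₀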

I replaced with

    Y₀ ~ filldist(Normal(μ,σ),N₀)

Then I get:

ERROR: DimensionMismatch: inconsistent array dimensions
Stacktrace:
  [1] logpdf
    @ ~/.julia/packages/Distributions/tFdHM/src/common.jl:247 [inlined]
  [2] loglikelihood
    @ ~/.julia/packages/Distributions/tFdHM/src/common.jl:451 [inlined]
  [3] observe
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/context_implementations.jl:233 [inlined]
  [4] observe
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/context_implementations.jl:230 [inlined]
  [5] tilde_observe
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/context_implementations.jl:137 [inlined]
  [6] tilde_observe
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/context_implementations.jl:135 [inlined]
  [7] tilde_observe
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/context_implementations.jl:130 [inlined]
  [8] tilde_observe!!
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/context_implementations.jl:183 [inlined]
  [9] tilde_observe!!
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/context_implementations.jl:170 [inlined]
 [10] outcome(__model__::DynamicPPL.Model{typeof(outcome), (:Y₀,), (), (), Tuple{SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}}, Tuple{}, DynamicPPL.DefaultContext}, __varinfo__::DynamicPPL.UntypedVarInfo{DynamicPPL.Metadata{Dict{AbstractPPL.VarName, Int64}, Vector{Distribution}, Vector{AbstractPPL.VarName}, Vector{Real}, Vector{Set{DynamicPPL.Selector}}}, Float64}, __context__::DynamicPPL.SamplingContext{DynamicPPL.SampleFromUniform, DynamicPPL.DefaultContext, Random._GLOBAL_RNG}, Y₀::SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true})
    @ Main ~/Dropbox/University Life/Academic/Research/Working Papers/egalitarian equivalent treatment effects/Julia_prototypes/test_of_Turing.jl:16
 [11] macro expansion
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:493 [inlined]
 [12] _evaluate!!
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:476 [inlined]
 [13] evaluate_threadunsafe!!
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:451 [inlined]
 [14] evaluate!!
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:404 [inlined]
 [15] evaluate!!
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:415 [inlined]
 [16] Model
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:375 [inlined]
 [17] VarInfo
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/varinfo.jl:127 [inlined]
 [18] VarInfo
    @ ~/.julia/packages/DynamicPPL/R7VK9/src/varinfo.jl:126 [inlined]
 [19] step(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{typeof(outcome), (:Y₀,), (), (), Tuple{SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}}, Tuple{}, DynamicPPL.DefaultContext}, spl::DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}; resume_from::Nothing, init_params::Nothing, kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}})
    @ DynamicPPL ~/.julia/packages/DynamicPPL/R7VK9/src/sampler.jl:86
 [20] macro expansion
    @ ~/.julia/packages/AbstractMCMC/fnRmh/src/sample.jl:120 [inlined]
 [21] macro expansion
    @ ~/.julia/packages/ProgressLogging/6KXlp/src/ProgressLogging.jl:328 [inlined]
 [22] macro expansion
    @ ~/.julia/packages/AbstractMCMC/fnRmh/src/logging.jl:9 [inlined]
 [23] mcmcsample(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{typeof(outcome), (:Y₀,), (), (), Tuple{SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}}, Tuple{}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, N::Int64; progress::Bool, progressname::String, callback::Nothing, discard_initial::Int64, thinning::Int64, chain_type::Type, kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}})
    @ AbstractMCMC ~/.julia/packages/AbstractMCMC/fnRmh/src/sample.jl:111
 [24] sample(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{typeof(outcome), (:Y₀,), (), (), Tuple{SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}}, Tuple{}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}}, N::Int64; chain_type::Type, resume_from::Nothing, progress::Bool, nadapts::Int64, discard_adapt::Bool, discard_initial::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Turing.Inference ~/.julia/packages/Turing/S4Y4B/src/inference/hmc.jl:133
 [25] sample
    @ ~/.julia/packages/Turing/S4Y4B/src/inference/hmc.jl:103 [inlined]
 [26] #sample#2
    @ ~/.julia/packages/Turing/S4Y4B/src/inference/Inference.jl:145 [inlined]
 [27] sample
    @ ~/.julia/packages/Turing/S4Y4B/src/inference/Inference.jl:138 [inlined]
 [28] #sample#1
    @ ~/.julia/packages/Turing/S4Y4B/src/inference/Inference.jl:135 [inlined]
 [29] sample(model::DynamicPPL.Model{typeof(outcome), (:Y₀,), (), (), Tuple{SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}}, Tuple{}, DynamicPPL.DefaultContext}, alg::NUTS{Turing.Essential.ForwardDiffAD{0}, (), AdvancedHMC.DiagEuclideanMetric}, N::Int64)
    @ Turing.Inference ~/.julia/packages/Turing/S4Y4B/src/inference/Inference.jl:129
 [30] top-level scope
    @ ~/Dropbox/University Life/Academic/Research/Working Papers/egalitarian equivalent treatment effects/Julia_prototypes/test_of_Turing.jl:27

I had tried something like this. In a ‘standard’ problem, one has a Y0, passes it as the argument of the Turing model, and generates data of length(Y0), so the dimensions match. Here, if I pass only the non-zeros, I don’t want to simulate just that number of normals, because information is lost (namely, how many zeros there were).

After giving many thanks to dlakelan for his many answers, I must admit that I am still very confused. I don’t think what I’m trying to do is very complicated or rare. There has to be a simple way to do this in Turing.

Let me restate. Maybe others can pitch in as well.

  1. The data to be analyzed is a 1000 dimensional vector with real numbers, many of which can be zero.
  2. The Turing model generates parameters. One of the parameters is “the number of non-zeros,” Nz, a random number between 1 and 1000.
  3. The observations inside the model are generated as follows: the first 1000 - Nz entries are zeros, and the remaining Nz are Normal(0,1).
  4. The Turing model spits out a 1000 dimensional vector.

But this is just a constant: it doesn’t change at each step of the MCMC, so you keep track of it "outside" the MCMC.

EDIT:

A Bayesian parameter is for describing some uncertainty in a quantity. There is no uncertainty in the number of zeros at the start of the vector, therefore there is no meaningful parameter.

Perhaps what you want is a generated quantity… After sampling the Normals, you create a padded vector which contains a bunch of 0 values followed by the normal values…?

See generated_quantities in the docs?
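A minimal sketch of that idea (untested; padded_outcome, y_nz and Ntotal are names I'm making up): the model observes only the nonzero values, returns the zero-padded 1000-element vector, and generated_quantities extracts that return value for each posterior draw.

using Turing, Distributions

@model function padded_outcome(y_nz, Ntotal)
    μ ~ Uniform(0, 1)
    σ ~ Uniform(0, 1)
    y_nz ~ filldist(Normal(μ, σ), length(y_nz))
    # the padded vector is a generated quantity, not an observation
    return vcat(zeros(Ntotal - length(y_nz)), y_nz)
end

y_nz   = filter(!iszero, y₀)
model  = padded_outcome(y_nz, 1000)
chain  = sample(model, NUTS(0.65), 1000)
padded = generated_quantities(model, chain)   # one padded 1000-vector per posterior draw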

I think what is confusing here is that it’s not clear what you are really trying to do. Are you hoping to generate data sets, each one having a different number of zeros? Or are you trying to do inference on a given data set to determine the value of some parameters, like the parameters of the normal distribution?

If you are trying to generate data, then maybe Turing is not the best approach; just generate the data directly, the way you would in R.

But if you are trying to do inference, then the number of zeros at the beginning of the data is known before running the model; you should do inference on just the unknown parameters, and for that you need only the Normal values towards the end of the data.

If you can clarify what your goal is I think it will be clearer.

Here’s the actual code I need to run. It provides context for the role the ‘zeros’ are playing.

using Turing, Random, Distributions
Random.seed!(746);

# Two useful functions
logistic(x) = exp(x) / (1 + exp(x)) 
z(N,x) = N - round(Int, N * x)  # Number of zeros in a sample of size N when a fraction x of the sample is non-zero

### Characteristics of the true generating process that I would be trying to 'recover' using the Bayesian model

μ, τ = 0.1, 0.2
σ, λ = .02, .01
α, γ = 0.3, 0.7 
N = 1000

# Simulate data from the outcome distributions of the control and treatment groups
z₀ = z(N,logistic(α)); z₁ = z(N,logistic(α + γ)) 
y₀ = [zeros(z₀); rand(LogNormal(μ,exp(σ)), N - z₀)] # z₀ zeros and N - z₀ LogNormals
y₁ = [zeros(z₁); rand(LogNormal(μ + τ, exp(σ + λ)), N - z₁)] # z₁ zeros and N - z₁ LogNormals

# Bayesian model
@model function outcome(Y₀,Y₁)

    # Assumptions
    μ ~ Normal(0, 5)
    τ ~ Normal(0, 5)
    σ ~ Normal(0, 5)
    λ ~ Normal(0, 5)
    α ~ Normal(0, 5)
    γ ~ Normal(0, 5)
    
    # Observations
    Y₀ ~ arraydist([ n < z(length(Y₀),logistic(α)) ? Dirac(0.0) : LogNormal(μ,exp(σ)) for n in 1:length(Y₀)])
    Y₁ ~ arraydist([ n < z(length(Y₁),logistic(α + γ)) ? Dirac(0.0) : LogNormal(μ + τ, exp(σ + λ)) for n in 1:length(Y₁)])
   
    return Y₀, Y₁

end

# MCMC sampling
chain_outcome = sample(outcome(y₀, y₁), NUTS(0.65), 100);

# MCMC Results
summarystats(chain_outcome)

When I run the MCMC sampling step, I get this error:

MethodError: no method matching Product(::Vector{UnivariateDistribution})

The problem I have is with the generation of Y₀ and Y₁ inside the Turing model.
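For what it’s worth, here is one way the observation step could be rewritten so that it never mixes a discrete Dirac with continuous LogNormals inside arraydist: treat the observed count of zeros in each group as a Binomial observation and give the nonzero tails a filldist(LogNormal(...)) likelihood. Note that the Binomial is a modelling substitute for the deterministic rounding in the generative code, so this is a rough, untested sketch of an alternative (outcome2 is a made-up name), not a drop-in fix:

using Turing, Random, Distributions

logistic(x) = exp(x) / (1 + exp(x))

@model function outcome2(n₀, y₀nz, n₁, y₁nz, N)
    # Assumptions (same priors as above)
    μ ~ Normal(0, 5); τ ~ Normal(0, 5)
    σ ~ Normal(0, 5); λ ~ Normal(0, 5)
    α ~ Normal(0, 5); γ ~ Normal(0, 5)

    # The observed zero counts carry the information about α and γ
    # (Binomial in place of the deterministic rounding; a modelling choice)
    n₀ ~ Binomial(N, 1 - logistic(α))
    n₁ ~ Binomial(N, 1 - logistic(α + γ))

    # The nonzero tails carry the information about μ, τ, σ and λ
    y₀nz ~ filldist(LogNormal(μ, exp(σ)), length(y₀nz))
    y₁nz ~ filldist(LogNormal(μ + τ, exp(σ + λ)), length(y₁nz))
end

model = outcome2(z₀, filter(!iszero, y₀), z₁, filter(!iszero, y₁), 1000)
chain_outcome = sample(model, NUTS(0.65), 100)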