Problem: Turing bug with a multivariate Bernoulli likelihood

I am trying to estimate the parameters of a model using Turing; the code below uses a toy model for simplicity. The model's output is a 10-dimensional response variable, so at each time point the response is a vector. Each entry of the response is the probability that species x is present at a given patch, so it ranges from 0 to 1, and the corresponding observation is binary: 1 if species x was present in that patch and 0 if it was not. This is why I use a Bernoulli distribution for the likelihood. Other distributions have multivariate versions, such as MvNormal for the normal, but I have not seen one for the Bernoulli, so I coded the likelihood myself with two loops, following a recommendation from a Julia user.

This is the code:

# Toy model: returns a matrix of probabilities, one column per time step
function fun_na(u, p, t_vec)
    y = Array{Float64}(undef, length(u), length(t_vec))
    for i in 1:length(t_vec)
        y[:, i] = p[1] .* rand(length(u))
    end
    return y
end

# Set parameters
t0 = 0.0
tf = 6574.0
tspan = (t0, tf)
t_vect = 1:tf
u = pop_init   # pop_init and p are defined elsewhere (not shown)

simulations = fun_na(u, p, t_vect)   # Example

# Load packages
using Turing
using StatsPlots

# Bayesian estimation --------------------------------
@model function fitlv(data, t_obs)
    # Prior distributions.
    c_d_dist ~ Uniform(0.1e-8, 0.1)
    c_h_dist ~ Uniform(0.1e-8, 0.1)
    #xi_dist ~ Uniform(0, 1)
    e_dist ~ Uniform(0.1e-8, 0.1)

    # Simulate Lotka-Volterra model.
    p = [c_d_dist, c_h_dist, e_dist]
    predicted = fun_na(u, p, t_vect)    # Simulations.

    # Observations: one Bernoulli per species per time point.
    if length(predicted) > 0
        for i in 1:length(t_obs)
            predicted_value = predicted[:, i]
            for j in 1:length(predicted_value)
                data[j, i] ~ Bernoulli(predicted_value[j])  # predicted_value[j] must be a valid probability
            end
        end
    end

    # Observation statement from the Turing tutorial; it does not work here
    # because Bernoulli is univariate, not multivariate:
    #for i in 1:length(t_obs)
    #    data[:, i] ~ Bernoulli(predicted[:, i])
    #end

    return nothing
end

# matrix_obs is a 10 x 18 matrix and t_obs holds the 18 observation times
model = fitlv(matrix_obs, t_obs)

# Sample with forward-mode automatic differentiation (the default).
iterations = 1000
ϵ = 0.05
τ = 10
chain = sample(model, Gibbs(HMC(ϵ, τ, :p), MH(:y)), iterations; progress = true)

I get the following error:
ERROR: MethodError: reducing over an empty collection is not allowed; consider supplying init to the reducer

In another thread on the Julia community forum I asked the same question with a much more complicated dynamic model, and I was told to use a simple toy model and ask again, because it looks like a Turing bug.

Can you share the complete stack trace?

Hi Guillaume,

Sure, this is the stack trace:

I can’t see the stack trace; can you try pasting it again? Also, your example doesn’t run as-is: a few parameter definitions are missing. But if I’m understanding it correctly, within the model predicted is a matrix of the same size as data, right?

If that’s the case, you could rewrite your observation statement using arraydist (see here). Instead of the two loops, you would have something like this:

using Turing, StatsFuns

function simulation(p, n1, n2)
    y = Array{eltype(p)}(undef, n1, n2)
    for t in 1:size(y, 2)
        # made-up simple simulation function
        y[:, t] .= logistic(p[1] + p[2] * t)
    end
    return y
end

@model function test(y)
    a ~ Normal()
    b ~ Normal()
    p = [a, b]
    prob_pred = simulation(p, size(y)...)
    y ~ arraydist(Bernoulli.(prob_pred))  # one observe statement for the whole matrix
end

p_true = [-1, 0.02]
prob_sim = simulation(p_true, 10, 100)
y = rand.(Bernoulli.(prob_sim))
mod = test(y)
chn = sample(mod, NUTS(), 1000)
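
Applied to your fitlv model, the two observation loops collapse to a single statement. Here is a sketch, assuming fun_na returns values strictly between 0 and 1, and keeping your convention of matching the first length(t_obs) columns of predicted to the columns of data:

@model function fitlv(data, t_obs)
    # Prior distributions.
    c_d_dist ~ Uniform(0.1e-8, 0.1)
    c_h_dist ~ Uniform(0.1e-8, 0.1)
    e_dist ~ Uniform(0.1e-8, 0.1)

    # Simulations.
    p = [c_d_dist, c_h_dist, e_dist]
    predicted = fun_na(u, p, t_vect)

    # One observe statement over the whole matrix instead of two loops.
    data ~ arraydist(Bernoulli.(predicted[:, 1:size(data, 2)]))
    return nothing
end

If you preferred to keep a loop over time, product_distribution(Bernoulli.(predicted[:, i])) from Distributions.jl would likewise give you one vectorized observation per column. Once sampling works, describe(chain) and plot(chain) (with StatsPlots loaded) are handy for checking the recovered parameters.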
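
One more thing worth checking, separate from the likelihood, and I may be wrong about this being the source of the error: Gibbs(HMC(ϵ, τ, :p), MH(:y)) refers to the symbols :p and :y, but neither is a random variable in your model; the parameters are :c_d_dist, :c_h_dist and :e_dist, and there is no y. A Gibbs component that selects no variables could plausibly be what triggers the empty-collection reduction. A sketch of a sampler call that matches the model:

chain = sample(model, Gibbs(HMC(ϵ, τ, :c_d_dist, :c_h_dist, :e_dist)), iterations; progress = true)
# or simply use NUTS over all parameters:
chain = sample(model, NUTS(), iterations; progress = true)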