Best practices for dealing with NaNs in multiple data sets using Turing

What is the best way to deal with NaNs across multiple data treatments when using Turing?

For example, take this model adapted from the Lotka–Volterra tutorial:

@model function fitlv(data, prob)
    σ ~ InverseGamma(2, 3)
    α ~ truncated(Normal(1.3, 0.5), 0.5, 2.5)
    β ~ truncated(Normal(1.2, 0.25), 0.5, 2.0)
    γ ~ truncated(Normal(3.2, 0.25), 2.2, 4.0)
    δ ~ truncated(Normal(1.2, 0.25), 0.5, 2.0)
    ϕ1 ~ truncated(Normal(0.12, 0.3), 0.05, 0.25)
    ϕ2 ~ truncated(Normal(0.12, 0.3), 0.05, 0.25)
    p = [α, β, γ, δ, ϕ1, ϕ2]
    prob = remake(prob, p=p)
    predicted = solve(prob, SOSRI(), saveat=0.1)

    # Reject the sample if the solver failed, and stop early.
    if predicted.retcode != :Success
        Turing.acclogp!(_varinfo, -Inf)
        return
    end

    for j in 1:length(data)
        for i in 1:length(predicted)
            data[i, j] ~ MvNormal(predicted[i], σ)
        end
    end
end

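One common workaround, sketched here under the assumption that each `data[i, j]` is the observation vector for time point `i` in treatment `j` (not confirmed by the original post), is to skip the non-finite components inside the loop and fall back to univariate `Normal` likelihoods for the observed entries:

```julia
# Sketch: only score components of the observation that are actually observed.
# NaN entries contribute nothing to the log-likelihood instead of poisoning it.
for j in 1:length(data)
    for i in 1:length(predicted)
        for k in 1:length(predicted[i])
            if !isnan(data[i, j][k])
                data[i, j][k] ~ Normal(predicted[i][k], σ)
            end
        end
    end
end
```

This trades the multivariate likelihood for per-component univariate ones, which is equivalent here because the `MvNormal` above has an isotropic covariance.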
Here, how is Turing going to deal with NaNs in data[i,j]? Assuming the number of treatments j is high, do you have any recommendations for keeping it simple while handling the NaNs?

I believe Turing will error here, because it doesn’t know what the NaNs mean. So, what do the NaNs mean?

In my case the NaNs mean missing data. I haven’t tried using missing — how does Turing deal with it?
I found another way to solve this problem: evaluating the likelihood individually for each data point instead of using MvNormal. But I imagine that slows things down.
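Note that `missing` means something different from skipping: Turing treats a `missing` value on the left of `~` as a latent variable and samples (imputes) it, adding one parameter per missing observation. If that is the behaviour you want, a minimal conversion sketch (assuming `data` is a plain `Float64` array with NaN sentinels) is:

```julia
# Convert NaN sentinels to `missing` so Turing imputes them as latent variables.
# Caution: this changes the model — missing entries are sampled, not dropped.
data_miss = convert(Array{Union{Float64,Missing}}, data)
data_miss[isnan.(data)] .= missing
```

If you only want the missing entries to drop out of the likelihood, skipping them explicitly is the simpler and cheaper option.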
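If the per-scalar loop is the bottleneck, one possible middle ground (a sketch; `obs_idx` is a hypothetical lookup precomputed outside the model) is to keep `MvNormal` but condition only on the observed sub-vector at each point:

```julia
# Sketch: score only the non-NaN components with a reduced MvNormal.
# obs_idx[i, j] = findall(!isnan, data[i, j]) would be precomputed once,
# outside the model, so it is not redone on every log-density evaluation.
for j in 1:length(data)
    for i in 1:length(predicted)
        idx = obs_idx[i, j]          # indices of observed components
        isempty(idx) && continue     # nothing observed at this time point
        data[i, j][idx] ~ MvNormal(predicted[i][idx], σ)
    end
end
```

With an isotropic covariance this gives the same log-likelihood as the univariate version, but with fewer `~` statements per model evaluation.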