# Specifying model with threshold / ifelse likelihood in Turing / Soss / Tilde

Hi everyone,

I have a model which is a simplified version of the one presented in Mulder & Hoff (2021) and looks something like this:

\begin{align}
\mathbf{z}_{i} &\sim \textrm{MvNormal}(\boldsymbol{\theta}_{i}, \mathbf{I}) \\[0.8ex]
\theta_{ir} &= \mathbf{x}_{ir}^\top \boldsymbol{\beta} \\[0.8ex]
c_{i} \mid \mathbf{z}_{i} &\sim \textrm{truncate}\Big(\textrm{Normal}(0, \sigma_c), -\infty, \max(\mathbf{z}_{i})\Big) \\[0.8ex]
y_{ir} &= \begin{cases} 1 & \textrm{if } z_{ir} > c_{i} \\ 0 & \textrm{otherwise} \end{cases}
\end{align}

In the paper, they discuss a Gibbs sampler implementation but I would like to specify this in a Julia PPL. Here’s how far I got in Turing (probably very non-optimized):

```julia
using Turing, Distributions, LinearAlgebra

@model function MRREM(X, y)
    N, A, K = size(X)

    β ~ filldist(Normal(0, 1), K)
    σc ~ Exponential()

    # per-actor mean vectors: X[i, :, :] is A×K, so each θ[i] is length A
    θ = [@views X[i, :, :] * β for i in 1:N]

    z ~ arraydist([MvNormal(t, I) for t in θ])  # z is an A×N matrix
    zmax = maximum(z, dims=1) |> vec
    c ~ arraydist([truncated(Normal(0, σc), -Inf, zm) for zm in zmax])
    # y ~ ???
end

let
    N, A, K = 100, 20, 3
    X = rand(N, A, K)
    MRREM(X, missing) |> rand
end
```


But I don’t really know how to specify the ifelse for the y_{ir} in a PPL, or how to turn this into something you can sample from / evaluate the log density of. Does anyone know what to do here?


Hi @Jakob ,

It’s pretty typical for PPLs to have difficulty “observing” deterministic functions of a distribution. I think the standard way to do this would be to integrate out c, so each y[i] follows a Bernoulli distribution as a function of σc and zmax[i].

You could write this in the model itself, or (in Soss or Tilde) build a new measure to express this, then use that measure in the model. The advantage of the latter is that you can test the measure independently and do any performance tuning in isolation.
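To make the marginalization concrete: since y_{ir} = 1 exactly when c_i < z_{ir}, and c_i is a Normal(0, σc) truncated above at max(z_i), integrating out c_i just means evaluating the CDF of that truncated normal at z_{ir}. A minimal sketch (plain Distributions.jl, with made-up values for σc and z) that checks this against Monte Carlo:

```julia
using Distributions, Statistics, Random

Random.seed!(1)

σc = 0.7
z = [-0.3, 0.5, 1.2]  # one actor's latent utilities z_i (illustrative values)
c_dist = truncated(Normal(0, σc), -Inf, maximum(z))

# Monte Carlo: how often is z[r] above a sampled threshold c?
cs = rand(c_dist, 1_000_000)
p_mc = [mean(z[r] .> cs) for r in eachindex(z)]

# Marginal probability: the truncated-normal CDF evaluated at z[r]
p_exact = [cdf(c_dist, z[r]) for r in eachindex(z)]
```

The two agree to Monte Carlo error, and each y[r] can then be observed as Bernoulli(p_exact[r]) without c ever appearing in the model.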


Thanks, @cscherrer. That sounded like more math than I’m comfortable with, but luckily I found that further down in the paper they give the probability of the y_{ir}'s as:

Pr(y_{ir}=1 \mid \mathbf{z}_{i}, \sigma_c) = \frac{\Phi\big(\frac{z_{ir}}{\sigma_c}\big)}{\Phi\big(\frac{\max(\mathbf{z}_{i})}{\sigma_c}\big)}

So I guess I can just work in terms of that.
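For concreteness, here is a sketch of how the Turing model might look with c integrated out via that formula, observing each y_{ir} as a Bernoulli. It assumes y is an A×N 0/1 matrix (matching the A×N layout of z), and uses `normcdf` from StatsFuns as one way to get Φ; the name `MRREM_marginal` is my own:

```julia
using Turing, Distributions, LinearAlgebra
using StatsFuns: normcdf

@model function MRREM_marginal(X, y)
    N, A, K = size(X)

    β ~ filldist(Normal(0, 1), K)
    σc ~ Exponential()

    θ = [@views X[i, :, :] * β for i in 1:N]
    z ~ arraydist([MvNormal(t, I) for t in θ])  # z is A×N

    for i in 1:N
        zmax = maximum(@view z[:, i])
        for r in 1:A
            # Pr(y_ir = 1 | z_i, σc) = Φ(z_ir / σc) / Φ(max(z_i) / σc)
            p = normcdf(z[r, i] / σc) / normcdf(zmax / σc)
            y[r, i] ~ Bernoulli(p)
        end
    end
end
```

Since z_{ir} ≤ max(z_i), the ratio is always a valid probability, and the model now has no deterministic observation to trip up the sampler.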

Btw., I remember seeing something about truncated measures in MeasureTheory but couldn’t find it anymore. Is there something I overlooked or is there anything planned?


Great! Yes, without working through it explicitly that’s how I’d expect the result to look.

There’s a start on it here:
https://github.com/cscherrer/MeasureTheory.jl/pull/194

It’s not hard in principle, just a matter of working through some corner cases.


Ah great, I’m looking forward to giving the model a go in Soss / Tilde when that lands.

Please post the code if you get something working.