Hello all,

Newbie to Turing here. I want to introduce two random variables within a Turing model, say x ~ Bernoulli(0.1) and y ~ Bernoulli(0.5), or x and y normally distributed.

I want to condition on the observation that x and y take the same value. I can’t seem to figure out how to do this! Is there some kind of assertion statement that can tell the sampler that x and y must have the same value? I tried writing something like this:

```
@model attempt() = begin
    x ~ Normal(0, 1)
    y ~ Normal(0, 2)
    @assert x == y
end
```

But it fails at the assert.

Manoj

I’m not entirely sure I understand what you want to do. But one thing you might be able to do is to model the equality of `x` and `y` in terms of a Bernoulli-distributed RV.
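Here is a minimal sketch of that idea (the model name `soft_equality`, the Gaussian kernel, and the width `σ` are my own choices, not an established Turing idiom): the equality test is modelled as an observed Bernoulli whose success probability decays with the distance between `x` and `y`, so conditioning on it being `true` pushes the sampler towards x ≈ y.

```
using Turing

# Hypothetical sketch: encode "x == y" as an observed Bernoulli.
# The success probability exp(-(x - y)^2 / (2σ^2)) is 1 when x == y
# and falls off as they move apart; σ controls how soft the constraint is.
@model soft_equality(eq) = begin
    σ = 0.01
    x ~ Normal(0, 1)
    y ~ Normal(0, 2)
    eq ~ Bernoulli(exp(-(x - y)^2 / (2σ^2)))
end

# Condition on the equality test having come out true:
chain = sample(soft_equality(true), NUTS(), 1_000)
```

Shrinking σ makes the constraint harder but the posterior geometry nastier, so there is a trade-off in how small you can take it.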

I don’t quite understand what you mean by “condition on the observation that x and y take the same values”.

What would make the variables distinct (i.e. why sample two variables?) if they always take the same value? Or am I misunderstanding?

Thanks very much for the responses!

So I think about it this way: when we pass a value to something defined with a @model, we are conditioning the joint distribution defined by the model on that value of those variables. Now sometimes the values of variables may not be directly observable; only a relation satisfied by the variables may be observable. Just as you may want to call a model with certain values for random variables, you may want to call the model with a certain relation imposed between random variables — some kind of implicit conditioning by equations.
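For contrast, this is the ordinary value-conditioning I mean (a made-up example; the model `coin` and the data are mine): passing observations into the model is what conditions the joint.

```
using Turing

# Conditioning on values: flips are observed, p is sampled.
@model coin(flips) = begin
    p ~ Beta(1, 1)
    for i in eachindex(flips)
        flips[i] ~ Bernoulli(p)
    end
end

chain = sample(coin([1, 0, 1, 1]), MH(), 1_000)
```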

It seems like something useful: for example, the distribution described by x == y in the code above is an interesting distribution. The intuitive experiment is to take many samples from the two Gaussians and reject those samples where x and y differ. How else would you describe it? Sometimes, as here, the event is a measure-zero set, so rejection sampling is not directly useful as a sampling technique. But the math is well established through the Gibbs conditioning principle, and I believe it should be computationally tractable to do such sampling as well.
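The rejection experiment can be written down directly once exact equality is softened to a tolerance ε (pure Julia, no Turing needed; ε and the function name are arbitrary choices of mine):

```
# Approximate rejection sampling for "x == y" with x ~ Normal(0,1), y ~ Normal(0,2):
# draw pairs and keep only those where x and y agree to within ε.
function rejection_pairs(n; ε = 0.05)
    kept = Tuple{Float64,Float64}[]
    while length(kept) < n
        x = randn()          # Normal(0, 1)
        y = 2 * randn()      # Normal(0, 2)
        abs(x - y) < ε && push!(kept, (x, y))
    end
    return kept
end

pairs = rejection_pairs(1_000)
```

As ε → 0 the acceptance rate goes to zero, which is exactly why the measure-zero case needs something smarter than plain rejection.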

Perhaps I’m not getting some idiom here. Perhaps there is another way of doing such conditioning in Turing which is “completely obvious,” and I’m just not seeing it.

Manoj


Yes, I had a hunch you were talking about conditioning on predicates. There is the excellent work by @zennatavares (see: http://proceedings.mlr.press/v97/tavares19a/tavares19a.pdf) that aims at exactly this. You can have a look at his package, Omega.jl.

We currently don’t have a way to do this nicely in Turing, and I’m also not too familiar with those kinds of problems. But maybe we could find a way to bridge Omega and Turing at some point. If you can obtain analytical expressions for the Gibbs conditionals, then you will at least be able to sample from those conditionals in Turing in the very near future (we have an open PR for this at the moment). But as I said, conditioning on predicates is currently not well established in Turing; we haven’t thought much about it, to be honest. It would definitely be interesting to do.

Thanks Martin. I will take a look at Omega.


For what it’s worth, I’ve been separating out the code that allows predicate relaxation so that it can be used with other packages, like Turing. It can actually be implemented very elegantly with Cassette, but Cassette still has performance issues.


Cool, could you give me some pointers into the Omega.jl source to see how this could fit together with Turing? Maybe we can utilise IRTracker for it.
