Hello Julia PPL,
I’ve been thinking some more about how Soss might relate to other PPL work. Hope I might get your thoughts on this, and how it might benefit the community as a whole.
The big idea is that Soss is probabilistic glue.
Say we have a relatively generic model like
```julia
m = @model x, d1, d2 begin
    z ~ d1(x)
    y ~ d2(x, z)
end
```
This acts like a parameterized family of distributions, similar to `Normal`, etc. And like `Normal`, specifying parameters produces a distribution, in this case a Soss `JointDistribution`. These are handy, because it's very easy to `rand` the whole thing or build predictive distributions that remove ancestors of given variables (there's really more to it than that, but that's the idea).
Like a distribution, you can (not always, but often) call `rand`, `logpdf`, and some other things. In Soss we do this by passing responsibility to the components, and generating code to connect the pieces in the right way. We will be considering special cases where code can be rewritten, but we can always fall back on this per-component approach. At inference time, you can specify any variables you want and reason about the rest.
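A sketch of the per-component idea, again hedged: assume `d` is a `JointDistribution` built by supplying `m`'s parameters, and note that whether `logpdf` accepts a named tuple in exactly this form depends on the Soss version:

```julia
# The logpdf of the joint is assembled from the components in
# dependency order, conceptually:
#   logpdf(d1(x), z) + logpdf(d2(x, z), y)
# Soss generates this accumulation code rather than hand-writing it.
logpdf(d, (z = 0.5, y = 1.2))
```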
Typically, `d1` and `d2` would be `Distribution`s. But they can really be anything where the required methods are available: Soss models, or even in principle Turing or Gen models.
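For instance, a Soss model can itself stand in for one of the components. This is a hedged sketch; the submodel syntax shown here is illustrative and has varied across Soss versions:

```julia
using Soss, Distributions

# A small model used as a component: it supports rand and logpdf,
# which is all the outer model needs from it.
inner = @model x begin
    z ~ Normal(x, 1.0)
end

outer = @model begin
    x ~ Normal(0.0, 1.0)
    w ~ inner(x = x)   # a Soss model playing the role of d1
end
```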
It can go the other way as well; any PPL that requires a `logpdf` can call a Soss model. This could make it easy, for example, to wrap a Gen model in Turing, or vice versa, by using Soss as the glue.
This opens up some interesting possibilities. For example, the design of Turing makes it difficult to connect to MLJ, but this seems not to be a problem for Soss. So one approach could be to solve the problem for Soss, then connect Soss with Turing. I'd expect the benefits to be the same as those of any glue code.
I’m still feeling this out. What do you think?