Julia PPL survey paper?

Would it be feasible for the Julia PPL community to collaborate on a survey article? It could cover things like

  • Why Julia is great for PPL
  • Overview of current approaches and the general design space
  • Opportunities for “Cross-PPL” work
  • How ongoing Julia (core + library) development will impact the PPL landscape

Some potential benefits:

  • Strengthen relationships, encourage collaboration
  • Draw more of the PPL community to Julia
  • Help users understand the relationships between PPLs - we get lots of questions about this

Questions/risks:

  • The previous two bullets take the perspectives of different intended audiences. Is this too broad?
  • We’d need an even-handed treatment of the different PPLs that’s correct and objective, while not leaving anyone feeling run over in the process.

What’s a PPL?

A PPL is a probabilistic programming language, like Gen.jl, Soss.jl, or ForneyLab.jl.

Julia is particularly well suited to building these, and to cross-library interoperability, so we have a rich PPL ecosystem.

I think I saw a survey article of Python PPL packages…

Sounds like a really nice idea. I’m happy to join the effort.

This could be quite beneficial for the Julia language too. Maybe you want to invite one of the Julia devs as well.

This is a great idea. I would be glad to participate (I am one of the Forneylab contributors).

Great! Sorry I had missed ForneyLab, I’ve corrected the oversight :slight_smile:

I am happy to contribute a part on DynamicHMC & related packages.

In light of what some of us discussed in the Community Call yesterday, I would be very interested to think about and contribute to Chad’s point 2: investigate all the approaches we currently have, lay out the design space (from the perspective of both “evaluation” and “model specification/expressibility”), and describe where each PPL stands within it.

So, instead of going “we want a PPL that covers everything”, think about what it is that can be covered, abstract that, and see where everything stands already. Then overlapping parts can be identified and interfaces searched for, and uncovered areas be pointed out.

The separation between the “specification abstraction” and “evaluator abstraction”, across multiple implementations, would be something that I haven’t really seen before – everyone’s always proposing a complete system, right? The closest thing would be the formalization attempts of probabilistic models with monads and types, but that is more semantic than syntactic. Or does anyone know of prior work in that direction?
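
To make that a bit more concrete, here is a minimal sketch of what such a separation could look like; all names are hypothetical, not taken from any existing package:

```julia
# Minimal sketch of the "specification vs. evaluator" separation.
# All names are hypothetical, not from any existing package.
abstract type ModelSpecification end   # what the model *is*
abstract type Evaluator end            # how inference consumes it

# Each backend implements `lower` for the specification forms it supports;
# (spec, evaluator) pairs without a method mark gaps in the design space.
function lower end

# For example, one might have methods like
#   lower(spec::StaticSpec,  ::DAGEvaluator)          -> a BUGS-style graph
#   lower(spec::GenericSpec, ::ContinuationEvaluator) -> CPS-transformed code
```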

This sounds great, but I think I’d suggest confirming that there’s interest in this before putting too much into it. I made the suggestion because I think it would be valuable for the community, and could be a good opportunity for collaboration. But I think actually making it happen depends a lot (at least for me) on interest from the Julia PPL community.

It doesn’t seem that way to me at all. There’s a long history of continuation-based systems (Church, Anglican, WebPPL, monad-bayes) and “DAG-based” ones (BUGS, JAGS, Stan), and the comparison of capabilities and performance shows a pretty clear tradeoff. It’s only very recently that people have been building much more general systems with more consistent performance across inference methods.

We talked a bit about this in our paper on Grappa. To be honest, the type system hangups there got pretty annoying, and made me realize the need for better metaprogramming. One of the big reasons I’m here :slight_smile:

@cscherrer I think I meant something different with “complete system”. It was not related to performance across all cases. Most of the PPL implementations I know of are, even if restricted to a certain class of inference problem, given as a combination of a model specification part and an evaluation part. Mostly, those two are rather intertwined, and the implementation of the system depends on having both under its control. The frontend is tightly coupled with the inference model.

We do have abstracted “pure inference” libraries that really only take a function and do their work, but they aren’t really PPLs. There are some “linguae francae” like the Stan/JAGS syntax, but those are also somewhat restricted and not independently maintained – the systems that came later just chose to adopt the same kind of input format for their own implementations.

What I’m thinking of is a model specification form in its own right, one that has more general analysis capabilities and can then be transformed down to whatever the evaluator requires – into CPS, as a monad, as a DAG, as a factor graph, you name it.

BTW, that Grappa paper – it happens to do something I was thinking about during the last days, namely ensuring well-foundedness of a PP through an indexed monad. I guess I have to learn Idris again.

I think that Gen.jl was designed to allow the kind of decoupled inference & model that you are looking for. Their paper is https://dl.acm.org/doi/10.1145/3314221.3314642.

I think I see, seems like you’re making quite a fine distinction here. One of the big benefits of PPL in general is the decoupling of model specification from inference. But if I’m understanding you right, you’re saying that PPLs still tend to have a single system for both. So while you can swap out inference back-ends, you’re still limited to back-ends specified in the given system.

I think we’re already doing better than other languages in this regard. For example, in Soss it’s very easy to plug in any reasonably generic back-end, especially one written in Julia.

What do you imagine this looking like? Soss started out with this goal; originally a model was just a single wrapped Expr, and inference transformed this into generated code. But general cases like this can so easily become pathological. So now Soss has a static “top level” where the body is a collection of Statements, with (pseudo-Julia)

  • Sample(::Symbol, ::Expr) <: Statement
  • Assign(::Symbol, ::Expr) <: Statement
  • Return(::Expr) <: Statement

This makes it easy to transform models, and we track the statements as a poset so code generation stays consistent. So we can do what you describe, and the Exprs in the rhs of each can include things like Gaussian processes, or even other Soss models. It’s just the “top level” that is required to be static.
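
For concreteness, a rough, self-contained version of such a representation might look like this (hypothetical types, not Soss’s actual internals):

```julia
# Rough sketch of a statement-level model representation, loosely following
# the pseudo-Julia above. Hypothetical types, not Soss's actual internals.
abstract type Statement end

struct Sample <: Statement
    name::Symbol                 # variable being sampled
    dist::Union{Symbol, Expr}    # expression for its distribution
end

struct Assign <: Statement
    name::Symbol
    rhs::Union{Symbol, Expr}
end

struct Return <: Statement
    rhs::Union{Symbol, Expr}
end

# A model is an ordered collection of statements; the dependencies between
# them form a poset, so any topological order gives consistent codegen.
struct Model
    body::Vector{Statement}
end

# Example: μ ~ Normal(0, 1); x ~ Normal(μ, 1); return x
m = Model([
    Sample(:μ, :(Normal(0, 1))),
    Sample(:x, :(Normal(μ, 1))),
    Return(:x),
])
```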

I spent a lot of time learning monads, and formally probability distributions do form a monad. But that view really still limits you to particle-based systems, and it’s frustrating that there’s no way for the semantics to reflect commutativity of conditionally independent samples. I also tried something with arrows: https://pps2017.luddy.indiana.edu/files/2017/01/arrow-ppl.pdf
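
For anyone following along, the particle-based reading is the one where the monad operations are interpreted by sampling. A toy Julia version, with made-up names:

```julia
# Toy sampling-based probability monad (made-up names, just to illustrate
# why the monadic view stays "particle-based": the only generic way to
# interpret `bind` is to draw a sample and continue).
struct Dist{T}
    sample::Function   # () -> T
end

unit(x) = Dist{typeof(x)}(() -> x)                          # point mass ("return")
bind(d::Dist, k) = Dist{Any}(() -> k(d.sample()).sample())  # sequence via sampling

# Example: x ~ standard normal, then y ~ Normal(x, 1)
stdnorm = Dist{Float64}(() -> randn())
y = bind(stdnorm, x -> Dist{Float64}(() -> x + randn()))
y.sample()   # draws one sample from the composite model
```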

But I think for PPL we really need nonstandard evaluation; until someone does the fundamental type theory research to come up with more distribution-friendly compositional semantics, the restrictive type system just gets in the way. Oh, and this was all in Haskell. I’ve heard good things about Idris, but never tried it :slight_smile:

I really like Gen, but I don’t see it as so decoupled. There’s an interface to match in order to work within their framework, and doing that takes work. Well worth it, though.

That is true, and Gen has one of the most elegant interfaces for this. But it is the other way round from what I am thinking of: in Gen, the interface defines what a model needs to be able to do for the evaluators to work. It makes no assumptions about the structure; in fact, that’s the whole point. (The dynamic and static modelling languages are just two possible implementations of the GFI, each restricted in its own way.)

I’d like something where we do define the structure (thus having a kind of interface to it), but remain completely agnostic to what an evaluator might require.

Now, ideally, you could just implement a model-to-GFI transform and have available all the machinery that Gen gives you. But equally well you might ignore the GFI, check statically that a model does something restricted with only Gaussian processes, and convert it to something that Stheno understands.
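
As a toy illustration of that “model first, backend second” flow (every name here is made up, with stubs standing in for real analyses and backends):

```julia
# Toy sketch of choosing an evaluator per model fragment. Every name here
# is made up; stubs stand in for real static analyses and backends.
uses_only_gps(model) = false                      # static check over the model body
to_stheno(model) = error("lower to a GP backend") # e.g. something Stheno-like
to_gfi(model)    = error("lower to Gen's GFI")    # the general fallback

# Pick the most specialized backend whose model-language fragment covers `model`.
choose_backend(model) = uses_only_gps(model) ? to_stheno(model) : to_gfi(model)
```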

Quite that, yes. Universal PPLs have as their goal to let the user write down every model the language allows, and still be able to do inference on it. Of course at the “edges” of the space of reasonable programs, tradeoffs need to be made to still be able to do that. I’d like to get rid of the second part of that “and”. I want a format in which you can “write down” every possible model of a very large class, without a priori having to deal with the restrictions of inference. Then for each model, you can choose a suitable, perhaps specialized, backend that understands the fragment of the model language you used.

Probably the answer to the confusion I have caused is this: I come from a metaprogramming/analysis perspective, with an interest in programming language design. I wanted variable names and dependencies to behave nicely, and primarily a closed, elegant language. Many PPL people probably come from an inference perspective, putting the language design problem second to that. “I want to write all the models” vs. “I want to do all the inference”. But I also try to build a bridge to the mostly theoretical, FP-based approaches of just formalizing probabilistic programs.

A bit like Julia IR. Representing an abstract and uninterpreted parametrized joint density function over its trace (as given through the unified name set) and return value, factorized syntactically into primitive statements and blocks.

But as I always say: pure fantasy as of yet :shrug: Soss surely comes closest to my ideas, I think.

I think symmetric monoidal categories with some extra stuff provide a good semantic framework for graphical models. After all, arrows are closely related to string diagrams.

I looked into making interfaces to ZigZagBoomerang.jl (https://github.com/mschauer/ZigZagBoomerang.jl). It’s kind of the master-level problem for a foreign inference interface, because ZigZag produces a piecewise linear trace (not just samples) and it ideally wants to know your Markov blanket. For Gen/Jaynes there is some progress, more to come: https://github.com/probcomp/Gen.jl/issues/281


You can kind of see that if you treat the reflection points, instead of the complete path, as “samples”, you’ll get too many samples in the tails, where there are many reflections, and too few close to the mode, where there are few.
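
One concrete fix is to sample uniformly in time along the piecewise linear path instead of collecting the reflection points. A sketch for a 1-D trace (hypothetical helper, assuming sorted event times `ts` and positions `xs`):

```julia
# Sketch: turn a 1-D piecewise linear PDMP trace into unweighted samples by
# sampling uniformly in *time* along the path. Taking the reflection points
# themselves would over-represent the tails, as described above.
function path_samples(ts::Vector{Float64}, xs::Vector{Float64}, n::Int)
    out = Float64[]
    for _ in 1:n
        t = ts[1] + rand() * (ts[end] - ts[1])   # uniform time on the trace
        i = min(searchsortedlast(ts, t), length(ts) - 1)
        λ = (t - ts[i]) / (ts[i+1] - ts[i])      # position within segment i
        push!(out, (1 - λ) * xs[i] + λ * xs[i+1])
    end
    return out
end
```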

Have you seen this: https://github.com/femtomc/ProbabilisticIR.jl