What's the difference between Gen and Turing for probabilistic programming?

I asked this question in the #probprog channel on Slack.

See also the archive of the conversation here: Extended Slack PPL discussion

In case people find this via Google, here are Marco's and my answers on the Gen side of things. But please also see the Slack discussion linked above, from which these are taken, and which includes discussion on various sub-questions that popped up.

Marco: Gen aims to allow users to write pretty much any inference algorithm they want, customized for their model, and to allow users to follow a development path where they can start with a simple algorithm and gradually make it more complex and performant over time. There is no list of built-in inference algorithm implementations in Gen, because users actually compose the algorithms themselves as regular Julia code. But that Julia code uses data structures and primitive operations provided by Gen. So yes, Gen does aim to present a lower-level interface to users than most other probabilistic programming systems.
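To make the "compose the algorithm yourself" point concrete, here is a minimal sketch, assuming a recent Gen release; the model, addresses, and parameters are illustrative and not taken from the Slack discussion. The inference loop is ordinary Julia code built from Gen's `generate`, `select`, and `mh` operations:

```julia
using Gen

# Illustrative toy model: unknown slope and intercept, observed through noisy y-values.
@gen function line_model(xs::Vector{Float64})
    slope = @trace(normal(0, 2), :slope)
    intercept = @trace(normal(0, 2), :intercept)
    for (i, x) in enumerate(xs)
        @trace(normal(slope * x + intercept, 0.5), (:y, i))
    end
end

# The "inference algorithm" is just a Julia function composed from Gen's
# primitive operations: build a trace consistent with the observations,
# then repeatedly apply Metropolis-Hastings moves to selected addresses.
function do_inference(xs, ys, num_iters)
    observations = choicemap()
    for (i, y) in enumerate(ys)
        observations[(:y, i)] = y
    end
    trace, _ = generate(line_model, (xs,), observations)
    for _ in 1:num_iters
        trace, _ = mh(trace, select(:slope))
        trace, _ = mh(trace, select(:intercept))
    end
    return trace
end
```

Calling `do_inference(xs, ys, 1000)` returns a trace whose `:slope` and `:intercept` choices approximate the posterior; swapping in a different kernel or move schedule is just an edit to this Julia function.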

But why be low-level? In general, there are tradeoffs between generality and performance in most computing systems: if you are willing to specialize your algorithm and/or implementation to your model, then you can get better performance. One of the design constraints on Gen was that we wanted to do inference in performance-constrained settings like computer vision and robotics, where a high level of control over the algorithm is critical. In particular, for these sorts of applications, a key feature that Gen supports is training custom proposal distributions for use with Metropolis-Hastings, importance sampling, and SMC, using discriminative learning methods including deep learning. We also designed Gen to be extensible with new algorithms, by developing the low-level "generative function interface" that provides a set of core operations on top of which various inference algorithms are implemented.
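As a hedged sketch of the custom-proposal point, continuing the hypothetical `line_model` and `trace` above: the proposal is just another generative function, and `mh` derives the acceptance probability from the model and proposal code. In practice the proposal could be a neural network trained on simulated data; a fixed Gaussian drift is shown only to keep the example short.

```julia
# A hand-written drift proposal over the :slope address. Its first
# argument is the current trace; extra arguments (here the drift width)
# are passed through mh's proposal_args tuple.
@gen function slope_drift(trace, width::Float64)
    @trace(normal(trace[:slope], width), :slope)
end

# Gen computes the Metropolis-Hastings acceptance ratio automatically
# from the model and proposal code; the user never writes it by hand.
trace, accepted = mh(trace, slope_drift, (0.1,))
```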

As another example of Gen's flexibility for inference customization, Gen lets you express any MCMC move in the reversible jump framework, if you want. This feature is useful for writing efficient trans-dimensional MCMC moves. There are various ways of constructing simpler MCMC kernels, but the reversible jump one is maximally flexible. But because Gen's inference primitives are designed to be user-facing, there is really an open-ended set of algorithms. For example, users can easily write SMC algorithms that use arbitrary MCMC kernels for rejuvenation, and there is no explicit built-in code for this in Gen, because the primitives are designed to be composable.
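Here is a rough sketch of that SMC-plus-rejuvenation pattern, again with an illustrative model and assuming a recent Gen release; the particle-filter primitives and the `mh` kernel compose directly in user code:

```julia
using Gen

# Illustrative state-space model: a latent random walk observed with noise.
@gen function ssm(T::Int)
    x = 0.0
    for t in 1:T
        x = @trace(normal(x, 1.0), (:x, t))
        @trace(normal(x, 0.5), (:y, t))
    end
end

function smc_with_rejuvenation(ys::Vector{Float64}, num_particles::Int)
    obs = choicemap()
    obs[(:y, 1)] = ys[1]
    state = initialize_particle_filter(ssm, (1,), obs, num_particles)
    for t in 2:length(ys)
        # Resample when the effective sample size degenerates.
        maybe_resample!(state, ess_threshold=num_particles / 2)
        obs = choicemap()
        obs[(:y, t)] = ys[t]
        # Extend each particle to the next time step, constrained to the new observation.
        particle_filter_step!(state, (t,), (UnknownChange(),), obs)
        # Rejuvenation: any user-written MCMC kernel can be applied to each
        # particle; here, a default MH move on the newest latent state.
        for i in 1:num_particles
            state.traces[i], _ = mh(state.traces[i], select((:x, t)))
        end
    end
    return sample_unweighted_traces(state, num_particles)
end
```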

I haven't looked closely at Turing's implementation recently, but I would say that a high-level difference between Gen and Turing is that Turing aims to play a role similar to Stan, but for a more expressive set of models (with the recognition that the more expressive set of models demands more customizability, which Turing provides), whereas Gen takes a more extreme position and is first and foremost a platform for implementing custom inference algorithms, including for applications outside of statistics such as robotics and computer vision. One sign of this difference is that Gen has no 'infer' or 'sample' function like Turing does. (We could add this sort of higher-level interface to Gen, but have not, because it doesn't really align with Gen's design philosophy; instead, we are working on domain-specific modeling languages, built on top of Gen, that expose that sort of high-level interface.)

I haven't done benchmarks recently, but I expect another difference is that Turing has more optimized implementations of some of the generic inference algorithms, like HMC. Optimizing HMC is not something that we have focused on in Gen.

(Sorry for the long message: we will be improving the Gen site and documentation soon to better reflect what it's all about!)

Alex: It may also be worth clarifying what it means for Gen to support, e.g., arbitrary reversible-jump MCMC moves.

Gen's inference primitives are low-level, but they do automate many of the tedious and error-prone aspects of implementing inference algorithms. So when the user implements a custom reversible-jump MCMC proposal, they need not write the code for computing the Metropolis-Hastings correction; this is computed automatically based on the user's sampling code for both the model and the proposal distribution.

Similar things are true of all of Gen's inference primitives: users can implement complex models and inference algorithms without hand-coding importance weights, sequential Monte Carlo weights, variational inference objectives, stochastic gradient estimators, asymmetric Metropolis-Hastings corrections, reversible-jump Jacobian corrections, and so on.
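For example, self-normalized importance sampling (reusing the hypothetical `line_model` and `observations` from the sketch earlier, and assuming a recent Gen release) is a single call whose weights are computed by Gen rather than hand-coded:

```julia
# Runs 100 importance samples constrained to the observations and keeps one
# trace in proportion to its weight; also returns a log marginal-likelihood
# estimate. Neither the weights nor the estimate are written by hand.
(trace, log_ml_estimate) = importance_resampling(line_model, (xs,), observations, 100)
```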

By comparison, many other probabilistic programming languages have extensible backends (so users can hook up new general-purpose samplers to them), but those new samplers must compute proposal-dependent acceptance probabilities, importance weights, and gradient estimates themselves.

Please see my request for a comprehensive comparison of the PPLs in Julia in the thread linked above.

I sincerely hope this happens, so that in the near future there will be a reliable comparison that people can use as a reference point.

It would be nice if we could hear the Turing side of the story, too.
