Recursive Bayes

I’m beginning a new project and am eager to use Turing.jl or some other PPL for all the benefits, but I have not found an example in a recursive Bayes problem setting. That is to say, the current posterior is used as the prior when more data is collected, and new data is constantly streaming in. While it’s not necessary to demonstrate the idea in this on-line setting, we hope to scale the method to more complex problems that ideally update the posterior on every new observation (or batch of observations).

I can think of a simplistic way of doing this by repeatedly calling sample in Turing on new models generated from new data vectors, but I was hoping there was something more streamlined for this kind of on-line problem.
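
For concreteness, here is roughly what I mean (just a sketch with a toy Gaussian-mean model; the model, the helper name refit_on_new_data, and the sampler settings are placeholders I made up for illustration):

```julia
using Turing, Distributions

# Toy model: unknown mean μ with known unit observation noise.
@model function gaussian_mean(y)
    μ ~ Normal(0.0, 10.0)
    for i in eachindex(y)
        y[i] ~ Normal(μ, 1.0)
    end
end

# The "simplistic" approach: each time a batch arrives, build a new model on
# all data seen so far and rerun MCMC from scratch.
function refit_on_new_data(batches)
    seen = Float64[]
    chains = []
    for y in batches
        append!(seen, y)
        push!(chains, sample(gaussian_mean(seen), NUTS(), 1_000; progress=false))
    end
    return chains
end
```

It works, but every update reprocesses the entire history, which is exactly the part I’d like to avoid.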

Has this kind of problem setting been demonstrated with Turing or some other PPL in Julia?

2 Likes

It’s funny, this is such a standard use of Bayesian analysis, but most PPLs don’t have much support for it. It’s usually better to do inference on as much data as you have at once, because the posterior is represented as a function of the data. There are two main exceptions to this:

  • If you make strong assumptions about the data, e.g. for a Kalman filter you assume normality, so the posterior update has a closed form (see the sketch after this list).
  • If you make very few assumptions and do things very empirically, you end up with a particle filter.
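
To illustrate the first case, a minimal 1-D Kalman-style update in plain Julia (toy random-walk model; the process and observation variances q and r are arbitrary values assumed for illustration). Everything is Gaussian, so the posterior after each observation stays Gaussian and can be carried forward exactly:

```julia
# 1-D Kalman filter for x_t = x_{t-1} + noise(q), y_t = x_t + noise(r).
# The running posterior over the state is always Normal(μ, v).
function kalman_1d(ys; μ0=0.0, v0=1.0, q=0.01, r=0.25)
    μ, v = μ0, v0
    for y in ys
        v += q                # predict: add process noise variance
        k = v / (v + r)       # Kalman gain
        μ += k * (y - μ)      # update mean with the new observation
        v *= 1 - k            # update variance
    end
    return μ, v               # the exact posterior, updated one datum at a time
end
```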

One big focus of Soss.jl is model composability; in particular, models can be chained together. There’s an example here of building a Markov chain in Soss. This kind of thing will get easier with some new updates in the works. What (I think) we really want is something like:

  • Represent a chain abstractly
  • Observe a chunk of data, and get the posterior conditional on all of it (all at once)
  • Turn this into a representation of a new distribution to be used as a prior. Maybe it’s a Gaussian approximation, or a mixture of Gaussians, or an approximation from some variational family (the Gaussian case is sketched below).
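
Just to make that last bullet concrete, the simplest version of the loop could look roughly like this. This is not Soss-specific; it’s a rough sketch written against Turing and Distributions, with a toy Gaussian-mean model and made-up names, and the posterior over μ collapsed to a single Normal fitted to the MCMC draws:

```julia
using Turing, Distributions

# One round of "posterior becomes the next prior".
@model function chunk_model(y, prior)
    μ ~ prior
    for i in eachindex(y)
        y[i] ~ Normal(μ, 1.0)
    end
end

function assimilate(y, prior)
    chain = sample(chunk_model(y, prior), NUTS(), 1_000; progress=false)
    draws = vec(Array(chain[:μ]))      # posterior draws of μ
    return fit(Normal, draws)          # Gaussian approximation → next prior
end

# prior = Normal(0.0, 10.0)
# for y in chunks                      # `chunks` is whatever feeds you batches
#     prior = assimilate(y, prior)
# end
```

The approximation step is where information gets lost, which is why a richer family (a mixture, or a variational approximation) might be worth the extra machinery.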
3 Likes

Every PPL I know of implements some sort of MCMC algorithm, and if you want to do everything “by the book,” you can’t really use MCMC online. When new observations arrive, you just have to rerun everything.

Sequential Monte Carlo (SMC) algorithms are a neat class of generic methods especially developed for streaming-data Bayesian inference problems, but one of their shortcomings is that they can have a lot of knobs and levers that you need to tune. Research is ongoing to make SMC algorithms “self-tuning” for broad classes of interesting problems. I’m not aware of any mature PPLs in Julia that do this, but other folks would know better.
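
To make the “knobs and levers” concrete, here is a bare-bones bootstrap particle filter in plain Julia for a toy 1-D random-walk model. The number of particles, the noise scales q and r, and the resampling threshold are exactly the kinds of tuning choices meant above; all the specific values below are assumptions for illustration:

```julia
using Distributions, StatsBase

# Bootstrap particle filter for x_t ~ Normal(x_{t-1}, √q), y_t ~ Normal(x_t, √r).
function bootstrap_pf(ys; n=1_000, q=0.01, r=0.25, ess_frac=0.5)
    xs = randn(n)                                  # particles for x_0
    logw = zeros(n)                                # log-weights
    for y in ys
        xs .+= sqrt(q) .* randn(n)                 # propagate from the transition kernel
        logw .+= logpdf.(Normal.(xs, sqrt(r)), y)  # reweight by the likelihood
        w = exp.(logw .- maximum(logw)); w ./= sum(w)
        if 1 / sum(abs2, w) < ess_frac * n         # effective sample size too low?
            idx = sample(1:n, Weights(w), n)       # multinomial resampling
            xs = xs[idx]
            logw .= 0.0
        end
    end
    return xs, logw                                # weighted particles ≈ current posterior
end
```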

But then, you always have simple exponential family models with conjugate priors… Say what you want about them, they’re still good for licking a recursive Bayesian inference problem in a pinch.
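
For instance (a minimal sketch, using a Beta-Bernoulli pair picked arbitrarily), the conjugate update is just bookkeeping on the hyperparameters, so streaming updates are exact and essentially free:

```julia
# Beta prior + Bernoulli likelihood: the posterior after each observation is
# again a Beta, so "posterior becomes the new prior" is exact and costs two
# additions per data point.
struct BetaPosterior
    α::Float64
    β::Float64
end

update(p::BetaPosterior, y::Bool) = BetaPosterior(p.α + y, p.β + (1 - y))

# Starting from a uniform Beta(1, 1) prior and folding in a stream of Bools:
# posterior = foldl(update, bool_stream; init = BetaPosterior(1.0, 1.0))
```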

2 Likes

The Zig-Zag and other piecewise deterministic Monte Carlo methods allow subsampling of the data without introducing posterior bias, so if the observations are indistinguishable you can take them into account one by one! ZigZagBoomerang.jl has an interface suitable for a PPL and could be added to, say, Gen (Gen.jl/issues/281).

3 Likes

Hi there! Gen supports (many variants of) Sequential Monte Carlo.

There are several references I’d like to share, but I’m getting an error that only two links are allowed. So maybe I’ll number the references, and provide links in a follow-up.

There’s a tutorial example here: https://www.gen.dev/tutorials/particle-filtering-in-gen/Particle%20Filtering%20in%20Gen
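
In outline, the streaming loop looks roughly like the following. This is a heavily condensed sketch with a toy random-walk model, not the tutorial’s code; in practice you would use the Unfold combinator so that each step only touches the new time slice instead of re-executing the whole model:

```julia
using Gen

# Toy state-space model: latent random walk x_t, observed with noise as y_t.
@gen function model(T::Int)
    x = 0.0
    for t in 1:T
        x = ({:x => t} ~ normal(x, 0.1))
        {:y => t} ~ normal(x, 0.5)
    end
end

# Streaming inference: extend the model by one time step per observation.
function run_pf(ys, num_particles)
    state = initialize_particle_filter(model, (0,), choicemap(), num_particles)
    for (t, y) in enumerate(ys)
        maybe_resample!(state; ess_threshold=num_particles / 2)
        obs = choicemap((:y => t, y))                        # the new datum
        particle_filter_step!(state, (t,), (UnknownChange(),), obs)
    end
    return state          # weighted traces approximate the current posterior
end
```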

A particularly compelling example of ‘continuous Bayesian learning’ in Gen, I think, is Xuan’s paper, from NeurIPS 2020: [2006.07532] Online Bayesian Goal Inference for Boundedly-Rational Planning Agents. As the model observes an agent acting in an environment, it continuously updates its beliefs about the agent’s goals. The code is available at [1].

Another cool example can be found in Marco’s thesis, Chapter 6.1, available at [2].

For more advanced use cases, there’s a package specifically for advanced SMC in Gen: [3].

You might also find Marco’s 2018 PLDI paper on incremental inference interesting (this sort of SMC is now supported by Gen): [4].

As @johnczito points out, SMC is a very general class of methods. Gen will automate all the math (e.g., valid incremental weighting no matter what proposal program you pass in), as well as the data structures + efficient updates, but high-level algorithmic choices are still up to you. We’re hoping to build out better tutorial material in the future, to help better teach people to use the building blocks provided by Gen’s inference library to construct custom algorithms tailored to their models and use cases. (Unlike, say, the hyperparameters for a neural network, I think at least some of the ‘knobs and levers’ that SMC exposes are rightly viewed as creative choices, which can be made somewhat intuitively without a ton of math.)

4 Likes

1: https://github.com/ztangent/Plinf.jl
2: https://www.mct.dev/assets/mct-thesis.pdf

1 Like

3: https://github.com/probcomp/GenParticleFilters.jl (building blocks for simple and advanced particle filtering in Gen)
4: https://dl.acm.org/doi/10.1145/3296979.3192399

1 Like

A relevant paper I ran across recently that claims exact inference using sequential MCMC:

Making Recursive Bayesian Inference Accessible