PPL collaboration

What is the difference between Gen.jl and Turing.jl?

They are conceptually quite different. Gen.jl focuses on programmable inference, while Turing.jl focuses on compositional inference and universal probabilistic programming. They also provide different sets of inference algorithms. Turing provides a range of robust, maintained MCMC samplers as well as variational inference, while Gen allows multivariate proposal distributions, e.g. from a generative neural network, to be used in SMC-based algorithms and is lower level in terms of use. Because Turing focuses on universal probabilistic programming, you can have stochastic control flow, a varying number of parameters, discrete random measures, and model composition. I think Gen supports a small subset of these but lets you compile static models for faster inference.
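
To make the comparison concrete, here is the same toy coin model sketched in both PPLs (illustrative only; the names coin_turing and coin_gen are made up, and the exact syntax may differ between releases):

using Turing

@model function coin_turing(y)
    p ~ Beta(2, 2)                      # prior on the coin bias
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)             # observed flips
    end
end

using Gen

@gen function coin_gen(n)
    p = @trace(beta(2, 2), :p)          # prior on the coin bias
    for i in 1:n
        @trace(bernoulli(p), (:y, i))   # one trace address per flip
    end
end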

Depending on the use case, one might be more applicable than the other.

There is also Soss btw.

7 Likes

Could you also explain the difference between programmable inference, compositional inference and universal probabilistic programming, please?
I have just used Stan for a while and have read about INLA. I’m new to probabilistic programming.
What can be done with some of them and not with the others?
Which one is supposed to be faster to run? Which one is easier?

I would argue that a PPL based on programmable inference makes it easier to implement problem-specific inference algorithms. It’s usually lower level, like Gen, and requires more knowledge about Bayesian inference algorithms and PPLs in general.

Compositional inference aims to combine inference algorithms, so that you can use, e.g., HMC for one set of variables and another inference algorithm for the rest. This is useful for more complex models where you don’t want to write a tailored inference algorithm but need the flexibility to combine different algorithms for efficient inference.
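
As a rough illustration of what that looks like in Turing (a minimal sketch: the model, step sizes and particle counts are made up, and the Gibbs constructor syntax differs between Turing versions):

using Turing

# Toy model with one discrete and one continuous unknown.
@model function mixture(x)
    k ~ Categorical([0.5, 0.5])        # discrete component indicator
    μ ~ Normal(0, 10)                  # continuous location
    for i in eachindex(x)
        x[i] ~ Normal(μ + (k == 2 ? 3.0 : 0.0), 1.0)
    end
end

# Compositional inference: particle Gibbs for the discrete variable,
# HMC for the continuous one.
chain = sample(mixture(randn(50)), Gibbs(PG(20, :k), HMC(0.05, 10, :μ)), 1_000)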

Universal probabilistic programming refers to inference in probabilistic programs that contain stochastic control flow, a varying number of parameters and so on. Many of the more efficient inference algorithms place strong restrictions on the models they can be applied to and are not universal, e.g. HMC and VI cannot handle stochastic control flow. For the same reason, not every model you can write in Turing can be represented as a static graph (and variable naming needs special care), which makes certain graph-based optimisations or inference algorithms impossible. Turing addresses this by providing inference algorithms, core routines and data structures that can handle such models.
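
For instance, a model along these lines has no fixed-size parameter vector, so no static graph describes it (a hypothetical sketch; the name, the Poisson prior and the choice of particle Gibbs are purely illustrative):

using Turing

@model function varying_dim(y)
    n ~ Poisson(3.0)                   # random number of components
    w = tzeros(Float64, n + 1)         # task-safe array for particle samplers
    for i in 1:(n + 1)
        w[i] ~ Normal(0, 1)            # number of parameters depends on n
    end
    y ~ Normal(sum(w), 1.0)
end

# HMC/VI cannot handle the changing dimensionality, but a
# particle-based sampler can.
chain = sample(varying_dim(1.5), PG(20), 500)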

Which one is faster? That’s difficult to say. If you use the same inference algorithm you likely won’t notice any difference, but there are no reliable benchmarks comparing the two. The HMC inference in Turing is comparable to Stan in terms of speed and effectiveness. I don’t know about Gen, but it should be easy to use Turing’s HMC in Gen if necessary. And because you can more easily implement tailored inference algorithms in Gen, you might be able to get faster inference for specific models with it.

Which is easier to use? For general purposes, Turing is much easier to use, as that is its target audience. If you want to implement tailored inference algorithms, Gen is easier, as it is designed for that purpose.

6 Likes

Note that because both PPLs are pure Julia implementations, they have the luxury of being able to easily use any Julia library in the model or inference algorithm. That means you can use neural networks, GPUs and so on in both. This is different for Stan, PyMC3 and other PPLs, which cannot as easily leverage other libraries.

1 Like

In case you don’t need a PPL but only want to perform gradient-based inference with HMC, you can use AdvancedHMC (the sampler used by Turing) or DynamicHMC (another efficient HMC implementation) directly on your model’s log joint.
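
For example, roughly following the AdvancedHMC README (a sketch only: the target here is just a standard multivariate normal, and the constructor names have changed between AdvancedHMC versions):

using AdvancedHMC, ForwardDiff

# Log joint of the target: a standard multivariate normal.
ℓπ(θ) = -sum(abs2, θ) / 2

D = 10
initial_θ = randn(D)

metric      = DiagEuclideanMetric(D)
hamiltonian = Hamiltonian(metric, ℓπ, ForwardDiff)
initial_ϵ   = find_good_stepsize(hamiltonian, initial_θ)
integrator  = Leapfrog(initial_ϵ)
proposal    = NUTS{MultinomialTS, GeneralisedNoUTurn}(integrator)
adaptor     = StanHMCAdaptor(MassMatrixAdaptor(metric), StepSizeAdaptor(0.8, integrator))

samples, stats = sample(hamiltonian, proposal, initial_θ, 2_000, adaptor, 1_000)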

1 Like

Which one has nested Laplace approximation methods implemented, like INLA, the fastest framework I know?

1 Like

As far as I know, neither has INLA implemented; you would have to implement it yourself in either PPL. See the following post on the new interface for inference algorithms in Turing if you aim to do so: [ANN] Turing.jl Breaking Changes

Feel free to open an issue or PR on this.

I haven’t used INLA, but it sounds like exactly the kind of thing I’m targeting with Soss.jl.

The idea behind Soss is to restrict the class of models to those whose dependencies can be reasoned about statically. In practice this is more than Stan allows, but strictly less than Gen or Turing. It still allows things like neural nets, and there’s also potential to embed Turing/Gen models in Soss, or vice versa.
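
To give a rough idea (a hypothetical sketch, not taken from the Soss docs, and the syntax may have changed across releases): in a model like the one below, the fact that x depends on μ can be read off the program without running it, which is what Soss exploits.

using Soss

m = @model begin
    μ ~ Normal(0, 5)      # the dependency x ← μ is visible syntactically
    x ~ Normal(μ, 1)
end

rand(m())                 # forward-sample a named tuple (μ = ..., x = ...)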

I’m building up to a new release of Soss that is much faster and easier to use. Generated code can be as efficient as you would write by hand, so effectively all overhead comes from whatever back-end you need to call.

2 Likes

Thanks, Martin, for the detailed summary! A couple thoughts on Gen’s expressiveness :slight_smile:

Because Turing focuses on universal probabilistic programming, you can have stochastic control flow, a varying number of parameters, discrete random measures, and model composition. I think Gen supports a small subset of these but lets you compile static models for faster inference.

I believe Gen is universal in the same sense as Turing (and Church, WebPPL, Pyro, etc.). We do have a static modeling language that can be used for pieces of your model to attain speed-ups, but that’s meant to be used in conjunction with the dynamic modeling language, which supports arbitrary Julia code. Like other universal PPLs, Gen’s dynamic language supports stochastic control flow and varying numbers of parameters, and Gen’s design heavily emphasizes model composition (including of models written in different modeling languages). Random measures are supported in that you can write models that return models – for example, here are two representations of the Beta-Bernoulli process, and how a caller would use them:

using Gen

# Urn whose bias p is drawn once and shared by every draw.
@gen function betaBernoulli1(red, blue)
  p = @trace(beta(red, blue), :p)
  # The nested generative function (closed over p) is the return value.
  @gen function drawFromUrn()
    @trace(bernoulli(p), :is_red)
  end
end

# Collapsed representation: the urn's counts are updated after each draw.
@gen function betaBernoulli2(red, blue)
  @gen function drawFromUrn()
    is_red = @trace(bernoulli(red/(red+blue)), :is_red)
    if is_red
      red += 1
    else
      blue += 1
    end
  end
end

@gen function myModel()
  # could switch out with betaBernoulli2
  myUrn = @trace(betaBernoulli1(3, 5), :create_urn)
  for i = 1:10
    @trace(myUrn(), (:ball, i))
  end
end

You could then condition on, e.g., the colors of the first five balls, and do inference about the rest.
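
For instance (a rough sketch against the model above; Gen’s API details may have shifted since this was written):

using Gen

# Constrain the colors of the first five balls.
observations = choicemap()
for i in 1:5
    observations[(:ball, i) => :is_red] = true
end

# Importance resampling gives an approximate posterior trace in which
# the remaining balls (:ball, 6) ... (:ball, 10) stay latent.
(trace, lml_est) = importance_resampling(myModel, (), observations, 100)
trace[(:ball, 6) => :is_red]   # query one of the unobserved draws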

That said, being able to express a model is one thing, and being able to do efficient inference is another. The code above – with Gen models returning more Gen models – isn’t really “idiomatic Gen,” and we haven’t explored using Gen for fast inference in DPMMs, CrossCat models, and other classics of non-parametric Bayes. We have, however, used Gen for tasks that I think put “universality” in the forefront, like program induction – where the model is a probabilistic grammar over syntax trees, and the likelihood is how well the interpreted program explains data (https://github.com/probcomp/pldi2019-gen-experiments/tree/master/gp).

3 Likes

That’s great, I hope I wasn’t saying Gen is not universal. :wink:

Regarding completely random measures (CRMs), you can express many instances of those in various PPLs. I don’t think Turing or any other PPL is special in this regard. The hard bit is the inference and having clever representations that help these beasts mix fast. That’s what I’m aiming to work on in Turing. We currently have a non-standard representation for some CRMs that does exactly that, but more research is needed to find good representations for more complex CRMs.

I’m reviving this rather frozen-over thread. I actually think it’s a good idea to have a JuliaPPL collective on GitHub.

My thoughts are:

  1. Organizing the different PPLs (as well as common components that become shared as development continues) in a clever way under a GitHub collective may reduce overload on newcomers who just want to figure out what they need in order to do what they want. The collective can be organized so as to reduce the time spent figuring out what does what, as well as answer basic questions about the research philosophy of each of the main packages.

  2. It appears that people working on PPL-related topics in this space are not shy about collaboration. I believe that showing this to external users is a strong signal that you can come and do PPL research here. Julia is at the forefront of probabilistic programming, and indicating that the environment is fruitful and welcoming should ensure that people continue to investigate the state of the art here.

@cscherrer @trappmartin given that I don’t personally have a PPL with which to metaphorically place the first stone, I welcome your thoughts here. Such an organization need not be “official” in any bold sense, just a convenient abstraction point that gives the lay of the land to those venturing in.

3 Likes

Also mentioning @alex-lew. I apparently can’t mention more than 2 people in my posts because I’m a newcomer.

Thanks @McCoy, and welcome to Discourse! :slight_smile:

I really like this idea; there are just a couple of things that could get in the way.

First (and really minor) is that I don’t have any idea how these Julia-branded orgs actually work. How are they set up? Is there any functional difference between working in one of these and working in a private repo? I’d guess these might be questions for @vchuravy (or not?).

Second, and potentially more of an obstruction, is the question of branding. A lot of PPL work is funded by grants, and I’d guess the institutions involved might want code to be very clearly associated with them. Again, I don’t have experience working in a Julia* org, so there might well be workarounds I don’t know about.

1 Like

I would also say that it would be great to have a single UPPL that everyone could contribute to and work on. There are of course trade-offs between the different approaches currently employed, and it’s not obvious to me that there is a free lunch lurking around the bush here. However, the layered approach taken by Gen, where DSLs are added on top of the lower-level code, is promising. I do also see the need for ownership and branding, which of course could also get in the way of unification and collaboration. :roll_eyes:

2 Likes

I took a look at other organizations - it seems like you can pin projects to organizations without “enclosing” them. @cscherrer this seems like the easiest path - I think it partially avoids the branding problem.

I don’t imagine such an organization making a strong statement about pinned packages or code. It’s more of an easy central place where people can look at what sort of work is going on in probabilistic programming in Julia. It would be similar to JuliaDiff, for example: many different institutions/orgs work across JuliaDiff, but it’s a nice spot to take a look at what packages exist for AD.

@DoktorMike As far as I know, both Turing and Gen accept issues and possibly pull requests?

1 Like

Hi everyone,
I like the sentiment of building a JuliaPPL organisation (or similar). TuringLang is already aiming for this and is an umbrella organisation for all the packages developed for probabilistic programming by the Turing team.

In more practical terms, this might be a bit more difficult than in the case of JuliaDiff or similar organisations. Gen is of course funded by MIT, while the University of Cambridge supports Turing and the packages in TuringLang. Mixing those projects might be problematic as there is quite a bit of money involved, and there might be further conflicts of interest. But I’m no authority to decide anything in those regards; it would be better to talk to @yebai.

Maybe it would instead be an option to simply provide a website for JuliaPPL, similar to https://www.juliadiff.org/, listing the respective “sub” organisations, e.g. Gen and TuringLang.

4 Likes

I’m really glad you’re having a discussion about PPL collaboration.
I am a strong proponent of making it easier for Julia developers to collaborate.
@trappmartin I think what you have in mind is essentially a “Task View” for PPL in Julia.
These do two things: 1) help discovery of relevant packages and 2) facilitate collaboration.

For example: @PetrKryslUCSD made a nice outline of packages for PDEs in Julia.

PS: MLJ may serve as an interface to the different PPL packages.
The MLJ roadmap lists integrating Turing, Gen and Soss as a rough priority.
I hope that becomes a reality…

1 Like

@trappmartin That seems reasonable and a very appropriate place to begin. If it ends up being conflict-free to set up an org, that can always be done in the future.

I’ll take a look at the JuliaDiff website for format and content - it looks like it’s hosted through the organization on GitHub, but there’s no reason why we couldn’t set up a site separately.

2 Likes

Yes, I started discussing an integration of Turing with the MLJ folks a while ago, but I’ve had lots of other, more pressing work to do in the meantime. I hope to come back to this in the near future.

2 Likes