Community chat 09/20 10 EST

Please post topics of interest here.

2 Likes

For the record, here’s something Hong said some time ago:

Extrapolating [from Turing], what might be interesting to consider as a community, is to co-develop a shared base library AbstractPPL for various approaches to probabilistic programming such as tracing and source rewriting. It would allow different PPLs to talk to each other, and share some codebase in some instances.

And I, personally, would be interested in progress on Measures.jl.

2 Likes

Thanks @phg, the AbstractPPL idea is really interesting. I think it would be helpful to better understand how a given PPL would use this.

I’d be happy to talk about Measures.jl. So far, the biggest problem seems to be that the name is already in use. But we have some goals listed in the README, and I’d love to hear if there are things we’re missing, or any ideas for implementation.

Also, recently @dilumaluthge has been key in moving SossMLJ forward. I think there’s a big opportunity in connecting to MLJ to help PPL become more widely used in data science applications.

1 Like

I think the biggest elephant in the room is how to get Gen and Turing talking.

Take the below with a grain of salt - again, I'm one person, outside of both groups. I have no idea about the internal dynamics of these projects. But here are some of my opinions.

I’ve done a good amount of thinking about this. Part of the reason I implemented Jaynes was to really try to understand Gen at a deep level. I also implemented interfaces to Turing’s packages, including AdvancedHMC, to see what’s going on with interop there. I don’t claim that these interfaces are “well done” - only that I sort of see how things can work together.

Despite this, it’s my belief that it is totally possible to get these two packages working together. I honestly think it would send a really strong message if University of Cambridge and MIT partnered up to manage the JuliaPPL organization and we had an incredibly solid set of base packages. (again, not up to me, but an interesting proposal)

Here’s where I think this is going (and a couple of recent posts/pieces of work that have driven this viewpoint). Feel free to disagree, and send your own take!

- There’s an open issue to modularize Gen.
- Hong is fully in support of abstract interfaces (see above).
- There are some interesting ideas for a common IR.
- I’ve been trying to work out how to optimize dynamic programs at the IR level.

To me, there’s no inherent barrier which prevents Gen and Turing from working together. Sure, it’s going to involve some sacrifices (likely on both sides) - fundamentally, I think the issue is going to be one of affiliation and reputation and all the sticky issues which come with that. And I totally understand why these issues exist! It makes complete sense why Gen is a separate project, and why Turing is a separate project. But when it comes to shared base packages, does this have to be the case? Can a partnership between MIT and University of Cambridge work this out and present a unified PPL front? I’m not in a position to say obviously, but it’s cool to consider.

I see this as fertile ground to identify what exactly each side “needs” to support their research ideas.

Here are my opinions (again, please refute):

The generative function interface is perfect for the ‘AbstractPPL.jl’ package. As far as I can tell, the ontology is very useful. And it supports arbitrary dynamic programs, all the way down to specialized static languages. This I think is the real innovation of Gen (as has been recently firmly drilled into me by the Gen squad). It allows for the construction of their inference DSLs. And it’s super easy to extend it to new model types through inheritance. It took me like 200 lines to go from Jaynes with crappy standard inference to Jaynes with involution DSL kernels courtesy of the interface.
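To make the extensibility point concrete, here’s a minimal sketch of the GFI verbs as I understand them from Gen’s documentation - the model is made up, and the exact surface syntax has shifted across Gen versions, so treat this as illustrative rather than canonical:

```julia
# Hedged sketch of the core generative function interface (GFI) in Gen.jl.
# The model below is a toy; only `@gen`, `simulate`, `generate`, and
# `choicemap` are Gen's own names.
using Gen

@gen function model()
    slope ~ normal(0, 1)        # latent choice at address :slope
    y ~ normal(slope, 0.1)      # observable choice at address :y
end

# Forward execution, recording every choice into a trace:
trace = Gen.simulate(model, ())

# Constrained generation: fix the observation, get a trace plus an
# importance weight - this pair is what generic inference consumes:
(trace, weight) = Gen.generate(model, (), Gen.choicemap((:y, 1.2)))
```

The point is that inference code only ever touches these interface methods, so any model type that implements them - dynamic, static, or hand-coded - plugs into the same algorithms.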

The advanced samplers and bijectors are the real contribution of Turing. I do not say this to spit in the face of all the excellent work which has gone into the language and associated framework. This is simply my opinion - because I favor the extensibility of what the generative function interface offers. I am, however, not familiar with the optimizations which @mohamed82008 has mentioned previously - so it’s possible that there are really cool things that I’m missing. I do know, however, that AdvancedHMC is really a work of art. So is MCMCChains, from a usability and user-facing perspective. I’ve found it incredibly useful to hook these up to my trace-based experiments. And there’s a whole interesting area of “how do you visualize chains of traces”.
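As an example of why hooking custom samplers up to MCMCChains is so useful: any raw array of draws can be wrapped in a `Chains` object to get summaries and diagnostics for free. The draws below are fake random data, purely for illustration:

```julia
# Hedged sketch: wrapping raw sampler output in MCMCChains.Chains.
using MCMCChains

vals = randn(500, 2, 4)              # iterations × parameters × chains
chn  = Chains(vals, [:mu, :sigma])   # name the two parameters
describe(chn)                        # summaries and diagnostics per parameter
```

This is roughly what I do with my trace-based experiments: flatten traces into an array, name the addresses, and reuse all of the existing tooling.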

On the bijectors, I would like to see the Turing bijectors work merge with Marco’s work on trace translators (which generalize bijectors to distributions over choice maps, correct me if wrong).
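For anyone unfamiliar with the Bijectors side of this: a bijector maps a constrained distribution’s support to an unconstrained space (with tractable log-Jacobians), which is what samplers like HMC need. A small sketch, from my memory of the Bijectors.jl API around this time - check the package docs for the current names:

```julia
# Hedged sketch of the Bijectors.jl interface: transform a constrained
# variable (Beta support is (0,1)) to the real line and back.
using Distributions, Bijectors

b = bijector(Beta(2, 2))   # e.g. a logit-style bijector for (0,1)
x = 0.3
y = b(x)                   # unconstrained value
x2 = inv(b)(y)             # maps back; x2 ≈ x
lj = logabsdetjac(b, x)    # log |det J| needed for density corrections
```

Trace translators, as I understand them, generalize exactly this picture from single distributions to distributions over whole choice maps.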

I also see a rich area of inquiry related to @phg’s discussion of probabilistic IR. If it’s any indication of my interest, Jaynes implements the GFI - but it’s implemented by transforming and executing SSA IR. With this in mind, I can easily see the “core” of the org shaped as:

  1. AbstractPPL - which includes the GFI, possibly also includes interfaces which Turing team agrees are relevant to extending what they would like to perform research in. I’m sure the GFI is useful - but this would generally be something which everyone needs some input on.

  2. ProbabilisticIR - which extends/forks IRTools IR. The construction plan of this package is TBD depending on Julia 1.6. We could of course start now by working off IRTools IR - but we might have to accept some changes as new compiler interfaces stabilize.

  3. AdvancedHMC - partially re-written to utilize interfaces agreed for AbstractPPL.

  4. MCMCChains - same as above.

  5. Bijectors/involution DSL - this merge I’m less certain about. This seems like a big lift/big thought piece to determine. I don’t think it’s crucial for now.
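To make item 1 less abstract, here is a purely hypothetical sketch of what an AbstractPPL core might look like - none of these names exist anywhere; it’s only to make the “shared interfaces” idea concrete:

```julia
# Entirely hypothetical sketch of a shared AbstractPPL interface,
# loosely modeled on Gen's GFI. No package defines these names.
abstract type AbstractProbabilisticProgram end

# Hypothetical interface verbs each backend would implement:
function simulate end      # run the model forward, recording a trace
function generate end      # run under constraints, return (trace, weight)
function logdensity end    # score an assignment of random choices

# A backend (Turing, Gen, Jaynes, ...) opts in by subtyping and
# implementing the verbs for its own model and trace types:
struct MyModel <: AbstractProbabilisticProgram end
simulate(::MyModel) = nothing   # stub; a real backend returns its trace type
```

The value is that inference packages (items 3-5 above) could then dispatch only on these verbs, not on any one PPL’s internals.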

I’d also like to slot Jaynes in as an intermediate representation optimizer. Right now, this is GFI-specific and works on IRTools IR (i.e. I’m hoping to turn it into a sort of optimizing compiler for the GFI). But I’m absolutely open to slicing this apart and re-constructing it if it fits the community plan, to support more general interfaces. I can’t say for certain whether my work is useful yet - not until I experiment more.
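For those who haven’t played with IRTools, the basic workflow that Jaynes-style transforms build on looks like this - only `@code_ir` is an IRTools feature; the transform itself is described in comments:

```julia
# Hedged sketch of the IRTools entry point: grab a function's
# SSA-form IR, which a transform can then walk and rewrite.
using IRTools: @code_ir

f(x) = x < 0 ? -x : x
ir = @code_ir f(1.0)   # the SSA IR of `f`, as an IRTools.IR value
println(ir)            # blocks, branches, and statements in SSA form

# A tracing transform walks these statements, replacing sample sites
# with calls that record into a trace - roughly how a GFI-on-IR works.
```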

Anyways, something like this ultimately requires buy-in from more than just myself - and the people who have to buy in matter more than I do for a proposal like this. There are also many reasons why this sort of thing may be out of the question. But for the community, I’d like to raise these points.

Edit: just to be clear, this organization above does not include front-end interfaces for specifying models. I think generally this can be pretty flexible - if we have an agreed upon IR (whatever it is). My point of view comes from how Jaynes implements the GFI now. The interfaces transform the IR. To understand how to construct a front-end, we’d have to determine how you can implement the AbstractPPL interfaces as transformations on the IR. I know how to do this my way now, but there may be (likely, certainly are) other ways!

@phg @trappmartin @mohamed82008 @Marco_Cusumano-Towne @alex-lew @cscherrer

7 Likes

Another possibility:

3 Likes

I have written up many of my ideas so far here. Any feedback is much appreciated.

Again I feel obliged to say that this is not a Turing stance, but only my personal contemplations. Most of it comes from my retrofitting structure onto models written with DynamicPPL through metaprogramming, which isn’t a bad start for this kind of thing, but it is certainly biased toward a compiler/PL perspective more than an inference-algorithm perspective.

I think this is a great idea but likely requires some further discussions beforehand.

I feel that I cannot judge that atm as I’m not aware of all the implications of such an approach and what are the limitations of it.

Hm, not sure about that. I suppose this perspective comes from the fact that you have been mostly reusing the samplers while using generative functions as the backend. I’m also not sure it is a relevant discussion. That said, I agree that a common AbstractPPL (or whatever it is called) library would be a good idea.

I see Probabilistic IR as an experimental idea atm and I would leave it out of the discussion for now.

I don’t see why a specific inference algorithm should be in the core org. AdvancedHMC is also intentionally written in a way that makes it independent of any PPL, and I don’t think it is a good idea to tie it to AbstractPPL or whatever.

I agree that this is likely not necessary to move to a core org.

Again, I would leave this out of the discussion for now. I think the Probabilistic IR idea is very interesting and promising, but too experimental for now.

A library such as AbstractPPL should not be experimental code but a solid framework built on a strategy that is efficient and reliable from the start.

Outside of a discussion of a new practical AbstractPPL system - certainly, the idea is pure speculation. But we can still discuss the approach in principle.

Thanks for the write-up @McCoy and for the initiative. I would love to experiment with re-writing some Turing and DynamicPPL internals using Gen’s GFI if it gets separated into its own package. I don’t think it’s impossible, and I like the concept of Gen’s GFI and how its extensibility is more battle-tested than DynamicPPL’s. The VarInfo component of Turing is not super extensible, and redesigning it has been on my plate for a while now. So maybe Gen’s GFI can offer an alternative here. This redesign is actually one of my priorities once I get back to being active in the development of Turing. I have been away for a few months now because I am focusing on my PhD for a bit.

It would be nice to have everyone contribute to a single PPL that is generic, performant, friendly and extensible. If that’s not possible for logistical and managerial reasons, then at least a shared backend will be a good place to start. This will let us share the burden of maintaining and improving this component of our PPLs, and it will allow for more discussion to happen across teams. Ideally, this part of a PPL should not be changing frequently anyway, so it would be nice to arrive at a design that works for everyone and stick to it. Then we can focus on building higher-level features in the individual PPLs, building application packages, developing inference algorithms, etc. If at the end of this it turns out that all the PPLs only differ in their syntax but offer the same features, then I think we will all have succeeded.

5 Likes

Of course, we should discuss the approach. But maybe better in a thread on its own.

2 Likes

Just a note - I’ve opened an issue on the new compiler staging package: https://github.com/Keno/Compiler3.jl/issues/4

Possibly an interesting reference to look at as things pick up. I still don’t quite know what’s going on with the compiler (as much as I try) - but I’ve been trying to keep an eye on this repository to see what’s happening.

1 Like