Call for Contributors: Probabilistic Programming & Bayesian Inference in ONNX

Hi everyone,

I wanted to share an initiative that’s getting underway within the ONNX ecosystem and invite input and participation from the Julia probabilistic programming community—particularly folks working with Turing.jl, DynamicPPL, Bijectors.jl, and related tooling.

We’re working on a proposal to support probabilistic programming and Bayesian inference as first-class workloads in ONNX, with the goal of defining portable operator semantics that allow probabilistic models to be exported, executed, and deployed across frameworks and hardware.

Turing.jl is explicitly part of the long-term design and scope of this effort.

What we’re trying to do

At a high level, the goal is to make ONNX capable of representing log-joint probabilistic models and inference primitives, not just deterministic neural networks.

Concretely, we’re proposing:

  • A probabilistic operator domain in ONNX (e.g. ai.onnx.prob)
  • Operators for:
    • Probability distributions and log-density evaluation
    • Factors / observations
    • Bijectors and constrained-parameter transforms (inspired heavily by Bijectors.jl)
    • Stateless, splittable RNG semantics (JAX-style, reproducible, parallel-safe)
    • Special mathematical functions used in Bayesian computation
  • Optional inference operators and building blocks:
    • Laplace, Pathfinder, INLA
    • Metropolis, Gibbs, Slice
    • HMC / NUTS (FSM-style control flow)
    • Sequential Monte Carlo (SMC)
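To make the "stateless, splittable RNG" item concrete, here is a hypothetical pure-Python sketch of JAX-style key semantics. None of these names or the hashing scheme are part of any proposed spec; the point is just the contract: keys are never mutated, children are pure functions of (parent, index), and the same key always yields the same draw.

```python
import hashlib
import struct

def new_key(seed: int) -> bytes:
    """Create a root RNG key from an integer seed (illustrative only)."""
    return struct.pack("<Q", seed)

def split(key: bytes, n: int = 2) -> list:
    """Deterministically derive n independent child keys from a parent key.

    Like jax.random.split, the parent key is never mutated; each child is
    a pure function of (parent, index), so parallel workers can draw
    reproducible, non-overlapping streams.
    """
    return [hashlib.sha256(key + struct.pack("<Q", i)).digest()[:8]
            for i in range(n)]

def uniform(key: bytes) -> float:
    """Map a key to a uniform sample in [0, 1) -- stateless and reproducible."""
    h = hashlib.sha256(key + b"uniform").digest()
    return struct.unpack("<Q", h[:8])[0] / 2**64

root = new_key(42)
k1, k2 = split(root)
# Same key, same draw; sibling keys give independent draws.
assert uniform(k1) == uniform(k1)
assert uniform(k1) != uniform(k2)
```

The appeal for an IR is that a graph containing these operators is a pure function of its inputs, so reproducibility and parallel safety fall out of the operator semantics rather than hidden global state.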

The intent is not to replace Turing.jl or DynamicPPL, but to:

  • Provide a portable intermediate representation for probabilistic models
  • Enable deployment and execution outside the Julia runtime when needed
  • Allow multiple PPLs (Turing, Stan, PyMC, Pyro, NumPyro, TFP) to target a shared backend

Why this might matter for Turing.jl users

From a Julia / Turing perspective, this opens up some interesting possibilities:

  • A deployment path for Turing models beyond Julia-only environments
  • A standardized IR for probabilistic models that preserves uncertainty semantics
  • Potential interoperability with non-Julia systems while keeping modeling in Julia
  • A way to express probabilistic computation in a form amenable to accelerators and edge devices
  • An opportunity to influence how probabilistic concepts are represented at the IR level

We see Turing.jl’s architecture as a strength here, particularly:

  • DynamicPPL’s separation of model definition and inference
  • Bijectors.jl as a best-in-class reference for transform semantics
  • Julia’s multiple dispatch and composability as a guide for operator design

Scope with respect to Julia

In the near term, the focus is on:

  • Operator specification and semantics (language-agnostic)
  • RNG correctness and reproducibility
  • Distribution and bijector catalogs
  • Reference implementations and conformance tests
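To give a flavor of what a bijector catalog entry and its conformance test might pin down, here is a hedged pure-Python sketch of Bijectors.jl-style change-of-variables semantics for a single transform (an exp bijector mapping reals to positives). All function names here are illustrative, not proposed operator names.

```python
import math

def exp_forward(x: float) -> float:
    """Map an unconstrained real to a positive (constrained) value."""
    return math.exp(x)

def exp_inverse(y: float) -> float:
    """Map a positive value back to unconstrained space."""
    return math.log(y)

def exp_logabsdetjac(x: float) -> float:
    """log |d/dx exp(x)| = x."""
    return x

def transformed_logpdf(logpdf_constrained, x_unconstrained: float) -> float:
    """Log-density in unconstrained space via change of variables:
    log p(x) = log p(T(x)) + log |det J_T(x)|."""
    y = exp_forward(x_unconstrained)
    return logpdf_constrained(y) + exp_logabsdetjac(x_unconstrained)
```

For example, for an Exponential(1) target (log p(y) = -y for y > 0), the unconstrained log-density at x = 0 is -exp(0) + 0 = -1. A conformance suite could check exactly such identities, plus round-trip consistency of forward and inverse, across runtimes.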

In future phases, we expect:

  • Exploration of Turing.jl → ONNX export paths
  • Alignment with Bijectors.jl semantics
  • Investigation of how Julia-based inference (e.g. AdvancedHMC, DynamicHMC-style logic) maps to ONNX primitives
  • Collaboration on what should remain in the PPL vs. what belongs in ONNX
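As one concrete example of what "mapping Julia-based inference to ONNX primitives" could mean, here is a hypothetical pure-Python sketch of a leapfrog integrator, the deterministic core of HMC that an exporter would need to express via ONNX control-flow constructs such as Loop. The signatures are illustrative, not a proposed spec; the target is a standard normal, so grad_logp(q) = -q.

```python
def leapfrog(q, p, grad_logp, eps, n_steps):
    """Symplectic leapfrog integration of Hamiltonian dynamics."""
    p = p + 0.5 * eps * grad_logp(q)      # initial half step for momentum
    for _ in range(n_steps - 1):
        q = q + eps * p                   # full step for position
        p = p + eps * grad_logp(q)        # full step for momentum
    q = q + eps * p
    p = p + 0.5 * eps * grad_logp(q)      # final half step for momentum
    return q, p

def hamiltonian(q, p):
    """Total energy for a standard normal target with unit-mass momentum."""
    return 0.5 * q * q + 0.5 * p * p

q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0, lambda q: -q, eps=0.01, n_steps=100)
# Leapfrog nearly conserves energy, which is what makes HMC proposals viable.
assert abs(hamiltonian(q1, p1) - hamiltonian(q0, p0)) < 1e-3
```

The loop body here is a fixed, deterministic dataflow computation, which is exactly the shape that maps naturally onto an ONNX Loop node; the harder open question is the adaptive, data-dependent control flow in NUTS.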

Nothing here assumes abandoning Julia-native execution—this is about optionality and portability, not replacement.

Open questions we’d love Julia community input on

  • What parts of a probabilistic program should be represented in an IR like ONNX?
  • How much inference logic should live in the backend vs. the PPL?
  • How best to represent dynamic control flow without losing portability?
  • Whether ONNX-style operators are a good fit for certain Julia inference patterns
  • How to ensure performance expectations align with Julia users’ standards

Getting involved

We’re forming working groups around:

  • RNG semantics and correctness
  • Distribution and bijector operator design
  • Inference building blocks
  • Exporter strategies for different PPLs (including Julia-based ones)

If you’re interested in contributing, reviewing specs, or simply providing feedback from a Julia / Turing.jl perspective, we’d really welcome the discussion. Even strong skepticism is useful at this stage.

Feel free to reply here, ask questions, or reach out directly if you’d like to be looped into follow-up conversations.

Best,
Brian

Great initiative! I believe this will make it a lot easier to use Julia to develop models for production.

Hi Brian, really excited to see this. The stateless RNG semantics piece especially resonates with me. I’m a Turing.jl contributor (recently merged a PR moving the test suite away from global Random.seed!) and I’m working on a GSoC 2026 proposal around Gibbs sampling. I’d love to get involved in the RNG semantics or inference building blocks working groups. What’s the best way in?

Hi @Anurag_Gupta, thanks again for replying to this. Yes, please feel free to join; the next meeting is in two weeks. We are consolidating the PRNG spec, so please take a look at the working group document we’re expanding: working-groups/probabilistic-programming/documents/initial-onnx-splittable-prng-writeup-dec-2024.md at main · onnx/working-groups · GitHub.

I can send you the papers we’re using for this. They should also be in our Slack channel for the group, so please have a look there.

Thanks Brian, I’ll review the PRNG spec doc this week. Looking forward to the meeting in two weeks.

Could you please share a link to the Slack channel here? I’ve tried many times and just can’t find it on Slack.

It should be here: Slack