Call for Contributors: Probabilistic Programming & Bayesian Inference in ONNX

Hi everyone,

I wanted to share an initiative that’s getting underway within the ONNX ecosystem and invite input and participation from the Julia probabilistic programming community—particularly folks working with Turing.jl, DynamicPPL, Bijectors.jl, and related tooling.

We’re working on a proposal to support probabilistic programming and Bayesian inference as first-class workloads in ONNX, with the goal of defining portable operator semantics that allow probabilistic models to be exported, executed, and deployed across frameworks and hardware.

Turing.jl is explicitly part of the long-term design and scope of this effort.

What we’re trying to do

At a high level, the goal is to make ONNX capable of representing probabilistic models (via their log-joint densities) and inference primitives, not just deterministic neural networks.

Concretely, we’re proposing (a small Julia sketch follows the list):

  • A probabilistic operator domain in ONNX (e.g. ai.onnx.prob)
  • Operators for:
    • Probability distributions and log-density evaluation
    • Factors / observations
    • Bijectors and constrained-parameter transforms (inspired heavily by Bijectors.jl)
    • Stateless, splittable RNG semantics (JAX-style, reproducible, parallel-safe)
    • Special mathematical functions used in Bayesian computation
  • Optional inference operators and building blocks:
    • Laplace, Pathfinder, INLA
    • Metropolis, Gibbs, Slice
    • HMC / NUTS (FSM-style control flow)
    • Sequential Monte Carlo (SMC)
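
To make the first few bullets concrete, here’s a minimal Julia sketch of the kind of computation such operators would need to encode. The op mapping in the comments is illustrative only, not proposed spec:

```julia
# A minimal sketch, not a spec: the Turing model and the hand-written
# log-joint below compute the same quantity.
using Turing, Distributions

@model function coin(y)
    p ~ Beta(2, 2)           # prior: a distribution + log-density op
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)  # observations: factor / observe ops
    end
end

# The same log-joint written explicitly as a composition of log-density
# evaluations, i.e. the computation an ONNX graph would encode:
function coin_logjoint(p, y)
    lp = logpdf(Beta(2, 2), p)            # prior factor
    lp += sum(logpdf.(Bernoulli(p), y))   # likelihood factors
    return lp
end

coin_logjoint(0.6, [1, 0, 1, 1, 0])
```

The takeaway is that the whole log-joint is a composition of a small set of primitives (distribution construction, log-density evaluation, accumulation), which is what makes a portable operator domain plausible in the first place.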

The intent is not to replace Turing.jl or DynamicPPL, but to:

  • Provide a portable intermediate representation for probabilistic models
  • Enable deployment and execution outside the Julia runtime when needed
  • Allow multiple PPLs (Turing, Stan, PyMC, Pyro, NumPyro, TFP) to target a shared backend

Why this might matter for Turing.jl users

From a Julia / Turing perspective, this opens up some interesting possibilities:

  • A deployment path for Turing models beyond Julia-only environments
  • A standardized IR for probabilistic models that preserves uncertainty semantics
  • Potential interoperability with non-Julia systems while keeping modeling in Julia
  • A way to express probabilistic computation in a form amenable to accelerators and edge devices
  • An opportunity to influence how probabilistic concepts are represented at the IR level

We see Turing.jl’s architecture as a strength here, particularly:

  • DynamicPPL’s separation of model definition and inference
  • Bijectors.jl as a best-in-class reference for transform semantics (see the short example after this list)
  • Julia’s multiple dispatch and composability as a guide for operator design
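
As an example of the transform semantics we’d want the spec to pin down, here’s roughly what Bijectors.jl already provides (this assumes a recent Bijectors.jl; older versions spelled `inverse` as `inv`):

```julia
# Constrained <-> unconstrained transforms, as Bijectors.jl exposes them:
using Bijectors, Distributions

d = Beta(2, 2)
b = bijector(d)      # maps the support (0, 1) to all of ℝ
binv = inverse(b)

x = 0.3              # constrained value
y = b(x)             # unconstrained value
x ≈ binv(y)          # round-trips

# The log-abs-det-Jacobian term a portable operator must also specify:
logabsdetjac(b, x)
```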

Scope with respect to Julia

In the near term, the focus is on:

  • Operator specification and semantics (language-agnostic)
  • RNG correctness and reproducibility (a toy sketch of splittable keys follows below)
  • Distribution and bijector catalogs
  • Reference implementations and conformance tests
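
On the RNG point, here’s a toy illustration of the splittable-key programming model we have in mind. The mixing function is just a splitmix64 finalizer chosen for the sketch; the actual algorithm is an open design question:

```julia
# Toy splittable, stateless RNG keys (JAX-style). The point is the
# programming model: keys are values, splitting is a pure function,
# and there is no hidden global state.

function mix64(z::UInt64)
    z += 0x9e3779b97f4a7c15
    z = (z ⊻ (z >> 30)) * 0xbf58476d1ce4e5b9
    z = (z ⊻ (z >> 27)) * 0x94d049bb133111eb
    return z ⊻ (z >> 31)
end

split_key(key::UInt64, i::Integer) = mix64(key ⊻ mix64(UInt64(i)))

key = UInt64(42)
k1, k2 = split_key(key, 1), split_key(key, 2)  # independent, reproducible children
```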

In future phases, we expect:

  • Exploration of Turing.jl → ONNX export paths (see the sketch after this list)
  • Alignment with Bijectors.jl semantics
  • Investigation of how Julia-based inference (e.g. AdvancedHMC, DynamicHMC-style logic) maps to ONNX primitives
  • Collaboration on what should remain in the PPL vs. what belongs in ONNX
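
To illustrate why an export path seems plausible: DynamicPPL already exposes a Turing model as a pure log-density over a flat parameter vector, which is exactly the shape of function an ONNX graph could encode. (This assumes a recent DynamicPPL; the API has moved across versions.)

```julia
using Turing, DynamicPPL, LogDensityProblems

@model function demo(y)
    μ ~ Normal(0, 1)
    y ~ Normal(μ, 1)
end

# Wrap the model as a pure log-density over a flat parameter vector:
ldf = DynamicPPL.LogDensityFunction(demo(1.5))
LogDensityProblems.logdensity(ldf, [0.2])  # log p(μ = 0.2, y = 1.5)
```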

Nothing here assumes abandoning Julia-native execution—this is about optionality and portability, not replacement.

Open questions we’d love Julia community input on

  • What parts of a probabilistic program should be represented in an IR like ONNX?
  • How much inference logic should live in the backend vs. the PPL?
  • How best to represent dynamic control flow without losing portability?
  • Whether ONNX-style operators are a good fit for certain Julia inference patterns
  • How to ensure performance expectations align with Julia users’ standards

Getting involved

We’re forming working groups around:

  • RNG semantics and correctness
  • Distribution and bijector operator design
  • Inference building blocks (e.g. the leapfrog sketch after this list)
  • Exporter strategies for different PPLs (including Julia-based ones)
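
For a sense of the granularity we mean by “inference building blocks”, here’s a single leapfrog step written as a pure function; this is standard HMC machinery, not a proposed operator signature:

```julia
# One leapfrog step as a pure function of position, momentum, and a
# gradient function. Composing steps + an accept/reject test gives HMC.
function leapfrog(q, p, ∇logp, ϵ)
    p = p .+ (ϵ / 2) .* ∇logp(q)  # half step on momentum
    q = q .+ ϵ .* p               # full step on position
    p = p .+ (ϵ / 2) .* ∇logp(q)  # half step on momentum
    return q, p
end

# Standard-normal target, where ∇logp(q) = -q:
q, p = leapfrog([1.0], [0.5], q -> -q, 0.1)
```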

If you’re interested in contributing, reviewing specs, or simply providing feedback from a Julia / Turing.jl perspective, we’d really welcome the discussion. Even strong skepticism is useful at this stage.

Feel free to reply here, ask questions, or reach out directly if you’d like to be looped into follow-up conversations.

Best,
Brian