I have a simple Agents.jl sim where, at each step, agents are added, change state, or are removed, each with some probability. The agents are not independent: for example, there is a finite “capacity” resource that can prevent an agent from being added.
I’d like to model this in a PPL, so that I can also do inference. Any examples, tips, or reasons why it’s impossible or a bad idea?
The closest mathematical abstraction I’ve found is the resource-dependent branching process (there is a Wikipedia article on it).
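For concreteness, here is roughly the kind of dynamics I mean, as a minimal sketch (Python rather than Julia, just for illustration; all the probabilities and the capacity are made up):

```python
import random

def step(agents, capacity, p_add=0.3, p_flip=0.2, p_remove=0.1, rng=random):
    """One step of the toy dynamics: each agent may be removed or flip its
    state, and a new agent may be added if the capacity allows it."""
    survivors = []
    for state in agents:
        if rng.random() < p_remove:
            continue                      # agent is removed
        if rng.random() < p_flip:
            state = 1 - state             # agent changes state (0 <-> 1)
        survivors.append(state)
    # the resource dependence: addition is blocked at capacity
    if len(survivors) < capacity and rng.random() < p_add:
        survivors.append(0)               # new agents start in state 0
    return survivors

rng = random.Random(0)
agents = [0, 0, 1]
for _ in range(100):
    agents = step(agents, capacity=10, rng=rng)
print(len(agents), sum(agents))
```

The question is how to express something like this in a PPL so that the `p_*` parameters (and perhaps the capacity) become inferable from observed trajectories.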
What specifically do you want to infer?
Disclaimer: I’m no expert, and as Seth writes, it kind of depends on what you want to infer.
But from the rough description you give, if you want to do inference with Monte Carlo methods, you will probably need reversible-jump MCMC (RJMCMC), which Stan, Turing, etc. don’t offer, as far as I know.
Gen, on the other hand, offers ways to do this and has a nice tutorial, which might help you figure out whether this is what you’re looking for.
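To illustrate why the varying population size is the sticking point for Stan-style samplers: in your model the number of agents is itself unknown, so the dimension of the parameter space changes between samples. Here is a deliberately simplified toy in Python where the unknown is just the agent count, explored with ±1 birth/death proposals. This is only the discrete skeleton; real RJMCMC additionally has to handle per-agent continuous parameters that appear and disappear with each agent. The model and all numbers are made up:

```python
import math, random

def log_poisson(k, lam):
    """Log prior over the agent count: K ~ Poisson(lam)."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def log_binom(y, n, p):
    """Log likelihood: each of n agents is observed with probability p."""
    if y > n:
        return -math.inf
    return (math.lgamma(n + 1) - math.lgamma(y + 1) - math.lgamma(n - y + 1)
            + y * math.log(p) + (n - y) * math.log(1 - p))

def birth_death_mcmc(y, lam=10.0, p=0.5, capacity=30, iters=20000, seed=0):
    rng = random.Random(seed)
    k = y                                   # start at a feasible count
    log_post = lambda k: log_poisson(k, lam) + log_binom(y, k, p)
    samples = []
    for _ in range(iters):
        # symmetric +-1 "birth"/"death" proposal on the agent count
        k_new = k + (1 if rng.random() < 0.5 else -1)
        if 0 <= k_new <= capacity:          # capacity as a hard prior bound
            if rng.random() < math.exp(min(0.0, log_post(k_new) - log_post(k))):
                k = k_new
        samples.append(k)
    return samples

samples = birth_death_mcmc(y=6)
print(sum(samples) / len(samples))   # posterior mean of the agent count
```

For this particular toy, the posterior of `K - y` works out to be exactly Poisson(5), so the chain should concentrate around a mean count of about 11. Stan cannot express this because `K` is discrete and the model’s dimension depends on it; Gen’s programmable inference lets you write exactly this kind of custom move.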
I am still in exploratory mode, so I don’t really know yet; in principle, everything.
Your question is usefully challenging, though, so thanks.
Not directly using PPLs, but on pp. 14 ff. of the paper “Being Bayesian in the 2020s: opportunities and challenges in the practice of modern applied Bayesian statistics” (https://royalsocietypublishing.org/doi/10.1098/rsta.2022.0156), they calibrate an agent-based model (ABM) using Approximate Bayesian Computation as a worked example.
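The rejection-ABC idea itself fits in a few lines: simulate the ABM at parameter values drawn from the prior and keep the draws whose simulated summary statistic lands close to the observed one. A toy Python sketch, with a made-up one-parameter birth/death simulator standing in for the real ABM:

```python
import random

def simulate(birth_p, capacity=50, steps=300, seed=None):
    """Toy ABM: one birth per step with prob birth_p (if below capacity),
    each agent dies with prob 0.05 per step. Returns a summary statistic:
    the mean population over the last 100 steps."""
    rng = random.Random(seed)
    n, tail = 5, []
    for t in range(steps):
        n -= sum(1 for _ in range(n) if rng.random() < 0.05)  # deaths
        if n < capacity and rng.random() < birth_p:           # capacity-limited birth
            n += 1
        if t >= steps - 100:
            tail.append(n)
    return sum(tail) / len(tail)

# "observed" data, generated at a known birth probability of 0.6
observed = simulate(0.6, seed=1)

# ABC rejection: draw birth_p from a Uniform(0, 1) prior, keep it if the
# simulated summary is within a tolerance of the observed one
rng = random.Random(2)
accepted = []
for i in range(5000):
    theta = rng.random()
    if abs(simulate(theta, seed=i) - observed) <= 2.0:
        accepted.append(theta)

print(len(accepted), sum(accepted) / len(accepted))
```

The accepted draws approximate the posterior over `birth_p`; the tolerance and the choice of summary statistic control how good the approximation is, which is the main practical difficulty the paper discusses.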
As for whether or not you need reversible jump, and how to deal with dimensionality changes in Turing, you might want to take a look at Failure when dimension changes · Issue #1115 · TuringLang/Turing.jl · GitHub and Models with dynamic dimensionality · Issue #434 · TuringLang/DynamicPPL.jl · GitHub.
Another approach is to use an invertible neural network to learn the likelihood function or the posterior distribution. Here is a paper: “Estimation of agent-based models using Bayesian deep learning approach of BayesFlow”.