Is there a way to avoid checking whether parameters for a distribution fall within the required bounds (e.g. positive values only for the beta-binomial)? It should make sampling with Turing.jl faster, and can also be helpful when using distributions in unusual ways (extending the beta-binomial distribution to allow negative values of α and β lets you use it for underdispersed data, for instance).
Some constructors have a `check_args` keyword argument, but others will throw an error if you pass it, so you need to handle each case separately.
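For instance, with a recent Distributions.jl it looks something like the following (a minimal sketch; whether a given constructor supports the keyword needs checking case by case):

```julia
using Distributions

# With argument checking (the default), out-of-bounds parameters throw:
# BetaBinomial(10, -0.5, 3.0)  # DomainError

# Skipping validation, for constructors that support the keyword:
d = BetaBinomial(10, -0.5, 3.0; check_args=false)

# Caveat: downstream functions (logpdf, rand, ...) may still assume valid
# parameters, so extended parameter ranges need their own care.
```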
Your suggestion is one of the things we’re focusing on for MeasureTheory.jl.
Thanks! Is it possible to use MeasureTheory.jl with Turing.jl?
tl;dr I’m not sure
The biggest difference I can see is that Distributions uses `logpdf`, while in MeasureTheory we have `logdensity`. This is because many measures aren’t probability measures, so the name `logpdf` doesn’t really fit.
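To illustrate the naming point (assuming MeasureTheory.jl’s `Lebesgue` and `ℝ` exports; exact names may vary across versions):

```julia
using MeasureTheory

# Lebesgue measure on ℝ is not a probability measure, so "logpdf" would be
# a misnomer. Its log-density with respect to itself is just zero:
logdensity(Lebesgue(ℝ), 0.5)  # 0.0
```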
Hmm, there’s also the problem of dimensionality. In Turing, the dimension of the transformed space (\mathbb{R}^n for some n) is required to be the same as the dimensionality of the parameter space of the model. That’s often not the case; for example, for the Dirichlet the dimensions are “off by one”. Turing handles this by having an extra “dummy” dimension for the Dirichlet in Bijectors.jl.
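A small demonstration of that dummy dimension, going by the behavior described above (the exact bijector type returned may differ across Bijectors.jl versions):

```julia
using Bijectors, Distributions

d = Dirichlet(ones(3))  # support: the 2-dimensional simplex embedded in ℝ³
b = bijector(d)         # a SimplexBijector

x = rand(d)             # a length-3 probability vector
y = b(x)                # the unconstrained value is also length 3:
length(y)               # 3, even though the simplex is intrinsically 2-dimensional
```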
From what I understand from talking about this with @mohamed82008 and @torfjelde, it could take a fair amount of work to get the n in \mathbb{R}^n to match the dimension of the manifold determined by the parameter space of the model.
So there’s nothing inherent in Bijectors.jl that prevents implementing a `SimplexBijector` which acts on the (n - 1)-dimensional representation, but yes, it’s currently not implemented (because atm we’re relying on Distributions.jl, and so supporting the (n - 1)-dimensional representation, which isn’t used by Distributions.jl, isn’t a high priority atm).
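For concreteness, a transform acting on the (n - 1)-dimensional representation could look like the stick-breaking construction below. This is a hypothetical sketch (the name `stickbreak` and the offset convention are borrowed from Stan’s simplex transform), not Bijectors.jl’s actual implementation:

```julia
using LogExpFunctions: logistic

# Map y ∈ ℝ^{n-1} to a point on the n-simplex via stick-breaking.
function stickbreak(y::AbstractVector)
    n = length(y) + 1
    x = similar(y, n)
    stick = one(eltype(y))
    for k in 1:(n - 1)
        z = logistic(y[k] - log(n - k))  # offset so y = 0 maps to the uniform point
        x[k] = stick * z                 # break off a fraction z of what's left
        stick -= x[k]
    end
    x[n] = stick                         # the remainder closes the simplex
    return x
end

stickbreak(zeros(2))  # ≈ [1/3, 1/3, 1/3]
```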
And no, unfortunately atm it’s not possible to use MeasureTheory.jl within Turing.jl in a seamless manner. You can of course manually increment the log-joint of the model by using the `@addlogprob!` macro inside the model, but this means that you lose some of the convenience that Turing.jl provides when working with distributions, e.g. automatic transformations for certain samplers that need to work in real space, and thus you’d have to deal with these issues manually.
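A minimal sketch of that manual route (here `my_logdensity` is just a stand-in for whatever MeasureTheory log-density you’d want to use):

```julia
using Turing

# Stand-in for a log-density computed with MeasureTheory.jl.
my_logdensity(μ, x) = -abs2(x - μ) / 2

@model function demo(x)
    μ ~ Normal(0, 1)
    # Manually increment the log-joint. Anything added this way bypasses
    # Turing's automatic transformations, so constrained supports would
    # have to be handled by hand.
    Turing.@addlogprob! my_logdensity(μ, x)
end

chain = sample(demo(0.3), NUTS(), 100)
```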
Btw, it seems like it should be possible by passing a `NoArgCheck()` as the last argument (going by “Optional checks of the constructor parameters by matbesancon · Pull Request #942 · JuliaStats/Distributions.jl · GitHub”).
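Going by that PR’s description, the usage would be something like this (a sketch only: which constructors accept it needs checking, and later Distributions.jl versions replaced this with the `check_args` keyword mentioned above):

```julia
using Distributions

# Pass NoArgCheck() as the last positional argument to skip parameter
# validation, per PR #942 as described above.
d = Normal(0.0, -1.0, Distributions.NoArgCheck())
```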
Hmm – how difficult would it be to switch Turing.jl to use MeasureTheory.jl as its distributions backend (or to add it as another possible backend alongside Distributions.jl)? It seems to offer a lot of advantages (e.g. better support for symbolic manipulations, which I assume would be useful for conjugate priors).
Would it be possible to set `logpdf` as an alias of `logdensity` for measures that are also distributions, and to create a redundant `Dirichlet` which includes the dummy dimension? Actually, shouldn’t it be possible to just use the `Dirichlet` from Distributions.jl, given MeasureTheory exports it?
We already have `logdensity(d::Distribution, x) == logpdf(d, x)`. But we want to keep them separate, for a couple of reasons.
First, as you mention, only some measures are also probability distributions, so `logdensity` is more fitting as a general term.
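So for a Distributions.jl object the two simply agree, per the fallback just mentioned:

```julia
using MeasureTheory
import Distributions

d = Distributions.Beta(2, 3)
logdensity(d, 0.5) == Distributions.logpdf(d, 0.5)  # true
```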
Second, we want to have `logdensity(d, base, x)`, where `base` is taken as the base measure (assuming `d` is absolutely continuous relative to it).
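Sketched out, the distinction looks like this (the two-argument form follows MeasureTheory.jl’s API at the time; the three-argument form is the proposal being described, so it’s shown as a comment rather than a working call):

```julia
using MeasureTheory

d = Normal(0.0, 1.0)

# Two-argument form: log-density of `d` with respect to its default
# base measure (which for Normal need not be plain Lebesgue measure,
# so this need not equal a logpdf).
logdensity(d, 0.5)

# Proposed three-argument form: log-density of `d` relative to an
# explicit base measure, assuming absolute continuity.
# logdensity(d, Lebesgue(ℝ), 0.5)
```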
Yes, you could create a redundant `Dirichlet`. I don’t know the internals of Turing very well, but I’d guess this would be one of the easier steps in getting it to work with MeasureTheory. But I hope I’m wrong about that, since it would be great to have Turing easily set up to use it.