Using MeasureTheory.jl with Distributions.jl

I’ve heard there may be some perception that to use MeasureTheory, you need to drop Distributions.jl entirely. That’s not true!

Calling Distributions from MeasureTheory

MeasureTheory actually has Distributions as a dependency, and exports it as Dists. So this works just fine:

julia> using MeasureTheory

julia> m = Normal(2,3)
Normal(μ = 2, σ = 3)

julia> d = Dists.Normal(2,3)
Distributions.Normal{Float64}(μ=2.0, σ=3.0)

We generally prefer logdensity over logpdf, because probability densities are a very special case. And we generally leave off the normalization constant. Well, “leave off” isn’t quite right; really, we change the base measure to make the computation more efficient.
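To make that concrete, here’s a pure-Julia sketch of the idea for a normal distribution. The split into an “unnormalized” term and an absorbed constant is just illustrative arithmetic, not MeasureTheory’s actual internals:

```julia
# For Normal(μ, σ), the full Lebesgue log-density splits into a data-dependent
# term and a constant. Changing the base measure absorbs the constant, so only
# the cheap data-dependent term needs to be computed.
μ, σ, x = 2.0, 3.0, 0.0

unnormalized = -((x - μ) / σ)^2 / 2     # log-density w.r.t. the rescaled base measure
constant     = -log(σ) - log(2π) / 2    # the part "left off" (absorbed into the base)

full_logpdf = unnormalized + constant   # log-density w.r.t. Lebesgue(ℝ)
```

The data-dependent term is all you need for things like MCMC, where densities only matter up to a constant factor.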

Currently, logdensity of a Distribution just calls logpdf:

julia> logdensity(d, 0)

julia> Dists.logpdf(d, 0)

and vice versa:

julia> logdensity(m, 0)

julia> Dists.logpdf(m, 0)

This last one is really not correct; we should instead do

julia> logdensity(m, Lebesgue(ℝ), 0)

I mean, it’s correct as a log-density, but not as the logpdf a user might expect to be compatible with Distributions.jl.

That’s because the density of m with respect to Lebesgue(ℝ) integrates to one. But there’s not always a base measure that makes this true. And even when we do know such a base measure, bringing the constant back in will always slow down the computation. Anyway, please let me know if you have ideas for a better way to set up the interface between the two packages.
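To illustrate the point that there’s not always a normalizing base measure, here’s a toy numerical check (the trapz helper is ad hoc, written just for this sketch):

```julia
# Composite trapezoid rule, just for this illustration.
function trapz(f, a, b; n = 10_000)
    h = (b - a) / n
    h * (sum(f(a + i * h) for i in 1:n-1) + (f(a) + f(b)) / 2)
end

normal_pdf(x; μ = 2.0, σ = 3.0) = exp(-((x - μ) / σ)^2 / 2) / (σ * sqrt(2π))

# A normal density integrates to one, so Lebesgue measure normalizes it:
trapz(normal_pdf, -40, 40)   # ≈ 1

# A flat density exp(0) = 1 has no normalizing constant: its integral just
# keeps growing with the interval, so no base measure makes it a probability.
trapz(x -> 1.0, -40, 40)     # ≈ 80, and unbounded as the interval grows
```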

Calling MeasureTheory from Distributions

Of course we can also go the other way:

julia> using Distributions

julia> import MeasureTheory

julia> m = MeasureTheory.Normal(2,3)
Normal(μ = 2, σ = 3)

julia> d = Normal(2,3)
Normal{Float64}(μ=2.0, σ=3.0)

julia> MeasureTheory.logdensity(m, 0)

julia> logpdf(m, 0)

julia> MeasureTheory.logdensity(d, 0)

julia> logpdf(d, 0)

julia> MeasureTheory.logdensity(m, MeasureTheory.Lebesgue(MeasureTheory.ℝ), 0)
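To summarize the two-way fallback pattern in a self-contained way, here is a toy model. MyNormal and these method definitions are hypothetical, written only to model the pattern, not the actual internals of either package:

```julia
# Toy sketch of the interop pattern discussed above.
abstract type AbstractMeasure end

struct MyNormal <: AbstractMeasure
    μ::Float64
    σ::Float64
end

# Log-density relative to the default base measure: the normalization
# constant is absorbed into the base measure, so it never appears here.
logdensity(m::MyNormal, x) = -((x - m.μ) / m.σ)^2 / 2

# A Distributions-style logpdf that "brings the constant back in",
# giving the log-density with respect to Lebesgue measure.
logpdf(m::MyNormal, x) = logdensity(m, x) - log(m.σ) - log(2π) / 2
```

In this toy version, logdensity(MyNormal(2, 3), 0) gives the cheap unnormalized value, while logpdf(MyNormal(2, 3), 0) matches what Distributions.jl users would expect from Normal(2, 3).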