Earlier this week we put out a paper describing a new method for hierarchical Bayesian inference we dubbed the Marginal Unbiased Score Expansion (MUSE), which can pretty drastically outperform HMC / VI for a wide class of high-dimensional problems. The accompanying Julia package is
```
pkg> add https://github.com/marius311/MuseInference.jl
```
It works when you have a high-dimensional latent space you want to marginalize out, which can be non-Gaussian, as long as the final constraints on the parameters you care about are fairly Gaussian (often automatically the case if the latent space is high-dimensional, thanks to the central limit theorem). It's approximate, but often seems to perform very well. It scales well with dimension because no dense operators of the dimensionality of the latent space are ever formed (unlike e.g. the Laplace approximation or full-rank VI).
It has an interface to Turing, so, following the example from the documentation, it's as easy as, e.g.:
```julia
using MuseInference, Turing

@model function funnel()
    θ ~ Normal(0, 3)
    z ~ MvNormal(zeros(512), exp(θ/2))
    x ~ MvNormal(z, 1)
end

x = (funnel() | (θ=0,))() # draw a sample of `x` to use as simulated data
model = funnel() | (;x)

muse(model, 0, get_covariance=true)
```
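If you want to hold onto the output, here's a minimal sketch; capturing the result is straightforward, but note that the `.θ` and `.Σ` field names below are my assumption about the returned object, not the definitive API:

```julia
# capture the fit instead of discarding it
result = muse(model, 0, get_covariance=true)

# assumed field names: the MUSE point estimate of θ and its covariance
# (the covariance was requested via get_covariance=true above)
result.θ, result.Σ
```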
With that, you get speedups of roughly 40X vs. HMC, and even larger gains (in both speed and accuracy) vs. mean-field VI; the paper has a more detailed benchmarking discussion.
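To try the comparison on this toy model yourself, something along these lines should work; the NUTS baseline is standard Turing API, though timings on a 512-dimensional funnel will be far less dramatic than the paper's large benchmarks:

```julia
using Statistics

# HMC baseline on the same conditioned model, via Turing's NUTS sampler
@time chain = sample(model, NUTS(), 1000)
@time muse_result = muse(model, 0, get_covariance=true)

# the HMC marginal for θ should agree well with the MUSE estimate
mean(chain[:θ]), std(chain[:θ])
```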
It should be ready to go on any existing Turing model (modulo some API caveats), and if anyone is willing or interested, I'm quite curious to hear feedback, either about the accuracy of MUSE or about the Turing interface itself, when applied to real-world or other toy problems. In the paper we use it on a cosmology problem we're working on with about 6 million dimensions, so it definitely scales.
On the roadmap is getting this into other Julia PPLs. There are also, I think, some cool extensions that higher-order AD like Diffractor will make possible, which I'm looking forward to trying. Please don't hesitate to get in touch on GitHub / privately if any of this is interesting to you!