Hi @svilupp!
If you use `NormalMeanVariance` as your likelihood function, you'd need to assign an `InverseGamma` prior to the variance. At the moment, that rule is just not implemented in ReactiveMP.jl. We are working on it, as well as on other cool things, so keep an eye on the package! In theory, using `NormalMeanPrecision` as the likelihood function shouldn't change the posterior results.
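For intuition on why `InverseGamma` is the natural prior here: it is conjugate to the variance of a Gaussian with known mean, so the posterior stays in the same family. A minimal sketch of that conjugate update in plain Python (the package itself is Julia; the function name here is illustrative, not ReactiveMP.jl API):

```python
import numpy as np

def inverse_gamma_posterior(x, mu, a, b):
    """Conjugate update for x_i ~ Normal(mu, v) with known mu
    and prior v ~ InverseGamma(a, b).
    Posterior: InverseGamma(a + n/2, b + 0.5 * sum((x_i - mu)^2))."""
    x = np.asarray(x)
    a_post = a + len(x) / 2
    b_post = b + 0.5 * np.sum((x - mu) ** 2)
    return a_post, b_post

# Two observations around mu = 2.0, prior InverseGamma(2, 1):
a_post, b_post = inverse_gamma_posterior([1.0, 3.0], mu=2.0, a=2.0, b=1.0)
# a_post = 2 + 2/2 = 3.0; b_post = 1 + 0.5 * (1 + 1) = 2.0
```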
So, the error you see suggests that no variational rule is implemented for `NormalMeanVariance`. It has nothing to do with `q_μ` being `NormalWeightedMeanPrecision`; ReactiveMP.jl doesn't care about the parametrization of a Gaussian, as it groups `NormalMeanVariance`, `NormalMeanPrecision`, and `NormalWeightedMeanPrecision` under the umbrella type `UnivariateNormalDistributionsFamily`.
`NormalWeightedMeanPrecision` is preferred in ReactiveMP.jl for computational reasons; it's simply faster.
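All three parametrizations describe the same Gaussian, and converting between them is just arithmetic. A quick illustration in plain Python (not the ReactiveMP.jl API):

```python
# Mean-variance parametrization (NormalMeanVariance): (mu, v)
mu, v = 1.5, 4.0

# Precision parametrization (NormalMeanPrecision): (mu, w) with w = 1/v
w = 1.0 / v

# Weighted-mean parametrization (NormalWeightedMeanPrecision): (xi, w)
# with xi = mu * w; this is the natural-parameter form, which makes
# multiplying Gaussian messages a simple addition of (xi, w) pairs.
xi = mu * w

# The original mean and variance are recovered exactly:
assert xi / w == mu and 1.0 / w == v
```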
We recognize that the current stacktrace is not very informative; we will improve it soon.
As for literature, Loeliger et al. (2007) is a good start, as is Sascha Korl's thesis (2005). For a theoretical analysis of these graphs and message-passing algorithms, I would refer you to Senoz et al. For ReactiveMP.jl itself, you can have a look at Bagaev et al. (2021), but watch out: the inference API in that paper is a bit outdated.
P.S. If you don't change the model but tweak only the `initmarginals`, `initmessages`, and `priors`, you can use the free energy score as a proxy for model comparison:

```julia
inference(blablabla,
    free_energy = true)
```