Hi @EvoArt!
Thanks for trying out ReactiveMP.jl.
You are trying to impose a prior on the diagonal elements of the covariance matrix of your likelihood function.
We’ve just created a PR that will support this setup (although you’d need to use a precision parameterization, i.e. an `MvNormalMeanPrecision` likelihood instead of `MvNormalMeanCovariance`).
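Just for illustration, here is a rough sketch of what that model might look like once the PR lands. This is not working code on the current release: the elementwise `Gamma` priors on the diagonal of the precision and the `diagm` composition over random variables are assumptions about what the PR will allow, and the exact syntax may differ.

```julia
# Hypothetical sketch only -- relies on the (not yet merged) PR.
# The elementwise precision priors and `diagm` over random variables
# are assumptions, not current API.
@model [ default_factorisation = MeanField() ] function linear_regression_precision(n, m)
    a ~ MvNormalMeanCovariance(zeros(m), diagm(ones(m)))
    b ~ NormalMeanVariance(0.0, 1.0)
    w = randomvar(n)                    # one precision per output dimension
    for i in 1:n
        w[i] ~ GammaShapeRate(1.0, 1.0) # assumed prior on each diagonal precision element
    end
    c ~ ones(n) * b
    x = datavar(Matrix{Float64})
    y = datavar(Vector{Float64})
    z ~ x * a + c
    y ~ MvNormalMeanPrecision(z, diagm(w)) # precision parameterization instead of covariance
end
```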
To run your model with the current master branch of ReactiveMP.jl, you can impose an `InverseWishart` distribution on the covariance matrix of your likelihood function. I understand you don’t want additional correlations in your observations, so this solution does not match your intentions 100%:
```julia
using GraphPPL, Rocket, ReactiveMP, LinearAlgebra

n = 250 # number of observations
m = 100 # number of regression coefficients

@model [ default_factorisation = MeanField() ] function linear_regression(n, m)
    a ~ MvNormalMeanCovariance(zeros(m), diagm(ones(m))) # prior on the regression weights
    b ~ NormalMeanVariance(0.0, 1.0)                     # prior on the intercept
    W ~ InverseWishart(n + 2, diageye(n))                # prior on the noise covariance
    c ~ ones(n) * b                                      # broadcast the intercept across observations
    x = datavar(Matrix{Float64})
    y = datavar(Vector{Float64})
    z ~ x * a + c
    y ~ MvNormalMeanCovariance(z, W)
end
```
```julia
results = inference(
    model         = Model(linear_regression, n, m),
    data          = (y = randn(n), x = randn(n, m)), # dummy data, replace with your own
    initmarginals = (W = InverseWishart(n + 2, diageye(n)), ),
    returnvars    = (a = KeepLast(), b = KeepLast(), W = KeepLast()),
    free_energy   = true,
    iterations    = 10
);
```
```julia
using Plots

plot(results.free_energy) # check convergence
```
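Once the free energy has flattened out, you can inspect the posterior marginals stored in `results.posteriors`, for example (assuming the model above):

```julia
# Posterior summaries; KeepLast() stores only the final marginal of each variable
println(mean(results.posteriors[:b]))           # posterior mean of the intercept
println(mean(results.posteriors[:a])[1:5])      # first few regression weights
println(mean(results.posteriors[:W])[1:3, 1:3]) # corner of the estimated noise covariance
```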
Note that you need to initialize the marginal for `W` (`initmarginals = (W = InverseWishart(n+2, diageye(n)), )`) because part of your computational graph will perform variational message passing (VMP). If you have further questions, we will be happy to help!