Hi @ElOceanografo!
To give a tiny bit of background, RxInfer.jl uses a particular type of factor graph, which establishes the connections between the random variables. I will not go into the details of the computational graph behind the package, but it technically means that if I want to implement your model, I need to add additional nodes (functions) to your graph representation.
For example, you can specify the graph as follows:
```
  x
  |
(f_x)
  |
  a---(f_a)---b
  |           |
(f_a)         |
  |           |
  c--------(f_cb)--d--(f_d)--y
```
The difference from your specification is that I explicitly show the functional dependencies between your variables. The graph I drew is still incomplete, but I hope it helps you understand what is missing from the RxInfer.jl perspective when implementing your graph.
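To make the diagram concrete, my reading of the joint distribution it encodes (with unit variances everywhere, matching the model below) is

p(x, a, b, c, d, y) = N(x | x_0, 1.0) * N(a | x, 1.0) * N(b | a, 1.0) * N(c | a, 1.0) * N(d | c + b, 1.0) * N(y | d, 1.0)

where x_0 plays the role of a prior mean for x.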
Now, depending on the functional form of the f's (let's say unit-variance Gaussians, i.e., f_i(out, in) = N(out | in, 1.0) for i in {x, a, d}, and f_cb = N(d | c + b, 1.0)), you can write the model and inference in RxInfer.jl as:
```julia
using RxInfer

@model function loopy_model()
    y   = datavar(Float64)
    x_0 = datavar(Float64)

    # variances are chosen arbitrarily
    x ~ Normal(mean = x_0, var = 1.0)
    a ~ Normal(mean = x, var = 1.0)     # f_x
    b ~ Normal(mean = a, var = 1.0)     # f_a
    c ~ Normal(mean = a, var = 1.0)     # f_a
    d ~ Normal(mean = c + b, var = 1.0) # f_cb
    y ~ Normal(mean = d, var = 1.0)     # f_d
end

result = inference(
    model        = loopy_model(),
    data         = (y = 1.0, x_0 = 1.0),
    initmessages = (
        a = vague(NormalMeanPrecision),
    ),
    iterations   = 50,
    free_energy  = true
)
```
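Once that runs, you can pull the marginals and the free energy out of the result object, for example (a minimal sketch from memory, so double-check the field names against your RxInfer.jl version):

```julia
# result.posteriors holds the per-iteration marginals of each latent variable,
# and result.free_energy the Bethe free energy per iteration (since free_energy = true)
posterior_d = last(result.posteriors[:d])    # marginal of d after the final iteration
println(mean(posterior_d), " ", var(posterior_d))
println(last(result.free_energy))            # free energy after the final iteration
```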
The @model macro lets you specify the model in a probabilistic way; RxInfer.jl handles the graphical equivalent of the model under the hood for fast computations. Note that the factor graph above contains a loop (a feeds into both b and c, which meet again at f_cb), which is why the inference call initialises the message on a via initmessages and runs several iterations of (loopy) message passing.