Bayes_rule_discrete.ipynb demo in ForneyLab


I am relatively new to ForneyLab and am trying it again after a year-long break, using Julia 1.7 and ForneyLab 0.11.4. While most of the demos provided run fine, the simplest demo of all, Bayes_rule_discrete.ipynb, generates an error when run. More specifically, when executing

using ForneyLab

# Build the generative model
g = FactorGraph()

b = [0.7, 0.3] # Prior probability vector
A = [0.2 0.9; 0.8 0.1] # Left-stochastic matrix for conditional probability

@RV m ~ Categorical(b) # Prior
@RV w ~ Transition(m, A) # Observation model

placeholder(w, :w, dims=(2,)); # Placeholder for observation

algo = messagePassingAlgorithm(m) # Build the algorithm code
source_code = algorithmSourceCode(algo)
eval(Meta.parse(source_code)) # Parse and load the algorithm in scope


the following error is generated on eval(Meta.parse(source_code)):

error in method definition: function ForneyLab.step! must be explicitly imported to be extended

 [1] top-level scope
   @ none:0
 [2] top-level scope
   @ none:22
 [3] eval
   @ ./boot.jl:373 [inlined]
 [4] eval(x::Expr)
   @ Base.MainInclude ./client.jl:453
 [5] top-level scope
   @ In[35]:3
 [6] eval
   @ ./boot.jl:373 [inlined]
 [7] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
   @ Base ./loading.jl:1196
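For context, this error class can be reproduced in plain Julia, independent of ForneyLab (a sketch with a made-up module `Demo`, not ForneyLab's actual code): once a name exported by a module has been resolved in the current module via `using`, a bare method definition for that name fails unless the name is explicitly imported first.

```julia
# Minimal reproduction of this error class in plain Julia (no ForneyLab):
module Demo
export step!
step!() = :ok
end

using .Demo
step!()  # referencing the exported name resolves it into the current module

# A bare redefinition would now fail with the same kind of error:
#   step!(x) = x
#   # ERROR: error in method definition: function Demo.step! must be
#   # explicitly imported to be extended

# Explicitly importing the name first makes the extension legal:
import .Demo: step!
step!(x) = x
```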

I searched the internet and the issues reported on the ForneyLab GitHub repository, but did not find anything. Here is the source code generated via println(source_code):


# You have created an algorithm that requires updates for (a) clamped parameter(s).
# This algorithm requires the definition of a custom `optimize!` function that updates the parameter value(s)
# by altering the `data` dictionary in-place. The custom `optimize!` function may be based on the mockup below:

# function optimize!(data::Dict, marginals::Dict=Dict(), messages::Vector{Message}=init())
# 	...
# 	return data
# end

function init()

messages = Array{Message}(undef, 3)

messages[2] = Message(vague(Categorical, (2,)))

return messages

end


function step!(data::Dict, marginals::Dict=Dict(), messages::Vector{Message}=Array{Message}(undef, 3))

messages[1] = ruleSPCategoricalOutNP(nothing, Message(Multivariate, PointMass, m=[0.7, 0.3]))
messages[2] = ruleSPTransitionOutNCP(nothing, messages[1], Message(MatrixVariate, PointMass, m=[0.2 0.9; 0.8 0.1]))
messages[3] = ruleSPTransitionIn1PNP(Message(Multivariate, PointMass, m=data[:w]), nothing, Message(MatrixVariate, PointMass, m=[0.2 0.9; 0.8 0.1]))

marginals[:m] = messages[1].dist * messages[3].dist

return marginals


end # block
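For what it is worth, the posterior that this two-node model encodes can be checked by hand in plain Julia (a sketch independent of ForneyLab, using the same b and A as above): for a one-hot observation w, Bayes' rule gives p(m | w) ∝ b .* (A' * w).

```julia
# Exact Bayes rule for the discrete model above (no ForneyLab needed):
# prior b over m, left-stochastic A with p(w = j | m = i) = A[j, i].
b = [0.7, 0.3]
A = [0.2 0.9; 0.8 0.1]

w = [1.0, 0.0]  # one-hot observation: w is in state 1

unnormalized = b .* (A' * w)  # p(m = i) * p(w | m = i) for each i
posterior = unnormalized ./ sum(unnormalized)
```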

I assume I am doing something incorrectly. Note, however, that most of the demos use continuous distributions, as opposed to this one. Any insight would be greatly appreciated!


Here is what I get on Windows with Julia 1.7.2


function step!(data::Dict, marginals::Dict=Dict(), messages::Vector{Message}=Array{Message}(undef, 2))

messages[1] = ruleSPCategoricalOutNP(nothing, Message(Multivariate, PointMass, m=[0.7, 0.3]))
messages[2] = ruleSPTransitionIn1PNP(Message(Multivariate, PointMass, m=data[:w]), nothing, Message(MatrixVariate, PointMass, m=[0.2 0.9; 0.8 0.1]))

marginals[:m] = messages[1].dist * messages[2].dist

return marginals


end # block

So I’d assume an ‘environmental’ problem on your side (i.e., dependent on the Julia or package version)?

@goerch: I agree with you. I am using Julia 1.7.2 on a MacBook Air (M1); that might be the reason for the error. The error occurred in eval(). Here is the status of my packages:

      Status `~/.julia/environments/v1.7/Project.toml`
  [c52e3926] Atom v0.12.36
  [31c24e10] Distributions v0.25.53
  [587475ba] Flux v0.13.0
  [9fc3f58a] ForneyLab v0.11.4
  [f526b714] GraphViz v0.2.0
  [7073ff75] IJulia v1.23.3
  [e5e0dc1b] Juno v0.8.4
  [b964fa9f] LaTeXStrings v1.3.0
  [91a5bcdd] Plots v1.27.5
  [d330b81b] PyPlot v2.10.0
  [276daf66] SpecialFunctions v2.1.4

I also executed the example in “Getting Started” in the documentation, which is discrete, and it ran perfectly fine. Not only that, but the generated source code was identical, except for the number of edges in the graph. Here is the code that worked:

using ForneyLab

N = 25          # number of coin tosses
p = 0.75        # p parameter of the Bernoulli distribution
sbernoulli(n, p) = [(rand() < p) ? 1 : 0 for _ = 1:n] # define Bernoulli sampler
dataset = sbernoulli(N, p); # run N Bernoulli trials
print("dataset = ") ; show(dataset)

g = FactorGraph()       # create a factor graph
a = placeholder(:a)     # define hyperparameter a as placeholder
b = placeholder(:b)     # define hyperparameter b as placeholder
@RV θ ~ Beta(a, b)      # prior
@RV y ~ Bernoulli(θ)    # likelihood
placeholder(y, :y)      # define y as a placeholder for data

# Generate a message passing sum-product algorithm that infers theta
algo = messagePassingAlgorithm(θ) # derive a sum-product algorithm to infer θ
algo_code = algorithmSourceCode(algo) # convert the algorithm to Julia code
algo_expr = Meta.parse(algo_code) # parse the algorithm into a Julia expression
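As a sanity check, the inference this algorithm performs is the conjugate Beta–Bernoulli update, which can be verified by hand (a sketch independent of ForneyLab; the helper name is made up for illustration):

```julia
# Conjugate Beta-Bernoulli update: with prior Beta(a, b) and n Bernoulli
# observations y ∈ {0, 1}, the posterior is Beta(a + Σy, b + n − Σy).
function beta_bernoulli_posterior(a::Real, b::Real, dataset::Vector{Int})
    s = sum(dataset)      # number of successes
    n = length(dataset)   # number of trials
    return (a + s, b + n - s)  # posterior (α, β) parameters
end

# Example: uniform Beta(1, 1) prior, 25 tosses of which 18 came up heads
dataset = vcat(ones(Int, 18), zeros(Int, 7))
α, β = beta_bernoulli_posterior(1.0, 1.0, dataset)
```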

I am answering my own question: I rebooted my Mac and everything works now. Some strange state must have accumulated during my session, messing things up.