Hi all,
I’ve been thrown into the deep end with some stats, and don’t know enough about the Julia ecosystem around my problem to know if I’m doing this right.
For the moment, here is a minimal example of what I’m doing: an initial ‘belief’ with some certainty, modeled by a truncated normal distribution, and a ‘truth’ with some certainty, again modeled by a truncated normal.
I am stepping through time and updating the belief based on an exposure to the truth.
```julia
using Distributions

dx = 0.01
x = -1.0:dx:1.0

# 'Truth' distribution on [-1, 1]
trueseek = truncated(Normal(0.6, 0.4), -1.0, 1.0)
flfin = pdf.(trueseek, x)
flfin ./= sum(flfin) * dx # Normalise on the grid

# Initial belief on the same interval
belief0 = truncated(Normal(-0.698, 0.194), -1.0, 1.0)
belief = pdf.(belief0, x)
belief ./= sum(belief) * dx # Normalise on the grid

updated_mu = Float64[]
updated_sigma = Float64[]
for i in 1:1000
    belief .*= flfin # Bayes' rule: posterior ∝ prior × likelihood
    belief ./= sum(belief) * dx # Normalise
    push!(updated_mu, sum(dx .* x .* belief)) # Recover mu
    push!(updated_sigma, sqrt(sum(dx .* (x .- updated_mu[end]).^2 .* belief))) # Recover sigma
end
```
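For reference, here is the same loop wrapped in a function with preallocated outputs (a sketch; all names here are my own, nothing comes from a package). My understanding is that this avoids the untyped-global penalty, but each step still pays for the whole grid:

```julia
using Distributions

# Sketch: identical discrete update, as a function with preallocation.
function update_beliefs(belief0, truth, x, dx, nsteps)
    flfin = pdf.(truth, x)
    flfin ./= sum(flfin) * dx
    belief = pdf.(belief0, x)
    belief ./= sum(belief) * dx

    mus = Vector{Float64}(undef, nsteps)
    sigmas = Vector{Float64}(undef, nsteps)
    for i in 1:nsteps
        belief .*= flfin                # Bayes' rule
        belief ./= sum(belief) * dx     # normalise
        mus[i] = sum(dx .* x .* belief) # recover mu
        sigmas[i] = sqrt(sum(dx .* (x .- mus[i]).^2 .* belief)) # recover sigma
    end
    return mus, sigmas
end

mus, sigmas = update_beliefs(belief0, trueseek, x, dx, 1000)
```

Even so, the cost still scales with the grid size.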
I’m only interested in the \mu and \sigma of the belief over time. This implementation works (as far as I can tell), but it’s slow when I need to compare a few thousand beliefs to the ‘truth’ distribution.
I’m wondering if there is a way I can recover the updated \mu and \sigma values of `belief` without having to generate the pdfs over some (arbitrarily chosen) discrete `x`?
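To make that concrete, what I have in mind is something like the following (purely hypothetical; `closed_form_update` is a name I made up to illustrate the shape of what I’m after):

```julia
# Hypothetical: take the belief's current (mu, sigma) and the truth's
# (mu, sigma), return the updated (mu, sigma) directly, no grid involved.
mu_new, sigma_new = closed_form_update(mu, sigma, mu_truth, sigma_truth)
```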
I’ve found this snippet concerning the product of two Gaussian pdfs, but I don’t think that translates to the truncated case.
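For reference, my reading of that snippet is the standard result that the product of two untruncated Gaussian pdfs is proportional to another Gaussian pdf, with 1/\sigma_p^2 = 1/\sigma_1^2 + 1/\sigma_2^2 and \mu_p = \sigma_p^2 (\mu_1/\sigma_1^2 + \mu_2/\sigma_2^2). As a sketch in code (my own naming):

```julia
# Product of two untruncated Gaussian pdfs:
# N(mu1, sigma1) * N(mu2, sigma2) ∝ N(mu_p, sigma_p).
# Standard result; whether it carries over to the truncated case
# is exactly what I'm unsure about.
function gaussian_product(mu1, sigma1, mu2, sigma2)
    var_p = 1 / (1 / sigma1^2 + 1 / sigma2^2)        # precisions add
    mu_p = var_p * (mu1 / sigma1^2 + mu2 / sigma2^2)
    return mu_p, sqrt(var_p)
end
```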
I expect that something like this has already been investigated by those in the Julia community who are far more competent than I am when it comes to Bayesian statistics. Essentially, all I’m looking for is a faster way to solve this than what I’m doing now.