using Turing, StatsPlots

function f(x)
    x^2
end

@model function simplemodel()
    x ~ Normal(5, 1)
    y := f(x)   # deterministic transformation, recorded with :=
    return y
end

sample(simplemodel(), NUTS(), 50) |> histogram
As you can see, the model involves the deterministic function f. Now let's say I have an observation of y = f(x), such as y = 3.
What I am trying to do is to constrain the model on that observation, like this: simplemodel() | (y = 3.0,).
However, the results are not what I would have expected; in fact, the observation does not seem to change the model at all. I would have expected results similar to setting simplemodel() | (x = sqrt(3.0),), after which there is only one value for y. Instead, nothing happens.
Follow-up question: what if my observation of y is not a single value but a value with an uncertainty, like y = 3 ± 0.1, i.e. y ~ Normal(3, 0.1)?
I don’t know the details of the DynamicPPL/Turing implementation, but it appears that the probabilistic model can’t reason backwards from deterministically derived quantities. (Which is not generally something you want to do anyway.)
If you replace y := f(x) with y ~ Normal(f(x), 0.1), it will behave how you want it to, as it’s now a standard Bayesian problem with a prior and likelihood. Making the observation error very small (e.g. 1e-6) will give you a good approximation of your first, deterministic, situation.
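To make the suggestion concrete, here is a minimal sketch of that change (the model name noisymodel and the 0.1 noise scale are just illustrative choices, not anything prescribed by Turing):

```julia
using Turing, StatsPlots

f(x) = x^2

# The observation y now has a likelihood centred on f(x), so
# conditioning on y = 3.0 actually informs the posterior over x.
@model function noisymodel()
    x ~ Normal(5, 1)
    y ~ Normal(f(x), 0.1)   # observation noise; shrink toward 0 to approximate the deterministic case
end

chain = sample(noisymodel() | (y = 3.0,), NUTS(), 1000)
histogram(chain)
```

With this formulation the posterior for x should concentrate near ±sqrt(3), to the extent the Normal(5, 1) prior allows.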
I also tried that, and I think it's a good idea. For that application there is also the Dirac distribution, if I want to say that y is precisely known.
However, the sampling does not seem to work at all for small standard deviations. It seems to obtain only a single initial sample, and then the Markov chain does not move away from it:
using Turing, Plots, StatsPlots

nevals = 0

function f(x)
    global nevals += 1   # count evaluations of f
    x^2
end

@model function simplemodel()
    x ~ Normal(5, 1)
    y ~ Normal(f(x), 0.001)   # very small observation noise
    # y ~ Dirac(f(x))
    # return y
end

chain = sample(simplemodel(), MH(), 1000)
histogram(chain)
plot(chain)
@show nevals
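Not a full diagnosis, but with a likelihood this narrow, a Metropolis–Hastings sampler with a broad proposal will reject essentially every move, since almost no proposed x lands within a few multiples of σ = 0.001 of the observation, so the chain just repeats its starting value. A gradient-based sampler such as NUTS, which adapts its step size during warm-up, tends to cope better with such sharply peaked posteriors. A minimal sketch (conditioning on y = 3.0, as in the earlier post):

```julia
using Turing, StatsPlots

f(x) = x^2

@model function simplemodel()
    x ~ Normal(5, 1)
    y ~ Normal(f(x), 0.001)   # near-deterministic observation
end

# NUTS adapts its step size during warm-up, so it can still explore
# the very narrow posterior that defeats an untuned MH random walk.
chain = sample(simplemodel() | (y = 3.0,), NUTS(), 1000)
plot(chain)
```

Whether MH can be rescued here would depend on tuning its proposal to the posterior scale; the adaptive sampler sidesteps that by hand-tuning nothing.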