I have a Turing model where I need (in part) to do something like this:
```julia
using Turing

@model test() = begin
    x = ones(10)                 # intended to be a fixed vector
    sigma ~ InverseGamma(1, 1)
    x ~ Normal(0, sigma)
end

m1 = sample(test(), NUTS(0.65), 1000)
```
I was a bit surprised that, in the model above, x is not held fixed within the model but instead ends up being estimated as a parameter.
Basically, in my “real” model, I have a quantity x that is derived directly from other (estimated) parameters in the model. x itself should not be estimated.
However, because the derived quantity has a prior that varies with, and both impacts and is impacted by, other parts of the model, the overall model likelihood still needs to take that prior into account, so it’s not appropriate to simply leave the prior off the derived quantity.
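To make the structure concrete, here is a stripped-down sketch of what the “real” model looks like (the names and distributions here are made up purely for illustration):

```julia
using Turing

@model toy() = begin
    # a and b are the parameters I actually want estimated
    a ~ Normal(0, 1)
    b ~ Normal(0, 1)

    # x is computed deterministically from a and b; it should not be sampled,
    # but I still want a density term like logpdf(Normal(0, 1), x)
    # to contribute to the model's overall log probability
    x = a + b^2
end
```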
Searching didn’t turn up anything useful about how to do this, or even whether it’s possible. Do any Turing users have an idea how to handle this sort of thing? Would writing a custom function to increment the log-likelihood directly work? Is there an alternative syntax for flagging variables as “fixed”?
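By “increment the log-likelihood directly” I mean something along these lines, assuming a macro like @addlogprob! (which I gather comes from Turing/DynamicPPL) is the right tool; the model name is made up and I haven’t verified that this is the intended usage:

```julia
using Turing, Distributions

@model test_fixed() = begin
    x = ones(10)                 # meant to stay fixed, not estimated
    sigma ~ InverseGamma(1, 1)

    # add the log density of x under Normal(0, sigma) to the joint log probability
    # without turning x into a sampled parameter
    @addlogprob! sum(logpdf.(Normal(0, sigma), x))
end

m2 = sample(test_fixed(), NUTS(0.65), 1000)
```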