`k` is a random variable, `k ~ Normal(μ, σ)`, where `μ` and `σ` have priors of their own. I want to use a non-centered parameterization of the prior, something like this:

```julia
k_ ~ Normal(0, 1)
k = k_ * σ + μ
```

Since the second line doesn't use `~`, Turing doesn't record `k`; it only records `k_`. Is there a way to do this in Turing?
We currently don’t have a good way to record the transformed variable, but you can apply the transformation after the inference. We will likely add a mechanism to track arbitrary variables soon.
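For the time being, a sketch of applying the transformation after sampling (assuming the model above, so the chain contains `k_`, `μ`, and `σ` — adjust the symbols to your model):

```julia
using Turing

# ... after sampling, e.g.:
# chain = sample(model, NUTS(), 1_000)

# Recover k post hoc by applying the same deterministic
# transformation to the sampled quantities:
k_samples = chain[:k_] .* chain[:σ] .+ chain[:μ]
```

The broadcasted expression mirrors `k = k_ * σ + μ` from the model, applied elementwise across the draws.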
Can you please tell me whether the `logpdf` of these distributions is implemented in a centered or a non-centered way? If it is implemented in a centered way, I will try to implement a new distribution with a non-centered parameterization.
What happens when I run `x ~ Normal()` in Turing? I think it would internally call `logpdf(Normal(), x)`.
Yes, that is more or less what happens if you run HMC or any other gradient-based inference algorithm. What exactly happens depends on how the respective sampler overloads the function call, but for HMC this is simply the evaluation of the `logpdf`.
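For concreteness, the density evaluation itself is just the Distributions.jl `logpdf`:

```julia
using Distributions

# log-density of a standard normal evaluated at x = 0.5,
# i.e. -x^2/2 - log(sqrt(2π)):
logpdf(Normal(0, 1), 0.5)  # ≈ -1.0439
```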
We currently don’t do any automatic reparameterisation of models; this is also quite a difficult problem in general. However, as in Stan, you can simply write the model using either a centered or a non-centered parameterisation. The only difficulty is that we currently don’t have a good way to retrieve transformed variables after sampling, but this will likely change soonish, as we have already discussed solutions for it.
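For illustration (a sketch, with priors on `μ` and `σ` chosen here for the example), the two parameterisations of a model like the one above could be written as:

```julia
using Turing

# Centered: k is sampled directly, so it appears in the chain.
@model function centered()
    μ ~ Normal(0, 1)
    σ ~ truncated(Normal(0, 1), 0, Inf)
    k ~ Normal(μ, σ)
end

# Non-centered: only the standardised k_ appears in the chain;
# k is a deterministic function of k_, μ, and σ.
@model function noncentered()
    μ ~ Normal(0, 1)
    σ ~ truncated(Normal(0, 1), 0, Inf)
    k_ ~ Normal(0, 1)
    k = k_ * σ + μ
    return k
end
```

Both define the same joint distribution over `(μ, σ, k)`; they differ only in which variable the sampler moves through, which can matter a lot for geometry like the funnel below.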
I was wondering whether Turing somehow doesn’t need reparameterization. I came across Neal’s funnel and implemented it in Turing, and the posterior plot shows that reparameterization is needed to explore the posterior efficiently.
Centered:
Non-centered:
I’ve used Stan’s implementation of the funnel in Turing, as shown here.