How to do non-centered parameterization in Turing

Suppose k is a random variable, k ~ Normal(μ, σ), where μ and σ have their own priors. I want to use a non-centered parameterization of this prior, i.e. something like:

k_ ~ Normal(0, 1)
k = k_ * σ + μ

Since the assignment to k doesn’t use ~, Turing doesn’t record k; it only records k_. Is there a way to do this in Turing?

We currently don’t have a good way to record the transformed variable, but you can apply the transformation to the chain after inference. We will likely add a mechanism for tracking arbitrary variables soon.
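A minimal sketch of this workaround (the model name, priors, and likelihood are illustrative, not from the thread): sample only the standardized k_ inside the model, then reconstruct k from the stored draws of μ, σ, and k_ after sampling.

```julia
using Turing

# Non-centered parameterization: the sampler only tracks k_.
@model function noncentered(y)
    μ ~ Normal(0, 10)
    σ ~ truncated(Normal(0, 5), 0, Inf)
    k_ ~ Normal(0, 1)
    k = μ + σ * k_          # deterministic transform; not recorded in the chain
    y .~ Normal(k, 1)       # illustrative likelihood with unit noise
end

chain = sample(noncentered(randn(50)), NUTS(), 1000)

# Recover k after inference from the recorded draws:
k_draws = chain[:μ] .+ chain[:σ] .* chain[:k_]
```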


I have a few extra questions:

  1. Can you please tell me whether the logpdf of these distributions is implemented in a centered or non-centered way? If it is centered, I will try to implement a new distribution with a non-centered parameterization.

  2. What happens when I run x ~ Normal() in Turing? I think it internally calls logpdf(Normal(), x).

Yes, that is more or less what happens if you run HMC or any other gradient-based inference algorithm. What exactly happens depends on how the respective sampler overloads the function call, but for HMC it is simply an evaluation of the logpdf.
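Conceptually, each tilde statement with an observed value contributes one logpdf term to the accumulated joint log-density. A tiny illustration (using Distributions.jl directly; the value 0.5 is arbitrary):

```julia
using Distributions

x = 0.5
# This is the term that `x ~ Normal()` contributes to the joint log-density
# when x is observed:
logpdf(Normal(), x)
```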


We currently don’t do any automatic reparameterisation of models; that is quite a difficult problem in general. However, as in Stan, you can simply write the model in either a centered or a non-centered parameterisation. The only difficulty is that we currently don’t have a good way to retrieve transformed variables after sampling, but this will likely change soonish, as we have already discussed solutions.

Thank you for clarifying this.

I was wondering whether Turing really doesn’t need reparameterization. I came across Neal’s funnel and implemented it in Turing. The posterior plot shows that reparameterization is needed to explore the posterior efficiently.



I used Stan’s implementation of the funnel in Turing, as shown here.
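For reference, the two parameterisations of Neal’s funnel could be sketched in Turing like this (following the reparameterisation described in the Stan user’s guide; function and variable names are illustrative):

```julia
using Turing

# Centered funnel: the scale of x depends on y, producing the funnel
# geometry that is hard for HMC to explore.
@model function funnel_centered()
    y ~ Normal(0, 3)
    x ~ filldist(Normal(0, exp(y / 2)), 9)
end

# Non-centered funnel: sample standardized variables and transform,
# so the sampler sees independent standard normals.
@model function funnel_ncp()
    y_raw ~ Normal(0, 1)
    x_raw ~ filldist(Normal(0, 1), 9)
    y = 3.0 * y_raw              # implies y ~ Normal(0, 3)
    x = exp(y / 2) .* x_raw      # implies x[i] ~ Normal(0, exp(y/2))
    return (y = y, x = x)
end
```

Note that y and x in the non-centered model are plain transformed values, so (as discussed above) they are not recorded in the chain and must be reconstructed from y_raw and x_raw after sampling.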


Really nice. We should also add this to the Turing docs.


Can I write this?


Uh that would be awesome!
cc: @cpfiffer


Yeah, I’d be delighted to see this added!


Let us know if you need help or have any questions.


Is there a way to reparameterize MixtureModel?

For example MixtureModel([Uniform(0, 25), Normal(30,5)],[1/2, 1/2])

I found this example on the Stan discourse. I guess writing out the full log posterior is one way to do this.

Yes, you would have to increment the target value as in the Stan code.
In Turing you can do this, for example, as follows:

@model ...
  ... # priors
  lp = log( ... ) # log pdf of observation model
  Turing.acclogp!(_varinfo, lp) # increment internal target value
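For the mixture above, a concrete sketch of this pattern (names are illustrative; `logsumexp` comes from StatsFuns, and `_varinfo` is the internal variable available inside `@model` as in the snippet above) would marginalize the component indicator by hand, mirroring Stan’s log_mix:

```julia
using Turing
using StatsFuns: logsumexp

@model function mixture(x)
    # (priors on the component parameters could go here)
    for xi in x
        # log( 0.5 * Uniform(0,25)(xi) + 0.5 * Normal(30,5)(xi) ),
        # computed stably on the log scale:
        lp = logsumexp([log(0.5) + logpdf(Uniform(0, 25), xi),
                        log(0.5) + logpdf(Normal(30, 5), xi)])
        Turing.acclogp!(_varinfo, lp)  # increment the internal target value
    end
end
```

Because the discrete indicator is summed out analytically, this stays compatible with gradient-based samplers such as NUTS.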