Recommended way to extract logprob and build gradients in Turing.jl?

It depends on what you want to use it for.

If you’re sampling, then generally you want Approach #2, with a call
to `link!` if you’re working with a model whose support is bounded,
e.g. `s ~ InverseGamma(2, 3)` has support only on (0, ∞), while a
sampler such as HMC assumes the support is unbounded. Calling `link!`
ensures two properties:

  1. You’re now working in unbounded space.
  2. The computation of the logjoint now involves a correction such that you’re still targeting the original model.
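To see why Property (2) matters, here’s a small self-contained sketch (not using Turing itself) with the `InverseGamma(2, 3)` example. Under the transform `s = exp(y)`, the correction is the log-Jacobian `log|ds/dy| = y`; with it, the transformed log density still integrates to 1 over the unbounded space, and without it, it doesn’t:

```julia
# Log density of InverseGamma(2, 3); note lgamma(2) = 0, so it drops out:
# logpdf(x) = 2*log(3) - 3*log(x) - 3/x
loginvgamma(x) = 2 * log(3) - 3 * log(x) - 3 / x

# Unbounded reparametrization: s = exp(y), y ∈ ℝ.
# Property (2): add the log-Jacobian log|ds/dy| = y.
ys = range(-10, 10; length=200_001)
dy = step(ys)
with_correction    = sum(exp(loginvgamma(exp(y)) + y) for y in ys) * dy
without_correction = sum(exp(loginvgamma(exp(y)))     for y in ys) * dy

println(with_correction)     # ≈ 1.0: still a properly normalized density in y
println(without_correction)  # ≈ 0.667: no longer a density, so sampling it
                             # would target the wrong distribution
```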

In contrast, when you’re doing optimization, as in Approach #3, you don’t actually need Property (2), but only Property (1).
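Continuing the toy example above, this is easy to check numerically: gradient ascent in the unbounded `y = log(s)` space *without* the Jacobian term recovers the mode of `InverseGamma(2, 3)` (which is θ/(α+1) = 1), whereas including the correction finds the mode of the *transformed* density instead, which is not the MAP you wanted:

```julia
# d/dy of log p(exp(y)) for InverseGamma(2, 3), via the chain rule;
# the Jacobian correction log|ds/dy| = y contributes a constant +1.
grad_opt(y)    = -3 + 3 / exp(y)      # Property (1) only: optimize the model density
grad_sample(y) = -3 + 3 / exp(y) + 1  # Properties (1) + (2): the sampling target

function ascend(grad; y=0.5, lr=0.01, steps=10_000)
    for _ in 1:steps
        y += lr * grad(y)
    end
    exp(y)  # map back to the constrained space
end

println(ascend(grad_opt))     # ≈ 1.0, the mode of InverseGamma(2, 3)
println(ascend(grad_sample))  # ≈ 1.5, the mode of the transformed density
```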

Does that help? And yes, Approach #1 is generally now discouraged.
