Gradient of latent space variable in Turing?

I just started playing with Turing, so this is probably a stupid question: is there an easy way (say, for the purposes of inspecting my model) to get the posterior gradient of a latent space variable at fixed data? Here’s my attempt, which doesn’t seem to work with either Zygote or ForwardDiff:

@model function MyModel(x)
    σ ~ Uniform(0, 1)
    x ~ Normal(0, σ^2)
    return x
end

model = MyModel(missing)

Zygote.gradient(σ -> logprob"x=1 | model=model, σ=σ", 1)
ForwardDiff.derivative(σ -> logprob"x=1 | model=model, σ=σ", 1)

The last two lines both error, so I’m guessing this is not the right way to do this.

The code generated by the logprob string macro is not AD-friendly. Try the function:

Turing.gradient_logp(Turing.ZygoteAD(), θ, Turing.VarInfo(model), model)

but you need to instantiate the model with the observed data (i.e. MyModel(x) with a concrete x rather than missing), and θ should be the vector of random variables in their order of appearance in the model. In this case, it is just [σ].
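Putting that together, a minimal sketch might look like the following. (The exact signature and return value of gradient_logp can vary across Turing versions, so treat this as a starting point rather than a guaranteed API.)

```julia
using Turing

@model function MyModel(x)
    σ ~ Uniform(0, 1)
    x ~ Normal(0, σ^2)
    return x
end

# Condition on the observed data directly instead of `missing`
model = MyModel(1.0)

# θ is the vector of latent variables in order of appearance: here just [σ]
θ = [0.5]

# Returns the log joint density and its gradient with respect to θ,
# computed with the Zygote backend
logp, grad = Turing.gradient_logp(Turing.ZygoteAD(), θ, Turing.VarInfo(model), model)
```

Swapping Turing.ZygoteAD() for Turing.ForwardDiffAD{1}() should give you the same gradient via forward-mode AD, which can be handy as a cross-check on a one-dimensional θ like this.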
