Yes, that’s right. The model you used for simulation is
$$
\begin{aligned}
\lambda_i &\sim \mathrm{Normal}(0, 1) \\
y_i &\sim \mathrm{Poisson}(e^{\lambda_i})
\end{aligned}
$$
but the model you have written with `@model` is
$$
\begin{aligned}
\lambda &\sim \mathrm{Normal}(0, 1) \\
y_i &\sim \mathrm{Poisson}(e^{\lambda})
\end{aligned}
$$
The difference is that the Turing model uses a single global $\lambda$ shared by all observations, while in the simulation model each datum has its own $\lambda_i$.
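For concreteness, here is a minimal sketch of the per-datum version in Turing. The model name and the use of `filldist` are my own illustrative choices, since your original `@model` isn't shown here:

```julia
using Turing

# Illustrative sketch: one λᵢ per observation, matching the simulation model.
@model function per_datum_poisson(y)
    n = length(y)
    λ ~ filldist(Normal(0, 1), n)   # λᵢ ~ Normal(0, 1) for i = 1, …, n
    for i in 1:n
        y[i] ~ Poisson(exp(λ[i]))   # yᵢ ~ Poisson(exp(λᵢ))
    end
end
```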
But even if you fix that, your posterior would not in general look like the prior. A posterior conditioned on a single prior-predictive dataset only looks like the prior on average, i.e. when you also marginalize over draws from the prior, as in the following procedure (sketched in code after the list):
1. Draw a single parameter $\theta$ from the prior.
2. Draw a single dataset $y \mid \theta$.
3. Draw a single $\tilde{\theta}$ from the posterior conditioned on $y$.
4. Repeat steps 1-3 many times, discarding each $y$ and $\theta$. The collected $\tilde{\theta}$ draws will be distributed according to the prior.
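A rough sketch of that procedure in Turing, using the single-$\lambda$ model from above. The sampler settings, dataset size, and number of repetitions are arbitrary placeholders, not a tuned SBC implementation:

```julia
using Turing, Distributions, Random

# Single-λ model, used here only to illustrate the marginal procedure.
@model function single_lambda(y)
    λ ~ Normal(0, 1)
    for i in eachindex(y)
        y[i] ~ Poisson(exp(λ))
    end
end

Random.seed!(1)
posterior_draws = Float64[]
for rep in 1:200
    θ = rand(Normal(0, 1))                 # 1. draw θ from the prior
    y = rand(Poisson(exp(θ)), 10)          # 2. draw a dataset y | θ
    chain = sample(single_lambda(y), NUTS(), 100; progress=false)
    push!(posterior_draws, rand(vec(Array(chain[:λ]))))  # 3. one posterior draw θ̃
end
# 4. `posterior_draws` should now look like 200 draws from Normal(0, 1),
#    which you can check with e.g. a histogram or a Q-Q plot.
```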
This is the basis of Simulation-Based Calibration (SBC; Talts et al., 2018), a method for checking whether a specific inference method is incompatible with a given model.