I ran the Bayesian Neural Network example from the Turing tutorials and it works fine, except that the network weights (the `parameters` variable) run into identifiability/convergence issues: most of them wander around with no sign of convergence. This is not surprising given the simple multivariate normal prior with fixed variance over the weights. Still, the predictions over the grid (`Z` further down in the code example) are fine and reasonable. So my first attempt to address the identifiability issue was to estimate the prior standard deviation by moving it into the model as `sig ~ truncated(Normal(0, 10), 0, 100)`. This runs fine and convergence improves slightly. However, the grid predictions in `Z` now become nonsense (all the same value), even though all the `theta` and `sigma` samples look fine. So what's going on here?
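For reference, here is a minimal sketch of the change I described, not the tutorial's exact code: the weight-prior scale `sig` is sampled inside the model instead of being a fixed constant. The names `bayes_nn` and `reconstruct` (the function rebuilding the network from a flat parameter vector, e.g. from Flux's `destructure`) are placeholders for the corresponding pieces in the tutorial.

```julia
using Turing
using LinearAlgebra  # for the identity scaling `I`

@model function bayes_nn(xs, ys, nparameters, reconstruct)
    # Hyperprior on the weight scale -- this is the change described above.
    sig ~ truncated(Normal(0, 10), 0, 100)

    # Network weights: zero-mean MVN whose variance now depends on `sig`,
    # instead of the tutorial's fixed-variance prior.
    parameters ~ MvNormal(zeros(nparameters), sig^2 * I)

    # Rebuild the network from the flat parameter vector and predict.
    nn = reconstruct(parameters)
    preds = nn(xs)

    # Bernoulli likelihood for the binary classification labels.
    for i in eachindex(ys)
        ys[i] ~ Bernoulli(preds[i])
    end
end
```

Sampling and the grid predictions `Z` are then produced exactly as in the tutorial; only the prior block above differs from the original model.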