Thanks for raising an issue and posting the data as well. (I don’t check Discourse very often, but I do get GitHub notifications for issues.) To complete the cross-referencing, this is the issue on GitHub.

I’ll answer a little bit here because I don’t think this is a bug, but rather a data/model issue. The difference between the fast and slow fits for GLMMs is whether both the random effects and the fixed effects are optimized. For LMMs, we only need to optimize over the random effects and get the fixed-effects estimates more or less for free. (I’m oversimplifying a bit here.) For GLMMs, that leads to a less accurate fit because the fixed-effects estimates are conditional on the random effects. (The link function basically makes it so that you get conditional instead of marginal estimates; there has been some discussion of this in various R fora, especially surrounding `GLMMadaptive`, which can give you the marginal estimates as well as the conditional ones.) So you have to optimize over both the fixed and random effects, but this greatly increases the size of the parameter space and thus makes for a slower fit. When `fast=true`, only the random effects are optimized, potentially greatly reducing the size of the parameter space at the cost of accuracy.
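To make the conditional-vs-marginal distinction concrete, here is a small simulation sketch (Python rather than Julia, since the effect is language-agnostic; all numbers are illustrative, not from your model). With a nonlinear link, applying the inverse link to the fixed effect alone gives the response for a *typical* subject (random effect zero), not the population average obtained by averaging over the random effects:

```python
import numpy as np

rng = np.random.default_rng(42)

def invlogit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Conditional model: logit(p_i) = beta0 + b_i, with random intercepts b_i.
# beta0 and sigma_b are made-up values chosen to make the gap visible.
beta0 = 1.0      # "conditional" fixed-effect intercept
sigma_b = 1.5    # random-intercept standard deviation
b = rng.normal(0.0, sigma_b, size=200_000)

# Probability for a typical subject (b_i = 0): invert the link at the
# fixed effect alone.
conditional_p = invlogit(beta0)

# Population-averaged (marginal) probability: average the response over
# the distribution of the random intercepts.
marginal_p = invlogit(beta0 + b).mean()

print(conditional_p)  # ≈ 0.731
print(marginal_p)     # noticeably smaller, pulled toward 0.5
```

By Jensen's inequality the two do not coincide unless the link is linear or the random-effect variance is zero, which is why conditional coefficients cannot simply be read as population-averaged effects.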

For many models, there is little difference between the two fits, but there are some pathological cases. We do have a few examples of models where the slow fit is unstable in some sense but the fast fit is fine. To the best of my knowledge, all current examples of this pathology are Poisson family, though I don’t know why that is. The other pathology I have seen, noticeably different estimates between the two fits, does occur in binomial models.

For now, I would check whether the fast fit does a good job of capturing your data, e.g. by comparing fitted vs. observed values. If it does, then the fast fit is probably fine.
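As a sketch of that check (again in Python with hypothetical data; the function name and inputs are made up for illustration), one simple diagnostic is to compare the mean observed response against the mean fitted response within each grouping level and look for systematic deviation:

```python
import numpy as np

def fitted_vs_observed(y, fitted, groups):
    """Compare mean observed and mean fitted response within each group.

    y      : observed responses
    fitted : fitted values extracted from the (fast) model
    groups : grouping-factor label for each observation
    Returns a dict mapping group -> (mean observed, mean fitted).
    """
    y = np.asarray(y, dtype=float)
    fitted = np.asarray(fitted, dtype=float)
    groups = np.asarray(groups)
    out = {}
    for g in np.unique(groups):
        mask = groups == g
        out[g] = (y[mask].mean(), fitted[mask].mean())
    return out

# Toy example: a fit that tracks the data closely, group by group
y      = [0, 1, 1, 3, 2, 4]
fitted = [0.2, 0.9, 1.1, 2.8, 2.1, 3.9]
groups = ["a", "a", "a", "b", "b", "b"]
for g, (obs, fit) in fitted_vs_observed(y, fitted, groups).items():
    print(g, obs, fit)
```

If the group means line up well (and a plot of fitted vs. observed hugs the diagonal), the fast fit is likely adequate for your data.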