Hi, new to Julia here, sorry if this has been answered. I came over from lme4 in R-land, where you get p-values for lmer fits via the lmerTest package, which uses the Satterthwaite approximation to estimate denominator degrees of freedom and, from those, p-values.
In MixedModels.jl, instead of a t-statistic in the model output, I get a z-value and a p-value. Can I infer from this that MixedModels assumes a normal distribution around the coefficient estimate?
Whatever you call the test statistic, it's computed the same way: estimate / standard error. So the t vs. z label is really a hint about how you convert that statistic into a p-value. In MixedModels.jl, the test statistic is treated as coming from a standard normal distribution and is hence called "z".
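As a quick illustration, here is a sketch using the sleepstudy dataset that ships with MixedModels.jl (the formula is just an example); the z values in the coefficient table are simply the estimates divided by their standard errors:

```julia
using MixedModels

# Fit a simple mixed model on the bundled sleepstudy dataset.
m = fit(MixedModel,
        @formula(reaction ~ 1 + days + (1 + days | subj)),
        MixedModels.dataset(:sleepstudy))

# The "z" column of the coefficient table is estimate / standard error.
z = coef(m) ./ stderror(m)
println(z)
```

The same ratio is what lmerTest reports as "t"; only the reference distribution used for the p-value differs.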
This actually isn't a bad approximation, because the t distribution converges to the standard normal quite rapidly as the degrees of freedom grow. In other words, if your residual degrees of freedom are greater than about 30, you will see very little difference between treating the statistic as a t value or a z value. If you want a more precise test, the parametric bootstrap and the likelihood profile are available. You can also use a likelihood ratio test to compare nested models.
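To see how quickly the two agree, here is a small sketch using Distributions.jl (the statistic value and the degrees-of-freedom grid are arbitrary, chosen just for illustration) comparing two-sided p-values under a t distribution and a standard normal:

```julia
using Distributions

# Two-sided p-values for the same statistic under t(dof) vs. standard normal.
stat = 2.0
for dof in (10, 30, 100, 1000)
    p_t = 2 * ccdf(TDist(dof), abs(stat))   # t-based p-value
    p_z = 2 * ccdf(Normal(), abs(stat))     # z-based p-value
    println("dof = $dof: t-based p = $(round(p_t, digits = 4)), ",
            "z-based p = $(round(p_z, digits = 4))")
end
```

By dof = 30 the two p-values differ only in the third decimal place or so. For the more precise alternatives mentioned above, `parametricbootstrap` is exported by MixedModels.jl, and recent versions also export `profile` for likelihood profiling.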