There is already `GLM.ftest`, which functions similarly to the `anova` function in base R: you fit two or more nested models, feed them into `ftest`, and you get an analysis of variance table.
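A minimal sketch of that workflow, assuming GLM.jl and DataFrames.jl are installed; the data and column names here are invented for illustration:

```julia
using GLM, DataFrames

# Toy data, invented for illustration
df = DataFrame(y  = [1.2, 2.1, 2.9, 4.2, 4.8, 6.1],
               x1 = 1:6,
               x2 = [0.3, 0.1, 0.4, 0.2, 0.5, 0.3])

# Fit the null model and the alternative model, then compare them
null = lm(@formula(y ~ 1 + x1), df)
alt  = lm(@formula(y ~ 1 + x1 + x2), df)

# F test for the nested comparison; prints an ANOVA-style table
ftest(null.model, alt.model)
```

Depending on the GLM.jl version, `ftest(null, alt)` on the wrapped models may also work directly.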

It was a conscious decision on the part of the R Core Development Team **not** to create a function that allowed for “Types” of sums of squares. John Fox and others can create such functions, but I cannot imagine that they would ever be part of base R.

This is Bill Venables’ point. In a modern computing environment there is no need for this nonsense about Type I and Type III and Type II and even Type IV, whatever that is. Fit the alternative model; fit the null model; compare them, and you’re done.

I realize that in some disciplines it is considered important to speak of model comparisons using all this arcane and nonsensical terminology. But that doesn’t make it meaningful. I have been a statistician for 45 years and am considered reasonably knowledgeable about linear models. If pressed, I think I could describe what Type I and Type III mean. I have had Type II explained to me (and it seemed very suspect), but I don’t think I have ever understood what Type IV means, and I have never met anyone who could tell me what it means.

For Julia I hope that the JuliaStats packages never incorporate an anova function that allows for different “Types”. If the user fits both the alternative and the null model and then compares them, one can hope that the user understands what the two models represent. The problem with the anova “Types” mumbo-jumbo is that the user thinks they know what it means, but unless you can describe the models being compared I don’t see how the result is useful.