Hi there. I am following a MOOC where I am learning goodness-of-fit tests, in particular the chi-square test for discrete random variables and the Kolmogorov-Smirnov test for continuous ones.
I implemented the tests in the code below. It gives results consistent with the output of the exercises in the course. However, when I simulate the test under a true H₀, even with a LOT of repetitions, I get a type I error rate much lower than the α level set in the test. The only exception is the uniform distribution, for which the test (wrongly) rejects H₀ at a rate of about α, as expected.
Here is the definition of the tests:
using Distributions
# The test definitions.....
function computeDensity(data,support)
    counts = [count(i -> i==s,data) for s in support]
    if length(data) > sum(counts)
        error("There are some data not in the support !")
    end
    return counts
end
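For example, with some made-up toy data (just to illustrate the expected output):

computeDensity([1,2,2,3], 1:3)  # returns the counts [1, 2, 1]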
"""
goodnessOfFitDiscrete(data,support,f₀;compressedData=true,α=0.05,d=0)
Perform a goodness to fit chi-squared test to check for a particular MMF.
The passed distribution must support the method `pdf(dist,x)` for the provided support.
Hâ‚€ can be either the PDF with a specified set of parameters or the PDF in general. In this case the distribution object should be passed to this function with the MLE estimators that best fit the data (it is NOT done inside this function). In such case the `d` parameter should be set to the number of estimated parameters in order to remove the `d` degree of freedom from the chi-square test.
"""
function goodnessOfFitDiscrete(data,support,f₀;compressedData=true,α=0.05,d=0)
    if !compressedData
        data = computeDensity(data,support)
    end
    K = length(support)
    N = sum(data)
    p̂ = data ./ N                 # observed relative frequencies
    df = K - d - 1
    p0 = pdf.(f₀,support)          # probabilities under H₀
    T = N * sum((p̂[k] - p0[k])^2/p0[k] for k in 1:K)   # Pearson chi-square statistic
    χDist = Chisq(df)
    rejectedH₀ = T > quantile(χDist,1-α)
    p_value = 1 - cdf(χDist,T)
    return (testValue=T, threshold=quantile(χDist,1-α), rejectedH₀=rejectedH₀, p_value=p_value)
end
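In formula terms, the statistic computed above is the usual Pearson statistic

$$ T = N \sum_{k=1}^{K} \frac{(\hat p_k - p_{0,k})^2}{p_{0,k}} = \sum_{k=1}^{K} \frac{(O_k - E_k)^2}{E_k}, \qquad E_k = N\,p_{0,k}, $$

compared against the $1-\alpha$ quantile of a $\chi^2$ distribution with $K - d - 1$ degrees of freedom.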
When I use the Uniform distribution, I obtain the expected rejection rate:
f₀ = DiscreteUniform(1,100)
repetitions = 5000
support = 1:100
outs = fill(false,repetitions)
for rep in 1:repetitions
    data = rand(f₀,10000)
    out = goodnessOfFitDiscrete(data,support,f₀,compressedData=false,α=0.05,d=0)
    outs[rep] = out.rejectedH₀
end
sum(outs)/repetitions # around 0.05
However, when I use another distribution, even a symmetric and unimodal one like the Binomial, I get much lower rejection rates:
f₀ = Binomial(100,0.7)
repetitions = 5000
support = 0:100
outs = fill(false,repetitions)
for rep in 1:repetitions
    data = rand(f₀,10000)
    out = goodnessOfFitDiscrete(data,support,f₀,compressedData=false,α=0.05,d=0)
    outs[rep] = out.rejectedH₀
end
sum(outs)/repetitions # 0.02 or 0.03 instead of 0.05
Crazily enough, if I increase α to e.g. 0.1 or 0.2, the rejection rate of the test with the uniform adapts and matches the new α, but with the binomial it stays at very low levels… even if I set the test to reject H₀ with a very high tolerated error probability, it almost never does!
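For reference, here is a quick way to inspect the expected counts per cell under the binomial H₀ (just a diagnostic sketch; the cutoff of 5 is the usual textbook rule of thumb, not something coming from my code or the course):

f₀ = Binomial(100,0.7)
E = 10000 .* pdf.(f₀, 0:100)    # expected count of each cell for N = 10000 draws
count(e -> e < 5, E)            # how many of the 101 cells have an expected count below 5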
Is there an “official” implementation of the chi-square goodness-of-fit test that I could try for comparison?
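For example, I imagine something from HypothesisTests.jl could serve as a cross-check; below is a minimal sketch, assuming its ChisqTest accepts a vector of observed counts together with a vector of expected proportions under H₀:

using Distributions, HypothesisTests
f₀ = Binomial(100,0.7)
data = rand(f₀, 10000)
observed = [count(i -> i==s, data) for s in 0:100]   # observed counts per support point
test = ChisqTest(observed, pdf.(f₀, 0:100))          # expected proportions under H₀
pvalue(test)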