Anderson-Darling test pvalue

Hi everyone, I’m using an Anderson-Darling test to check for normality and am unsure about the results. I expect to reject at the .01 level with a p-value around .008, but the pvalue() call on the OneSampleADTest returns .323. Looking at the test summary, the test statistic is 1.06, as expected.

I’m new to Julia and suspect I’m causing the problem myself. Is there any reason the returned p-value differs from what D’Agostino and Stephens’ formulas would give?

Maybe you could double-check the results using Pingouin.

I did try this. I also double-checked in Excel, Python and JMP. The test statistic is the same across all of them, but the p-value returned by Julia is different (in this case not significant) compared to the others.

Pardon my formatting, still learning:
using CSV
using DataFrames
using Statistics
using StatsPlots
using Distributions
using HypothesisTests
using Pingouin

data = [6.67, -0.69, -2.03, -2.6, 2.19, -1.17, 9.93, -2.68, 10.41, -1.34, -0.16, -0.26, 1.67, -4.04, 6.46, 2.94, 7.32, -2.49, -1.26, -0.17, 7.06]
mean_samp = mean(data)
std_samp = std(data)
d = Normal(mean_samp, std_samp)
tested = OneSampleADTest(data, d)
tested
pvalue(tested)
It returns the right AD statistic, but the p-value returned is .34. It should be .008.


In this case, maybe you could open an issue on the HypothesisTests.jl GitHub page.

Before doing so it might be good to check whether https://github.com/JuliaStats/HypothesisTests.jl/issues/74 and https://github.com/JuliaStats/HypothesisTests.jl/pull/118 explain the discrepancy.

After some research it seems that the reference most programs use is:

D’Agostino, R.B. and Stephens, M.A. (1986). Goodness-of-Fit Techniques. Marcel Dekker: New York, USA.
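
For reference, the statistic and the small-sample correction given there for the normal case with mean and variance estimated from the sample are (as I read them):

$$
A^2 = -n - \frac{1}{n}\sum_{i=1}^{n}\Big[(2i-1)\ln F(Z_{(i)}) + \big(2(n-i)+1\big)\ln\big(1 - F(Z_{(i)})\big)\Big],
\qquad
A^{*2} = A^2\left(1 + \frac{0.75}{n} + \frac{2.25}{n^2}\right),
$$

where $F$ is the standard normal CDF and $Z_{(1)} \le \dots \le Z_{(n)}$ are the standardized observations in ascending order.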

The formulas can be easily implemented in Julia and the desired p-value of 0.0087 obtained for your data:

using Statistics
using Distributions

# Anderson-Darling statistic A² for sorted, standardized data Zₛ
function AD(Zₛ)
    n = length(Zₛ)
    cf = cdf.(Normal(0, 1), Zₛ)    # standard normal CDF at each point
    -n - 1/n * sum((2i - 1)*log(cf[i]) + (2(n - i) + 1)*log(1 - cf[i]) for i in 1:n)
end

# D'Agostino and Stephens (1986) p-value approximation
# (normal case with mean and variance estimated from the sample)
function p_Agostino(adc)
    adc >= 0.6 ? exp(1.2937 - 5.709*adc + 0.0186*adc^2)      :
    0.34 < adc ? exp(0.9177 - 4.279*adc - 1.38*adc^2)        :
    0.20 < adc ? 1 - exp(-8.318 + 42.796*adc - 59.938*adc^2) :
                 1 - exp(-13.436 + 101.14*adc - 223.73*adc^2)
end

X = [6.67, -0.69, -2.03, -2.6, 2.19, -1.17, 9.93, -2.68, 10.41, -1.34, -0.16, -0.26, 1.67, -4.04, 6.46, 2.94, 7.32, -2.49, -1.26, -0.17, 7.06]

sort!(X)
X̄ = mean(X)
σₓ = std(X)
Zₛ = (X .- X̄)/σₓ     # Z-scores
n = length(X)
adc = AD(Zₛ)*(1 + 0.75/n + 2.25/(n*n))     # correction for small sample sizes
p_Agostino(adc)   # p-value = 0.0087
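
As a sanity check, here is a small sketch (assuming the AD and p_Agostino functions above are already defined in the session) that runs HypothesisTests' OneSampleADTest on the same data next to the corrected calculation, so you can compare the statistic it prints with adc and see that only the p-values disagree, as reported above:

using Statistics
using Distributions
using HypothesisTests

X = [6.67, -0.69, -2.03, -2.6, 2.19, -1.17, 9.93, -2.68, 10.41, -1.34, -0.16, -0.26, 1.67, -4.04, 6.46, 2.94, 7.32, -2.49, -1.26, -0.17, 7.06]

# HypothesisTests' version of the test, fitted with the sample mean and std
t = OneSampleADTest(X, Normal(mean(X), std(X)))
println(t)              # the printed summary includes the A² statistic
println(pvalue(t))      # ≈ 0.34, the value reported above

# D'Agostino and Stephens (1986) p-value with the small-sample correction
sort!(X)
Zₛ = (X .- mean(X)) ./ std(X)
n = length(X)
adc = AD(Zₛ) * (1 + 0.75/n + 2.25/(n*n))
println(p_Agostino(adc))   # ≈ 0.0087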