Hi,

I was using HypothesisTests.jl and noticed small but possibly important differences in the two-sided p-value between Julia and two other packages when running Fisher's exact test. In the example below, MATLAB and R users might claim a significant result at the 0.05 level, whereas the Julia user would not. This is a fairly common test, so it would be great if the packages agreed. Thanks for considering this comment for discussion.

Scott

julia (v0.6):

```
julia> FisherExactTest(59, 335, 172, 1366)
Fisher's exact test
-------------------
Population details:
parameter of interest: Odds ratio
value under h_0: 1.0
point estimate: 1.3984544219625261
95% confidence interval: (0.9980930945998796, 1.9393947540537153)
Test summary:
outcome with 95% confidence: fail to reject h_0
two-sided p-value: 0.051329212328076565 <------------------------
Details:
contingency table:
59 335
172 1366
```

matlab:

```
>> x = table([59;172],[335;1366])
x =
2×2 table
Var1 Var2
____ ____
59 335
172 1366
>> [h,p,stats]=fishertest(x,'Tail','both','Alpha',0.95)
h =
logical
1
p =
   0.045036387203992 <--------------------------------
stats =
struct with fields:
OddsRatio: 1.398715723707046
ConfidenceInterval: [1.384515635855077 1.413061452741955]
```

R:

```
> x = matrix(c(59,172,335,1366), nrow = 2)
> x
[,1] [,2]
[1,] 59 335
[2,] 172 1366
> fisher.test(x,alternative = "two.sided")
Fisher's Exact Test for Count Data
data: x
p-value = 0.04503639 <------------------------------
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.9980904309 1.9393926010
sample estimates:
odds ratio
1.398453903
```
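In case it helps the discussion: a plausible source of the difference is the two-sided convention rather than a numerical bug. R's `fisher.test` and MATLAB's `fishertest` sum the probabilities of all tables no more likely than the observed one ("minlike"), while HypothesisTests.jl appears to double the smaller one-sided tail, capped at 1 ("central"). A from-scratch sketch in Python (my own helper names, not from any of the packages) reproduces both numbers for this table:

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    # P(X = k): k successes when drawing n items without replacement
    # from a population of N that contains K successes
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def fisher_two_sided(a, b, c, d):
    # Two-sided p-values for the 2x2 table [[a, b], [c, d]] under two
    # conventions:
    #   minlike: sum P(table) over all tables no more likely than the
    #            observed one (R's fisher.test, MATLAB's fishertest)
    #   central: twice the smaller one-sided tail, capped at 1
    N, K, n = a + b + c + d, a + b, a + c    # grand total, row-1 and col-1 margins
    lo, hi = max(0, n - (N - K)), min(K, n)  # support of the hypergeometric
    probs = {k: hypergeom_pmf(k, N, K, n) for k in range(lo, hi + 1)}
    p_obs = probs[a]
    # relative tolerance for "no more likely", as in R's fisher.test
    p_minlike = sum(p for p in probs.values() if p <= p_obs * (1 + 1e-7))
    left = sum(p for k, p in probs.items() if k <= a)
    right = sum(p for k, p in probs.items() if k >= a)
    p_central = min(1.0, 2 * min(left, right))
    return p_minlike, p_central

p_minlike, p_central = fisher_two_sided(59, 335, 172, 1366)
print(p_minlike)  # ~0.045, matching R and MATLAB
print(p_central)  # ~0.051, matching the Julia output above
```

Neither convention is wrong, but documenting which one is used (or exposing both) would avoid surprises when users compare against R or MATLAB.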