The basic problem is that, because of the (integrable) singularity in the integrand at x=t, it’s not able to converge the integral to the requested tolerance — it defaults to `rtol=sqrt(eps()) ≈ 1.5e-8`, which is too ambitious for singular integrands in `Float64` precision. It keeps refining/subdividing the integration interval near `x=3`, and eventually gets so close that a cancellation error causes it to divide by zero.
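A minimal sketch of how that failure can happen (illustrative floating-point arithmetic only, not QuadGK’s actual subdivision code): once the subinterval next to the singularity is only a few ulps wide, a transformed quadrature node can round to exactly `t`, at which point `1/√(t-x)` divides by zero:

```julia
julia> t = 3.0; a = prevfloat(t, 5);  # left endpoint only 5 ulps below the singularity
julia> node = a + (t - a)*0.99  # an interior node of (a, t) rounds to exactly t
3.0
julia> 1 / sqrt(t - node)       # so the 1/√(t-x) factor divides by zero
Inf
```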

If you request a lower tolerance, it works:

```
julia> t = 3; quadgk_count(F(t), 0, t; rtol=1e-7)
(-3.115217543805007, 2.4294414106433614e-7, 1275)
```

or if you use greater precision (which Mathematica probably does automatically):

```
julia> F(t) = x-> g(x) / (√oftype(x, π) * √(t-x)); # compute gamma(1/2)=√π in precision of x
julia> t = big"3.0"; quadgk_count(F(t), 0, t; rtol=1e-16)
(-3.115217737066816205260802666768569197165490937152478941478554510552125233724725, 2.921873150345359277761508183141050033640978755863127757988065979623519096541608e-16, 4965)
julia> Float64(quadgk_count(F(t), 0, t; rtol=1e-18)[1]) # rounded to ≈ machine precision
-3.1152177370668164
```

Ideally, to compute such integrals accurately and efficiently, you should use a quadrature technique that “knows” the singularity. For example, you could use Gauss–Jacobi quadrature:

```
julia> using FastGaussQuadrature
julia> x, w = gaussjacobi(10, -1/2, 0.0); # 10 points gives answer to about machine precision!
julia> t = 3; sum(w .* (x -> g(x)/√π).((x .+ 1) .* (t/2))) * √(t/2)
-3.115217737066818
```

It’s a little more awkward to use because you have to manually transform the interval to (-1,1) by a change of variables (taking out the 1/√(t-x) factor, since that is built into the quadrature rule) with an associated Jacobian factor, and it’s not adaptive, so you need to decide how many points to use yourself, e.g. by repeatedly doubling the order. But it’s fantastically accurate with just a few points, without going to extended precision!
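For the record, the change of variables used in the code above is $x = (t/2)(1+u)$, which maps $u \in (-1,1)$ to $x \in (0,t)$ with $dx = (t/2)\,du$ and $t - x = (t/2)(1-u)$, so that

$$\int_0^t \frac{g(x)}{\sqrt{\pi}\,\sqrt{t-x}}\,dx = \sqrt{\frac{t}{2}} \int_{-1}^{1} (1-u)^{-1/2}\,\frac{g\bigl((t/2)(1+u)\bigr)}{\sqrt{\pi}}\,du \approx \sqrt{\frac{t}{2}} \sum_{i=1}^{n} w_i\,\frac{g\bigl((t/2)(1+u_i)\bigr)}{\sqrt{\pi}},$$

where the nodes $u_i$ and weights $w_i$ from `gaussjacobi(n, -1/2, 0.0)` absorb the $(1-u)^{-1/2}$ weight, which is exactly the sum computed in the code above.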