Fixed that.

I have installed the new version of QuadGK, and it seems like the original problem went away (if I run `intdNdr` individually). But within my Turing model, it still errors, with this stacktrace:

```
MethodError: no method matching kronrod(::Type{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}, ::Int64)
Closest candidates are:
kronrod(::Type{T}, ::Integer) where T<:AbstractFloat at ~/.julia/packages/QuadGK/kf0xA/src/gausskronrod.jl:150
Stacktrace:
[1] macro expansion
@ ~/.julia/packages/QuadGK/kf0xA/src/gausskronrod.jl:259 [inlined]
[2] cachedrule
@ ~/.julia/packages/QuadGK/kf0xA/src/gausskronrod.jl:259 [inlined]
[3] do_quadgk(f::Integrals.var"#14#15"{IntegralProblem{false, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}, typeof(MdNdr), ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}}, s::Tuple{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}, n::Int64, atol::Float64, rtol::Float64, maxevals::Int64, nrm::typeof(norm), segbuf::Nothing)
@ QuadGK ~/.julia/packages/QuadGK/kf0xA/src/adapt.jl:7
[4] #28
@ ~/.julia/packages/QuadGK/kf0xA/src/adapt.jl:186 [inlined]
[5] handle_infinities(workfunc::QuadGK.var"#28#29"{Float64, Float64, Int64, Int64, typeof(norm), Nothing}, f::Integrals.var"#14#15"{IntegralProblem{false, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}, typeof(MdNdr), ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}}, s::Tuple{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}})
@ QuadGK ~/.julia/packages/QuadGK/kf0xA/src/adapt.jl:115
[6] #quadgk#27
@ ~/.julia/packages/QuadGK/kf0xA/src/adapt.jl:185 [inlined]
[7] #__solvebp_call#13
@ ~/.julia/packages/Integrals/9bCIo/src/Integrals.jl:172 [inlined]
[8] #__solvebp#56
@ ~/.julia/packages/Integrals/9bCIo/src/Integrals.jl:317 [inlined]
[9] solve(::IntegralProblem{false, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}, typeof(MdNdr), ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::QuadGKJL; sensealg::Integrals.ReCallVJP{Integrals.ZygoteVJP}, do_inf_transformation::Nothing, kwargs::Base.Pairs{Symbol, Float64, Tuple{Symbol}, NamedTuple{(:reltol,), Tuple{Float64}}})
@ Integrals ~/.julia/packages/Integrals/9bCIo/src/Integrals.jl:155
[10] intMdNdr(Mass::ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, zmin::Float64, zmax::Float64, rmin::ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, rmax::Float64)
@ Main ./In[3]:47
[11] dmh_smooth_part(Mhalo_init::ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, zmin::Float64, zmax::Float64)
@ Main ./In[4]:5
```

After this the stack trace continues to my main model and so on. This error looks kind of similar to the previous one, so did the fix only apply to running ForwardDiff explicitly but not within Turing?

It hasn't merged yet.

Oh, it did; check `]st`.

Do you have the right version?

I have `QuadGK v2.6.0`, which appears to be the correct merged version.

Hmm, that's definitely weird if the version is correct. Did you manage to work the problem out? If not, it would help to provide a link to your code somewhere.

After more debugging, I've found that the issue is that one of my integration bounds is a dual number. If I wrap my integration bounds with `ForwardDiff.value()`, the code runs without errors. However, I'm not sure whether doing it this way breaks the autodiff?
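For reference, a minimal sketch of that workaround (the integrand `f` and the wrapper `safe_quad` are hypothetical stand-ins, not the actual `intMdNdr`/`MdNdr` code; it assumes QuadGK and ForwardDiff are installed):

```julia
using QuadGK, ForwardDiff

# Hypothetical integrand standing in for the real model's MdNdr.
f(r) = r^2

# Inside a Turing model, rmin can arrive as a ForwardDiff.Dual.
# ForwardDiff.value strips the partials and returns the plain Float64
# primal, so quadgk's kronrod rule sees an AbstractFloat type again.
safe_quad(rmin, rmax) = quadgk(f, ForwardDiff.value(rmin), ForwardDiff.value(rmax))[1]
```

Note that `ForwardDiff.value` is a no-op on a plain `Float64`, so this sketch also runs unchanged outside of autodiff.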

When you say that one of your integration bounds is a dual number, do you mean it's like that in the original code, or that when Turing evaluates the code, it inserts a dual number to do autodiff? The fix @ChrisRackauckas put out should have fixed the second problem.

It just means that it will not differentiate with respect to the bounds of the integral. If the bounds of the integral are constant, that's fine.
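Concretely, the derivative a dual bound would have carried is given by the Leibniz rule, d/da ∫ₐᵇ f(r) dr = −f(a). A tiny self-contained check with a hypothetical f(r) = r² (not the actual integrand), using the closed-form integral so no packages are needed:

```julia
# For f(r) = r^2 with upper bound b = 1:
#   I(a) = ∫_a^1 r^2 dr = (1 - a^3)/3, so dI/da = -a^2 by the Leibniz rule.
I(a)  = (1 - a^3) / 3
dI(a) = -a^2          # this is the term lost when the bound is de-dualed

# Finite-difference check that the bound derivative really is -f(a):
h = 1e-6
fd = (I(0.5 + h) - I(0.5)) / h
@assert abs(fd - dI(0.5)) < 1e-4
```

So wrapping a bound in `ForwardDiff.value()` silently zeros this −f(a) contribution; that only matters if the bound itself depends on a sampled parameter.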