Fixed that.
I have installed the new version of QuadGK and it seems like the original problem went away (if I run intdNdr individually). But within my Turing model, it still errors, with this stacktrace:
MethodError: no method matching kronrod(::Type{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}, ::Int64)
Closest candidates are:
kronrod(::Type{T}, ::Integer) where T<:AbstractFloat at ~/.julia/packages/QuadGK/kf0xA/src/gausskronrod.jl:150
Stacktrace:
[1] macro expansion
@ ~/.julia/packages/QuadGK/kf0xA/src/gausskronrod.jl:259 [inlined]
[2] cachedrule
@ ~/.julia/packages/QuadGK/kf0xA/src/gausskronrod.jl:259 [inlined]
[3] do_quadgk(f::Integrals.var"#14#15"{IntegralProblem{false, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}, typeof(MdNdr), ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}}, s::Tuple{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}, n::Int64, atol::Float64, rtol::Float64, maxevals::Int64, nrm::typeof(norm), segbuf::Nothing)
@ QuadGK ~/.julia/packages/QuadGK/kf0xA/src/adapt.jl:7
[4] #28
@ ~/.julia/packages/QuadGK/kf0xA/src/adapt.jl:186 [inlined]
[5] handle_infinities(workfunc::QuadGK.var"#28#29"{Float64, Float64, Int64, Int64, typeof(norm), Nothing}, f::Integrals.var"#14#15"{IntegralProblem{false, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}, typeof(MdNdr), ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}}, s::Tuple{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}})
@ QuadGK ~/.julia/packages/QuadGK/kf0xA/src/adapt.jl:115
[6] #quadgk#27
@ ~/.julia/packages/QuadGK/kf0xA/src/adapt.jl:185 [inlined]
[7] #__solvebp_call#13
@ ~/.julia/packages/Integrals/9bCIo/src/Integrals.jl:172 [inlined]
[8] #__solvebp#56
@ ~/.julia/packages/Integrals/9bCIo/src/Integrals.jl:317 [inlined]
[9] solve(::IntegralProblem{false, Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}}, typeof(MdNdr), ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::QuadGKJL; sensealg::Integrals.ReCallVJP{Integrals.ZygoteVJP}, do_inf_transformation::Nothing, kwargs::Base.Pairs{Symbol, Float64, Tuple{Symbol}, NamedTuple{(:reltol,), Tuple{Float64}}})
@ Integrals ~/.julia/packages/Integrals/9bCIo/src/Integrals.jl:155
[10] intMdNdr(Mass::ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, zmin::Float64, zmax::Float64, rmin::ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, rmax::Float64)
@ Main ./In[3]:47
[11] dmh_smooth_part(Mhalo_init::ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 9}, zmin::Float64, zmax::Float64)
@ Main ./In[4]:5
After this, the stack trace continues into my main model and so on. This error looks similar to the previous one, so did the fix only apply to running ForwardDiff explicitly, but not within Turing?
It hasn’t merged yet.
Oh, it did; check ]st.
Do you have the right version?
I have QuadGK v2.6.0, which appears to be the correct merged version.
Hmm, that’s definitely weird if the version is correct. Did you manage to work the problem out? If not, it would help to provide a link to your code somewhere.
After more debugging, I’ve found that the issue is that one of my integration bounds is a dual number. If I wrap my integration bounds with ForwardDiff.value(), the code runs without errors. However, I’m not sure whether doing it this way breaks the autodiff?
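For reference, here’s a minimal sketch of what the workaround looks like (the integrand is a stand-in for my actual MdNdr, and the tolerance is arbitrary; the ForwardDiff.value() calls on the bounds are the only relevant change):

using Integrals, ForwardDiff

MdNdr(r, p) = p[1] * r^2  # stand-in for my real integrand

function intMdNdr(Mass, rmin, rmax)
    # Strip any dual parts off the bounds before they reach QuadGK;
    # this avoids the kronrod MethodError but discards the bound derivatives.
    lo = ForwardDiff.value(rmin)
    hi = ForwardDiff.value(rmax)
    prob = IntegralProblem(MdNdr, lo, hi, [Mass])
    solve(prob, QuadGKJL(); reltol=1e-8).u
end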
When you say that one of your integration bounds is a dual number, do you mean it’s like that in the original code, or that when Turing evaluates the code, it inserts a dual number to do autodiff? The fix @ChrisRackauckas put out should have fixed the second problem.
It just means that it will not differentiate with respect to the bounds of the integral. If the bounds of the integral are constant, that’s fine.
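To make that concrete, here’s a quick sketch (assuming a QuadGK version with the endpoint fix, i.e. ≥ v2.6, and a toy integrand). By the Leibniz rule, d/db ∫₀ᵇ r² dr = b², and wrapping the bound in ForwardDiff.value silently zeroes that contribution:

using QuadGK, ForwardDiff

f(r) = r^2

# Differentiating straight through the upper bound recovers f(b) = b²:
ForwardDiff.derivative(b -> quadgk(f, 0.0, b)[1], 2.0)  # ≈ 4.0

# Stripping the bound with ForwardDiff.value drops that derivative:
ForwardDiff.derivative(b -> quadgk(f, 0.0, ForwardDiff.value(b))[1], 2.0)  # 0.0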
Hello everyone,
Apologies for the necrobump, but I seem to still be running into this error when attempting to emulate Turing.jl’s Bayesian linear regression example. What I’m specifically attempting to do is recreate this model:
@model function linear_regression(x, y)
    σ² ~ truncated(Normal(0, 100); lower=0)
    β ~ Normal(0, sqrt(3))
    nfeatures = size(x, 2)
    coefficients ~ MvNormal(Zeros(nfeatures), 10.0 * I)
    μ = β .+ x * coefficients
    return y ~ MvNormal(μ, σ² * I)
end
On this DataFrame, like so:
julia> df
3×2 DataFrame
 Row │ a      b
     │ Int64  Int64
─────┼──────────────
   1 │     2      4
   2 │     3      9
   3 │     5     25
julia> model = linear_regression(df.a, df.b) # runs fine
julia> chain = sample(model, NUTS(), 3_000) # error
ERROR: MethodError: no method matching *(::Vector{Int64}, ::Vector{Float64})
Closest candidates are:
[...]
I read through this thread, and from what I’ve understood, the original error was caused by dual numbers appearing in the integration bounds during autodiff, an issue that’s been marked as solved by the merge mentioned by @ChrisRackauckas earlier.
Because of this, I’m a little confused: should I still be getting this error? If so, should I instead try using ForwardDiff.value()? Or is there an issue in my function altogether? I’ve tried this on Julia 1.8 with QuadGK 2.8.1 and Turing 0.24; I haven’t tried it with Julia 1.9.
EDIT: Please let me know if I should post the complete stacktrace, it’s rather large which is why I’ve truncated it.
This is unrelated to this thread. Note that in Turing’s linear regression tutorial, x is a matrix, but you have passed a vector. So this line becomes problematic:
μ = β .+ x * coefficients
because, as noted by the error, you cannot multiply two column vectors.
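For example, reshaping the column into a 3×1 matrix should make the multiplication well-defined (a sketch using your df; reshape and Matrix are two equivalent ways to get a one-column matrix):

# x needs to be an (observations × features) matrix, here 3×1:
x = reshape(df.a, :, 1)            # or: Matrix(df[:, [:a]])
model = linear_regression(x, df.b)
chain = sample(model, NUTS(), 3_000)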
Thanks Sethaxen, now that you’ve pointed that out, this issue makes much more sense to me. Apologies for the disturbance; I was directed to this thread (and no other) only after searching for the exact phrasing of the error I received, so I was a little misguided.