# Can't get a user-defined function to work in my Turing model specification

I have this simple Turing model which I cannot get to work. The full MWE is given below.

``````julia
using Turing, Distributions

function myfilter(x::AbstractVector, λ)
    xnew = x
    xnew[1] = x[1]
    for i in 2:length(x)
        xnew[i] = x[i] + λ*xnew[i-1]
    end
    xnew
end

# Make a simple transformed linear regression with parameters λ = 0.5 and β = 3
x = rand(100)*1000
x[rand(1:length(x), 50)] .= 0
y = 3*myfilter(x, 0.5) + randn(100) * 100

# Specify the Turing model
@model mwe(x, y) = begin
    β ~ Normal(0, 5)
    λ ~ Beta(1, 1)
    σ ~ Normal(3, 3)
    xx = myfilter(x, λ)
    for i in 1:length(y)
        μ = β*xx[i]
        y[i] ~ Normal(μ, exp(σ))
    end
end

# Sample it
model = mwe(x, y)
nchains = 4
chain = sample(model, NUTS(0.65), MCMCThreads(), 3_000, nchains);
chain
``````

I get a long error message when trying to run this model.
However, if I define a dummy `myfilter` function like this

``````julia
function myfilter(x::AbstractVector, λ)
    xnew = x .+ λ
    xnew
end
``````

it works. So I'm wondering: why doesn't the `myfilter` function work in the original example? Is it because I'm indexing into the vector and changing its data?

The full error message is here:

``````TaskFailedException

Stacktrace:
[1] wait
[3] macro expansion
[4] macro expansion
@ ~/.julia/packages/AbstractMCMC/ByHEr/src/sample.jl:338 [inlined]
[5] (::AbstractMCMC.var"#31#41"{Bool, Base.Iterators.Pairs{Symbol, UnionAll, Tuple{Symbol}, NamedTuple{(:chain_type,), Tuple{UnionAll}}}, Int64, Int64, Vector{Any}, Vector{UInt64}, Vector{DynamicPPL.Sampler{NUTS{Turing.Core.ForwardDiffAD{40}, (), AdvancedHMC.DiagEuclideanMetric}}}, Vector{DynamicPPL.Model{var"#21#22", (:x, :y), (), (), Tuple{Vector{Float64}, Vector{Float64}}, Tuple{}}}, Vector{Random._GLOBAL_RNG}})()

nested task error: TypeError: in typeassert, expected Float64, got a value of type ForwardDiff.Dual{Nothing, Float64, 3}
Stacktrace:
[1] setindex!(A::Vector{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{Turing.Core.var"#f#3"{DynamicPPL.TypedVarInfo{NamedTuple{(:β, :λ, :σ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Tuple{}}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:β, Tuple{}}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:λ, Tuple{}}, Int64}, Vector{Beta{Float64}}, Vector{AbstractPPL.VarName{:λ, Tuple{}}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Tuple{}}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:σ, Tuple{}}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{var"#21#22", (:x, :y), (), (), Tuple{Vector{Float64}, Vector{Float64}}, Tuple{}}, DynamicPPL.Sampler{NUTS{Turing.Core.ForwardDiffAD{40}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext}, Float64}, Float64, 3}, i1::Int64)
@ Base ./array.jl:839
[2] myfilter
@ ./REPL[108]:5 [inlined]
[3] #21
@ ./REPL[114]:5 [inlined]
@ Main ./none:0
[5] macro expansion
@ ~/.julia/packages/DynamicPPL/emAbx/src/model.jl:0 [inlined]
[6] _evaluate
@ ~/.julia/packages/DynamicPPL/emAbx/src/model.jl:150 [inlined]
@ ~/.julia/packages/DynamicPPL/emAbx/src/model.jl:140 [inlined]
@ DynamicPPL ~/.julia/packages/DynamicPPL/emAbx/src/model.jl:94
[9] Model
@ ~/.julia/packages/DynamicPPL/emAbx/src/model.jl:98 [inlined]
[10] f
[11] vector_mode_dual_eval
@ ~/.julia/packages/ForwardDiff/QOqCN/src/apiutils.jl:37 [inlined]
[15] gradient_logp(::Turing.Core.ForwardDiffAD{40}, θ::Vector{Float64}, vi::DynamicPPL.TypedVarInfo{NamedTuple{(:β, :λ, :σ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Tuple{}}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:β, Tuple{}}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:λ, Tuple{}}, Int64}, Vector{Beta{Float64}}, Vector{AbstractPPL.VarName{:λ, Tuple{}}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Tuple{}}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:σ, Tuple{}}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, model::DynamicPPL.Model{var"#21#22", (:x, :y), (), (), Tuple{Vector{Float64}, Vector{Float64}}, Tuple{}}, sampler::DynamicPPL.Sampler{NUTS{Turing.Core.ForwardDiffAD{40}, (), AdvancedHMC.DiagEuclideanMetric}}, ctx::DynamicPPL.DefaultContext)
[17] ∂logπ∂θ
@ ~/.julia/packages/Turing/TbEZL/src/inference/hmc.jl:433 [inlined]
[18] ∂H∂θ
[19] phasepoint
[21] initialstep(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{var"#21#22", (:x, :y), (), (), Tuple{Vector{Float64}, Vector{Float64}}, Tuple{}}, spl::DynamicPPL.Sampler{NUTS{Turing.Core.ForwardDiffAD{40}, (), AdvancedHMC.DiagEuclideanMetric}}, vi::DynamicPPL.TypedVarInfo{NamedTuple{(:β, :λ, :σ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Tuple{}}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:β, Tuple{}}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:λ, Tuple{}}, Int64}, Vector{Beta{Float64}}, Vector{AbstractPPL.VarName{:λ, Tuple{}}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Tuple{}}, Int64}, Vector{Normal{Float64}}, Vector{AbstractPPL.VarName{:σ, Tuple{}}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}; init_params::Nothing, nadapts::Int64, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ Turing.Inference ~/.julia/packages/Turing/TbEZL/src/inference/hmc.jl:167
[22] step(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{var"#21#22", (:x, :y), (), (), Tuple{Vector{Float64}, Vector{Float64}}, Tuple{}}, spl::DynamicPPL.Sampler{NUTS{Turing.Core.ForwardDiffAD{40}, (), AdvancedHMC.DiagEuclideanMetric}}; resume_from::Nothing, kwargs::Base.Iterators.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}})
@ DynamicPPL ~/.julia/packages/DynamicPPL/emAbx/src/sampler.jl:87
[23] macro expansion
@ ~/.julia/packages/AbstractMCMC/ByHEr/src/sample.jl:123 [inlined]
[24] macro expansion
@ ~/.julia/packages/AbstractMCMC/ByHEr/src/logging.jl:15 [inlined]
[25] mcmcsample(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{var"#21#22", (:x, :y), (), (), Tuple{Vector{Float64}, Vector{Float64}}, Tuple{}}, sampler::DynamicPPL.Sampler{NUTS{Turing.Core.ForwardDiffAD{40}, (), AdvancedHMC.DiagEuclideanMetric}}, N::Int64; progress::Bool, progressname::String, callback::Nothing, discard_initial::Int64, thinning::Int64, chain_type::Type, kwargs::Base.Iterators.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}})
@ AbstractMCMC ~/.julia/packages/AbstractMCMC/ByHEr/src/sample.jl:114
[26] #sample#40
@ ~/.julia/packages/Turing/TbEZL/src/inference/hmc.jl:133 [inlined]
[27] macro expansion
@ ~/.julia/packages/AbstractMCMC/ByHEr/src/sample.jl:347 [inlined]
[28] (::AbstractMCMC.var"#927#threadsfor_fun#42"{UnitRange{Int64}, Bool, Base.Iterators.Pairs{Symbol, UnionAll, Tuple{Symbol}, NamedTuple{(:chain_type,), Tuple{UnionAll}}}, Int64, Vector{Any}, Vector{UInt64}, Vector{DynamicPPL.Sampler{NUTS{Turing.Core.ForwardDiffAD{40}, (), AdvancedHMC.DiagEuclideanMetric}}}, Vector{DynamicPPL.Model{var"#21#22", (:x, :y), (), (), Tuple{Vector{Float64}, Vector{Float64}}, Tuple{}}}, Vector{Random._GLOBAL_RNG}})(onethread::Bool)
[29] (::AbstractMCMC.var"#927#threadsfor_fun#42"{UnitRange{Int64}, Bool, Base.Iterators.Pairs{Symbol, UnionAll, Tuple{Symbol}, NamedTuple{(:chain_type,), Tuple{UnionAll}}}, Int64, Vector{Any}, Vector{UInt64}, Vector{DynamicPPL.Sampler{NUTS{Turing.Core.ForwardDiffAD{40}, (), AdvancedHMC.DiagEuclideanMetric}}}, Vector{DynamicPPL.Model{var"#21#22", (:x, :y), (), (), Tuple{Vector{Float64}, Vector{Float64}}, Tuple{}}}, Vector{Random._GLOBAL_RNG}})()
Stacktrace:
[1] sync_end(c::Channel{Any})
[2] macro expansion
[3] macro expansion
@ ~/.julia/packages/AbstractMCMC/ByHEr/src/sample.jl:316 [inlined]
[4] macro expansion
@ ~/.julia/packages/ProgressLogging/6KXlp/src/ProgressLogging.jl:328 [inlined]
[5] macro expansion
@ ~/.julia/packages/AbstractMCMC/ByHEr/src/logging.jl:8 [inlined]
[6] mcmcsample(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{var"#21#22", (:x, :y), (), (), Tuple{Vector{Float64}, Vector{Float64}}, Tuple{}}, sampler::DynamicPPL.Sampler{NUTS{Turing.Core.ForwardDiffAD{40}, (), AdvancedHMC.DiagEuclideanMetric}}, ::MCMCThreads, N::Int64, nchains::Int64; progress::Bool, progressname::String, kwargs::Base.Iterators.Pairs{Symbol, UnionAll, Tuple{Symbol}, NamedTuple{(:chain_type,), Tuple{UnionAll}}})
@ AbstractMCMC ~/.julia/packages/AbstractMCMC/ByHEr/src/sample.jl:310
[7] sample(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{var"#21#22", (:x, :y), (), (), Tuple{Vector{Float64}, Vector{Float64}}, Tuple{}}, sampler::DynamicPPL.Sampler{NUTS{Turing.Core.ForwardDiffAD{40}, (), AdvancedHMC.DiagEuclideanMetric}}, ensemble::MCMCThreads, N::Int64, n_chains::Int64; chain_type::Type, progress::Bool, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ Turing.Inference ~/.julia/packages/Turing/TbEZL/src/inference/Inference.jl:217
[8] sample
@ ~/.julia/packages/Turing/TbEZL/src/inference/Inference.jl:217 [inlined]
[9] #sample#6
@ ~/.julia/packages/Turing/TbEZL/src/inference/Inference.jl:202 [inlined]
[10] sample
@ ~/.julia/packages/Turing/TbEZL/src/inference/Inference.jl:202 [inlined]
[11] #sample#5
@ ~/.julia/packages/Turing/TbEZL/src/inference/Inference.jl:189 [inlined]
@ Turing.Inference ~/.julia/packages/Turing/TbEZL/src/inference/Inference.jl:189
``````

My guess would be that in the second version of the function, `x .+ λ` calls a Base method that adds a ForwardDiff `Dual` to each float and returns a new vector of `Dual`s. Whereas in the original function, `xnew = x` just binds another name to the existing vector of `Float64`s, so when you try to set its elements to `Dual` values you get the error.
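That guess matches Julia's assignment semantics: `xnew = x` binds a second name to the same array rather than making a copy, so mutating `xnew` mutates `x` too (a minimal demonstration, independent of Turing):

``````julia
x = [1.0, 2.0, 3.0]
xnew = x           # no copy: both names refer to the same Vector{Float64}
xnew[1] = 99.0
x[1]               # 99.0 — the "original" was mutated too
xnew === x         # true: identical objects

y = copy(x)        # an actual copy
y[1] = 0.0
x[1]               # still 99.0, unaffected by mutating y
``````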

I’m no expert, but that’s just my guess. Also, this should probably be under the probabilistic programming domain.

I think EvoArt is correct. That segment of the error message indicates some part of your code expects a `Float64`, but received a `Dual`. I suspect the problem is that `xnew` is defined in terms of `x`, which appears to be a `Vector{Float64}`. You might try something like `xnew = zeros(typeof(λ), n)`, where `n` is the desired length of `xnew`.

Thanks for the tip. I moved it to the PPL domain.

Thanks Christopher. I will try this and report back if it solves it.

No problem. I had a chance to look at your example in more detail. The following function works with your example model:

``````julia
function myfilter(x::AbstractVector, λ)
    n = length(x)
    xnew = zeros(typeof(λ), n)
    xnew[1] = x[1]
    for i in 2:n
        xnew[i] = x[i] + λ * xnew[i-1]
    end
    xnew
end
``````
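An alternative that sidesteps explicit buffer allocation entirely is to let Julia infer the output element type, e.g. via Base's `accumulate` (a sketch, not from the thread; `myfilter2` is my name, and it implements the same recurrence `xnew[i] = x[i] + λ*xnew[i-1]`):

``````julia
# The accumulator carries the running filtered value; because λ is captured in
# the closure, the inferred output eltype becomes Dual whenever λ is a Dual.
myfilter2(x::AbstractVector, λ) = accumulate((prev, xi) -> xi + λ*prev, x)
``````

For example, `myfilter2([1.0, 2.0, 4.0], 0.5)` gives `[1.0, 2.5, 5.25]`.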

Thanks a lot @Christopher_Fisher! That did indeed solve it. For the record, I can also say that removing the type annotation on `x` also solves it.

``````julia
function myfilter(x, λ)
    xnew = x
    for i = 2:length(x)
        xnew[i] = x[i] + λ * xnew[i - 1]
    end
    xnew
end
``````

Thus I set my own trap when I specified the type annotation on `x`. Thanks for the help, guys.
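As a further note, a variant (my naming, not from the thread) that keeps the preallocated buffer but works no matter which argument carries the `Dual`s is to derive the buffer's element type from both inputs with Base's `promote_type`:

``````julia
function myfilter_promoted(x::AbstractVector, λ)
    # Dual if either the data or the parameter is a Dual; Float64 otherwise
    T = promote_type(eltype(x), typeof(λ))
    xnew = zeros(T, length(x))
    xnew[1] = x[1]
    for i in 2:length(x)
        xnew[i] = x[i] + λ * xnew[i-1]
    end
    xnew
end
``````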