I am running into a problem similar to this thread, whose resolution unfortunately did not fix the issue in my case. I think I may be misusing `dualcache` in the definition of my update function, and I would be grateful for any help.

I am unable to provide an MWE, as I am not sure what exactly is the source of the bug, nor am I able to post *any* working example: the model is both >100 lines long and reliant on a plethora of other functions imported from a private* repo on GitHub. I tried creating a simpler example from scratch, but I ran into another issue with no obvious solution from my searches, so I will instead post and describe snippets of my code.

Here is an example of how I am using `dualcache` *inside* my `model_step!` function, which is given to `solve`:

```julia
# This variable will be used to check the type of the inputs, either Floats or Duals
if eltype(u) <: ForwardDiff.Dual
type_tester = u[1, 1]
elseif eltype(du) <: ForwardDiff.Dual
type_tester = du[1, 1]
elseif typeof(σ) <: ForwardDiff.Dual
type_tester = σ
else
type_tester = 0.0
end
FA = get_tmp.(FA, type_tester)
FH = get_tmp.(FH, type_tester)
FW = get_tmp.(FW, type_tester)
sum_FH = get_tmp(sum_FH, type_tester)
sum_FW = get_tmp(sum_FW, type_tester)
F_total = get_tmp.(F_total, type_tester)
```
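For reference, here is a self-contained toy version of the dispatch pattern above (illustrative names, not my real model). Since `get_tmp` picks the `Float64` buffer or a `Dual`-typed view based on its second argument, one alternative I considered to the explicit branching is promoting across every possible `Dual` carrier at once:

```julia
using PreallocationTools, ForwardDiff

cache = dualcache(zeros(2))

function rhs!(du, u, σ)
    # Promote across everything that might carry a Dual, instead of branching
    tester = zero(promote_type(eltype(du), eltype(u), typeof(σ)))
    tmp = get_tmp(cache, tester)
    @. tmp = σ * u
    @. du = -tmp   # note: this still assumes du can hold the promoted type,
                   # which is exactly where my error below occurs
    return nothing
end

du = zeros(2)
rhs!(du, [1.0, 2.0], 0.5)   # du == [-0.5, -1.0]
```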

Where the variables are defined as follows:

```julia
FA = [dualcache(@MVector([0.0, 0.0]), 10, levels = 2) for _ = 1:nagents]
FH = reshape([dualcache(@MVector([0.0, 0.0]), 10, levels = 2) for α = 1:nagents^2], nagents, nagents)
sum_FH = dualcache(@MVector([0.0, 0.0]), 10, levels = 2)
FW = reshape([dualcache(@MVector([0.0, 0.0]), 10, levels = 2) for α = 1:nagents*nshapes], nagents, nshapes)
sum_FW = dualcache(@MVector([0.0, 0.0]), 10, levels = 2)
F_total = [dualcache(@MVector([0.0, 0.0]), 10, levels = 2) for _ = 1:nagents]
```

Unfortunately, this does not suffice: simply casting everything to `ForwardDiff.Dual` does not guarantee the same `Dual` type as is used in `du` or `u` when I try to do `NUTS` sampling in Turing:

```julia
# Define the model
@model function fit_social_force(data, sf_problem)
σ1 ~ TruncatedNormal(0.1, 0.1, 0, Inf)
p = dests, τs, Δt, V0_αβ, σ1, U0_αB, R, φ, c, target_directions, target_velocities, target_speeds, FH, sum_FH, FA, F_total, FW, sum_FW, vect_of_segments, min_closest_point
prob = remake(sf_problem, p = p)
data = solve(prob, Rodas5(), saveat = 0.1)
end
# Run the model to gather data
turing_timespan = (0.0, round(float(start_x_distance_apart) / minimum(target_speeds), sigdigits = 2))
turing_sf_prob = ODEProblem(social_force!, u0, turing_timespan, p, alg_hints = [:stiff])
turing_sol = Array(solve(turing_sf_prob, Rodas5(), saveat = 0.1))
# Instantiate the PPL model
model = fit_social_force(turing_sol, turing_sf_prob)
# Error due to u and du being different types.
sample(model, NUTS(), 1)
```

The error (full stacktrace here)

```
ERROR: LoadError: TypeError: in typeassert, expected Float64, got a value of type ForwardDiff.Dual{Nothing, Float64, 1}
```

shows that this line is causing issues:

```julia
du[3, i], du[4, i] = F_total[i]
```

and upon further investigation I found that `u` and `du` can have different types, so values calculated from them, such as `F_total[i]`, trigger type-assertion errors. I don't know where to go from here. Apologies for the lack of an MWE, but I would appreciate any input on, or examples of, integrating `Turing`, `DifferentialEquations`, and `PreallocationTools`.
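To show the `u`/`du` type mismatch in isolation, here is a small probe I put together (toy RHS, not my model) that records the element-type pairs `Rodas5` passes to the RHS; my understanding is that the solver's internal ForwardDiff passes produce `Dual`-typed calls alongside the plain `Float64` ones:

```julia
using OrdinaryDiffEq

# Record every (eltype(du), eltype(u)) pair the solver calls the RHS with
const seen = Set{Tuple{Type,Type}}()

function probe!(du, u, p, t)
    push!(seen, (eltype(du), eltype(u)))
    @. du = -u
    return nothing
end

solve(ODEProblem(probe!, [1.0], (0.0, 1.0)), Rodas5())
foreach(println, seen)   # more than one pair once the autodiff passes run
```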
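And for concreteness, this is the shape of the pipeline I am trying to get working, reduced to a one-state toy (all names illustrative; this is not my model, and version differences in these packages may matter):

```julia
using Turing, OrdinaryDiffEq, PreallocationTools

# Cache with two Dual levels: one for NUTS, one for Rodas5's autodiff
const toy_cache = dualcache(zeros(1), 10, levels = 2)

function decay!(du, u, p, t)
    tmp = get_tmp(toy_cache, u)   # buffer matching eltype(u)
    @. tmp = p[1] * u
    @. du = -tmp
    return nothing
end

toy_prob = ODEProblem(decay!, [1.0], (0.0, 1.0), [0.5])
toy_data = Array(solve(toy_prob, Rodas5(), saveat = 0.1))[1, :]

@model function fit_decay(data)
    σ ~ TruncatedNormal(0.1, 0.1, 0, Inf)
    λ ~ TruncatedNormal(0.5, 0.5, 0, Inf)
    pred = solve(remake(toy_prob, p = [λ]), Rodas5(), saveat = 0.1)
    for i in eachindex(data)
        data[i] ~ Normal(pred[1, i], σ)
    end
end

chain = sample(fit_decay(toy_data), NUTS(), 10)
```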

*I have been advised to keep the repo private because plagiarism software false-flags it. I can give access to the repo on request; I just cannot post it online.