I’m not sure that calling `rand` inside a Turing model works as expected; at least when using NUTS, the model should specify a deterministic (and differentiable) log-posterior function.
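To illustrate the issue, here is a minimal toy sketch (hypothetical, not the model from the question) where a hidden `rand` call makes the log-density stochastic, so evaluating it twice at the same parameter values gives different results:

```julia
using Turing

@model function noisy_logp(y)
    μ ~ Normal(0, 1)
    t = μ + rand(Normal(0, 0.1))  # hidden randomness the sampler cannot see
    y ~ Normal(t, 1)
end

m = noisy_logp(0.5)
# `logjoint` is re-exported from DynamicPPL; the two calls below
# return different values because of the `rand` call inside the model.
logjoint(m, (μ = 0.0,))
logjoint(m, (μ = 0.0,))
```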
Instead, I would try something like this:
```julia
using Turing, DifferentialEquations, LinearAlgebra

@model function fitlv_time_uncertainty(times, prob, data)
    # Prior distributions
    σ ~ InverseGamma(2, 3)
    α ~ truncated(Normal(1.5, 0.5), 0.5, 2.5)
    β ~ truncated(Normal(1.2, 0.5), 0, 2)
    γ ~ truncated(Normal(3.0, 0.5), 1, 4)
    δ ~ truncated(Normal(1.0, 0.5), 0, 2)
    τ ~ Uniform(0, 0.3)

    # Simulate the Lotka-Volterra model on a span wide enough to cover
    # the observation times plus the time uncertainty
    p = [α, β, γ, δ]
    predicted = solve(remake(prob; p=p, tspan=(-0.5, maximum(times) + 0.5)), Tsit5())

    # Time uncertainty: sample the offsets as model parameters
    ΔT ~ MvNormal(zeros(length(times)), τ^2 * I)
    T = times .+ ΔT

    # Observations at the (uncertain) times, via the dense interpolation;
    # σ is the observation standard deviation
    data .~ Normal.(stack(predicted.(T)), σ)
end
```
The idea behind this is that

- the time uncertainty is sampled via Turing just like all other parameters.
- passing auto-diff types – which Turing will do internally for the parameters it samples – to `tspan` or `saveat` might not work as expected. Instead, the ODE is solved on a range large enough to cover the time span including some uncertainty; calling `predicted.(T)` then gives an inter-/extrapolation of the solution at the desired times `T` (including uncertainty). Further, auto-diff most likely works on the interpolation used by DifferentialEquations.jl (see the usage sketch below).
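For completeness, a hedged usage sketch: the Lotka-Volterra setup below follows the Turing.jl ODE tutorial, the synthetic `times`/`data` and the NUTS settings are placeholders, and in practice you would plug in the data from the question instead:

```julia
using Turing, DifferentialEquations

# Lotka-Volterra dynamics (as in the Turing.jl ODE tutorial)
function lotka_volterra!(du, u, p, t)
    α, β, γ, δ = p
    du[1] = (α - β * u[2]) * u[1]  # prey
    du[2] = (δ * u[1] - γ) * u[2]  # predator
end

u0 = [1.0, 1.0]
p = [1.5, 1.0, 3.0, 1.0]
prob = ODEProblem(lotka_volterra!, u0, (0.0, 10.0), p)

# The dense solution can be queried at arbitrary times via interpolation,
# which is what `predicted.(T)` relies on in the model above:
sol = solve(prob, Tsit5())
sol(0.37)  # state at t = 0.37, no `saveat` needed

# Placeholder synthetic data; replace with the actual observations
times = 0.0:0.5:10.0
data = stack(sol.(times)) .+ 0.5 .* randn(2, length(times))

model = fitlv_time_uncertainty(times, prob, data)
chain = sample(model, NUTS(), 1000)  # sampler settings are placeholders
```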