ODEs inside a Turing model

I’m trying to write an HMM using Turing. The forward model from one time step to the next is given by a system of differential equations. I wrote this up, but I’m getting an error:

MethodError: no method matching OrdinaryDiffEq.Vern9Cache(

Is this because I have to use dual numbers inside the ODE? Should I cast the initial condition to a vector of dual numbers?

Is there a minimal example of this sort of thing? None of the Turing tutorials cover working with an ODE within Turing. I could find one example here: Simple example of ODE parameters inference · Issue #810 · TuringLang/Turing.jl · GitHub of a predator-prey model whose parameters were estimated within Turing, but that case has a single ODE problem with a single initial condition. My situation requires a fresh ODE problem, with fresh initial conditions, at every discrete time step.

Any pointers to what to read, or minimal examples, would be very helpful!

@ChrisRackauckas might be able to help. I don’t think it is directly related to Turing, but I might be wrong. Could you also post an MWE?

1 Like

We should get a quick tutorial into the Turing docs, and probably show off a few things. I think @Vaibhavdixit02 was looking to do it.

@Manoj_Gopalkrishnan you need to make sure that your state types match the dualness they will need, so probably just _u0 = eltype(p).(u0) to dualify them. And FWIW, it’s probably best to use concrete_solve and Zygote differentiation for adjoints if you have large numbers of parameters.
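
For concreteness, a minimal sketch of that pattern (the right-hand side decay! and the helper solve_step are illustrative stand-ins, not from the thread):

using OrdinaryDiffEq

# Illustrative in-place right-hand side; stands in for the real forward model.
function decay!(du, u, p, t)
    du[1] = -p[1] * u[1]
end

function solve_step(u0, p)
    # Promote the initial condition to the element type of p, so that when
    # p carries dual numbers the ODE state carries them as well.
    _u0 = eltype(p).(u0)
    prob = ODEProblem(decay!, _u0, (0.0, 1.0), p)
    solve(prob, Tsit5(), save_everystep = false)
end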

2 Likes

Yes, a tutorial would be great! I only have very basic knowledge of ODEs; it would be amazing if someone with more expertise could write up a tutorial.

1 Like

Thank you Martin and Chris!

Meanwhile I wrote this code snippet based on the document I had shared and was able to get it to work in my ODE solver loop.

function test_f(u0,q)
    prob = ODEProblem(PR!,eltype(p).(u0),(0.0,1.0), q)
    concrete_solve(prob, Rodas4(); saveat=[1.0], abstol=1e-6, reltol=1e-6, alg_hints=[:nonstiff])
end

Here PR! is my function describing the vector field for the ODE, and q holds the parameters. Though this worked, one thing I’m not clear on: how does it know what p is? Is it picking up p from some global context, or does it know that p is the name of one of the fields returned by ODEProblem? Or should I be using eltype(q).(u0)?
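
For concreteness, here is what I suspect the consistent version should look like, with q used throughout instead of a global p (untested sketch):

function test_f(u0, q)
    # Use the argument q, not a global p, when promoting u0.
    prob = ODEProblem(PR!, eltype(q).(u0), (0.0, 1.0), q)
    concrete_solve(prob, Rodas4(); saveat=[1.0], abstol=1e-6, reltol=1e-6)
end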

Now when I describe my Turing model and run a chain, I get an error message:

Mutating arrays is not supported

What does this mean, how do I find the part of my code that is causing it, and how do I fix it? I am not able to give an MWE short of posting my entire Turing model, because I am not sure where the error is.

Also: if I comment out the ODE code within my Turing model, ReverseDiff works. If I include the ODE code, ReverseDiff gives the above error. Zygote gives the above error whether or not the ODE code is commented out.

So my question is: what should I be looking for to find mutating arrays? Does “mutating” mean the type of the array changes? Where could this be happening, and how do I stop it from happening?
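
For reference, a minimal illustration of what “mutating” means for these backends: in-place writes into an existing array (setindex!), not a change of the array’s type.

u = zeros(3)
u[1] = 0.5            # setindex!: in-place mutation, which Zygote rejects
v = [0.5, 0.0, 0.0]   # constructing a fresh array instead is Zygote-friendly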

Let me fix that really quickly. I forgot to add the @grad overload after @mohamed82008 added it to ReverseDiff.jl.

1 Like

So I tried to write an MWE. Here it is, really minimal. It’s a stupid way of using differential equations to do linear regression: the differential equation propagates y from one time step to the next by adding m.

using Plots,Turing, Distributions, DifferentialEquations 
using XLSX, ReverseDiff, Zygote, DualNumbers, DiffEqSensitivity, OrdinaryDiffEq
Turing.setadbackend(:reversediff)

function line!(du,u,p,t)  
    du[1] = p[1]
end

function test_f(u0,m)
    prob = ODEProblem(line!,eltype(m).(u0),(0.0,1.0), m)
    solve(prob)
end

@model function fitline(ys, ::Type{T}=Vector{Float64}) where {T}
    θ ~ Uniform(0, .9*π/2) 
    m = tan(θ)  
    c ~ Normal(0,100)
    σ ~ truncated(Normal(0,100),0,Inf)
    τ = length(ys)
    
    u0 = Array{Float64}(undef, τ+1)
    u0[1] = c
    for i = 1:τ
       u0[i+1] = test_f([u0[i]],[m])
       ys[i] ~ Normal(u0[i+1], σ)
    end
end;

ys = [3.0 + rand(Normal(i*5.0, 0.1)) for i in 1:20]
model = fitline(ys)

chain = sample(model, NUTS(.65),30)

and the error is

MethodError: Cannot `convert` an object of type OrdinaryDiffEq.ODECompositeSolution{Float64,2,Array{Array{Float64,1},1},Nothing,Nothing,Array{Float64,1},Array{Array{Array{Float64,1},1},1},ODEProblem{Array{Float64,1},Tuple{Float64,Float64},true,Array{Float64,1},ODEFunction{true,typeof(line!),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},CompositeAlgorithm{Tuple{Tsit5,Rosenbrock23{0,false,DefaultLinSolve,DataType}},AutoSwitch{Tsit5,Rosenbrock23{0,false,DefaultLinSolve,DataType},Rational{Int64},Int64}},OrdinaryDiffEq.CompositeInterpolationData{ODEFunction{true,typeof(line!),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float64,1},1},Array{Float64,1},Array{Array{Array{Float64,1},1},1},OrdinaryDiffEq.CompositeCache{Tuple{OrdinaryDiffEq.Tsit5Cache{Array{Float64,1},Array{Float64,1},Array{Float64,1},OrdinaryDiffEq.Tsit5ConstantCache{Float64,Float64}},OrdinaryDiffEq.Rosenbrock23Cache{Array{Float64,1},Array{Float64,1},Array{Float64,1},Array{Float64,2},Array{Float64,2},OrdinaryDiffEq.Rosenbrock23Tableau{Float64},DiffEqBase.TimeGradientWrapper{ODEFunction{true,typeof(line!),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Float64,1},Array{Float64,1}},DiffEqBase.UJacobianWrapper{ODEFunction{true,typeof(line!),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Float64,Array{Float64,1}},DefaultLinSolve,FiniteDiff.JacobianCache{Array{Float64,1},Array{Float64,1},Array{Float64,1},UnitRange{Int64},Nothing,Val{:forward},Float64},FiniteDiff.GradientCache{Nothing,Array{Float64,1},Array{Float64,1},Val{:forward},Float64,Val{true}}}},AutoSwitch{Tsit5,Rosenbrock23{0,false,DefaultLinSolve,DataType},Rational{Int64},Int64}}},DiffEqBase.DEStats} to an object of type Float64
Closest candidates are:
  convert(::Type{T}, !Matched::Unitful.Quantity) where T<:Real at C:\Users\Manoj\.julia\packages\Unitful\9XsxK\src\conversion.jl:145
  convert(::Type{T}, !Matched::Unitful.Level) where T<:Real at C:\Users\Manoj\.julia\packages\Unitful\9XsxK\src\logarithm.jl:22
  convert(::Type{T}, !Matched::Unitful.Gain) where T<:Real at C:\Users\Manoj\.julia\packages\Unitful\9XsxK\src\logarithm.jl:62
  ...

Stacktrace:
 [1] setindex!(::Array{Float64,1}, ::OrdinaryDiffEq.ODECompositeSolution{Float64,2,Array{Array{Float64,1},1},Nothing,Nothing,Array{Float64,1},Array{Array{Array{Float64,1},1},1},ODEProblem{Array{Float64,1},Tuple{Float64,Float64},true,Array{Float64,1},ODEFunction{true,typeof(line!),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}},DiffEqBase.StandardODEProblem},CompositeAlgorithm{Tuple{Tsit5,Rosenbrock23{0,false,DefaultLinSolve,DataType}},AutoSwitch{Tsit5,Rosenbrock23{0,false,DefaultLinSolve,DataType},Rational{Int64},Int64}},OrdinaryDiffEq.CompositeInterpolationData{ODEFunction{true,typeof(line!),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Array{Float64,1},1},Array{Float64,1},Array{Array{Array{Float64,1},1},1},OrdinaryDiffEq.CompositeCache{Tuple{OrdinaryDiffEq.Tsit5Cache{Array{Float64,1},Array{Float64,1},Array{Float64,1},OrdinaryDiffEq.Tsit5ConstantCache{Float64,Float64}},OrdinaryDiffEq.Rosenbrock23Cache{Array{Float64,1},Array{Float64,1},Array{Float64,1},Array{Float64,2},Array{Float64,2},OrdinaryDiffEq.Rosenbrock23Tableau{Float64},DiffEqBase.TimeGradientWrapper{ODEFunction{true,typeof(line!),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Array{Float64,1},Array{Float64,1}},DiffEqBase.UJacobianWrapper{ODEFunction{true,typeof(line!),LinearAlgebra.UniformScaling{Bool},Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing,Nothing},Float64,Array{Float64,1}},DefaultLinSolve,FiniteDiff.JacobianCache{Array{Float64,1},Array{Float64,1},Array{Float64,1},UnitRange{Int64},Nothing,Val{:forward},Float64},FiniteDiff.GradientCache{Nothing,Array{Float64,1},Array{Float64,1},Val{:forward},Float64,Val{true}}}},AutoSwitch{Tsit5,Rosenbrock23{0,false,DefaultLinSolve,DataType},Rational{Int64},Int64}}},DiffEqBase.DEStats}, ::Int64) at .\array.jl:826
 [2] macro expansion at .\In[34]:24 [inlined]
 [3] ##evaluator#898(::DynamicPPL.Model{var"###evaluator#898",(:ys, :T),Tuple{Array{Float64,1},Type{Array{Float64,1}}},(),DynamicPPL.ModelGen{var"###generator#899",(:ys, :T),(:T,),Tuple{Type{Array{Float64,1}}}}}, ::DynamicPPL.VarInfo{DynamicPPL.Metadata{Dict{DynamicPPL.VarName,Int64},Array{Distribution,1},Array{DynamicPPL.VarName,1},Array{Real,1},Array{Set{DynamicPPL.Selector},1}},Float64}, ::DynamicPPL.SampleFromPrior, ::DynamicPPL.DefaultContext) at C:\Users\Manoj\.julia\packages\DynamicPPL\h0R38\src\compiler.jl:355
 [4] evaluate_singlethreaded at C:\Users\Manoj\.julia\packages\DynamicPPL\h0R38\src\model.jl:147 [inlined]
 [5] Model at C:\Users\Manoj\.julia\packages\DynamicPPL\h0R38\src\model.jl:136 [inlined]
 [6] VarInfo at C:\Users\Manoj\.julia\packages\DynamicPPL\h0R38\src\varinfo.jl:110 [inlined]
 [7] VarInfo at C:\Users\Manoj\.julia\packages\DynamicPPL\h0R38\src\varinfo.jl:109 [inlined]
 [8] DynamicPPL.Sampler(::NUTS{Turing.Core.ReverseDiffAD{false},(),AdvancedHMC.DiagEuclideanMetric}, ::DynamicPPL.Model{var"###evaluator#898",(:ys, :T),Tuple{Array{Float64,1},Type{Array{Float64,1}}},(),DynamicPPL.ModelGen{var"###generator#899",(:ys, :T),(:T,),Tuple{Type{Array{Float64,1}}}}}, ::DynamicPPL.Selector) at C:\Users\Manoj\.julia\packages\Turing\d4vqQ\src\inference\hmc.jl:378
 [9] Sampler at C:\Users\Manoj\.julia\packages\Turing\d4vqQ\src\inference\hmc.jl:376 [inlined]
 [10] sample(::Random._GLOBAL_RNG, ::DynamicPPL.Model{var"###evaluator#898",(:ys, :T),Tuple{Array{Float64,1},Type{Array{Float64,1}}},(),DynamicPPL.ModelGen{var"###generator#899",(:ys, :T),(:T,),Tuple{Type{Array{Float64,1}}}}}, ::NUTS{Turing.Core.ReverseDiffAD{false},(),AdvancedHMC.DiagEuclideanMetric}, ::Int64; kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at C:\Users\Manoj\.julia\packages\Turing\d4vqQ\src\inference\Inference.jl:161
 [11] sample at C:\Users\Manoj\.julia\packages\Turing\d4vqQ\src\inference\Inference.jl:161 [inlined]
 [12] #sample#1 at C:\Users\Manoj\.julia\packages\Turing\d4vqQ\src\inference\Inference.jl:151 [inlined]
 [13] sample(::DynamicPPL.Model{var"###evaluator#898",(:ys, :T),Tuple{Array{Float64,1},Type{Array{Float64,1}}},(),DynamicPPL.ModelGen{var"###generator#899",(:ys, :T),(:T,),Tuple{Type{Array{Float64,1}}}}}, ::NUTS{Turing.Core.ReverseDiffAD{false},(),AdvancedHMC.DiagEuclideanMetric}, ::Int64) at C:\Users\Manoj\.julia\packages\Turing\d4vqQ\src\inference\Inference.jl:151
 [14] top-level scope at In[34]:31

I think the problem is inside the function test_f, where I am trying to cast u0 to the type of m, which is Float64 here. I don’t know how to fix this. Please help!

No, I think the issue is that u0 is Array{Float64}. @mohamed82008 do you know how to do this correctly?

– EDIT –
It might work if you use this magical definition, but I can’t try it out at the moment.

@model function fitline(ys, ::Type{T}=Vector{Float64}, ::Type{V}=Vector{Float64}) where {T,V}
    θ ~ Uniform(0, .9*π/2) 
    m = tan(θ)  
    c ~ Normal(0,100)
    σ ~ truncated(Normal(0,100),0,Inf)
    τ = length(ys)
    
    u0 = V(undef, τ+1)
    u0[1] = c
    ...
1 Like

The problem seems to be that solve returns an instance of OrdinaryDiffEq.ODECompositeSolution, but @Manoj_Gopalkrishnan is trying to store it in a vector of Float64.
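
Concretely, in the MWE’s loop the assignment would need to extract a scalar from the returned solution rather than store the solution object itself, along these lines (sketch):

sol = test_f([u0[i]], [m])   # returns a solution object, not a number
u0[i+1] = sol.u[end][1]      # store the final scalar state instead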

2 Likes

Tried it out. Got this error:

syntax: function argument names not unique

Stacktrace:
 [1] top-level scope at C:\Users\Manoj\.julia\packages\IJulia\DrVMH\src\kernel.jl:52

@Manoj_Gopalkrishnan I suggest you first write a function that samples parameters from the prior distributions and generates fake data using rand, then move on to making a Turing model out of it. The errors above don’t seem Turing-specific; they are more basic Julia errors that can be fixed in the fake-data-generating function.
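
A sketch of that workflow for the model above (fake_data is a hypothetical helper that samples the priors and pushes them through the same forward map):

using Distributions

# Hypothetical prior-predictive generator mirroring the fitline model above.
function fake_data(τ = 20)
    θ = rand(Uniform(0, 0.9 * π / 2))
    m = tan(θ)
    c = rand(Normal(0, 100))
    σ = rand(truncated(Normal(0, 100), 0, Inf))
    ys = Vector{Float64}(undef, τ)
    u0 = c
    for i in 1:τ
        u0 = test_f([u0], [m]).u[end][1]   # same forward map as the model
        ys[i] = rand(Normal(u0, σ))
    end
    return ys
end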

1 Like

Thank you @mohamed82008. I needed to write

u0 = test_f([u0],[m]).u[end][1]

In particular, I didn’t need u0 to be an array. With this I am able to generate data, as you suggested. This is also a cool debugging technique for Turing models! I feel I learnt something.

BUT when I try to run the Turing model, the chain starts but doesn’t terminate even after a long time. Given how small this model is, and how few samples I’m trying to generate, clearly I am doing something wrong. Here’s my code as it now stands.

using Turing, Distributions, DifferentialEquations 
using ReverseDiff, Zygote, DualNumbers, DiffEqSensitivity, OrdinaryDiffEq
Turing.setadbackend(:reversediff)

function line!(du,u,p,t)  
    du[1] = p[1]
end

function test_f(u0,m)
    prob = ODEProblem(line!,eltype(m).(u0),(0.0,1.0), m)
    solve(prob,saveat=[1.0], alg_hints=[:nonstiff])
end

@model function fitline(ys, ::Type{T}=Vector{Float64}) where {T}
    θ ~ Uniform(0, .9*π/2.0) 
    m = tan(θ)  
    c ~ Normal(0,100.0)
    σ ~ truncated(Normal(0,100.0),0,Inf)
    τ = length(ys)
    
    u0 = Array{Float64}(undef, τ+1)
    u0[1] = c
    for i = 1:τ
        u0[i+1] = test_f([u0[i]],[m]).u[end][1]
        ys[i] ~ Normal(u0[i+1], σ)
    end
end;

ys = [3.0 + rand(Normal(i*5.0, 0.1)) for i in 1:20]
model = fitline(ys)

chain = sample(model, NUTS(.65),30)

So after a longish while it did finish, and gave reasonable output:

Object of type Chains, with data of type 15×15×1 Array{Float64,3}

Iterations        = 1:15
Thinning interval = 1
Chains            = 1
Samples per chain = 15
internals         = acceptance_rate, hamiltonian_energy, hamiltonian_energy_error, is_accept, log_density, lp, max_hamiltonian_energy_error, n_steps, nom_step_size, numerical_error, step_size, tree_depth
parameters        = c, θ, σ

2-element Array{ChainDataFrame,1}

Summary Statistics
  parameters     mean      std  naive_se     mcse     ess   r_hat
  ──────────  ───────  ───────  ────────  ───────  ──────  ──────
           c   0.6625   0.3308    0.0854  missing  2.8000  2.5974
           θ   1.3484   0.0399    0.0103  missing  2.8000  2.5303
           σ  11.5305  11.5763    2.9890  missing  2.8000  2.0529

Quantiles
  parameters    2.5%   25.0%   50.0%    75.0%    97.5%
  ──────────  ──────  ──────  ──────  ───────  ───────
           c  0.1994  0.3262  0.6950   0.9553   1.0723
           θ  1.2897  1.3043  1.3734   1.3818   1.3825
           σ  2.4997  2.8359  3.8546  19.2765  31.9908

Given how few samples I took, θ looks like a good fit for the slope (tan θ is close to the actual slope), but σ and c are a bit off, which is not surprising, actually.

I’m still wondering: why is this so slow? And how can I speed it up? Any tips?

Which AD backend are you using?

With the next release of DiffEqBase (coming tonight), the other reverse modes will work on concrete_solve, so this is valid code:

using Turing, Distributions, DifferentialEquations
using ReverseDiff, Zygote, DualNumbers, DiffEqSensitivity, OrdinaryDiffEq
Turing.setadbackend(:reversediff)

function line!(du,u,p,t)
    du[1] = p[1]
end

function test_f(u0,m)
    prob = ODEProblem(line!,u0,(0.0,1.0), m)
    concrete_solve(prob, nothing, save_everystep=false, alg_hints=[:nonstiff])[end][1] # index out the final scalar state
end

@model function fitline(ys, ::Type{T}=Vector{Float64}) where {T}
    θ ~ Uniform(0, .9*π/2.0)
    m = tan(θ)
    c ~ Normal(0,100.0)
    σ ~ truncated(Normal(0,100.0),0,Inf)
    τ = length(ys)

    u0 = c
    for i = 1:τ
        u0 = test_f([u0],m)
        ys[i] ~ Normal(u0, σ)
    end
end;

ys = [3.0 + rand(Normal(i*5.0, 0.1)) for i in 1:20]
model = fitline(ys)

chain = sample(model, NUTS(.65),30)

However, you only want to use reverse mode AD when there are a lot of parameters. For something like <100 parameters, it’s probably better to do the following:

Turing.setadbackend(:forwarddiff)

function line!(du,u,p,t)
    du[1] = p[1]
end

function test_f(u0,m)
    prob = ODEProblem(line!,eltype(m).(u0),(0.0,1.0), m)
    concrete_solve(prob, nothing, save_everystep=false, alg_hints=[:nonstiff])[end][1]
end

@model function fitline(ys, ::Type{T}=Vector{Float64}) where {T}
    θ ~ Uniform(0, .9*π/2.0)
    m = tan(θ)
    c ~ Normal(0,100.0)
    σ ~ truncated(Normal(0,100.0),0,Inf)
    τ = length(ys)

    u0 = c
    for i = 1:τ
        u0 = test_f([u0],m)
        ys[i] ~ Normal(u0, σ)
    end
end;

ys = [3.0 + rand(Normal(i*5.0, 0.1)) for i in 1:20]
model = fitline(ys)
chain = sample(model, NUTS(.65),30)

Still, this is not optimized all the way, because we can specialize on the fact that you’re solving a scalar ODE: you can use an out-of-place ODE function with a scalar initial condition, like:

Turing.setadbackend(:forwarddiff)

function line_oop(u,p,t)
    p[1]
end

function test_f(u0,m)
    prob = ODEProblem(line_oop,eltype(m).(u0),(0.0,1.0), m)
    concrete_solve(prob, nothing, save_everystep=false, alg_hints=[:nonstiff])[end]
end

@model function fitline(ys, ::Type{T}=Vector{Float64}) where {T}
    θ ~ Uniform(0, .9*π/2.0)
    m = tan(θ)
    c ~ Normal(0,100.0)
    σ ~ truncated(Normal(0,100.0),0,Inf)
    τ = length(ys)

    u0 = c
    for i = 1:τ
        u0 = test_f(u0,m)
        ys[i] ~ Normal(u0, σ)
    end
end;

ys = [3.0 + rand(Normal(i*5.0, 0.1)) for i in 1:20]
model = fitline(ys)
chain = sample(model, NUTS(.65),30)

So those examples should set you on the right track.

2 Likes

I was using ReverseDiff; I’m going to try the others as well…

Using :forwarddiff, it’s now lightning fast! I still couldn’t get concrete_solve to work with any of the solvers; maybe your update is not pushed yet.

Had to modify one line in your code:
from

concrete_solve(prob, nothing, save_everystep=false, alg_hints=[:nonstiff])[end]

to

solve(prob, nothing, save_everystep=false, alg_hints=[:nonstiff]).u[end][1]

After just 500 samples, for m = 5, c = 33, and σ = 0.1, we get

Object of type Chains, with data of type 250×15×1 Array{Float64,3}

Iterations        = 1:250
Thinning interval = 1
Chains            = 1
Samples per chain = 250
internals         = acceptance_rate, hamiltonian_energy, hamiltonian_energy_error, is_accept, log_density, lp, max_hamiltonian_energy_error, n_steps, nom_step_size, numerical_error, step_size, tree_depth
parameters        = c, θ, σ

2-element Array{ChainDataFrame,1}

Summary Statistics
  parameters     mean     std  naive_se    mcse       ess   r_hat
  ──────────  ───────  ──────  ────────  ──────  ────────  ──────
           c  33.0087  0.0396    0.0025  0.0014  173.6316  0.9968
           θ   1.3732  0.0001    0.0000  0.0000  146.2767  1.0001
           σ   0.0999  0.0173    0.0011  0.0027  206.8505  1.0022

Quantiles
  parameters     2.5%    25.0%    50.0%    75.0%    97.5%
  ──────────  ───────  ───────  ───────  ───────  ───────
           c  32.9347  32.9827  33.0057  33.0371  33.0864
           θ   1.3730   1.3732   1.3733   1.3733   1.3735
           σ   0.0732   0.0872   0.0977   0.1091   0.1391

which is bang on the dot! And took a twinkle to run!

1 Like

DiffEqBase v6.35. But for ForwardDiff you don’t need concrete_solve.
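
For illustration, that means a plain solve call suffices under the :forwarddiff backend, since the dual numbers flow directly through the integrator’s arithmetic (Tsit5 chosen arbitrarily here):

Turing.setadbackend(:forwarddiff)

function test_f(u0, m)
    prob = ODEProblem(line!, eltype(m).(u0), (0.0, 1.0), m)
    # Plain solve is enough under ForwardDiff: no adjoint machinery needed,
    # the duals just propagate through the solver's arithmetic.
    solve(prob, Tsit5(), save_everystep = false).u[end][1]
end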

1 Like

There will be a tutorial in the Turing docs soon enough covering this: Add Bayesian differential equations tutorial by ChrisRackauckas · Pull Request #68 · TuringLang/TuringTutorials · GitHub

3 Likes