Implementation of NUTS | Translating Stan model to Julia | JuliaConnectoR

Hello,

I am a new Julia user, moving over from Stan, and I am currently working on Bayesian epidemiological modeling. I have worked through the @Storopoli tutorial and successfully implemented posterior sampling with NUTS(), following the code on the page and replicating the posterior output. I have a couple of questions.

Question 1

I wanted to experiment with the arguments of NUTS() described in the Turing.jl dev docs, namely n_adapts (the number of samples to use for adaptation), the target acceptance rate for dual averaging, and max_depth (the maximum doubling tree depth), so as to replicate the analysis I have performed in Stan.

I change

chain_sir = sample(model_sir, NUTS(), 2_000)

to

chain_sir = sample(model_sir,  NUTS(600, .70, 19), 700) 

But this throws the error:

Stacktrace:
 [1] indexed_iterate(I::Int64, i::Int64, state::Nothing)
   @ Base .\tuple.jl:98
 [2] merge(a::NamedTuple{(), Tuple{}}, itr::Tuple{Int64, Float64, Int64})
   @ Base .\namedtuple.jl:292
 [3] (NUTS{Turing.Core.ForwardDiffAD{40}})(::Int64, ::Vararg{Any})
   @ Turing.Inference C:\Users\lbour\.julia\packages\Turing\uAz5c\src\inference\hmc.jl:460
 [4] NUTS(::Int64, ::Vararg{Any}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ Turing.Inference C:\Users\lbour\.julia\packages\Turing\uAz5c\src\inference\hmc.jl:414
 [5] NUTS(::Int64, ::Vararg{Any})
   @ Turing.Inference C:\Users\lbour\.julia\packages\Turing\uAz5c\src\inference\hmc.jl:414
 [6] top-level scope
   @ REPL[24]:1

I also tried

chain_sir = sample(model_sir,  NUTS(n_adapts = 600, target_accept_ratio = 0.7, max_depth=10),700)

which produces the error

Closest candidates are:
  (NUTS{AD})(::Float64; max_depth, Δ_max, init_ϵ, metricT) where AD at C:\Users\lbour\.julia\packages\Turing\uAz5c\src\inference\hmc.jl:449 got unsupported keyword arguments "n_adapts", "target_accept_ratio"
  (NUTS{AD})(::Int64, ::Float64, ::Symbol...; max_depth, Δ_max, init_ϵ, metricT) where AD at C:\Users\lbour\.julia\packages\Turing\uAz5c\src\inference\hmc.jl:437 got unsupported keyword arguments "n_adapts", "target_accept_ratio"
  (NUTS{AD})(::Int64, ::Float64, ::Tuple{}; kwargs...) where AD at C:\Users\lbour\.julia\packages\Turing\uAz5c\src\inference\hmc.jl:428

What would be the correct syntax? An online search did not help.

Question 2

I am currently implementing a more complex Bayesian compartmental model (I have previously asked a related question here), and I have translated my Stan code to Julia. I aim to run my analysis from R, so I am using the JuliaConnectoR R library.

I define all functions and the model in a Julia module, then import the module in R with JuliaConnectoR. Test data, the Julia module, and reproducible R code are available in this repo.

2.A. The following attempt to run NUTS results in the error below:

nuts_fit_1 <- Turing$sample(model_sir, Turing$NUTS(0.7), 700)

Error: Evaluation in Julia failed.
Original Julia error message:
MethodError: no method matching sample(::DynamicPPL.Model{Main.Test_module.var"#1#2", (:y_deaths, :n_obs, :n_pop, :y_init, :ts, :left_t, :right_t, :eta0_sd, :eta1_sd, :gamma_shape, :gamma_scale, :sigmaBM_sd, :reciprocal_phi_scale, :sigmaBM_cp1, :sigmaBM_cp2, :I_D_rev), (), (), Tuple{Vector{Float64}, Int64, Float64, Vector{Float64}, Vector{Int64}, Vector{Int64}, Vector{Int64}, Int64, Int64, Int64, Int64, Int64, Int64, Int64, Int64, Vector{Float64}}, Tuple{}}, ::Turing.Inference.NUTS{Turing.Core.ForwardDiffAD{40}, (), AdvancedHMC.DiagEuclideanMetric}, ::Float64)
Closest candidates are:
  sample(::AbstractMCMC.AbstractModel, !Matched::AbstractMCMC.AbstractSampler, ::Any; kwargs...) at C:\Users\lbour\.julia\packages\AbstractMCMC\oou1a\src\sample.jl:15
  sample(::AbstractMCMC.AbstractModel, ::Turing.Inference.InferenceAlgorithm, !Matched::AbstractMCMC.AbstractMCMCParallel, !Matched::Integer, !Matched::Integer; kwargs...) at C:

2.B. Attempting to change the sampler arguments results in:

nuts_fit_1 <- Turing$sample(model_sir, Turing$NUTS(600,0.7,10), 700)

Error: Evaluation in Julia failed.
Original Julia error message:
BoundsError: attempt to access Float64 at index [2]
Stacktrace:
  [1] indexed_iterate(I::Float64, i::Int64, state::Nothing)
    @ Base .\tuple.jl:98
  [2] merge(a::NamedTuple{(), Tuple{}}, itr::Tuple{Float64, Float64, Float64})
    @ Base .\namedtuple.jl:292
  [3] (Turing.Inference.NUTS{Turing.Core.ForwardDiffAD{40}})(::Float64, ::Vararg{Float64})
    @ Turing.Inference C:\Users\lbour\.julia\packages\Turing\uAz5c\src\inference\hmc.jl:460
  [4] Turing.Inference.NUTS(::Float64, ::Vararg{Float64}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Turing.Inference C:\Users\lbour\.julia\packages\Turing\uAz5c\src\inference\hmc.jl:414
  [5] Turing.Inference.NUTS(::Float64, ::Vararg{Float64})
    @ Turing.Inference C:\Users\lbour\.julia\packages\Turing\uAz5c\src\inference\hmc.jl:414
  [6] invokelatest(::Any, ::Any, ::Vararg{Any}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})

Trying to debug 2.A., I suspect that the issue arises from my @model, but I cannot make sense of the error output in order to fix it. Point 2.B. is more closely related to Question 1.

Thanks.

Versions:

Windows 10, laptop machine
Julia 1.7.0
Turing v0.15.1
DifferentialEquations v6.18.0
LazyArrays v0.21.20
R 4.1.0
JuliaConnectoR 1.1.0
RStudio 1.4.11

For Question 1

According to https://turing.ml/dev/docs/library/#Turing.Inference.NUTS, NUTS takes two positional arguments (n_adapts and the target acceptance rate). Notice the semicolon after the first two arguments in the signature: everything after it, such as max_depth, must be passed as a keyword argument. What you want is

chain_sir = sample(model_sir, NUTS(600, 0.70; max_depth = 19), 700)
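
For reference, the other keywords visible in the method signatures quoted in your error (Δ_max, init_ϵ, metricT) go after the semicolon in the same way; the values below are purely illustrative, not recommendations:

chain_sir = sample(model_sir, NUTS(600, 0.70; max_depth = 19, init_ϵ = 0.1), 700)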

2.A:

At least one problem could be that number literals in R are floating point by default, and thus your literal 700 is passed to Julia as a Float64. Consequently, Julia complains that no method of Turing.sample exists that takes a Float64 as its third argument.
So you could try wrapping all your integer literals with as.integer in R, e.g.:

nuts_fit_1 <- Turing$sample(model_sir, Turing$NUTS(0.7), as.integer(700))

This is just a guess from the error message; I don't have much experience with R, nor have I ever used JuliaConnectoR.


Thanks @trahflow. This seems to do the job.

Looping in @stefan-m-lenz for the JuliaConnectoR specifics. The semicolon in the following command is not accepted:

nuts_fit_1 <- Turing$sample(model_sir,
                            Turing$NUTS(as.integer(600), 0.7; max_depth = as.integer(19)),
                            as.integer(700),
                            progress = TRUE)

This is still R syntax (which JuliaConnectoR will translate into a Julia call for you, I guess), so I think you want a comma here instead of a semicolon; see e.g. https://github.com/stefan-m-lenz/JuliaConnectoR/blob/master/tests/testthat/test.R#L967-L975


Thanks for the pointer. as.integer(600) is equivalent to 600L, which is also accepted (and improves readability).
On a first try, replacing the semicolon with a comma appears to work. I am still receiving error messages related to object types in my ODE function; once those are sorted out, I will re-run. For now, the following appears to be the correct syntax:

nuts_fit_1 <- Turing$sample(model_sir,
                            Turing$NUTS(600L, 0.7, max_depth = 19L),
                            700L,
                            progress = TRUE)

Thanks @trahflow, you are exactly right!


BTW, if the ODE code is in R you may want to JIT compile it using the following setup:

https://github.com/SciML/diffeqr#performance-enhancements


Thanks @ChrisRackauckas. The functions and the model code are written in Julia and live in a .jl source file. With the help of JuliaConnectoR, I load the source code and run the Turing model from R.

I am almost finished translating my Stan code to Julia, also following the QuantEcon lecture Modeling COVID 19 with Differential Equations. I am not opening a new topic for the following question, as it is (somewhat) related to the migration to Julia.

I receive the following error, related to ForwardDiff.Dual and the SIR ODE solver. Looking at this topic, I understand that I am mixing Vector{Float64} with dual-number types, e.g. the SIR function returns the derivatives in a vector dy_dt = Vector{Float64}(undef, 4). There are some related topics like this one, but no luck so far.
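
My understanding of the suggested pattern is to allocate the derivative vector generically instead of hard-coding Float64. A stripped-down sketch of what I mean (simplified, not my actual model):

# Simplified sketch: allocating with the promoted element type of the state
# and parameters lets the function also accept ForwardDiff.Dual numbers.
function sir_ode(u, p, t)
    S, I, R, C = u
    beta, gamma, n_pop = p
    dy_dt = Vector{promote_type(eltype(u), eltype(p))}(undef, 4)  # not Vector{Float64}
    dy_dt[1] = -beta * S * I / n_pop             # susceptible
    dy_dt[2] = beta * S * I / n_pop - gamma * I  # infectious
    dy_dt[3] = gamma * I                         # recovered
    dy_dt[4] = beta * S * I / n_pop              # cumulative incidence
    return dy_dt
end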

A working example with the Julia source code is located in this github repo.

MethodError: no method matching nextfloat(::Int64)
Closest candidates are:
  nextfloat(!Matched::Union{Float16, Float32, Float64}, !Matched::Integer) at C:\Users\lbour\AppData\Local\Programs\Julia-1.7.0\share\julia\base\float.jl:709
  nextfloat(!Matched::ForwardDiff.Dual{T, V, N}) where {T, V, N} at C:\Users\lbour\.julia\packages\ForwardDiff\CkdHU\src\dual.jl:265
  nextfloat(!Matched::Tracker.TrackedReal) at C:\Users\lbour\.julia\packages\DistributionsAD\WaBSG\src\tracker.jl:10
  ...
Stacktrace:
  [1] ode_determine_initdt(u0::Vector{Float64}, t::Int64, tdir::Int64, dtmax::Int64, abstol::Float64, reltol::Float64, internalnorm::typeof(DiffEqBase.ODE_DEFAULT_NORM), prob::SciMLBase.ODEProblem{Vector{Float64}, Tuple{Int64, Int64}, false, Vector{Any}, SciMLBase.ODEFunction{false, typeof(Main.Single_type_sir.sir_ode), LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Noth

Many thanks for any help you may provide.

Your time is an integer. Make it a Float64.
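
For example (a sketch reusing names from the snippets above; the key change is making all time values Float64):

# The ODEProblem in the stack trace was built with a Tuple{Int64, Int64} time
# span, which is what trips nextfloat(::Int64) in the initial-dt heuristic.
ts    = Float64.(ts)        # e.g. [1, 2, ...] -> [1.0, 2.0, ...]
tspan = (0.0, last(ts))     # a Float64 tuple instead of (Int, Int)
prob  = ODEProblem(sir_ode, y_init, tspan, p)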

The programmer’s way of saying carpe diem.


I’m also confused about why that isn’t just automatically converted to a float, because that conversion would happen in Julia, so it is some oddity with the R wrapping.

I am beginning to worry that these are artifacts of the R wrapping. I noticed the following:

In my @model block, here is an example of my priors:

  eta0 ~ TruncatedNormal(0, eta0_sd, 0, 10)

  gamma ~ truncated(Gamma(gamma_shape, gamma_scale), 0, 10)

  sigmaBM1 ~ TruncatedNormal(0, sigmaBM_sd, 0, 10)
  sigmaBM2 ~ TruncatedNormal(0, sigmaBM_sd, 0, 10)
  sigmaBM3 ~ TruncatedNormal(0, sigmaBM_sd, 0, 10)

  reciprocal_phiD ~ truncated(Cauchy(0, reciprocal_phi_scale), 0, 10)

where the hyperparameters eta0_sd, gamma_shape, gamma_scale, sigmaBM_sd, reciprocal_phi_scale are integers, defined by the user.

From the Distributions.jl documentation, it looks like the syntax is correct. However, I receive the error

Original Julia error message:
TypeError: in typeassert, expected Float64, got a value of type ForwardDiff.Dual{Nothing, Float64, 10}
Stacktrace:
  [1] setindex!(A::Vector{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{Turing.Core.var"#f#7"{DynamicPPL.TypedVarInfo{NamedTuple{(:eta0, :gamma, :sigmaBM1, :sigmaBM2, :sigmaBM3, :reciprocal_phiD, :eta), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:eta0, Tuple{}}, Int64}, Vector{Distributions.Truncated{Distributions.Normal{Float64}, Distributions.Continuous, Float64}}, Vector{AbstractPPL.VarName{:eta0, Tuple{}}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:gamma, Tuple{}}, Int64}, Vector{Distributions.Truncated{Distributions.Gamma{Float64}, Distributions.Continuous, Float64}}, Vector{AbstractPPL.VarName{:gamma, Tuple{}}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:sigmaBM1, Tuple{}}, Int64}, Vect

which I understand is related to the first instance of truncated(), i.e. gamma ~ truncated(Gamma(gamma_shape, gamma_scale), 0, 10).

I have attempted to implement the analysis with the R package JuliaCall.

In my attempt to perform a test run of NUTS with

test <- JuliaCall::julia_eval("chain_sir = sample(model_sir, NUTS(5, 0.7; max_depth =  19), 5)")

I receive the same error regarding ForwardDiff.Dual. Note that I am loading Distributions.jl in my preamble.

Thanks.

Hi there,

For the last few days I’ve been trying to get my head around this. In the end, I’d like to solve an ODE within the Turing @model macro that uses user-defined variables (integers, matrices, etc.) without having to pack them into the p argument. Some of my confusion comes from mixing knowledge from R and Stan; I have not fully grasped the Julia workflow yet.

Among the user-defined macro arguments: n_obs is the number of observations, n_pop is the population size, and left_t and right_t are vectors of length n_obs (time indexes).

Attempt 1: I tried assigning an array to p and unpacking it inside the sir_ode() function, but I ran into errors regarding Float64 in the ODEProblem, e.g.:

p = Any[] 
push!(p, n_obs)
push!(p, n_pop)
push!(p, n_difeq)
push!(p, gamma)
push!(p, beta0)
push!(p, beta_N)
push!(p, left_t)
push!(p, right_t)

Attempt 2:

function sir_ode!(du, u, p, t)
    gamma  = p[1]
    beta0  = p[2]
    beta_N = p[3:(3 + n_obs_con - 1)]
    beta   = 0.1

    # Piecewise-constant transmission rate: beta0 before t = 1, then the
    # interval-specific beta_N[i] for t in [left_t_con[i], right_t_con[i]].
    for i in 1:n_obs_con
        if t >= 0 && t < 1
            beta = beta0
        elseif t >= left_t_con[i] && t <= right_t_con[i]
            beta = beta_N[i]
        end
    end

    infection = beta * u[1] * u[2] / n_pop_con
    recovery  = gamma * u[2]

    du[1] = -infection
    du[2] = infection - recovery
    du[3] = recovery
    du[n_difeq_con] = infection  # cumulative incidence

    nothing
end

@model function bayes_sir(y,
                          n_obs,
                          n_pop,
                          n_difeq,
                          y_init,
                          ts,
                          left_t,
                          right_t,
                          eta0_sd,
                          eta1_sd,
                          gamma_shape,
                          gamma_scale,
                          sigma_sd,
                          reciprocal_phi_scale)

  #-- Define the constants in the macro(?) environment. These will be passed to the sir_ode! function.
  # This doesn't work: starting with the n_obs_con variable, nothing is found by the sir_ode!() function.

  local n_obs_con   = n_obs
  local n_pop_con   = n_pop
  local n_difeq_con = n_difeq
  local left_t_con  = left_t
  local right_t_con = right_t

  #-- Initiate the vector of etas:
  eta = Vector{Float64}(undef, n_obs)

  #-- Prior distributions:
  eta0 ~ Normal(0, eta0_sd)

  gamma  ~ truncated(Gamma(gamma_shape, gamma_scale), 0, 10)
  sigmaM ~ TruncatedNormal(0, sigma_sd, 0, 10)

  reciprocal_phiD ~ truncated(Cauchy(0, reciprocal_phi_scale), 0, 10)

  eta[1] ~ Normal(0, eta1_sd)

  for i in 2:n_obs
      eta[i] ~ Normal(eta[i-1], sigmaM)
  end

  beta0  = exp(eta0)
  beta_N = exp.(eta)

  #-- ODE solutions:
  p  = vcat(gamma, beta0, beta_N)
  ts = Vector{Float64}(ts)
  # y_init = Vector{Float64}(y_init)

  prob = ODEProblem(sir_ode!, y_init, ts, p)
  # _prob = remake(prob; u0 = convert.(eltype(p), prob.u0), p = p)

  sol = solve(prob, Midpoint(), saveat = 1.0)

  #--- Do other stuff.
end

You want the element type of eta = Vector{Float64}(undef, n_obs) to match the type of the values you’re actually putting into it. eta = Vector{eltype(eta1_sd * sigmaM)}(undef, n_obs) is an easy way to compute that type.
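
Concretely, in the model body above that would look something like this (note that eta then has to be allocated after sigmaM is sampled):

sigmaM ~ TruncatedNormal(0, sigma_sd, 0, 10)

# Under NUTS with ForwardDiff, sigmaM is a Dual number, so the buffer's
# element type becomes Dual too; under plain evaluation it stays Float64.
eta = Vector{eltype(eta1_sd * sigmaM)}(undef, n_obs)

eta[1] ~ Normal(0, eta1_sd)
for i in 2:n_obs
    eta[i] ~ Normal(eta[i-1], sigmaM)
end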


Thanks.

What would be the most efficient way to pass variables like n_obs (a scalar), right_t (a vector), etc. to the ODE function, if defining them as global/local within the @model macro doesn’t work?

E.g., when I want to pass a vector A and a 3x3 matrix B to the function, should I flatten the matrix into a vector B_to_vector of length 9, set p = [A, B_to_vector], and attempt to reconstruct B within the ODE function?

Another option would be to make p an Any array as in “Attempt 1” above, but that doesn’t seem to be supported by DifferentialEquations.

A view plus reshape, yeah, if you’re using adjoints.
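
Something like this (illustrative names, not the repo’s code):

# Sketch: pack a vector A and a 3x3 matrix B into one flat parameter vector,
# then recover B inside the RHS as a non-copying reshape of a view.
A = rand(3)
B = rand(3, 3)
p = vcat(A, vec(B))                    # vec flattens column-major; length 12

function rhs!(du, u, p, t)
    A = @view p[1:3]
    B = reshape(@view(p[4:12]), 3, 3)  # reconstructs B without copying
    du .= B * u .+ A
    return nothing
end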
