Implementing a Gibbs sampler for ODE parameter estimation

I would like to use a Gibbs sampler for a model that looks like this (in Turing.jl):

```julia
using Turing, OrdinaryDiffEq, LinearAlgebra

@model function fit_burger2(data::AbstractVector, prob)
    # Prior distributions
    Aₘ ~ truncated(Normal(0.1, 0.05); lower=0.01, upper=1.1)
    Qₘ ~ Uniform(1e3, 1e6)
    n ~ Uniform(0.7, 4.5)
    pₘ ~ truncated(Normal(0.36, 0.2); lower=0.3, upper=0.7)
    Eₖ ~ Uniform(1e9, 1e10)
    Eₘ ~ Uniform(1e8, 9e9)
    η ~ truncated(Normal(1e11, 0.5e9); lower=1e9, upper=1e12)
    dₛₛ ~ truncated(Normal(2.4e-4, 1e-4); lower=1e-5, upper=1e-3)
    Fₛₛ ~ truncated(Normal(0.5, 0.1); lower=0.3, upper=0.7)
    ε₁ ~ Uniform(0.01, 1.0)
    ε₂ ~ truncated(Normal(0.16, 0.07); lower=0.01, upper=0.9)

    # Fixed quantities (σᵣ is the observation noise standard deviation)
    σᵣ = 1.5e6
    S = 1e-4
    R = 8.3145
    T = 263

    # Simulate the model (df2.time is a global holding the observation times)
    p = [Aₘ, Qₘ, n, pₘ, R, T, Eₖ, Eₘ, η, dₛₛ, Fₛₛ, ε₁, ε₂, S]
    predicted = solve(prob, Rosenbrock23(); p=p, saveat=df2.time, save_idxs=3)

    # Observations
    data ~ MvNormal(predicted.u, σᵣ^2 * I)

    return nothing
end
```
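
I instantiate the model by passing the observations and the `ODEProblem` to this function (`df2.obs` below is just a placeholder for my actual measurement column):

```julia
# Hypothetical instantiation; replace df2.obs with the observation column.
model3 = fit_burger2(df2.obs, prob)
```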

I wonder what the process for doing this would be. Currently I’m using plain Metropolis-Hastings:

```julia
chain2 = sample(model3, MH(), MCMCSerial(), 120_000, 2;
                progress=true,
                discard_initial=70_000,
                thinning=100)
```

I was trying to find examples/tutorials that use Gibbs for estimating ODE parameters but couldn’t find any, so any hint pointing me in the right direction would be highly appreciated :slight_smile:

What sort of Gibbs sampling are you thinking of? :slight_smile:

If you’re thinking Metropolis-within-Gibbs, this might be helpful: Guide
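
In Turing, that kind of scheme can be written with the `Gibbs` combinator, giving each parameter its own `MH` sub-sampler: each sub-step then proposes a new value for that variable alone, conditional on the current values of all the others, and the model (ODE solve included) is re-evaluated for the accept/reject decision. A rough sketch for your model, assuming a recent Turing version where `Gibbs` takes `name => sampler` pairs (older versions spell it `Gibbs(MH(:Aₘ), MH(:Qₘ), …)`):

```julia
# One MH sub-sampler per parameter: each Gibbs sweep updates the
# variables one at a time, holding the rest fixed.
gibbs = Gibbs(
    :Aₘ => MH(), :Qₘ => MH(), :n => MH(), :pₘ => MH(),
    :Eₖ => MH(), :Eₘ => MH(), :η => MH(), :dₛₛ => MH(),
    :Fₛₛ => MH(), :ε₁ => MH(), :ε₂ => MH(),
)
chain = sample(model3, gibbs, 10_000; progress=true)
```

Be aware that a full sweep now costs one ODE solve per sub-sampler (11 here) rather than one per iteration, so it is considerably more expensive than joint MH.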

I was wondering if there is a way to update one parameter at a time while sampling, holding the others constant, and see what happens to the likelihood. Instead of updating the entire parameter set every iteration, update just one at a time (as in Gibbs sampling). The link you pointed to shows how to treat each parameter separately, but it’s not exactly what I need for the ODE problem. Roughly, I have in mind something like the sketch below.
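
Here `logpost` is a hypothetical function evaluating log-prior plus log-likelihood (re-solving the ODE each call, and returning `-Inf` outside the prior bounds), and `σ_prop` are made-up per-parameter proposal scales:

```julia
# Single-site random-walk Metropolis-within-Gibbs sketch.
function single_site_mh(logpost, θ0::Vector{Float64},
                        σ_prop::Vector{Float64}, n_iter::Int)
    θ = copy(θ0)
    lp = logpost(θ)
    chain = Matrix{Float64}(undef, n_iter, length(θ))
    for it in 1:n_iter
        for j in eachindex(θ)             # update one coordinate at a time
            θ′ = copy(θ)
            θ′[j] += σ_prop[j] * randn()  # perturb only parameter j
            lp′ = logpost(θ′)             # others held at current values
            if log(rand()) < lp′ - lp     # Metropolis accept/reject
                θ, lp = θ′, lp′
            end
        end
        chain[it, :] = θ
    end
    return chain
end
```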