ODE Parameter Estimation with Priors

I have never done parameter estimation with priors (Bayesian inference) before, and I am a little unsure whether I am going about it the right way. I created a simple example, but will ultimately be doing this for a more complicated system of ODEs. I was hoping I might get some advice/pointers on how to improve this, or some reassurance that what I am doing is okay, before I dive into the deep end!

Should I be using DiffEqBayes.jl? Or perhaps I should just build my own loss and include the priors in that?

Anyway, here's the code:

using OrdinaryDiffEq, DiffEqParamEstim
using Optimization, OptimizationOptimJL
using Distributions
using Plots

# Define the ODE problem
tspan = (0.0,10.0)
f!(du,u,p,t) = (du[1] = -p[1]*u[1])
prob = ODEProblem(f!, [1.0], tspan, [0.4])
tdata = 0.0:0.1:10.0

# Generate dataset with k = 0.4, u0 = 1.0

k_true = 0.4
u0_true = 1.0
prob_true = ODEProblem(f!, [u0_true], tspan, [k_true])
sol_true = solve(prob_true, Tsit5(); saveat=tdata, reltol=1e-10, abstol=1e-10)
y_true = Array(sol_true)[1,:]

# Define loss and priors, then optimize
loss = L2Loss(tdata, y_true)
priors = [Gamma(1., .1)] # prior on p[1] (positive)

obj = build_loss_objective(prob, Tsit5(), loss; priors=priors, saveat=tdata)

adtype = Optimization.AutoForwardDiff();
optf = Optimization.OptimizationFunction((x, _) -> obj(x), adtype);
optprob = Optimization.OptimizationProblem(optf, [0.2]);

res = Optimization.solve(optprob, OptimizationOptimJL.LBFGS());

# Get the estimate, then plot against the data
prob_fit = remake(prob; p = res.u)

sol_fit = solve(
prob_fit,
Tsit5();
saveat = tdata,
abstol = 1e-9,
reltol = 1e-9
);

yhat = Array(sol_fit)[1, :]

plot(tdata, y_true, label = "data");
plot!(tdata, yhat, label = "MAP fit")
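For reference, the hand-rolled alternative I mentioned might look something like this. This is only a sketch: the analytic solution of du/dt = -k*u stands in for the generated data, and the objective is just the L2 misfit plus the negative log-prior (i.e., a MAP objective up to constants):

```julia
using OrdinaryDiffEq, Distributions

f!(du, u, p, t) = (du[1] = -p[1] * u[1])
prob = ODEProblem(f!, [1.0], (0.0, 10.0), [0.4])
tdata = 0.0:0.1:10.0
y_true = exp.(-0.4 .* tdata)  # analytic solution, used as noiseless "data"
prior = Gamma(1.0, 0.1)       # same prior on k as above

# Negative log-posterior (up to constants): sum-of-squares misfit
# minus the log-prior density evaluated at the candidate parameter
function neg_log_posterior(p)
    sol = solve(prob, Tsit5(); p = p, saveat = tdata)
    sse = sum(abs2, Array(sol)[1, :] .- y_true)
    return sse - logpdf(prior, p[1])
end
```

This could then be passed to Optimization.jl exactly like the `build_loss_objective` result above.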

Thanks,

DS

I can try to recode your example using Turing, but I am not quite sure I understand exactly what you are trying to do. In any event, Turing.jl is a set of packages that includes a probabilistic programming language (PPL) and Markov chain Monte Carlo machinery, which together can be used to specify and fit Bayesian models. The link below is to an example of using Turing.jl to solve a Lotka-Volterra problem in the presence of noisy data. I think this is what you are trying to do above, but again, I got lost. https://turinglang.org/docs/tutorials/bayesian-differential-equations/
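For your exponential-decay example, a Turing model along the lines of that tutorial might look roughly like this. Treat it as an untested sketch: the prior on `k` mirrors yours, while `σ` is an assumed observation-noise scale I added (Turing needs a likelihood, so some noise model is required):

```julia
using Turing, OrdinaryDiffEq

f!(du, u, p, t) = (du[1] = -p[1] * u[1])
prob = ODEProblem(f!, [1.0], (0.0, 10.0), [0.4])
tdata = 0.0:0.1:10.0
y_true = exp.(-0.4 .* tdata)  # analytic solution as stand-in data

@model function decay(y, prob, tdata)
    k ~ Gamma(1.0, 0.1)                          # same prior as your example
    σ ~ truncated(Normal(0.0, 0.1); lower = 0.0) # assumed noise scale
    pred = solve(prob, Tsit5(); p = [k], saveat = tdata)
    for i in eachindex(y)
        y[i] ~ Normal(pred[1, i], σ)
    end
end

chain = sample(decay(y_true, prob, tdata), NUTS(), 500; progress = false)
```

Sampling gives you the full posterior over `k` rather than just a point estimate, which sounds closer to what you want for informing priors downstream.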

Hi @nshpritz, thanks for taking the time to respond. This looks like a promising lead; it might be exactly what I am looking for. I am trying to tune the parameters of my model to create a "generic model," which can then be adapted to specific instances, or observations, with the help of priors (informed by the "generic model").

Thanks again! Appreciate it!