GPU support for Turing modeling with a system of ODEs

Hello,

I am trying to perform Bayesian analysis on a Turing model with an ecological ODE system embedded; think of it as a complex variant of a Lotka-Volterra model. The code currently runs fine without errors, but sampling takes multiple days: inference with 7000 draws for a model with 10 parameters takes close to 3 days. So I am planning to move it to GPUs.

Can someone help me figure out how to code this appropriately? I couldn’t find any tutorials on performing inference with GPUs. Are there any available online? Here are some details:

  1. I use the default NUTS sampler and the AutoVern7(Rodas5()) ODE solver (a rough sketch of the setup follows this list).
  2. None of the functions I use are custom-made. For instance, no custom prior distributions, no custom likelihood functions, etc.
  3. Following are the packages I need to use:
using DifferentialEquations, Interpolations, XLSX, DataFrames
using Distributions, MCMCChains, Turing
using ForwardDiff, Preferences
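
Roughly, the whole thing follows the standard Turing Bayesian-ODE pattern. Here is a stripped-down placeholder sketch of that workflow (the right-hand side, priors, data, and tobs below are toy stand-ins, not my actual model):

using DifferentialEquations, Turing, Distributions

# Toy 2-species system standing in for the actual ecological ODE model.
function rhs!(du, u, p, t)
    a, b, c, d = p
    du[1] = a * u[1] - b * u[1] * u[2]
    du[2] = c * u[1] * u[2] - d * u[2]
end

prob = ODEProblem(rhs!, [1.0, 1.0], (0.0, 10.0), [1.5, 1.0, 1.0, 3.0])

@model function fit(data, prob, tobs)
    # Placeholder priors; the real model has 10 parameters.
    σ ~ InverseGamma(2, 3)
    a ~ truncated(Normal(1.5, 0.5); lower=0)
    b ~ truncated(Normal(1.0, 0.5); lower=0)
    c ~ truncated(Normal(1.0, 0.5); lower=0)
    d ~ truncated(Normal(3.0, 0.5); lower=0)

    # Re-solve the ODE at the proposed parameters with the same solver I use.
    sol = solve(prob, AutoVern7(Rodas5()); p=[a, b, c, d], saveat=tobs)

    for i in eachindex(tobs), j in 1:2
        data[j, i] ~ Normal(sol[j, i], σ)
    end
end

# chain = sample(fit(data, prob, tobs), NUTS(), 7000)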

I couldn’t find any straightforward tutorials or documentation on how to go forward with Turing on GPUs. Can someone please help?

I can share more details of the code and the implementation process if needed. To reiterate, the current code runs fine on CPU cores; the only drawback is that execution takes a long time.

Thanks again for the help.

What did you try? If you follow DiffEqGPU’s tutorials inside of a Turing.jl model, for example, it should just work. There’s no combined tutorial, but if you stick a tutorial for GPU solving of ODEs together with a Turing.jl tutorial, you get both at once.
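
For concreteness, here is an (untested) sketch of what I mean by sticking the two together: the ODE state lives in a GPU array as in the GPU ODE tutorials, and that solve sits inside a standard Turing Bayesian-ODE model. Everything in it (rhs!, the priors, the 1024-state toy system, the data layout) is a placeholder rather than your model. Also note that per-solve GPU acceleration only pays off when the state vector is large; DiffEqGPU’s ensemble interfaces are instead aimed at solving many small ODEs in parallel.

using Turing, OrdinaryDiffEq, CUDA, LinearAlgebra

# In-place RHS written with broadcasting so it works on both CPU Arrays and CuArrays.
function rhs!(du, u, p, t)
    a, b = p
    @. du = a * u - b * u^2
end

u0 = cu(fill(1.0f0, 1024))            # state vector stored on the GPU
prob = ODEProblem(rhs!, u0, (0.0f0, 10.0f0), Float32[1.0, 0.1])

@model function fit_gpu(data, prob, saveat)
    σ ~ InverseGamma(2, 3)
    a ~ truncated(Normal(1.0, 0.5); lower=0)
    b ~ truncated(Normal(0.1, 0.05); lower=0)

    # The solve runs on the GPU; the saved states are copied back to the CPU
    # for the likelihood.
    sol = solve(prob, Tsit5(); p=[a, b], saveat=saveat)
    pred = Array(reduce(hcat, sol.u))  # n_states × n_savepoints, on the CPU

    for i in 1:size(data, 2)
        data[:, i] ~ MvNormal(pred[:, i], σ^2 * I)
    end
end

# chain = sample(fit_gpu(data, prob, 0.5f0), NUTS(), 1000)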
