For a toy model, I tried to simulate an ODE together with a jump process whose jump rate at a given time depends on the value of the ODE at that time. (Motivated by tumor growth with a death process, where the probability of dying increases with tumor size.)
Using DifferentialEquations.jl and JumpProcesses.jl this can be done very cleanly as shown in this MWE:
```julia
using DifferentialEquations
using JumpProcesses
using Plots
using Random

# set seed for reproducibility
Random.seed!(123)

# define the drift function: the tumor (u[1]) grows while the
# death indicator (u[2]) is still zero
function ODE!(du, u, p, t)
    if u[2] == 0
        du[1] = p.β * u[1]
    else
        du[1] = 0
    end
    du[2] = 0
    nothing
end

# define the jump-rate function
jump_rate(u, p, t) = p.λ * u[1]^2

# define the jump function
function jump_affect!(integrator)
    if integrator.u[2] == 0.0  # still alive
        integrator.u[2] += 1
    end
    nothing
end

# define initial conditions and parameters
u0 = [0.05, 0.0]
p = (β = 0.3, λ = 0.1)
tspan = (0.0, 30.0)

# define the ODE problem
prob = ODEProblem(ODE!, u0, tspan, p)

# define the jump problem
vrj = VariableRateJump(jump_rate, jump_affect!)
prob = JumpProblem(prob, Direct(), vrj)

sol = solve(prob, Tsit5());
```
However, when I access the solution, I run into imprecisions and inconsistent results.
I wanted to find out the death event time and assumed the solver stores the times at which the process jumps, so I searched for them via the array interface of the solution object:
```julia
jump_index = findfirst(u -> u[2] == 1.0, sol.u)
jump_time = sol.t[jump_index]
```
But when I call the solution object as a function of that time, I get a different result than the one stored in the `sol.u` array for that timepoint. I.e. in the following code block the first line evaluates to `true`, but the second evaluates to `false`:
```julia
sol.t[jump_index] == jump_time       # true
sol.u[jump_index] == sol(jump_time)  # false
```
I am aware that calling the solution object as a function of a timepoint just returns an approximated/interpolated value of the solution.
However, it seems inconsistent to me that this also happens at the stored timepoints; in my opinion the two should agree with each other there.
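For now I work around this by looking up the stored value by index instead of interpolating. A minimal sketch (assuming `sol` and `jump_time` from the MWE above):

```julia
# look up the stored state at a stored time directly, bypassing interpolation;
# searchsortedfirst works because sol.t is sorted in increasing order
i = searchsortedfirst(sol.t, jump_time)
u_stored = sol.u[i]          # the value the solver actually saved
u_interp = sol(jump_time)    # the interpolated value, which can differ
```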
The only explanation that would make sense to me is that the solution object does not store the timepoints to full precision, but truncates them after a certain number of decimal places.
Still, I am not sure whether this is actually the reason, and either way it seems inconsistent and not ideal, especially when working with jump processes, where I am interested in the exact jump times of the simulation.
Does anyone know why this behaviour occurs?
Is there any reason why this behaviour can’t be circumvented or is even desirable?
Thanks for your help,
PS: I hope the above MWE is understandable. If not, please let me know how to improve it or the title of the post.