Somehow I didn’t make the connection between this p argument and the problem parameters, and thought it was some additional feature. The documentation at http://docs.juliadiffeq.org/latest/basics/integrator.html#Handing-Integrators-1 is a bit confusing.
I guess I’ll use parameters for now, though it feels like a bit of a hack to use them to store extra data generated while solving. I suppose it’s also a bit error-prone, since they won’t be automatically initialized in solve, so some care will be needed with the Monte Carlo interface; but I might just skip it and manage the collection of statistics myself.
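To be concrete, the hack I have in mind looks something like this (SpikeParams and its fields are hypothetical names, just for illustration):

# Parameter object carrying both the actual model parameters and a
# mutable log of (time, mark) pairs appended while solving.
struct SpikeParams{T}
    base_rate::T                  # an actual model parameter
    spikes::Vector{Tuple{T,T}}    # (time, mark) pairs, mutated during solve
end

p = SpikeParams(2.0, Tuple{Float64,Float64}[])

# Inside the jump's affect! one would do: push!(p.spikes, (t, m)).
# The pitfall mentioned above: p.spikes must be emptied by hand,
# e.g. empty!(p.spikes) before each Monte Carlo run, since solve
# won't reinitialize it.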
Incidentally, the docs also mention “user data” here http://docs.juliadiffeq.org/stable/basics/common_solver_opts.html#User-Data-1, but it’s not very clear how to use it. Is it meant for something of this sort (storing extra data generated while solving)?
The context is modeling a large population of sensory neurons. Neurons emit spikes at random times, influenced by an external stimulus and possibly influencing other brain processes (which have a more abstracted model, so they live in the SDE part). There may be additional information for each spike, which is where marks come in. In my case the mark attached to each spike carries the parameters of the spiking neuron. (You could also formulate this as an unmarked process per neuron rather than a single marked process; however, the marked-process approach allows some simplification by approximating a large population with a continuous distribution over neuron parameters.) I imagine other models of spiking neural populations might also have marks specific to spikes (not just neurons).
Unfortunately I can’t say much about how others use MPP models, but I expect support for equations like this should cover most cases without requiring any new algorithms:
du_t = f(u_t,p,t)\,dt + \sum_i g_i(u_t,p,t)\,dW^i_t + \int_{\mathcal{Y}} h(u_t,p,t,y)\,N(dt,dy)
where the last term is an integral with respect to the MPP N, viewed as a random discrete measure over [t_0,t_1]\times\mathcal{Y}, with \mathcal{Y} being the space of marks. There could also be a sum of such integrals with different processes N^i, though strictly speaking that doesn’t increase the generality (the N^i can always be merged into a single process by including the process index in the mark).
Mathematically, a marked point process is characterized by an “intensity kernel” \lambda_t(dy) (relative to some relevant filtration \mathcal{F}_t, which for this purpose could just be “all the history”), such that the rate of points with marks in Y\subset\mathcal{Y} at time t (conditioned on \mathcal{F}_t) is \int_Y\lambda_t(dy). Practically, that just means you generate the times as a point process with rate \int_\mathcal{Y}\lambda_t(dy) and sample each mark from \lambda_t(dy)/\int_\mathcal{Y}\lambda_t(dy). So I imagine an interface like this:
# total rate (∫λ(u,p,t,dy), rate of the unmarked process)
total_rate(u,p,t) = ...
# generate a random mark for a point at time t with state u
# (sample from λ(u,p,t,dy) / ∫λ(u,p,t,dy))
sample_mark(u,p,t) = ...
# affect the state u following a point at time t with mark m
# (the effect of ∫h(...)N(dt,dy))
point_affect!(du,u,p,t,m) = ...
mpp = MarkedPointProcess( total_rate, sample_mark, point_affect! )
prob = JumpProblem( ode_or_sde, mpp )
sol = solve(prob)
times, marks = get_point_events(sol)
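To make the intent concrete, for the neuron-population model above these hooks might be filled in roughly as follows (the functional forms, the Normal mark distribution, and the parameter fields are all made up for illustration):

using Distributions

# total population firing rate ∫λ(u,p,t,dy), growing with the stimulus state u[1]
total_rate(u,p,t) = p.base_rate * exp(u[1])

# mark = parameters of the spiking neuron, drawn from a continuous
# population distribution (the normalized kernel λ/∫λ)
sample_mark(u,p,t) = rand(Normal(p.mean_gain, p.gain_sd))

# a spike with mark m kicks the abstracted brain-process state u[2]
point_affect!(du,u,p,t,m) = (du[2] = u[2] + m)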
Under the hood, this MarkedPointProcess would just mean something like
function jump!(du,u,p,t)
    m = sample_mark(u,p,t)
    store_time_and_mark_somewhere(t,m,somewhere)
    point_affect!(du,u,p,t,m)
end
jump = VariableRateJump( total_rate, jump! )
prob = JumpProblem( ode_or_sde, jump )
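For completeness, if I read the docs right, the actual affect! receives the integrator rather than (du,u,p,t), and JumpProblem also takes an aggregator, so a runnable version of this mapping might look like the sketch below; collecting the events in closed-over vectors is just my guess at how the storage could work:

using DifferentialEquations

f(du,u,p,t) = (du .= -u)                 # placeholder drift
ode_or_sde = ODEProblem(f, [1.0, 0.0], (0.0, 10.0))

times, marks = Float64[], Float64[]      # what get_point_events(sol) would return

rate(u,p,t) = 2.0 + u[1]^2               # illustrative total rate ∫λ(u,p,t,dy)

function affect!(integrator)
    u, p, t = integrator.u, integrator.p, integrator.t
    m = randn()                          # stand-in for sample_mark(u,p,t)
    push!(times, t); push!(marks, m)     # record the marked point
    integrator.u[2] += m                 # stand-in for point_affect!
end

jump = VariableRateJump(rate, affect!)
jump_prob = JumpProblem(ode_or_sde, Direct(), jump)
sol = solve(jump_prob, Tsit5())
# times and marks now hold the simulated point events

The proposed MarkedPointProcess would just hide this boilerplate and the storage.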
(Not sure about the names. There’s a bit of a discrepancy between the “marked points” and “jumps” terminology.)