Background. I have a nice stochastic jump-piecewise-ODE behaving as it should. Thanks @ChrisRackauckas and team! Basically this would have been completely impossible in any other language/package! Now I want to calculate a likelihood L(θ) for the hidden Markov process in the “dumb” (forward) way. That is, I have observations y[1]...y[n] at times times[1]...times[n], and I know the likelihood of each observation given the underlying state, e.g. p(y[2]|x2), where x2 = u(times[2]). I wish to get
L(θ) = p(y,t|θ) = ∫dx1 ⋯ ∫dxn π(x1) p(y[1]|x1) p(x2|x1) p(y[2]|x2) ⋯ p(xn|x(n−1)) p(y[n]|xn),
which we can write as
L(θ) = 〈 Π_i p(y[i]|xi) | θ 〉, where xi are the states of the trajectory at times[i], and 〈 · | θ 〉 is the average over trajectories obeying the parameters θ.
Let us call the single-trajectory likelihood
Π_i p(y[i]|xi) = traj_likelihood.
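Concretely, the estimator I have in mind is just the sample mean of the per-trajectory likelihoods over M simulated trajectories (M is my own notation here):

$$
L(\theta) \;\approx\; \frac{1}{M}\sum_{m=1}^{M} \prod_{i=1}^{n} p\big(y[i]\,\big|\,x_i^{(m)}\big),
\qquad x_i^{(m)} = u^{(m)}(\mathrm{times}[i]).
$$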
At the end of integration, I only need traj_likelihood, over which I am planning to take an ensemble average. (Yes, it’s slow, but I have no choice.) So I would like to allocate a scalar Float64 reference in the integrator cache so that every time I hit a times[i] I calculate the conditional log-likelihood log(p(y[i]|xi)) and add it to this mutable cache. Then, at the end, all I want to return is this accumulated log-likelihood (the cache at the end of integration).
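Since the callback accumulates log-likelihoods, I expect the ensemble average itself will need a log-mean-exp to turn the per-trajectory values back into an average likelihood without underflow. Something like this, I imagine (plain base Julia, nothing package-specific):

```julia
# ℓ is a vector of accumulated per-trajectory log-likelihoods;
# returns log( (1/M) Σ_m exp(ℓ_m) ) in a numerically stable way
function log_mean_exp(ℓ::AbstractVector{<:Real})
    m = maximum(ℓ)                          # shift so the largest exponent is zero
    return m + log(sum(exp.(ℓ .- m))) - log(length(ℓ))
end
```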
I’m thinking all I need is a PresetTimeCallback(times, affect!) (or a list of them), with affect! updating this cache, but I can’t figure out how to allocate this cache and access it in affect!, because there’s just not quite enough here for me to work out the interface. Thanks for the help!
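To make the question concrete, here is roughly the pattern I’m imagining (a minimal sketch, not tested: obs_loglik, loglik_callback, ys, and prob are placeholder names for my actual model, Tsit5 is just a stand-in solver, and the Ref closure is my guess at how to hold the scalar cache):

```julia
using OrdinaryDiffEq, DiffEqCallbacks   # PresetTimeCallback lives in DiffEqCallbacks

# hypothetical per-observation log-likelihood; stands in for my real log p(y[i]|x_i)
obs_loglik(y, x) = -0.5 * (y - x[1])^2

# build the callback plus the mutable scalar "cache" it writes into
function loglik_callback(times, ys)
    cache = Ref(0.0)                                 # accumulated log-likelihood
    function affect!(integrator)
        i = searchsortedfirst(times, integrator.t)   # times assumed sorted
        cache[] += obs_loglik(ys[i], integrator.u)
    end
    return PresetTimeCallback(times, affect!), cache
end

# usage (prob is my jump-piecewise-ODE problem)
cb, cache = loglik_callback(times, ys)
sol = solve(prob, Tsit5(); callback = cb)
traj_loglik = cache[]                                # what I want to ensemble-average
```

For the ensemble, I think I would build a fresh callback (and fresh Ref) per trajectory inside prob_func and read cache[] in output_func of an EnsembleProblem, but I haven’t gotten that far yet.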
Edit: It occurs to me that this post doesn’t have enough code. I will post a concrete example later today.