Stochastic solver pushes chemical Langevin equations into negative concentrations

Summary: When I simulate the stochastic time development of a reaction network, the simulation crashes because concentrations become negative (the deterministic solution works fine).

I have this chemical reaction network:

2, ∅ --> X
3, X --> ∅

(the original system is more complicated, but I have simplified it while retaining the error)
I model it as:

function f(du,u,p,t)
    du[1] = 2 - 3u[1]
end

function g(du,u,p,t)
    du[1, 1] = sqrt(2)
    du[1, 2] = -1 * sqrt(3u[1])
end
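For reference, the SDE these two functions encode (with independent Wiener processes W_1 for production and W_2 for degradation, as set up by the two columns of the noise matrix) is the chemical Langevin equation for this network:

```latex
dX_t = (2 - 3X_t)\,dt + \sqrt{2}\,dW_1(t) - \sqrt{3X_t}\,dW_2(t)
```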

When I simulate the system using the deterministic approach, everything is fine:

u0 = [50.0]
tspan = (0.0,4.0)
prob_det = ODEProblem(f,u0,tspan)
sol_det = solve(prob_det)

However, when I try to solve the stochastic version, it fails:

u0 = [50.0]
tspan = (0.0,4.0)
dt = 1//2^(12)
prob_stoch = SDEProblem(f,g,u0,tspan,noise_rate_prototype=zeros(1,2))
sol_stoch = solve(prob_stoch,EM(),dt=dt)

The last line throws a DomainError, which can be traced to du[1, 2] = -1 * sqrt(3u[1]) being evaluated while u[1] is negative.
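The failure can be reproduced outside DifferentialEquations.jl. Here is a minimal Euler-Maruyama sketch of the same SDE in Python (the function and parameter names are my own, purely for illustration); with the diffusion term sqrt(3u), the state eventually crosses zero, at which point the square root would fail exactly as above:

```python
import math
import random

def simulate(u0=50.0, dt=1 / 2**12, max_steps=200_000, seed=0):
    """Euler-Maruyama for dX = (2 - 3X)dt + sqrt(2)dW1 - sqrt(3X)dW2.

    Returns the step index at which X first goes negative, or None
    if it stays non-negative for the whole run.
    """
    rng = random.Random(seed)
    u = u0
    sqdt = math.sqrt(dt)
    for step in range(max_steps):
        if u < 0:
            return step  # sqrt(3u) would raise a DomainError here
        du = (2 - 3 * u) * dt                            # drift
        du += math.sqrt(2) * sqdt * rng.gauss(0, 1)      # production noise
        du -= math.sqrt(3 * u) * sqdt * rng.gauss(0, 1)  # degradation noise
        u += du
    return None

step = simulate()
print(step)  # the state goes negative after a while
```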

I’ve spent some time trying to figure out how this happens, and I have no idea. I’ve modified the functions to also print values:

function f(du,u,p,t)
    du[1] = 2 - 3u[1]
    println(u[1], " ", du[1])
end

function g(du,u,p,t)
    du[1, 1] = sqrt(2)
    du[1, 2] = -1 * sqrt(3u[1])
    println(du[1, 1], " ", du[1, 2], "\n")
end

this yields:

0.16617661194477099 1.5014701641656871
1.4142135623730951 -0.7060664528458444

0.18415292624496235 1.447541221265113
1.4142135623730951 -0.7432757084251355

0.18156840309376518 1.4552947907187046
1.4142135623730951 -0.7380414685377072

0.2004141078247585 1.3987576765257246
1.4142135623730951 -0.7753981709252836

0.164105752037157 1.507682743888529
1.4142135623730951 -0.7016532306712989

0.1469872071015845 1.5590383786952464
1.4142135623730951 -0.6640494117945994

0.15114573738914047 1.5465627878325785
1.4142135623730951 -0.6733774663347604

0.1506813504673137 1.5479559485980587
1.4142135623730951 -0.672342213015025

0.11613339743580103 1.6515998076925968
1.4142135623730951 -0.5902543454371201

0.05865690190122183 1.8240292942963345
1.4142135623730951 -0.4194886240455937

0.061455047206756666 1.81563485837973
1.4142135623730951 -0.42937762123831047

0.03440580432279003 1.89678258703163
1.4142135623730951 -0.3212746690425034

0.01917417408706192 1.9424774777388143
1.4142135623730951 -0.23983853372881048

0.028964799664317847 1.9131056010070464
1.4142135623730951 -0.2947785592490633

0.026156451214798624 1.921530646355604
1.4142135623730951 -0.28012381841677775

-0.005493955043100477 2.0164818651293013

Here both the deterministic and stochastic parts seem overwhelmingly positive (which makes sense, as the degradation of X should be low as the concentration of X approaches 0). Yet the value of X is still steadily reduced.
(I have run this simulation many times, and it always yields the same error after a while.)

There is surely something here that I do not understand; I would be grateful if someone could point out what it is.

When u is sufficiently small, this is basically du = 2dt + sqrt(2)dW_t. The process keeps returning to that regime because of the quasi-steady state at 2/3. I’m pretty sure you can use that to prove that the equation hits zero in finite time with probability 1, so this system is not well-defined over any long enough time span.
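To make the hitting argument concrete (my own sketch, not from the original post): near zero the process is approximately Brownian motion with drift μ = 2 and variance σ² = 2, and for Brownian motion with positive drift started at u_0 > 0 the probability of ever reaching zero is a standard result:

```latex
P\!\left(\inf_{t \ge 0} X_t \le 0 \,\middle|\, X_0 = u_0\right)
  = \exp\!\left(-\frac{2\mu u_0}{\sigma^2}\right)
  = e^{-2 u_0}
  \qquad (\mu = 2,\ \sigma^2 = 2)
```

Each time the process relaxes back toward the quasi-steady state at u_0 ≈ 2/3, it gets a fresh chance of roughly e^{-4/3} ≈ 0.26 of reaching zero, so over a long enough horizon it hits zero with probability 1.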

That said, the immediate issue comes from your definition of the noise terms. The Gillespie system would restrict the production increment 2dt + sqrt(2)dW_t to be non-negative, whereas there is no guarantee that holds here. This is fine if the system is large enough, but it causes problems when the state can wander down to zero.

The underlying issue is that this is all a convergence result: as the number of particles goes to infinity, SSA → SDE → ODE. If you have concentrations close to zero, as can happen in the SDE, then you have copy numbers close to zero, and the model is no longer a good approximation of the chemical reactions.

This kind of model will also give tau-leaping some issues, so what you’re really looking for here is probably something like non-negative binomial leaping, which lets you solve the SSA directly without too much computational expense. Or you can adjust the size of the additive noise in the SDE to account for this effect, which can be perfectly fine if the model is already phenomenological.
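As an illustration of the "adjust the SDE" route (this is my own crude sketch, not a recommendation from any package): evaluate the diffusion at max(u, 0) so the square root is always defined, and clamp the state at zero after each Euler-Maruyama step. This changes the boundary behavior of the model, which is only defensible if the model is phenomenological anyway:

```python
import math
import random

def simulate_truncated(u0=50.0, dt=1 / 2**12, steps=100_000, seed=0):
    """Euler-Maruyama with truncation at zero: the diffusion is
    evaluated at max(u, 0) and the state is clamped after each step,
    so it can never go negative."""
    rng = random.Random(seed)
    u = u0
    sqdt = math.sqrt(dt)
    u_min = u
    for _ in range(steps):
        up = max(u, 0.0)  # never feed a negative value to sqrt
        du = (2 - 3 * u) * dt
        du += math.sqrt(2) * sqdt * rng.gauss(0, 1)
        du -= math.sqrt(3 * up) * sqdt * rng.gauss(0, 1)
        u = max(u + du, 0.0)  # clamp at the zero boundary
        u_min = min(u_min, u)
    return u, u_min

u_final, u_min = simulate_truncated()
print(u_min)  # never below zero by construction
```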


Thank you, I think I understand.

Basically, I am making a simulation where one of the values is too close to 0, and the stochastic nature ensures that it sooner or later hits zero (and goes below, causing my problems). The problem is that the chemical Langevin equation is not a good approximation at such low values and should not be used there anyway.

Rather, when concentrations (and molecule numbers) are that small, I should use the Gillespie algorithm to simulate exact molecule numbers.
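For reference, a minimal Gillespie (SSA) simulation of this two-reaction network can be sketched in Python as follows (function and parameter names are mine, for illustration only). Copy numbers stay integer and non-negative by construction, since the degradation propensity vanishes at x = 0:

```python
import math
import random

def gillespie(x0=50, k_prod=2.0, k_deg=3.0, t_end=4.0, seed=0):
    """Exact SSA for (k_prod: 0 -> X) and (k_deg: X -> 0).
    Returns lists of event times and integer copy numbers."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a_prod = k_prod       # propensity of 0 -> X (constant)
        a_deg = k_deg * x     # propensity of X -> 0 (zero when x == 0)
        a_total = a_prod + a_deg
        t += rng.expovariate(a_total)   # exponential wait to next event
        if t >= t_end:
            break
        if rng.random() * a_total < a_prod:
            x += 1                      # production fired
        else:
            x -= 1                      # degradation fired
        times.append(t)
        counts.append(x)
    return times, counts

times, counts = gillespie()
print(min(counts))  # copy numbers never go negative
```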

Alternatively, for the same kind of system, if it fluctuates around a higher value I can simulate it as an SDE without this problem, e.g.:

20, ∅ --> X
3, X --> ∅

will fluctuate around 20/3 and rarely hit negative values, and increasing the production term even more should ensure that it practically never does. (Right?)

Yes, provided the additive noise term does not grow as fast, which happens naturally because of the square-root scaling.