# Help debugging differential equation

Hello all,

I have a set of relatively simple differential equations modeling growth of bacteria as a function of available substrates. Unfortunately, my model fails to solve in one particular case, and I cannot figure out why. Here is the (simplified) model:

``````
using DifferentialEquations

function solve_model(p, u0)
    # Arguments are overridden here to reproduce the failing case.
    p = [2.767317774014066e-7, 0.7433220714328409, 15.7, 1.0e9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.012192, 882000.0] # FAILS
    # p = [0.06, 0.1806273414373398, 15.7, 1.0e9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.012192, 882000.0] # WORKS
    u0 = [1.6382099999999998e13, 3.4061512877350002e10, 0.0, 100000.0, 1.461e7]

    # Callback for the max() function in the model - it causes a discontinuity.
    # If the max equation changes, this condition will have to change.
    condition(u, t, integrator) = p[2] * u[4] - u[2]
    affect!(integrator) = nothing
    max_cb = ContinuousCallback(condition, affect!, save_positions=(false, false))

    # Build the ODEProblem and solve. The last entry of u0 is the end time;
    # `stops` and `is_invalid_domain` are defined in the full model.
    prob = ODEProblem(model, u0[1:end-1], (0.0, last(u0)), p)
    sol = solve(prob, Rosenbrock23(), callback=max_cb, tstops=stops, isoutofdomain=is_invalid_domain, maxiters=1e7)

    return sol
end

# Implements the differential equations that define the model
function model(du, u, p, t)
    # Parameters
    mu_max = p[1]
    maintenance_per_cell = p[2]
    dOC_per_cell = p[3]

    carrying_capacity = p[4]
    pOC_input_rate = p[5]
    dOC_input_rate = p[6]
    inorganic_carbon_input_rate = p[7]
    inorganic_carbon_fixing_rate = p[8]
    inorganic_carbon_per_cell = p[9]
    eea_rate = p[10]
    Ks = p[11]

    pOC_content = u[1]
    dOC_content = u[2]
    inorganic_carbon_content = u[3]
    cell_count = u[4]

    # Monod growth, limited by a logistic carrying-capacity term
    growth = mu_max * (dOC_content / (Ks + dOC_content)) * cell_count * (1 - cell_count / carrying_capacity)

    required_dOC = maintenance_per_cell * cell_count

    dOC_missing = max(required_dOC - dOC_content, 0)
    starvation_deaths = dOC_missing == 0 ? 0 : dOC_missing / maintenance_per_cell

    deaths = starvation_deaths

    dOC_consumption = maintenance_per_cell * (cell_count - deaths)

    # Rescaled logistic: maps pOC_content through a sigmoid onto (-eea_rate, eea_rate)
    eea_rate = eea_rate * (1 / (1 + exp(-pOC_content))) * 2 - eea_rate

    du[1] = pOC_input_rate - eea_rate * cell_count

    du[2] = dOC_input_rate + dOC_per_cell * (deaths - growth) - dOC_consumption + eea_rate * cell_count

    # du[3] (inorganic carbon) is omitted in this simplified version
    du[4] = growth - deaths
end
``````

I originally believed the solver was aborting because of instability caused by discontinuities at the max function. However, implementing fixes for that did not solve the issue; see here for the attempted solutions. These included the recommended first steps of trying a different solver and lowering the tolerances, so the issue is definitely in my model.

I was hoping more experienced modelers than I on this forum could offer some direction as to why this set of parameters fails. From there, I hope to resolve the many runs that abort when I conduct a sensitivity analysis. One thing I am aware of is that the model never seems to settle on zero values; it often gives approximations such as 1e-300 instead.

I would also appreciate general debugging tips, as stepping through 1000 iterations of a model call is impractical, and so is manual logging. What strategies do people use?

Full model here

There are a lot of tactics for that. Note that the easiest way to step through 1000 iterations is to use the integrator interface.

https://diffeq.sciml.ai/stable/basics/integrator/

So it’s just a few lines of code

``````
integ = init(prob, alg, ...)
for i in 1:1000
    step!(integ)
end

@show integ.u
@show integ.t
@show integ.dt
``````

and you can start to play with it to see what’s diverging first and all. Check `integ.k[1]` to look at the derivatives: see if those make sense.

Yeah this is a funny thing. LU factorizations do not guarantee pure zeros, so you can get phantom values that revive. I’m going to write a blog post on this “soon”, but for now you can refer to Dead species coming back alive · Issue #1651 · SciML/OrdinaryDiffEq.jl · GitHub which describes techniques to use in such a case.
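As a sketch of one such technique (the linked issue describes several; this is just one of them, and the predicate body here is an assumption): an `isoutofdomain` predicate can make the solver reject any proposed step that takes a state component negative, so phantom negative values never get accepted. The name `is_invalid_domain` matches the one already passed to `solve` in the model above:

``````
# Sketch: reject any proposed step where a state component goes negative.
# With DifferentialEquations.jl this is passed as a keyword:
#   solve(prob, alg; isoutofdomain=is_invalid_domain)
# Rejected steps are retried with a smaller dt rather than accepted.
is_invalid_domain(u, p, t) = any(x -> x < 0, u)
``````

Note that if a component legitimately hovers near zero, this can force very small steps, which is exactly the slowdown discussed below.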

For posterity, my issue was that the model ran for too long trying to approximate a value to 0.

I bypassed this by setting a discrete callback which zeroes the value if it is less than 1e-100.

``````
zero_condition(u, t, integrator) = u[1] < 1e-100
function zero_out!(integrator)
    integrator.u[1] = 1e-100
end
zero_cb = DiscreteCallback(zero_condition, zero_out!)
``````
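If more than one component can underflow like this, a generalized `affect!` can sweep all components instead of just `u[1]` (a sketch; the threshold constant and loop are my assumption, not the original code). The resulting callback can be combined with the existing one via `CallbackSet(max_cb, zero_cb)`:

``````
# Sketch: floor every state component that has decayed below a threshold.
const ZERO_TOL = 1e-100  # assumed threshold, matching the condition above

zero_condition(u, t, integrator) = any(x -> x < ZERO_TOL, u)

function zero_out!(integrator)
    u = integrator.u
    for i in eachindex(u)
        if u[i] < ZERO_TOL
            u[i] = ZERO_TOL  # see the reply below: a true 0 is the better choice
        end
    end
end
``````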

@ChrisRackauckas I wonder if this is an acceptable solution to issue with LU factorization you highlighted?

I do wonder why this prevented some parameter sets from completing and not others…

Is it possible to use the integrator interface to show the values of the solver n steps before it aborts?

The below doesn’t print sequentially.

``````
step!(integ)
while check_error(integ) == :Success && integ.t < last(u0)
    step!(integ)

    if integ.t > 5000. # I know it tends to fail around this time
        println("Before fail:")
        @show integ.u
        @show integ.t
    end
end

if check_error(integ) != :Success
    println("--fail:")
    @show integ.uprev
    @show integ.tprev
end
``````

what do you mean?

yes, but instead of setting it to a small value, set it to a real zero.

``````
zero_condition(u, t, integrator) = u[1] < 1e-12
function zero_out!(integrator)
    integrator.u[1] = 0
end
``````

That’s a typo; you’re right, it should be set to 0.

The log output was printing correctly; that was my mistake. I think the function was being called twice in parallel by other parts of the code.

The issue in my differential equation seems to be the EEA rate variable. The equation as written doesn’t guarantee the bound I intended (i.e. that the EEA flux stays below u[1]).
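One way to enforce that bound inside the model function is to cap the total EEA flux at the currently available pOC (a sketch only; the helper name and the hard `min` are my assumptions, not the original model's formulation):

``````
# Sketch: cap the total EEA flux at the currently available pOC, so
# du[1] = pOC_input_rate - bounded_eea_flux(...) can never drain more
# pOC than exists. Names follow the model function above.
function bounded_eea_flux(eea_rate, cell_count, pOC_content)
    raw_flux = eea_rate * cell_count
    return min(raw_flux, pOC_content)
end
``````

Note that a hard `min` introduces another discontinuity, so it would likely need the same callback treatment as the existing `max` in the model.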