DifferentialEquations: enforcing realistic solution?

I have a (large) ODE where each state x must lie in the range [0,1]. Due to numerical inaccuracies, the solution drifts outside of this range. I have tried to map x into the allowed range at the start of the function defining the ODE, but this does (of course) not solve the problem.

Question: what is the right procedure to constrain the solution to the range [0,1]? Use some callback structure and map x to the allowed range before x is stored? Other tricks?

NOTE: this mapping into [0,1] needs to be done after each update of the ODE solver, and not only at certain events.

There is isoutofdomain, maybe that can handle it.


Well, if isoutofdomain “doesn’t handle it”, that means the ODE doesn’t actually have a solution that lies in that range. So isoutofdomain is a rather safe way to do it, but it is sometimes overkill and can hurt performance because of its safety. A quicker way is the domain callbacks, which use interpolation at the cost of some safety.
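For reference, `isoutofdomain` is passed as a keyword argument to `solve`. A minimal sketch (the logistic ODE here is illustrative, not the poster's actual model — its exact solution stays in [0, 1]):

```julia
using DifferentialEquations

# Logistic growth: the exact solution stays in [0, 1].
f(u, p, t) = u * (1 - u)
prob = ODEProblem(f, 0.5, (0.0, 10.0))

# Any step whose result leaves [0, 1] is rejected and retried with a
# smaller dt. For array states, test with any(x -> x < 0 || x > 1, u).
sol = solve(prob, Tsit5(); isoutofdomain = (u, p, t) -> u < 0 || u > 1)
```

Because rejected steps are simply retried with a smaller dt, this never modifies the state — it only refuses to accept out-of-bounds ones.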


"isoutofdomain : Specifies a function isoutofdomain(u,p,t) where, when it returns true, it will reject the timestep. Disabled by default."

That is not really what I need — I don’t want the timestep to be rejected. Would that help at all? Won’t the next timestep just do the same thing? I could understand that a timestep perhaps should be rejected if there is some stochasticity involved, but my equation is deterministic, and I’d guess the solver would do the same thing the next time?

So a little bit more concrete, in my case, state u (or x or whatever) is a matrix, and I have a model with a first attempt similar to:

function model(du, u, p, t)
    #... current attempt
    u_ = constrain.(u)
    du .= f(u_)  # note: `du = f(u_)` would only rebind the local name; use .= to mutate
end

where function constrain does:

function constrain(u)
    if u < 0
        return 0
    elseif u > 1
        return 1
    else
        return u
    end
end

But this doesn’t hinder the solution to drift out of the range [0,1].
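As an aside, Base Julia already provides this clipping as `clamp`, which avoids the if/elseif pitfalls above:

```julia
# clamp(x, lo, hi) returns lo if x < lo, hi if x > hi, and x otherwise.
constrain(u) = clamp(u, 0.0, 1.0)

constrain(-0.2)  # 0.0
constrain(1.3)   # 1.0
constrain(0.4)   # 0.4
```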

So, what I look for is something like:

function model(du, u, p, t)
    #... new attempt
    du .= f(u)
end

where I instead fix the problem via a solver modification.

Question: would the correct approach be to use a DiscreteCallback with condition and affect! functions:

condition(u, t, integrator) = minimum(u) < 0 || maximum(u) > 1
affect!(integrator) = (integrator.u = constrain.(integrator.u))
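Wired into `solve`, that could look roughly like this (a hedged sketch: the decaying matrix ODE is a stand-in for the real model, and `save_positions = (false, true)` makes the solver save the state only after `affect!` has run, so the stored solution respects the bounds):

```julia
using DifferentialEquations

# Toy matrix ODE standing in for the real model: exponential decay,
# whose exact solution stays in [0, 1].
f!(du, u, p, t) = (du .= -u)
prob = ODEProblem(f!, fill(0.5, 2, 2), (0.0, 5.0))

condition(u, t, integrator) = minimum(u) < 0 || maximum(u) > 1
affect!(integrator) = (integrator.u .= clamp.(integrator.u, 0, 1))
cb = DiscreteCallback(condition, affect!; save_positions = (false, true))

sol = solve(prob, Tsit5(); callback = cb)
```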


Hm… with this DiscreteCallback, I get a solution:

[plot: solution with the callback]

while if I don’t do this, I get a solution:

[plot: solution without the callback]

So the problem is not completely solved, but it is better. Perhaps if I fine-tune some simulation conditions, I can avoid the problem…

A DiscreteCallback will enforce the condition only after each full time step, not for the internal stages of a Runge–Kutta discretization (assuming that you are using a Runge–Kutta method).

If satisfying the bounds is a hard requirement that you want to enforce for every internal stage, you should look for methods that accept a stage_limiter!, such as strong stability preserving (SSP) methods; see ODE Solvers · DifferentialEquations.jl. These act similarly to callbacks but are applied after every stage of the Runge–Kutta method. That should do what you ask for, but you should still think about your ODE per se: does it really satisfy the bounds property you ask for, and if not, why? Working on this theoretical side may be more beneficial than this kind of band-aid based on limiters.
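A sketch of what that could look like (the exact limiter signature has varied between OrdinaryDiffEq versions, so check the docs of your installed release):

```julia
using DifferentialEquations

# Toy decaying ODE whose exact solution stays in [0, 1].
f!(du, u, p, t) = (du .= -u)
prob = ODEProblem(f!, [0.5, 0.9], (0.0, 1.0))

# Called after every internal Runge-Kutta stage; mutate u in place.
stage_limiter!(u, integrator, p, t) = (u .= clamp.(u, 0, 1))

# SSP methods such as SSPRK33 accept a stage limiter as an argument;
# SSPRK33 needs an explicit dt here.
sol = solve(prob, SSPRK33(stage_limiter!); dt = 0.1)
```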


If your equation does not mathematically allow the solution to “drift” outside of the domain, then if it does, it means that the solver took too large a step and overstepped the domain boundary. Thus rejecting the step and taking a smaller one which does not overstep is a good thing to do. It’s easy to try, so I would go for it; if it impacts performance too much you can still try other approaches mentioned.

However, as @ranocha said, if your equation is not mathematically constrained to the domain but only by your if-elseif-else statement, then the solution is non-smooth at the boundary, which is something many solvers struggle with (and I think isoutofdomain will not help, as it assumes smoothness). In that case you probably need a fancier solution. But, again as @ranocha said, it would be best if you can reformulate your equation so that it is smooth at the boundary while still respecting it.
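One common smooth reformulation, sketched here purely as an illustration (not from the thread): substitute x = σ(y) with the logistic function, solve an unconstrained ODE in y, and recover x afterwards — x then stays in (0, 1) by construction.

```julia
# Logistic transform: maps all of ℝ into (0, 1).
σ(y) = 1 / (1 + exp(-y))

# If dx/dt = f(x) and x = σ(y), the chain rule gives
#   dy/dt = f(σ(y)) / (σ(y) * (1 - σ(y)))
# since dσ/dy = σ(y) * (1 - σ(y)).
g(y, f) = f(σ(y)) / (σ(y) * (1 - σ(y)))
```

For example, with the logistic growth rate f(x) = x(1 - x), the transformed right-hand side is identically 1, i.e. y grows linearly.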

Ah, I found a bug in the code… I applied a fitted function to argument x, and didn’t constrain the fitted function when x was outside of the range…
