Piecewise differential equations

Hello,

I am new to Julia, and I would like to solve this system

\frac{dx}{dt} = k_1 y \\
\frac{dy}{dt} = k_2 y + I

where k_1 and k_2 are constant parameters. However, I = 0 when y < 0 and I = Ky otherwise, where K is a constant value.

I followed the tutorial about ODEs in DifferentialEquations.jl; however, I did not find anything about piecewise functions. The question is: how is it possible to solve this piecewise differential equation in DifferentialEquations.jl?

Thanks in advance

Francisco


This is discussed here and I haven't added it to the FAQ yet:

Dear Chris,

Thank you for your answer. I know that a DiscreteCallback would be the solution to my problem; however, I cannot work out how to program it, because in all the examples, as far as I understood, the discontinuity is given by some time point, and in my case the parameter change is due to the actual value of the function itself. Could you help me a bit, please?

Thank you very much

Just use a ContinuousCallback that triggers when the zeroing happens. Or use the smoothed version of the operation. I'll see if I get a chance to do a detailed response sooner rather than later. I see you posted here as well:
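For the system in the original question, a minimal sketch of that ContinuousCallback approach might look like the following. The parameter values here are made up, and I've read the condition as I = Ky when y ≥ 0 and I = 0 when y < 0:

```julia
using OrdinaryDiffEq

# u = (x, y); p = (k1, k2, K) -- illustrative values only
function rhs(u, p, t)
    k1, k2, K = p
    I = u[2] < 0 ? 0.0 : K * u[2]   # piecewise input term
    [k1 * u[2], k2 * u[2] + I]
end

# The condition crosses zero exactly at y = 0, so the integrator
# places a step on the switching point instead of stepping over it.
condition(u, t, integrator) = u[2]
affect!(integrator) = nothing  # nothing to change: rhs already branches on y

prob = ODEProblem(rhs, [1.0, -0.5], (0.0, 5.0), (0.5, -1.0, 2.0))
sol = solve(prob, Tsit5(), callback=ContinuousCallback(condition, affect!))
```

If the switch should change a parameter rather than the right-hand side directly, mutate `integrator.p` inside `affect!` instead.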

Thank you very much, all your help is very much appreciated! :smile: I am not such a good programmer, so I am going to read about callback functions.

Here is a (mildly) interesting example x''+x'+x=\pm p_1 where the sign of p_1 changes when a switching manifold is encountered at x=p_2. To make things more interesting, consider hysteresis in the switching manifold such that p_2\mapsto -p_2 whenever the switching manifold is crossed.

The code is relatively straightforward; the StaticArrays/SVector/MVector usage can be ignored, as it is only there for speed.

using OrdinaryDiffEq
using StaticArrays

f(x, p, t) = SVector(x[2], -x[2]-x[1]+p[1])  # x'' + x' + x = ±p₁
h(u, t, integrator) = u[1]-integrator.p[2]  # switching surface x = ±p₂
g(integrator) = (integrator.p .= -integrator.p)  # impact map (p₁, p₂) ↦ -(p₁, p₂)

prob = ODEProblem(f,  # RHS
                  SVector(0.0, 1.0),  # initial value
                  (0.0, 100.0),  # time interval
                  MVector(1.0, 1.0))  # parameters
cb = ContinuousCallback(h, g)
sol = solve(prob, Vern6(), callback=cb, dtmax=0.1)

Then plot sol[2,:] against sol[1,:] to see the phase plane - a nice non-smooth limit cycle in this case.

Note that if you try to use interpolation of the resulting solution (i.e., sol(t)) you need to be very careful around the points that have a discontinuous derivative, as the interpolant goes a little awry. That's why I've used dtmax=0.1 to get a smoother solution output in this case. (I'm probably not using the most appropriate integrator either, but it's the one that I was using in a previous piece of code that I copied and pasted :slight_smile:)

If you are interested in the dynamics of piecewise differential equations there are plenty of gotchas to watch out for, particularly if your system is a Filippov system; effects like sliding require a bit more care than shown in the code above. (E.g., I'm not sure if DifferentialEquations.jl can handle a system that changes from an ODE to a DAE part way through solving, such as happens with the sliding vector fields that come from Filippov systems; ever tried it @ChrisRackauckas?)

That shouldn't be a problem. It won't interpolate over the derivative discontinuity. It has the discontinuity as one of its interval endpoints.

DAEs and ODEs solve the same way. If your system is being solved by a solver that already handles DAEs (i.e. Rosenbrock with mass matrices, IDA, etc.) I don't see it causing a problem.
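As a concrete sketch of the mass-matrix route mentioned here: the classic ROBER problem, where a zero row in the mass matrix turns the last equation into an algebraic constraint, and a Rosenbrock method solves it just like an ODE. (This is a standard illustrative example, not from the thread.)

```julia
using OrdinaryDiffEq

# ROBER in mass-matrix DAE form: M*u' = f(u).
# The zero row of M makes the last equation the
# algebraic constraint y1 + y2 + y3 = 1.
function rober(u, p, t)
    y1, y2, y3 = u
    [-0.04y1 + 1e4 * y2 * y3,
      0.04y1 - 1e4 * y2 * y3 - 3e7 * y2^2,
      y1 + y2 + y3 - 1.0]
end

M = [1.0 0.0 0.0;
     0.0 1.0 0.0;
     0.0 0.0 0.0]
dae = ODEFunction(rober, mass_matrix=M)
prob_dae = ODEProblem(dae, [1.0, 0.0, 0.0], (0.0, 1e-2))
sol_dae = solve(prob_dae, Rodas4())
```

Callbacks attach to this `solve` call exactly as in the pure-ODE examples above.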

Unfortunately it seems to be a problem. Try my code above and then

t = range(0, 100, length=10001)
solt = sol(t)
plot(solt[1,:], solt[2,:])

and you'll see an extra kink in the solution around x=1, x'=0.7 as it switches between the different vector fields. I did this initially without restricting the time step and the results were even more pronounced (and confusing). I don't know if it is just because I'm abusing the event detection routines somehow.

That's a good point; I'll have to try that some time.

Oh, that's a "bug" due to the Verner methods using a lazy interpolant. I need to document why this is happening and offer a workaround flag.

using OrdinaryDiffEq
using StaticArrays

f(x, p, t) = SVector(x[2], -x[2]-x[1]+p[1])  # x'' + x' + x = ±p₁
h(u, t, integrator) = u[1]-integrator.p[2]  # switching surface x = ±p₂
g(integrator) = (integrator.p .= -integrator.p)  # impact map (p₁, p₂) ↦ -(p₁, p₂)

prob = ODEProblem(f,  # RHS
                  SVector(0.0, 1.0),  # initial value
                  (0.0, 100.0),  # time interval
                  MVector(1.0, 1.0))  # parameters
cb = ContinuousCallback(h, g)
sol = solve(prob, DP8(), callback=cb, dtmax=0.1)

t = range(0, 100, length=10001)
solt = sol(t)
plot(solt[1,:], solt[2,:])

Let me edit the docs and put a fix into the coming v0.7 version. This is a very specific case that shows up only when:

  1. You use a method with a lazy interpolant (currently and probably forever, this is only the 4 Verner methods and BS5)
  2. You change a parameter using a callback

That laziness caveat is now mentioned in the docs ( https://github.com/JuliaDiffEq/DiffEqDocs.jl/commit/eda9b90942ca84dbef1f7bcc8bbfcee73b537738 ) and the non-lazy mode has a PR which will merge when tests pass ( https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/pull/418 ). I still want to keep the lazy default since it has about half as many steps if you aren't interpolating every step, so you really only want the extra cost in very specific cases (and if you're using a continuous callback it's not an extra cost anyway, since you have to be interpolating).
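Assuming the non-lazy mode lands as a constructor flag on the Verner methods (as the linked PR suggests), the workaround applied to the example above would presumably look like:

```julia
using OrdinaryDiffEq
using StaticArrays

f(x, p, t) = SVector(x[2], -x[2] - x[1] + p[1])   # x'' + x' + x = ±p₁
h(u, t, integrator) = u[1] - integrator.p[2]      # switching surface x = ±p₂
g(integrator) = (integrator.p .= -integrator.p)   # flip (p₁, p₂)

prob = ODEProblem(f, SVector(0.0, 1.0), (0.0, 100.0), MVector(1.0, 1.0))
cb = ContinuousCallback(h, g)

# lazy=false builds the full interpolant on every step, so the dense
# output stays valid even though the callback mutates integrator.p
sol = solve(prob, Vern6(lazy=false), callback=cb, dtmax=0.1)
solt = sol(range(0, 100, length=10001))
```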

Ugh, I don't like the details this adds, though. This side note is invading the thread now.
