I think the problem is the behavior of `PositiveDomain`. Its documentation says:
> It reduces the next step by a certain scale factor until the extrapolated value at the next time point is non-negative with a certain tolerance. Extrapolations are cheap to compute but might be inaccurate, so if a time step is changed it is additionally reduced by a safety factor of 0.9. Since extrapolated values are only non-negative up to a certain tolerance and in addition actual calculations might lead to negative values, also any negative values at the current time point are set to 0.
The last point is crucial. We want to see very small concentrations down to about 1e-6, but looking at the output data, those values are simply set to zero by the callback.
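For reference, a minimal sketch of the setup I mean (the right-hand side, initial state, parameters, and solver choice below are placeholders, not the actual model):

```julia
using OrdinaryDiffEq, DiffEqCallbacks

# Hypothetical toy system standing in for the real model: a decay chain
# whose second species passes through very small concentrations.
function rhs!(du, u, p, t)
    du[1] = -p[1] * u[1]
    du[2] =  p[1] * u[1] - p[2] * u[2]
end
prob = ODEProblem(rhs!, [1.0, 0.0], (0.0, 100.0), [1.0, 10.0])

# With PositiveDomain, any component the solver drives slightly negative
# is clipped to exactly 0.0 in the saved output, which is why the small
# concentrations vanish from the diagram.
sol = solve(prob, Rodas5(); callback = PositiveDomain())
```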
Using `isoutofdomain = (u, p, t) -> any(x -> x < 0, u)` instead of `callback = PositiveDomain()` looks a lot better, filling in the “gaps” in the diagram. I also lowered the tolerances to `reltol = 1e-11, abstol = 1e-11` just to be sure, but that is probably too low (and `isoutofdomain` should have a similar effect on its own).
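In solver-call form (reusing the toy `prob` from the sketch above), the change amounts to:

```julia
# Reject any step whose state leaves the positive domain; the solver then
# retries with a smaller dt instead of clipping values to zero, so tiny
# positive concentrations (~1e-6) survive in the output.
sol = solve(prob, Rodas5();
            isoutofdomain = (u, p, t) -> any(x -> x < 0, u),
            reltol = 1e-11, abstol = 1e-11)
```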
UPDATE: A combination of a low `abstol` and the `isoutofdomain` check seems to be enough; there is no need for such a low `reltol`, which just slows everything down.
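So the call I ended up with looks roughly like this (the `reltol` value here is only an illustration of “moderate”, not a benchmarked choice):

```julia
# Tight abstol to resolve the small concentrations, a moderate reltol so
# the integration stays fast, plus the domain check.
sol = solve(prob, Rodas5();
            isoutofdomain = (u, p, t) -> any(x -> x < 0, u),
            reltol = 1e-8, abstol = 1e-11)
```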
PS: Lowering the tolerance inside `PositiveDomain` itself might also have the same effect; I haven't tried that yet.
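If I read the DiffEqCallbacks docs correctly, `PositiveDomain` accepts an `abstol` keyword for its non-negativity tolerance, so the untested variant would look something like this (the 1e-12 is an arbitrary example value):

```julia
# Untested: tighten the tolerance of PositiveDomain's extrapolation check
# so small positive concentrations are not treated as "negative enough"
# to be clipped.
sol = solve(prob, Rodas5(); callback = PositiveDomain(abstol = 1e-12))
```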