I have been trying to solve a system of differential equations in Julia. I have run into a few problems and am not clear on what I am doing wrong.
Problem definition:
The system I am working on is a model of the immune system, represented by 9 differential equations. One of the parameters is time-dependent: a step function representing the lifetime of an infection in the immune system. This makes the system discontinuous at the times the infection enters and is removed. To handle this, I use the tstops option together with callbacks in the solver. The system seems to be stiff, so I solve it with the Rosenbrock23 algorithm.
Problems:
Because I do not know the parameters, they are assigned random values. For independent simulations (with different parameter sets), I sometimes run into negative solutions. I tried using the callback PositiveDomain but that does not help. Even for simulations where the Rosenbrock23 algorithm handles the integration quite well, adding the PositiveDomain callback gives negative solutions and aborts with the warning:
WARNING: dt <= dtmin. Aborting. If you would like to force continuation with dt=dtmin, set force_dtmin=true
I understand that the step size can’t be reduced any further and that’s why it’s aborting, but I don’t understand why adding the PositiveDomain callback would cause this. Or is my implementation faulty?
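Roughly, I am adding it like this (antigen is my parameter-change callback and dis the vector of switch times, both shown further down):

using DifferentialEquations, DiffEqCallbacks

# Combine positivity enforcement with the parameter-switch callback:
cb = CallbackSet(PositiveDomain(), antigen)
sol = solve(prob, Rosenbrock23(), callback=cb, tstops=dis)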
In cases where I do get negative values for some variables, the isoutofdomain solver option also doesn’t seem to make any difference.
It’s not quite clear to me, but I suspect that the differential equations allow the solution to become negative (for some parameters). If that is indeed the case, then the PositiveDomain callback cannot help. As you seem to be fitting parameters, I’d suggest a callback which triggers as a variable reaches zero, stops the integration there, and lets you discard the parameters as unsuitable.
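A minimal sketch of what I mean (terminate as soon as any state component crosses zero, using your prob from above):

# Root-finding condition: minimum(u) crosses zero when any component does.
hit_zero(u, t, integrator) = minimum(u)
stop!(integrator) = terminate!(integrator)  # end the integration at that point
zero_cb = ContinuousCallback(hit_zero, stop!)
# then pass callback = zero_cb (or combine it with your others in a CallbackSet)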
No matter how low I set the tolerance, the ODE was going negative for me at exactly t=320, which is exactly when the parameters are changed:
function affect!(integrator)
    @show "parameter change", integrator.t
    # At infection entry times (h1), switch the step-function parameter on:
    k = findall(x -> x == integrator.t, h1)
    for i in 1:length(k)
        integrator.p[10][k[i]] = rand()
    end
    # At infection removal times (h2), switch it back off:
    k = findall(x -> x == integrator.t, h2)
    for i in 1:length(k)
        integrator.p[10][k[i]] = 0.0
    end
end
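For completeness, this is roughly how the affect! is attached (assuming dis is the union of h1 and h2, the infection entry and removal times):

# Fire only at the stored event times, which the solver hits exactly via tstops:
switch_condition(u, t, integrator) = t in h1 || t in h2
antigen = DiscreteCallback(switch_condition, affect!)
# solve(prob, Rosenbrock23(), tstops=dis, callback=antigen)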
I am not fitting the parameters here. I have several clones of a single type of cell, and they all vary from each other in their properties. In the function f, the loop over i creates n different clones. That is why I defined random parameter values within a range beforehand and am not changing them during the simulation.
Is isoutofdomain working? I simulated a parameter set with just Rosenbrock23 and found a negative solution. So I kept the parameter set fixed and added isoutofdomain, yet it did not seem to work: the solution still went negative.
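For reference, I am passing it roughly like this (rejecting any step that contains a negative component):

sol = solve(prob, Rosenbrock23(),
            isoutofdomain = (u, p, t) -> any(x -> x < 0, u),
            tstops = dis, callback = antigen)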
Can you post an MWE where that is happening? It didn’t go negative for the parameters I checked. A minimal working example should take out the randomness. A good check is sum(sol .< 0), which counts the negative values.
Yes, this is something I want to address sooner rather than later.
My fault. It was because of an incorrect alg definition. Replacing it with Rosenbrock23() corrected it. Does that mean I was simulating with Tsit5() until now?
That shouldn’t make a difference, and I cannot recreate it. Can you get an example where the auto-algorithm gives a negative value? That should just be impossible with isoutofdomain.
The attached code has a parameter set where the alg specification (alg=Rosenbrock23 vs Rosenbrock23()) makes the difference between positive and negative solutions.
I will check, but I don’t think I will find one. The command seems quite clear; I was only doubtful of the way I had implemented it. If that is correct, I doubt I can find a violation case. But if I do, I will attach a link here.
It seems that the interpolation goes negative (and is somewhat unstable) with the RK method, but the step values are all positive. This can happen with isoutofdomain but cannot happen with PositiveDomain, since isoutofdomain only checks the steps. In fact, this small dip may be what’s tripping up PositiveDomain(): it tries to maximize steps and then interpolate back pre-zero to speed things up, whereas isoutofdomain simply rejects any step outside of the domain.
But you can check that lower tolerances do make the “full interpolation” positive:
# Solve with tight tolerances, then sample the dense interpolation on a fine grid:
sol = solve(prob, Rosenbrock23(), abstol=1e-12, reltol=1e-12, tstops=dis, callback=antigen)
A = sol(tspan[1]:0.01:tspan[2])  # evaluate the interpolant, not just the steps
sum(A .< 0)                      # count negative interpolated values; should be 0
I checked again. Because alg=Rosenbrock23 is the wrong syntax, the solver falls back to the default, as when no algorithm is specified. With the default tolerances, the composite algorithm used in this case was Tsit5() plus Rosenbrock23(), whereas after lowering abstol and reltol to 1e-12, the solver chose Vern9() and Rodas5() as the composite algorithm.
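In other words, the two calls I was comparing amount to this:

# Wrong: passes the algorithm *type*; the solver silently falls back to
# the default algorithm selection, as if no algorithm were specified.
sol = solve(prob; alg = Rosenbrock23, tstops = dis, callback = antigen)

# Right: pass an algorithm *instance*.
sol = solve(prob, Rosenbrock23(), tstops = dis, callback = antigen)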
Ah! The interpolation. I tried out Rosenbrock23(), AutoTsit5(Rosenbrock23()), and the default (i.e. the composite of Vern9() and Rodas5()), with and without lower tolerances (1e-12), and found that the interpolation goes negative in the cases where Tsit5() is used with the default tolerances. So yes, I think you are right that the interpolation with the RK method is at fault here.
I think a more universal solution would be the way to go, but fixing this problem might be a start. My negativity question has already been answered: I was using the wrong syntax! So I will accept your last reply as the answer.
Also, I was interested in the performance of this code, as I was looking to simulate a larger system (~2000 ODEs); as it is now, the simulation fails because it runs out of memory. Do I need to start a new topic for this?
Yup, new topic, but feel free to ping me. My first guess would be that you’re saving the whole continuous solution, which may not be a good idea for a system of ~2000 ODEs.
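As a sketch (assuming you only need the solution at fixed output times, here every 1.0 time units), turning off the dense output looks like:

sol = solve(prob, Rosenbrock23(),
            dense = false,           # don't store the continuous interpolant
            save_everystep = false,  # don't keep every internal step
            saveat = tspan[1]:1.0:tspan[2])  # save only at these output times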