Hello all, I’ve been struggling to get the full solution to a set of ODEs that describe an inflationary cosmology; the equations, the values of the constants, and the initial values are given in the code below. In these equations, u[1] is proportional to the volume of the universe (i.e. proportional to the cubic scale factor).
I know from previous literature that this system undergoes very fast inflation around t=2929, but when u[1] is around 1e80 inflation stops and the system stops growing as rapidly. I’ve tried all sorts of different solvers, but I usually get the error that maxiters is too low. When I manually increase maxiters, the solution doesn’t get any deeper into inflation, but the computation time increases a lot. Does anybody have experience with solving rapidly-growing systems in Julia and can give me some advice? Do you recommend a specific algorithm (I’ve basically tried all the ones in DifferentialEquations.jl)? Any help would be most appreciated. Below is my code so you can take a look.
using Plots
using DifferentialEquations
using Roots
m=1.2e-6;
γ=0.2375;
Δ=3/(8*π*(γ^2)*0.41);
Δs=sqrt(Δ);
ϕ0=0.97;
v0=1.0;
β0=π/(2*Δs);
Ω(v,β)=(v*sin(Δs*β))/Δs
Heff(v,β,ϕ,p)=(8π*((p^2/v)+v*m^2*ϕ^2)-(6*Ω(v,β)^2/(v^2*γ^2)))
Heff0(p)=Heff(v0,β0,ϕ0,p)
p0=find_zero(Heff0, (0,1))
function geometria(du,u,p,t)
    # u = [v, β, ϕ, p]: volume, β, scalar field, field momentum
    du[1]=(3(u[1]^(1/3))^4*sin(2*Δs*(u[2])))/(2*Δs*γ)
    du[2]=2*π*γ*(u[1]^(1/3))*(m^2*(u[3])^2-((u[4])^2/u[1]^2))-((3*(u[1]^(1/3))*sin(Δs*(u[2]))^2)/(2*Δ*γ))
    du[3]=(u[4])/(u[1]^(1/3))^2
    du[4]=-(u[1]^(1/3))^4*m^2*(u[3])
end
uih = [v0;β0;ϕ0;p0]
tspan = (0.0,3000.0)
prob = ODEProblem(geometria,uih,tspan)
sol = DifferentialEquations.solve(prob, alg_hints=[:stiff], reltol=1e-6);
The following graph is as far as the solution gets and may help you visualize the problem
How is this model solved in the literature? 1e80 is a crazy large number. I tried to convert the initial condition to BigFloats, but it doesn’t seem to help.
Setting abstol and reltol helps a bit (letting it give up the error control for u[1] and u[4]).
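For reference, roughly what I mean (the particular numbers are just placeholders to tune):

# Per-component absolute tolerances: a huge abstol on u[1] and u[4] effectively
# gives up absolute error control on the exploding components, while keeping it
# on u[2] and u[3]. BigFloat initial conditions work the same way if the tspan
# is also BigFloat.
abstols = [1e10, 1e-10, 1e-10, 1e10]   # placeholder values
sol = solve(prob, alg_hints=[:stiff], abstol=abstols, reltol=1e-6)
# prob_big = ODEProblem(geometria, BigFloat.(uih), (big(0.0), big(3000.0)))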
I’d double check your ODE system. At very low tolerances it seems to give you this result, which indicates that it’s truly the result of the ODE that you wrote down.
Any chance you could solve for log(u[1]) rather than u[1]? That would improve the scaling (rough sketch below). Tracking the dynamics in the inflationary part of the solution looks really hard; I can see how the solver inside an implicit method would get into real trouble there.
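Just to sketch what I mean, with w = log(u[1]) as the first state (I haven’t run this, so the algebra should be double-checked):

# Same system rewritten for u = [w, β, ϕ, p] with w = log(volume),
# using dw/dt = (dv/dt)/v. Untested sketch.
function geometria_log(du,u,p,t)
    w, β, ϕ, pϕ = u
    a = exp(w/3)                        # cube root of the volume
    du[1] = 3*a*sin(2*Δs*β)/(2*Δs*γ)
    du[2] = 2*π*γ*a*(m^2*ϕ^2 - pϕ^2*exp(-2*w)) - 3*a*sin(Δs*β)^2/(2*Δ*γ)
    du[3] = pϕ*exp(-2*w/3)
    du[4] = -a^4*m^2*ϕ
end
prob_log = ODEProblem(geometria_log, [log(v0); β0; ϕ0; p0], tspan)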
The thing is, the solution acts exactly as I’d expect it to; I just can’t get any part of the solution past this point. I fully expect the solution to stop growing as rapidly when it gets to 1e80. In fact, if I choose the ImplicitEuler method with the default error tolerances I get the following graph
I remember trying it like that a while back (more specifically, I tried solving for log(u[1])/3) and I ran into the same issue. I imagine this happens because the values of the derivatives don’t really change, so the Jacobian is just as large and the timesteps still tend to zero. Still, I will try once again for good measure.
As far as I know, the first place where these equations were solved was https://arxiv.org/pdf/1601.01716.pdf, but there’s no mention of the numerical method. A similar set of equations in https://arxiv.org/pdf/1401.5256.pdf was solved with the adaptive Runge-Kutta 89 method (which I think is Vern9(); I’ve tried both the lazy and non-lazy versions with no luck, but I’ll look in the article’s references to see if it’s something different). I tried what you did with relaxing the error tolerances and using BigFloat, but the only thing that happened was that inflation started a little bit later (you can check that the maximum values of u[1] are of the same order) and it got stuck at the same place.
I’ve done graphs with what I have so far (that is, before the end of inflation) and they look identical to those in the literature so I think the ODE system is correct.
For anyone wondering “how” inflation ends, or more specifically, how u[1] stops growing: u[4] becomes negative somewhere around t=2100 and then also grows very large (negatively, of course).
Can you convince the integrator to evaluate the Jacobian at every time step? If your solution is rapidly varying and the Jacobian is ill-conditioned, the logic in some integrators can get confused and not update the Jacobian when it should. We ran into this with DASSL years ago; we had to hack the code, but I don’t think that is necessary anymore. I think, though my memory on this may be wrong, that PVODE and CVODE let you do this.
@ChrisRackauckas, is there a flag @MrSantiagoPrado can set in DifferentialEquations.jl to force Jacobian updates at every time step?
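In the meantime, one thing that might be worth trying (just a sketch; I’m not aware of a dedicated flag, but as I understand it Rosenbrock methods build their linearization from a freshly computed Jacobian at every step, so they shouldn’t suffer from stale-Jacobian reuse):

# Rosenbrock method: the Jacobian enters the method itself, so it is formed
# anew at each step rather than being reused across steps.
sol = solve(prob, Rodas5(), reltol=1e-8, abstol=1e-8)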
I’m a bit new to numerical algorithms and DifferentialEquations.jl, but is there any advantage to calculating the exact Jacobian by hand and feeding it to the solver?
I’ve noticed the r.h.s. of your equations mostly uses the diameter of the universe (i.e. cbrt(volume)). If that’s the case, maybe you could make that your dynamic variable and get somewhat smaller numbers to work with; you can always cube it back in your analysis (see the sketch below).
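Something along these lines (a rough sketch with a = cbrt(volume) as the first state, derived from the posted r.h.s. via da/dt = (1/3)·v^(-2/3)·dv/dt, so please double-check the algebra):

# Same system for u = [a, β, ϕ, p] with a = cbrt(volume). Untested sketch.
function geometria_cbrt(du,u,p,t)
    a, β, ϕ, pϕ = u
    du[1] = a^2*sin(2*Δs*β)/(2*Δs*γ)
    du[2] = 2*π*γ*a*(m^2*ϕ^2 - pϕ^2/a^6) - 3*a*sin(Δs*β)^2/(2*Δ*γ)
    du[3] = pϕ/a^2
    du[4] = -a^4*m^2*ϕ
end
prob_cbrt = ODEProblem(geometria_cbrt, [cbrt(v0); β0; ϕ0; p0], tspan)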
Most integrators default to a finite difference Jacobian. That is generally ok, and you’d see very little effect from switching to an analytic Jacobian. However, your Jacobian is ill-conditioned and your solution is rapidly varying during inflation, so the finite difference error might make a difference. If your integrator uses automatic differentiation, then you already have a (computer-generated) analytic Jacobian, so you won’t gain much. For small systems, the performance difference should be pretty minimal. A hand-computed Jacobian is faster than anything else if the human time to compute it is reasonable.
The good news is that your system is small and the Jacobian is easy to compute by hand. I would give it a try in this case to see what happens. My guess is that it won’t matter much, but I would be interested in how it works out. Please reply to this message after you do it.
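For concreteness, here is roughly what passing a hand-written Jacobian looks like in DifferentialEquations.jl. The entries below were differentiated by hand from the r.h.s. as posted, so treat them as an unverified sketch and check them (e.g. against ForwardDiff) before trusting them:

# Hand-derived Jacobian of `geometria` with respect to u = [v, β, ϕ, p].
function geometria_jac(J,u,p,t)
    v, β, ϕ, pϕ = u
    c = v^(1/3)
    J .= 0
    J[1,1] = 2*c*sin(2*Δs*β)/(Δs*γ)
    J[1,2] = 3*c^4*cos(2*Δs*β)/γ
    J[2,1] = 2*π*γ*((1/3)*m^2*ϕ^2/c^2 + (5/3)*pϕ^2/v^(8/3)) - sin(Δs*β)^2/(2*Δ*γ*c^2)
    J[2,2] = -3*Δs*c*sin(Δs*β)*cos(Δs*β)/(Δ*γ)
    J[2,3] = 4*π*γ*m^2*ϕ*c
    J[2,4] = -4*π*γ*pϕ/v^(5/3)
    J[3,1] = -(2/3)*pϕ/v^(5/3)
    J[3,4] = 1/c^2
    J[4,1] = -(4/3)*c*m^2*ϕ
    J[4,3] = -c^4*m^2
end
ff = ODEFunction(geometria, jac=geometria_jac)
prob_jac = ODEProblem(ff, uih, tspan)
sol = solve(prob_jac, Rodas5(), reltol=1e-6)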
Ok, I think I got it: the mechanism by which V stops growing is that \beta \to 0. However, numerically there is a small residual for \beta that doesn’t vanish (it stays around 10^{-16}). So you need to fix that somehow if you want to stop V from exploding.
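If it helps, one rough way to kill that residual is a callback that snaps β to exactly zero once it has numerically decayed below some threshold. The threshold here is made up, and whether such snapping is physically justified is a separate question:

# DiscreteCallback: when |β| drops below a (hand-picked) threshold, set it to 0
# so the leftover ~1e-16 residual can no longer drive the growth of V.
condition(u,t,integrator) = abs(u[2]) < 1e-12
snap_beta!(integrator) = (integrator.u[2] = 0.0)
cb = DiscreteCallback(condition, snap_beta!)
sol = solve(prob, alg_hints=[:stiff], reltol=1e-6, callback=cb)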