I am wondering why the code below allocates as much memory as it does despite using an in-place function definition. Is this normal or is there a way to avoid these allocations?

```julia
using DifferentialEquations

function odedef!(du, u, p, t)
    du[1] = -u[1] * u[3]
    du[2] = -u[2] * u[1]
    du[3] = -u[3] * u[2]
end

u0 = [1.0, 0.8, 0.5]
p = []
tspan = [0.0, 10.0]
t = collect(0:0.01:10)
prob = ODEProblem(odedef!, u0, tspan, p, saveat = t)
@time sol = solve(prob);
```

```
1.256520 seconds (3.78 M allocations: 163.340 MiB, 16.75% gc time)
```

The biggest issue is that you are measuring compilation time, and compilation involves a lot of allocation (not sure why). The second time you run `@time sol = solve(prob);` I’m only seeing 5.1k allocations; I believe those come from saving the solution output.
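One way to keep compilation out of the measurement entirely is `@btime` (a sketch assuming the BenchmarkTools package is installed):

```julia
using DifferentialEquations, BenchmarkTools

function odedef!(du, u, p, t)
    du[1] = -u[1] * u[3]
    du[2] = -u[2] * u[1]
    du[3] = -u[3] * u[2]
end

prob = ODEProblem(odedef!, [1.0, 0.8, 0.5], (0.0, 10.0))

# @btime runs the expression many times and reports the minimum,
# so the one-time compilation cost does not dominate the numbers.
# $prob interpolates the variable to avoid global-variable overhead.
@btime solve($prob);
```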

Also note that forcing the algorithm to save at regular time steps will slow things down considerably. DifferentialEquations.jl has put a ton of work into choosing time steps adaptively to give you the best combination of accuracy and performance; forcing regularly spaced time steps won’t improve solution accuracy.
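Note also that without `saveat` the returned solution object is a continuous interpolant, so you can evaluate it at arbitrary (even unevenly spaced) times after the solve. A minimal sketch:

```julia
using DifferentialEquations

function odedef!(du, u, p, t)
    du[1] = -u[1] * u[3]
    du[2] = -u[2] * u[1]
    du[3] = -u[3] * u[2]
end

prob = ODEProblem(odedef!, [1.0, 0.8, 0.5], (0.0, 10.0))
sol = solve(prob)  # adaptive time steps, no saveat

# Evaluate the dense solution at any times you like afterwards:
u_mid  = sol(5.0)                     # state vector at t = 5
u_grid = sol([0.0, 0.3, 1.7, 10.0])   # arbitrary, unevenly spaced times
```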

I’m not sure if that’s it. When I reduce the number of saved time points to only 11 with `t = collect(0:1:10)` I still get about as many allocations: `1.349207 seconds (3.82 M allocations: 165.336 MiB, 2.94% gc time)`

And getting rid of `saveat` altogether does not seem to improve things either:

```julia
using DifferentialEquations

function odedef!(du, u, p, t)
    du[1] = -u[1] * u[3]
    du[2] = -u[2] * u[1]
    du[3] = -u[3] * u[2]
end

u0 = [1.0, 0.8, 0.5]
p = []
tspan = [0.0, 10.0]
prob = ODEProblem(odedef!, u0, tspan, p)
@time sol = solve(prob);
```

```
1.338528 seconds (3.82 M allocations: 165.346 MiB, 2.63% gc time)
```

This is on Julia 1.5.2 and DifferentialEquations 6.15.0.

The measured values were the ones I got after a few runs (not the first one), so this should not have included compilation. Somehow I am seeing more allocations than you.

Using `saveat` was meant to emulate the fact that my real time vector is unevenly spaced, but as mentioned in my other reply it does not seem to affect the result much.

For the record, this is what I get after a restart of VS Code versus on the second run, without any `saveat`:

```julia
using DifferentialEquations

function odedef!(du, u, p, t)
    du[1] = -u[1] * u[3]
    du[2] = -u[2] * u[1]
    du[3] = -u[3] * u[2]
end

u0 = [1.0, 0.8, 0.5]
p = []
tspan = [0.0, 10.0]
prob = ODEProblem(odedef!, u0, tspan, p)
@time sol = solve(prob);
```

First run after startup:

```
17.737284 seconds (64.88 M allocations: 3.198 GiB, 6.35% gc time)
```

Second run:

```
1.342455 seconds (3.82 M allocations: 165.346 MiB, 2.42% gc time)
```

I think I see what you mean now. If I only re-run the `solve` command the second time I get far fewer allocations: `0.000124 seconds (244 allocations: 24.438 KiB)`

I suppose that means redefining `odedef!` (even if unchanged) forces recompilation? Thanks in any case, I think that solves it.

Yeah. The problem is that the `ODEProblem` is specialized on the definition of `odedef!`. Also, in general, it’s really hard to determine whether two functions are the same. That said, there might be low-hanging fruit: for two textually identical functions, Julia might in theory be able to be smarter.
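A sketch of the behavior described above: as long as `odedef!` and the problem are left untouched, repeated solves reuse the compiled code; re-evaluating the function definition, even with identical text, makes Julia treat it as a new method and compile again.

```julia
using DifferentialEquations

function odedef!(du, u, p, t)
    du[1] = -u[1] * u[3]
    du[2] = -u[2] * u[1]
    du[3] = -u[3] * u[2]
end

prob = ODEProblem(odedef!, [1.0, 0.8, 0.5], (0.0, 10.0))

@time solve(prob);  # first call: dominated by compilation
@time solve(prob);  # second call: compiled code reused, few allocations

# Re-running the `function odedef! ... end` block (even unchanged)
# defines a new method, so the next solve compiles from scratch again.
```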

Just so I fully understand which changes trigger recompilation, my observations:

- When I change the values of any of the other inputs to `ODEProblem` (e.g. `p`) while keeping the type the same (e.g. `Float64`), this does not trigger recompilation.
- When I change the type of `p`, this triggers recompilation once - presumably creating (compiling?) a new method?
- When I declare `odedef!` again, this triggers recompilation in any case.
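Those three cases can be sketched with `remake`, which swaps `p` without rebuilding the problem from scratch (the one-parameter system below is made up purely for illustration):

```julia
using DifferentialEquations

# Hypothetical one-parameter system, just for illustration.
function decay!(du, u, p, t)
    du[1] = -p * u[1]
end

prob = ODEProblem(decay!, [1.0], (0.0, 1.0), 0.5)

solve(prob);                    # compiles for p::Float64
solve(remake(prob, p = 0.9));   # same type, new value: no recompilation
solve(remake(prob, p = 1));     # p::Int: compiles one new specialization
```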

Also, if I now embed the `solve` step in an outer function and call this outer function many times with different `p`, is there anything else I should keep in mind to avoid recompilation on every call to this outer function?
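For what it’s worth, a sketch of that pattern (the `p`-dependent right-hand side and the `solve_with` helper are assumptions for illustration): build the problem once, then `remake` it with the new `p` inside the outer function, keeping the element type of `p` fixed so the compiled code is reused.

```julia
using DifferentialEquations

# Assumed variant of the system where p actually enters the equations.
function odedef!(du, u, p, t)
    du[1] = -p[1] * u[1] * u[3]
    du[2] = -p[2] * u[2] * u[1]
    du[3] = -p[3] * u[3] * u[2]
end

const base_prob = ODEProblem(odedef!, [1.0, 0.8, 0.5], (0.0, 10.0),
                             [1.0, 1.0, 1.0])

# Hypothetical outer function: reuse the problem via remake instead of
# constructing a fresh ODEProblem, so the same compiled code is hit
# on every call as long as typeof(p) stays the same.
solve_with(p::Vector{Float64}) = solve(remake(base_prob, p = p))

sols = [solve_with([a, 1.0, 1.0]) for a in 0.5:0.5:2.0]
```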