For context, I come from an R background using the deSolve package.
The first examples for ModelingToolkit use a specific model-definition structure built around the @parameters, @variables, and @named macros. The general workflow seems to be (a condensed, symbolic-only sketch follows the list):
- define @parameters and @variables
- define the time differential D = Differential(t)
- define a @named ODESystem of equations involving [D(x) ~ ...]
- convert the ODESystem into an ODEProblem with the non-symbolic parameters and state variables included
- solve the problem
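As far as I can tell, the first three steps only build symbolic objects, and concrete numbers enter at the ODEProblem step. Here is a condensed, symbolic-only sketch of what I mean (same ModelingToolkit calls as the full example below, just stopping before any numeric values are attached; I assume the system can be inspected with equations()):

using ModelingToolkit

@parameters t p[1:4]
@variables x(t) y(t)
D = Differential(t)

# `~` builds symbolic Equation objects; nothing is evaluated numerically here
lv = [
    D(x) ~ p[1] * x - p[2] * x * y,
    D(y) ~ p[4] * x * y - p[3] * y
]
@named sys = ODESystem(lv)

equations(sys)  # the equations are stored and can be manipulated symbolically;
                # u0, tspan and p are only supplied when building the ODEProblem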
My question is: what are the advantages of this workflow? Below are two versions of the same code (adapted from the ODE problem library).
Firstly, the macro-y approach:
using ModelingToolkit, DifferentialEquations

function sim_lotkavolterra1()
    # symbolic parameters, state variables and time differential
    @parameters t p[1:4]
    @variables x(t) y(t)
    D = Differential(t)
    # symbolic Lotka-Volterra equations
    lv = [
        D(x) ~ p[1] * x - p[2] * x * y,
        D(y) ~ p[4] * x * y - p[3] * y
    ]
    @named sys = ODESystem(lv)
    # numeric initial state, time span and parameter values
    u0 = [1.0, 1.0]
    tspan = (0.0, 1.0)
    p = [1.5, 1.0, 3.0, 1.0]
    prob = ODEProblem(sys, u0, tspan, p)
    return solve(prob)
end;
The benchmark yields:
Range (min … max): 1.392 ms … 11.593 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 1.703 ms ┊ GC (median): 0.00%
Time (mean ± σ): 2.038 ms ± 984.789 μs ┊ GC (mean ± σ): 2.33% ± 6.36%
(histogram: log(frequency) by time, 1.39 ms – 6.5 ms)
Memory estimate: 371.75 KiB, allocs estimate: 6197.
If I use the more “deSolve-y” approach:
function sim_lotkavolterra2()
    # in-place RHS: du is mutated, u is the state vector, p the parameters
    function lv(du, u, p, t)
        x, y = u
        du[1] = p[1] * x - p[2] * x * y
        du[2] = p[4] * x * y - p[3] * y
    end
    u0 = [1.0, 1.0]
    tspan = (0.0, 1.0)
    p = [1.5, 1.0, 3.0, 1.0]
    prob = ODEProblem(lv, u0, tspan, p)
    return solve(prob)
end;
The @benchmark yields:
Range (min … max): 25.800 μs … 6.690 ms ┊ GC (min … max): 0.00% … 97.68%
Time (median): 28.800 μs ┊ GC (median): 0.00%
Time (mean ± σ): 33.048 μs ± 103.203 μs ┊ GC (mean ± σ): 5.28% ± 1.70%
(histogram: log(frequency) by time, 25.8 μs – 79.9 μs)
Memory estimate: 18.19 KiB, allocs estimate: 261.
The second approach is about 60x faster and roughly 20x more memory-efficient. Did I make a mistake somewhere, or is there some advantage to the former approach that I'm not understanding? Building things symbolically clearly takes time, so maybe there are analytical benefits, e.g., computing some quantities exactly (symbolically) rather than numerically?
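One thing I suspect but haven't verified: my first benchmark rebuilds the symbolic system and regenerates the problem on every call, so a fairer comparison might construct the problem once and time only the solve. A sketch of what I have in mind (not what I timed above; the jac = true keyword is the one the ModelingToolkit tutorial describes for generating an analytical Jacobian):

using ModelingToolkit, DifferentialEquations, BenchmarkTools

@parameters t p[1:4]
@variables x(t) y(t)
D = Differential(t)
lv = [
    D(x) ~ p[1] * x - p[2] * x * y,
    D(y) ~ p[4] * x * y - p[3] * y
]
@named sys = ODESystem(lv)

# build the problem once, so symbolic processing and code generation
# are paid up front rather than inside the timed call
prob = ODEProblem(sys, [1.0, 1.0], (0.0, 1.0), [1.5, 1.0, 3.0, 1.0])

# time only the numerical solve ($ interpolates prob into the benchmark)
@benchmark solve($prob)

# one symbolic-side advantage: an analytical Jacobian can be generated
# from the same system, which stiff solvers can then use
prob_jac = ODEProblem(sys, [1.0, 1.0], (0.0, 1.0), [1.5, 1.0, 3.0, 1.0], jac = true)

If the solve-only timings come out similar for the two versions, then the gap above is mostly one-time symbolic/compilation overhead rather than a per-solve cost.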