# Performance Warning when Solving Parameterized ODE

Hi!

With one of the latest Julia updates I got the following warning while running my code:

Warning: Using arrays or dicts to store parameters of different types can hurt performance.

I quickly figured out that the warning originates from the parameters associated with my ODE.
I created the following MWE to demonstrate the behaviour of my code:

``````
using DifferentialEquations

function pendulum!(u̇, u, p, t)
    m = 1.0
    g = 9.81

    l = p[1]
    M = p[3]

    u̇[1] = u[2]
    u̇[2] = -3g / (2l) * sin(u[1]) + 3 / (m * l^2) * M(t)
end

function ctrl_f!(integrator)
    t = integrator.t

    if t ≥ 5.0
        A = integrator.p[2]
        M_new(t) = 2A * sin(t)
        integrator.p[3] = M_new
        #integrator.p = (integrator.p[1:2]..., M_new) # this doesn't work
    end

    u_modified!(integrator, false)
end

rhs = ODEFunction(pendulum!)

u₀ = [0.01, 0.0]

t_0, t_f = 0.0, 10.0
t = t_0:0.1:t_f

l = 1.0
A = 0.1
M(t) = A * sin(t)
ode_paras = [l, A, M]
#ode_paras = (l, A, M) # this doesn't work

cb = PresetTimeCallback(t[2:end], ctrl_f!)

prob = ODEProblem(rhs, u₀, (t_0, t_f), ode_paras, callback=cb)
sol = solve(prob)

``````

As you can see, the problem is not solved by simply using a tuple instead of an array. The main reason is that I need to change the parameters during the integration, which works with an array via `integrator.p[3] = M_new`. However, switching from an array to a tuple to avoid the warning, and changing the parameters via `integrator.p = (integrator.p[1:2]..., M_new)`, yields the following error:

ERROR: MethodError: Cannot `convert` an object of type var"#M_new#7"{Float64} to an object of type typeof(M)

So my question is: what is the recommended way to get rid of this performance warning? How can I use tuples effectively here?

Please note: I am aware that for this MWE there would be absolutely no need to use a `PresetTimeCallback` and change the parameters; the easiest way would be to construct a suitable piecewise function `M` and work without a callback. But I did it this way to illustrate my actual, much more complex problem.


Every function is a distinct type in Julia, so code that stores and mutates function values will always be type-unstable, which is what this performance warning is about.
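To see this concretely, here is a minimal stand-alone sketch (independent of the ODE code): two functions with identical bodies still have distinct types, and collecting functions together with numbers forces an abstract element type.

``````
M1(t) = 0.1 * sin(t)
M2(t) = 0.1 * sin(t)

typeof(M1) === typeof(M2)   # false — every function gets its own singleton type
eltype([1.0, 0.1, M1])      # Any — mixed-type array, hence the performance warning
``````

Storing such a `Vector{Any}` (or a function-valued field) as `p` means the compiler cannot specialize the RHS on the parameter types.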

I think the best option would be to define `M` once and parameterize it more fully. This may be similar to your piecewise idea, and it is likely to have the best performance.

``````
p = (l, A, 1.0)
M(p, t) = p[3] * p[2] * sin(t)
``````
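As a quick stand-alone check (no ODE machinery) that this parameterization stays concretely typed: "changing the parameter" rebuilds the tuple, and the result has exactly the same concrete type, so the solver can still specialize on it.

``````
l, A = 1.0, 0.1
p = (l, A, 1.0)
M(p, t) = p[3] * p[2] * sin(t)

p_new = (p[1], p[2], 2.0)        # switch the scale factor by rebuilding the tuple
typeof(p_new) === typeof(p)      # true — still a NTuple{3, Float64}, no instability
M(p_new, 1.0) ≈ 2 * M(p, 1.0)    # doubled amplitude after the switch
``````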

Or, if you need more flexibility:

``````
p = (l, A, :initial)
function M(p, t)
    mode = p[3]
    if mode == :initial
        p[2] * sin(t)      # A * sin(t)
    elseif mode == :next
        2p[2] * sin(t)     # 2A * sin(t)
    end
end
``````

If there is some reason that won’t work and you have a small number of functions, you could store them all in the tuple and modify a function index:

``````
fidx = 3                          # index (within the tuple) of the active function
M1(t) = A * sin(t)
M2(t) = 2A * sin(t)
ode_paras = (l, A, M1, M2, fidx)

# in pendulum!:
M = p[p[5]]

# in ctrl_f!:
if t ≥ 5.0
    # tuples are immutable so replace the whole thing (Accessors.jl can make this less verbose)
    integrator.p = (integrator.p[1:4]..., 4)
end
``````
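The Accessors.jl shortcut mentioned in the comment could look roughly like this (a hypothetical sketch, assuming Accessors' `@set` macro, which supports indexed updates on tuples):

``````
using Accessors   # not in the original MWE; provides @set for immutable updates

p = (1.0, 0.1, sin, cos, 3)
p = @set p[5] = 4   # returns a new tuple with only the 5th entry replaced
``````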

If you have more details about your real problem, perhaps there is a better solution still!


It seems you are completely right here.

I ran the original array version that I posted yesterday and used `BenchmarkTools`, which gave me

``````
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
Range (min … max):  392.400 μs …   3.845 ms  ┊ GC (min … max): 0.00% … 84.24%
Time  (median):     407.600 μs               ┊ GC (median):    0.00%
Time  (mean ± σ):   451.262 μs ± 274.042 μs  ┊ GC (mean ± σ):  6.08% ±  8.62%

█▆▄▁                                                          ▁
████▇▄▄▃▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▅ █
392 μs        Histogram: log(frequency) by time       3.01 ms <

Memory estimate: 475.16 KiB, allocs estimate: 16271.
``````

Then I compared that with your flexible solution that uses the symbols `:initial` and `:next`, and got these results:

``````
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
Range (min … max):  145.700 μs …   3.529 ms  ┊ GC (min … max): 0.00% … 92.74%
Time  (median):     157.400 μs               ┊ GC (median):    0.00%
Time  (mean ± σ):   178.789 μs ± 189.894 μs  ┊ GC (mean ± σ):  8.00% ±  7.07%

▃▆▇█▇▇▆▄▃▄▅▆▅▅▄▃▃▂▁▁▁▁▁▁▁▁                                   ▂
█████████████████████████████▇██▇██▇█▇▇▇▆█▆▇▆▆▆██▇▇▅▅▅▄▅▅▆▇▆▅ █
146 μs        Histogram: log(frequency) by time        256 μs <

Memory estimate: 264.36 KiB, allocs estimate: 2835.
``````

Needless to say, the performance warning no longer appears here.
I haven't tried it with my actual problem yet, but thinking it through makes me confident that your idea can be applied there as well.

Nevertheless, I also found another interesting solution that might be worth mentioning for cases where, for some reason, you cannot do it the way you suggested.
It works with a `NamedTuple` and needs only a small modification compared to the MWE:

``````
#ode_paras = [l, A, M]
ode_paras = NamedTuple{(:p1, :p2, :p3), Tuple{typeof(l), typeof(A), Function}}((l, A, M))
``````
``````
#integrator.p[3] = M_new
integrator.p = merge(integrator.p, (p3 = M_new,))
``````
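A stand-alone sketch of why this works, and where the remaining overhead comes from (pure Julia, no ODE machinery): `merge` returns a new `NamedTuple` with the field replaced, but the `:p3` field of the original is annotated with the abstract type `Function`, so accessing the stored function still costs a dynamic dispatch.

``````
l, A = 1.0, 0.1
M(t) = A * sin(t)

# annotate the third field with the abstract type `Function` so any function fits
nt = NamedTuple{(:p1, :p2, :p3), Tuple{typeof(l), typeof(A), Function}}((l, A, M))

M_new(t) = 2A * sin(t)
nt2 = merge(nt, (p3 = M_new,))   # new NamedTuple with :p3 replaced; nt is untouched

fieldtype(typeof(nt), :p3)       # Function — abstract, explaining the residual overhead
``````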

This gets rid of the performance warning too, and is of course more efficient than the array version, but not as efficient as simply parameterizing `M`:

``````
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
Range (min … max):  259.800 μs …   3.652 ms  ┊ GC (min … max): 0.00% … 88.50%
Time  (median):     273.000 μs               ┊ GC (median):    0.00%
Time  (mean ± σ):   307.277 μs ± 226.273 μs  ┊ GC (mean ± σ):  6.66% ±  8.10%

▂▅██▆▃▃▃▄▅▆▆▄▃▃▂▂▂▁▁▁▂▁▁▁▁▁▁                                  ▂
████████████████████████████████▇█▇▇▇▅▆▆▆▅▄▄▃▂▃▄▂▂▃▅▆▆▅▄▅▂▄▂▂ █
260 μs        Histogram: log(frequency) by time        451 μs <

Memory estimate: 388.67 KiB, allocs estimate: 10763.
``````