Hi @Julia,
The team I’m working with is writing a simulation package, with a core `dudt!(du, u, parameters, t)` function passed as an argument to `ODEProblem` and `solve` from DifferentialEquations. The code in `dudt!` is complex because the target system features a variety of functional forms depending on user configuration, but not all of this parameter resolution is actually needed during `solve`.
This week, I switched paradigms and used Julia’s metaprogramming features to instead hand `dudt! = eval(generate_dudt(parameters))` to `ODEProblem`. My function `generate_dudt` resolves the parameters and outputs an unevaluated function expression like:
```julia
:(function (du, u, _, t)
    # Various lines generated by the function (dummy examples)
    var_1 = u[2] - 40 * u[3]
    var_2 = u[44] / 2 * u[7] - 2
    # ...
    du[1] = var_1 * u[1] - u[9]
    du[2] = u[7] / var_2 + u[1] - u[2]
    # ... until du[end]
end)
```
This worked like a charm: I measured a 20x to 100x speedup with the (rigid, dedicated) generated `dudt!` compared to the (flexible, general-purpose) handwritten one. Thank you Julia! In addition, I found the metaprogramming experience really, really smooth: the reflective model is amazing.
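For the record, here is a toy sketch of the pattern (hypothetical functional forms and parameter names, not our real system): a general-purpose `dudt_flexible!` that branches on the configuration at every call, versus a generator that resolves the configuration once, at expression-building time.

```julia
# Toy sketch of the two approaches (hypothetical functional forms).

# General-purpose version: checks the configuration on every call.
function dudt_flexible!(du, u, p, t)
    for i in eachindex(du)
        if p[:form] == :linear
            du[i] = -p[:k] * u[i]
        else
            du[i] = -p[:k] * u[i]^2
        end
    end
    return nothing
end

# Generated version: the configuration is resolved while building the
# expression, so the compiled function contains no branches or dict lookups.
function generate_dudt(p)
    rhs = p[:form] == :linear ? :(-$(p[:k]) * u[i]) : :(-$(p[:k]) * u[i]^2)
    return :(function (du, u, _, t)
        for i in eachindex(du)
            du[i] = $rhs
        end
        return nothing
    end)
end

p = Dict(:form => :linear, :k => 0.5)
dudt_rigid! = eval(generate_dudt(p))
```

Both versions fill `du` identically; the generated one just has nothing left to decide at run time.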
Of course, the bigger the generated expression, the longer the compilation time. I am currently hitting some kind of wall when the number of lines in the generated function becomes large (≈600 lines, with `SyntaxTree.callcount(xp)` ≈ 20,000). This is unfortunate because that situation corresponds exactly to the parameters with the biggest speedup factor.
I understand that larger expressions take longer to compile, but I don’t understand exactly why, which part of the compilation process is actually exploding, or what I can do to lighten the compiler’s load (explicitly type all generated variables? pick better names? split long expressions into multiple lines?). How can I investigate?
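As a crude first measurement (a sketch with a dummy expression, not our real generated code), I can at least separate compilation from runtime by timing the first call against the second; since Julia 1.6, `@time` also reports the fraction of time spent compiling.

```julia
# Crude separation of compile time from run time (dummy expression).
xp = :(function (du, u, _, t)
    du[1] = u[2] - 40 * u[3]
    return nothing
end)
f = eval(xp)

du = zeros(1)
u = ones(3)

t_first  = @elapsed f(du, u, nothing, 0.0)  # includes lowering, inference, codegen
t_second = @elapsed f(du, u, nothing, 0.0)  # pure runtime
println("first call: ", t_first, " s, second call: ", t_second, " s")
```

But this only tells me *that* compilation dominates, not *which* stage of it.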
My naive attempt so far has been to run `@ProfileView.profview dudt!(dummy_arguments...)` and stare at the resulting flame graph, but it’s not very informative: there is a large, mostly empty `boot.jl` `eval` red layer, with only a fraction (≈15%) of it occupied by `typeinfer.jl`. How can I get more information about what’s going on in there? Are there good tips for peeking into Julia’s compilation process?