I made some code changes that I guessed would improve memory usage. `@time` shows a huge speedup, > 10x, and a > 98% reduction in memory used. But when I use `--track-allocation`, it seems as if my changes made things much worse. Are the results of `--track-allocation` sometimes an unreliable guide to the allocation behavior of code run normally?
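For reference, here is roughly how I am measuring. This is a minimal sketch: `run_model` is a trivial placeholder, not my actual entry point.

```julia
using Profile                       # provides Profile.clear_malloc_data()

run_model(n) = sum(rand(n) .^ 2)    # trivial placeholder for my actual entry point

# In a normal session:  julia> @time run_model(10_000)
# For tracking, start Julia with:  julia --track-allocation=user
run_model(10_000)                   # warm-up run so compilation is not counted
Profile.clear_malloc_data()         # discard allocations from the warm-up run
run_model(10_000)                   # only this run's allocations are recorded
# After exiting, per-line counts appear in *.mem files next to each source file.
```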
Valentin Churavy @vchuravy wrote:

> The way it [`--track-allocation`] functions is to change the emitted LLVM IR to insert code that tracks allocations occurring. This often leads to false positives, since these statements interfere with optimization (in particular escape analysis).
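If I understand that correctly, a toy case like the one below (my own sketch, not taken from the linked issue) could look allocation-free under `@time` yet be reported as allocating under `--track-allocation`:

```julia
# A temporary Ref that never escapes the function: with full optimization the
# compiler may elide this heap allocation, but the inserted tracking code can
# block that, so the Ref line may show up as allocating in the .mem output.
function sumsq(v::Vector{Float64})
    acc = Ref(0.0)
    for x in v
        acc[] += x * x
    end
    return acc[]
end

sumsq(rand(100))
```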
Since the profiles suggest unoptimized code, perhaps my problem is related.
Running Julia 1.8 on Windows. The issue from which the previous quote was drawn indicates 1.9 might behave better.
I would love to provide a small example, but I have been unable to isolate the problem. See the discussion here for how this got started. In brief, I started with lots of memory allocations and reduced many of them to 0 by adding concrete type information to key types in my system. The problems described here occurred when I tried to push things further for the remaining hot spots: I added more parameterized types, including parameterized functions, and the total reported allocations went up, including at many sites that had previously dropped to 0.
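To give a sense of what "adding concrete type information" meant in my case, here is a simplified sketch; `EvaluatorOld`/`Evaluator` are placeholders, not my real types.

```julia
# Before: abstractly typed fields, so calls through ev.f are dynamic and
# their results are inferred as Any, which allocates.
struct EvaluatorOld
    f::Function
    wa::AbstractVector
end

# After: concrete type parameters, so the compiler knows the field types.
struct Evaluator{F,W<:AbstractVector}
    f::F
    wa::W
end
```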
The code is tricky to trace since it involves calls to functions stored in variables in structures, as well as functions defined on the fly, e.g., `f(z) = ev.f(z, wa)`. Here `ev.f` is a function stored in a field, and the definition pulls `wa` in from outside the function definition (perhaps this is what "escape analysis" is about?). But when I stepped lower and lower in the call sequence, using `@code_warntype` on each call, the resulting code appeared optimized, with stable types. This is consistent with the information reported by `@time`, but not with the results of `--track-allocation`.
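The pattern I described looks roughly like this; all names below are placeholders for illustration.

```julia
# Placeholder struct; the real one has more fields.
struct Ev{F}
    f::F
end

# f is defined on the fly and captures wa from the enclosing scope (a closure).
function make_wrapper(ev::Ev, wa)
    f(z) = ev.f(z, wa)
    return f
end

ev = Ev((z, w) -> z * w)           # stand-in for the stored function
g  = make_wrapper(ev, [1.0, 2.0])  # wa captured by the closure

# What I checked at each step down the call chain:
# julia> @code_warntype g(0.5)     # the output looked type-stable to me
```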
The code is executing in a multithreaded context. There is at least one report of strange behavior of `--track-allocation` with multithreading; it doesn't look as if the question was ever fully resolved.
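For completeness, the closure is called from a threaded loop along these lines (simplified; `inputs` is placeholder data and `g` is the wrapper from the sketch above):

```julia
using Base.Threads

inputs  = collect(range(0.0, 1.0; length = 1_000))       # placeholder data
results = Vector{Vector{Float64}}(undef, length(inputs))

Threads.@threads for i in eachindex(inputs)
    results[i] = g(inputs[i])   # each task calls the closure built above
end
```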