Hi, I have a fairly large project with a number of functions that deal with parameters of unspecified type (mostly passing them along to other functions that know what they will be getting). As a result, I rely on runtime dispatch in a number of places, and that was one of the main reasons I chose Julia for this project.
Allocation numbers measured by @time are in the same ballpark (I’m running tasks until a timeout, so the numbers fluctuate from run to run). Memory usage does not grow when I run the same task again and again, so there is no memory leak, which leaves compilation caches as the main suspect.
What has changed in this area in 1.9? And how can I track memory usage from compilation caches? I can’t rely on total allocation metrics because they mix everything together, and I can’t make everything type-stable, because a lot of my code has to be generic.
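The closest I’ve gotten is comparing the GC’s view of memory against the process RSS, along these lines (`run_task` stands in for one of my tasks, and `Base.jit_total_bytes` is my assumption for reading JIT code size on recent Julia versions, so double-check it exists on yours):

```julia
# Compare GC-managed memory against total process memory around a workload.
# Sys.maxrss() is the peak resident set size of the process,
# Base.gc_live_bytes() is what the GC currently tracks, and
# Base.jit_total_bytes() is memory allocated for JIT-compiled native code.
# A gap between RSS and GC-live that keeps widening points at non-GC
# memory, e.g. compiled-code caches.
function report_memory(label::AbstractString)
    GC.gc()  # collect first so gc_live_bytes reflects live objects only
    mb(x) = round(x / 2^20; digits = 1)
    println(label, ": GC-live = ", mb(Base.gc_live_bytes()), " MB",
            ", JIT code = ", mb(Base.jit_total_bytes()), " MB",
            ", peak RSS = ", mb(Sys.maxrss()), " MB")
end

report_memory("before task")
run_task()  # hypothetical: your workload goes here
report_memory("after task")
```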
What’s new in 1.9 is that precompiled packages now include native code, so they are much larger on disk. I’m not sure that’s your issue, but it can be disabled, which is something you could try; if you’re thinking of some in-memory (cache) structure, it might be related to this, or a result of it. Or just use 1.8 for now… or 1.10/nightly to see if it’s better.
From the history file:
Package precompilation now saves native code into a “pkgimage”, meaning that code generated during the precompilation process will not require compilation after package load. Use of pkgimages can be disabled via --pkgimages=no ([#44527]) ([#47184]).
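To try it, start Julia with the flag; if you want to confirm it took effect inside the session, I believe the option is visible through `Base.JLOptions()` (the field name below is my assumption, so double-check on your version):

```julia
# Launch with native-code package images disabled (shell command):
#   $ julia --pkgimages=no
#
# Inside the session, check the setting; use_pkgimages is the JLOptions
# field I believe this flag maps to on 1.9+ (assumption, double-check).
Base.JLOptions().use_pkgimages == 0 && println("pkgimages disabled")
```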
Experiments with the current 1.10 nightly showed the same 12 GB per-worker memory usage as on 1.9.0, regardless of the --pkgimages=no option, so it’s probably not the problem you mentioned.
I should also note that memory usage grows steadily while the process runs different tasks (and probably explores more combinations of parameter types); it doesn’t just grab everything from the start, which is what I would expect from precompiled packages.
Is this option propagated to child processes? I tried running with --heap-size-hint=3G and --heap-size-hint=1G, with no effect on memory usage whatsoever.
Yes, it looks like that’s the issue. I switched from the -p flag to launching workers with addprocs(count, exeflags = "--heap-size-hint=1G"), and they actually stopped eating all the memory in the world. So this issue is more or less solved for me for now, but it would be great if the people working on the GC could look into it further.
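For reference, the workaround in full (the worker count and the 1G value are just what I used; the JLOptions check at the end is my assumption for verifying the hint reached a worker):

```julia
using Distributed

# --heap-size-hint is not inherited by workers started with -p,
# so pass it explicitly when spawning them:
addprocs(4; exeflags = "--heap-size-hint=1G")

# Optional sanity check: read the hint (in bytes) back from a worker.
fetch(@spawnat 2 Base.JLOptions().heap_size_hint)
```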
Yeah, I’m becoming more convinced that this GC change was probably a mistake. It fixed some performance issues, but it does result in Julia eating a bunch more RAM sometimes.