Julia 1.9 uses more memory for compilation caches than 1.8

Hi, I have a fairly large project with a number of functions that deal with parameters of unspecified type (mostly passing them on to the next functions, which know what they will be getting). In the end, I’m relying on runtime dispatch in a number of places, and it was one of the main reasons why I chose Julia for this project.

After the release of Julia 1.9, I noticed that memory usage increased significantly, up to the point where GitHub Actions workers are killed during testing. Here is an example run where the tests on 1.8 pass while the ones on 1.9 are killed: Don't stop tests on one julia versions if second one fails · andreyz4k/ec@9b78715 · GitHub.

Allocation numbers measured by @time are in the same ballpark (I’m running tasks until a timeout, so there are fluctuations every time), and memory usage does not grow if I run the same task again and again, so there is no memory leak. That leaves compilation caches as the main culprit.
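
For reference, a minimal sketch of that kind of check (run_task and task are hypothetical placeholders standing in for the project’s own code):

# Minimal sketch; `run_task` and `task` are hypothetical placeholders.
for i in 1:3
    @time run_task(task)   # per-run time and allocation counts stay in the same ballpark
    GC.gc()
    println("live heap after GC: ", round(Base.gc_live_bytes() / 2^20, digits = 1), " MiB")
end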

What has changed in this area in 1.9? And how can I track memory usage from compilation caches? I can’t rely on total allocation metrics because they mix everything together, and I can’t make everything type stable, because a lot of my code should be generic.

What’s new in 1.9 is that precompiled packages now include native code, so they are much larger on disk. I’m not sure that’s your issue, but it can be disabled, which is something you could try; if you’re thinking of some in-memory (cache) structure, it might be related to, or a result of, this. Or just use 1.8 for now… or 1.10/nightly to see if it’s better.

From the history file:

Package precompilation now saves native code into a “pkgimage”, meaning that code generated during the precompilation process will not require compilation after package load. Use of pkgimages can be disabled via --pkgimages=no ([#44527]) ([#47184]).
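
For example, disabling pkgimages is just a command-line flag on the julia invocation (the script name here is a placeholder):

julia --pkgimages=no my_script.jl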

I think you’re seeing the problem fixed by Don't permalloc the pkgimgs, but add option for PkgCacheInspector by gbaraldi · Pull Request #49940 · JuliaLang/julia · GitHub, which will be in 1.9.1.

I tried to run a test on 1.9.0 with --pkgimages=no and still got ~12 GB of memory usage per worker process vs ~2.3 GB per worker on 1.8.5.

Experiments with the current 1.10 nightly showed the same 12 GB per-worker memory usage as on 1.9.0, regardless of the --pkgimages=no option. So it’s probably not the problem you’ve mentioned.

I should also note that memory usage grows steadily while it runs different tasks (and probably explores more combinations of parameter types); it doesn’t just eat everything from the start, which is what I would expect from precompiled packages.

Oh, that might be due to the changes in how aggressive the GC is. You can set a soft limit on the heap size in 1.9; that may fix it.

Is this option propagated to child processes? I tried to run it with --heap-size-hint=3G and --heap-size-hint=1G with no effect on memory usage whatsoever.

Could it be some weird side effect of optimizer: inline abstract union-split callsite by aviatesk · Pull Request #44512 · JuliaLang/julia · GitHub?

I don’t think it is propagated (although it possibly should be).

Yes, it looks like that’s the issue. I’ve switched from using the -p flag to launching workers with addprocs(count, exeflags = "--heap-size-hint=1G"), and they have actually stopped eating all the memory in the world. So this issue is more or less solved for me for now, but it would be great if the people working on the GC could look into it further.
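
For anyone hitting the same thing, a minimal sketch of that setup (the worker count and heap limit are just example values from this thread):

using Distributed

# exeflags is forwarded to each worker's julia process, so the GC heap target
# actually reaches the child processes (unlike a flag passed only to the parent with -p).
addprocs(4, exeflags = "--heap-size-hint=1G")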

Yeah, I’m becoming more convinced that this GC change was probably a mistake. It fixed some performance issues, but it does result in Julia eating a bunch more RAM sometimes.