Julia execution inside a container with restricted RAM

Hello, is there any way to restrict a Julia process's memory consumption when it runs inside a Docker container? Sometimes we get out-of-memory kills just because Julia's garbage collector didn't clean up memory at the right time. As a result, Kubernetes restarts these containers.


You can probably increase your containers' memory limits, or you can call GC.gc() manually.

As far as I know, GC.gc() is asynchronous. I can add it at the end of query processing, but there is no guarantee that memory will actually be freed.
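For reference, GC.gc() does let you request a collection explicitly; whether the freed pages are promptly returned to the OS is a separate question. A minimal sketch of forcing a full collection after each unit of work (the handler and its wrapper are hypothetical names, not anything from this thread):

```julia
# Force a full garbage collection after each unit of work.
# `process_query` is a hypothetical stand-in for your own handler.
function handle_request(process_query, request)
    result = process_query(request)
    GC.gc(true)   # `true` requests a full collection; `false` an incremental one
    return result
end

# Usage: wrap any handler.
handle_request(r -> r * 2, 21)
```

This trades throughput for predictability: a full collection after every query is expensive, but it bounds how long garbage can accumulate between queries.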

Container limits: that might be expensive if I need to provision 2x or 3x more RAM than is really used…

What would be good to have is something like forced garbage collection when the RAM limit is reached, similar to the -Xms and -Xmx options in the JVM.
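Until such an option exists, a userspace approximation is to watch the GC-tracked heap and force a collection when it crosses a soft threshold. A rough sketch, assuming the threshold value and the call sites are your own choices (Base.gc_live_bytes() reports the bytes the GC currently considers live):

```julia
# Userspace stand-in for a hard heap limit: trigger a full GC whenever
# the GC-tracked live heap exceeds a soft threshold. The 512 MiB value
# is an arbitrary example, not a recommendation.
const SOFT_LIMIT_BYTES = 512 * 1024 * 1024

function maybe_collect!(limit::Integer = SOFT_LIMIT_BYTES)
    if Base.gc_live_bytes() > limit
        GC.gc(true)   # full collection
        return true
    end
    return false
end

# Call this periodically, e.g. between queries or from a timer task.
maybe_collect!()
```

This does not cap allocations the way -Xmx does; it only makes collection happen earlier, which is often enough to stay under a container limit.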


Updated 32 bit heuristics by chflood · Pull Request #44805 · JuliaLang/julia · GitHub adds such a limit internally. It would be a very easy PR to Julia to expose it as a command-line option.


That option would be very useful with any cluster/cloud based deployments.


I thought someone had worked on making Julia cgroups-aware. I'm not sure where the merged PR is right now.
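Until Julia itself is cgroup-aware, the container's memory limit can be read by hand. A sketch for Linux, assuming the cgroup v2 file layout (`/sys/fs/cgroup/memory.max`); the function name is my own:

```julia
# Read the container's memory limit from cgroup v2, if present.
# Returns the limit in bytes, or `nothing` when unavailable or unlimited.
function cgroup_memory_limit(path::AbstractString = "/sys/fs/cgroup/memory.max")
    isfile(path) || return nothing
    raw = strip(read(path, String))
    raw == "max" && return nothing   # "max" means no limit is set
    return tryparse(Int, raw)
end
```

The result could then feed a soft GC threshold chosen somewhat below the hard limit. Note that cgroup v1 uses a different path (`memory/memory.limit_in_bytes`), so this only covers one layout.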

+1 for something like this. I asked about this here as well: https://discourse.julialang.org/t/is-there-a-way-cap-memory-usage-to-keep-mybinder-from-crashing

Fwiw, I found that using PackageCompiler helped reduce the memory footprint when loading packages.

For Docker-based deployments, we use only binary-compiled sysimages, with full code coverage by unit tests in a tracing stage. But GC is a different issue: when we have a bunch of queries, we cannot guarantee that memory will be available, even when we know exactly the memory consumption per query and the number of queries being processed.


The new command-line option --heap-size-hint is available as of Julia 1.9.
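Usage looks like the following; the 1G value, script name, memory limit, and image name are all placeholders, and the hint is a target for the GC rather than a hard cap:

```shell
# Ask the GC to try to keep the heap under roughly 1 GiB.
# Sizes accept K/M/G/T suffixes.
julia --heap-size-hint=1G script.jl

# Inside a container, set the hint somewhat below the cgroup limit
# to leave headroom for non-heap memory (code, stacks, buffers).
docker run --memory=1500m my-julia-image \
    julia --heap-size-hint=1G script.jl
```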