Why does Julia have around 200MB of memory overhead, even for small programs?

I was looking at Julia’s performance in the Computer Language Benchmarks Game. Julia does pretty well in terms of speed, but I noticed that Julia has a memory footprint of around 200MB for every benchmark, even ones where other languages use only around 3MB. For example, see the pi digits benchmark linked above. I found this discourse.julialang.org thread from 2017 discussing the issue, but I can’t tell whether there have been any updates since then.
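For reference, the footprint the benchmarks report is peak resident set size (RSS). A minimal sketch of how a harness might read that number for its own process (Python stdlib here purely for illustration; units are platform-dependent, and this measures whatever process runs it, not Julia specifically):

```python
import resource

# Peak resident set size of the current process, as a benchmark
# harness might record it. Note the units differ by platform:
# ru_maxrss is kilobytes on Linux but bytes on macOS.
usage = resource.getrusage(resource.RUSAGE_SELF)
print(f"peak RSS: {usage.ru_maxrss} kB (Linux units)")
```

A real harness would instead call `getrusage(RUSAGE_CHILDREN)` (or `wait4`) after spawning the benchmarked process, so the parent's own footprint isn't counted.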

So does anyone know the reason behind this overhead? If it’s due to the JIT (just an inexperienced guess on my part), is that memory freed before the program starts running? Are there any plans to fix this?

It’s the JIT compiler, plus the entire runtime and garbage collector.


The ability to separate the runtime and the compiler appears to be coming soon:

To me, this is huge news, since it expands the set of places where it’s plausible to use Julia.


About 100MB is due to the sysimage; on Linux, if you check /proc/$(getpid())/smaps, you’ll see that the sysimage (sys.so) gets mapped in with a resident size of about 100MB (although not all of it is “dirty”, so some of it may be shared with other processes).
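To make the smaps check concrete: each mapping in that file has a header line followed by per-mapping fields like `Rss:` and `Private_Dirty:`, and you can total the resident size per mapped file. A small sketch of that parsing (Python just for illustration; the excerpt below, including the `/usr/lib/julia/sys.so` path and the sizes, is invented sample data, not real output):

```python
# Hypothetical /proc/<pid>/smaps excerpt: one header line per mapping,
# followed by key/value fields. The paths and numbers are made up.
SMAPS_SAMPLE = """\
7f3a00000000-7f3a06400000 r-xp 00000000 08:01 131 /usr/lib/julia/sys.so
Rss:              102400 kB
Private_Dirty:     20480 kB
7f3a10000000-7f3a10021000 rw-p 00000000 00:00 0   [heap]
Rss:                 132 kB
Private_Dirty:       132 kB
"""

def rss_by_mapping(smaps_text):
    """Sum the Rss (in kB) of each named mapping in an smaps dump."""
    totals = {}
    current = None
    for line in smaps_text.splitlines():
        if line[:1] in "0123456789abcdef":
            # Mapping header: the sixth field, if present, is the path.
            parts = line.split()
            current = parts[5] if len(parts) > 5 else "[anon]"
        elif line.startswith("Rss:") and current is not None:
            totals[current] = totals.get(current, 0) + int(line.split()[1])
    return totals

print(rss_by_mapping(SMAPS_SAMPLE))
```

Running the same parse on `open(f"/proc/{pid}/smaps").read()` for a live Julia process would show how much of the footprint is the sys.so mapping versus heap.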

The PR linked above will help somewhat, by removing the LLVM dependency and the codegen portion of libjulia, but there will still always be that extra 100MB.


Treeshaking could reduce the sysimage size though, right?


Yes, but treeshaking is also not trivial to do correctly, since it requires knowing all the functions and data a program could access (which isn’t always obvious, or even bounded, in a dynamic language like Julia).

If you do it statically, you can end up marking everything as live (so nothing gets stripped). If you do it dynamically (for example, using coverage information or a tracing debugger/profiler), you need to make sure your precompilation script exercises everything that end users will need access to.
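To illustrate the static case: treeshaking is essentially mark-and-sweep reachability over a call graph, and the problem in a dynamic language is that one dynamic call site can conservatively reach almost everything. A toy sketch (Python for illustration; the call graph is invented):

```python
# Toy static call graph: each function maps to the functions it calls.
CALLS = {
    "main":       ["parse_args", "run"],
    "run":        ["compute"],
    "compute":    [],
    "parse_args": [],
    "plot":       ["compute"],  # never reachable from main -> strippable
}

def live_functions(graph, roots):
    """Mark-and-sweep reachability: everything reachable from the roots."""
    live, stack = set(), list(roots)
    while stack:
        fn = stack.pop()
        if fn not in live:
            live.add(fn)
            stack.extend(graph.get(fn, []))
    return live

live = live_functions(CALLS, ["main"])
print(sorted(live))               # kept in the image
print(sorted(set(CALLS) - live))  # candidates for stripping
```

In Julia, a single untyped `f(x)` call site forces the analysis to treat every method `f` could dispatch to as a root, which is why a naive static pass tends to mark everything live.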