I have received some queries from coworkers about Julia’s memory overhead. Members of our HPC team are concerned that Julia seems to eat about 160-200 MB of RAM per Julia session. (Our current deployment model is to run many single-threaded jobs on AWS nodes with relatively small RAM, and additional RAM is an expensive resource on AWS. The speed and RAM usage of my algorithm itself seem quite reasonable, but fitting 200 MB more data in memory would be valuable to us.)
So I have a couple of questions, which might be answered elsewhere but I couldn’t find an up-to-date answer.
- When I start Julia it seems to use ~160 MB of RAM:

```
$ /usr/bin/time -v julia -e 1
	Command being timed: "julia -e 1"
	User time (seconds): 0.22
	System time (seconds): 0.16
	...
	Maximum resident set size (kbytes): 158612
	...
```
Does anyone know the breakdown of this 160 MB? We have LLVM, BLAS, LAPACK, libgit2, etc., as well as the compiler, system image, and so on. I found a post the other day, for an older version of Julia, that mentioned 38 MB of RAM usage - what has changed since then?
- Is there some way of cutting this back? For instance, if I want to deploy something that doesn’t use BLAS or LAPACK, could I make a Julia build without them? Alternatively, are there any environment variables I can set that affect the amount of RAM used? (For example, I read somewhere that OpenBLAS allocates a lot of RAM on Linux because it preallocates scratch space for many threads, so limiting it to one thread might help?)
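For reference, this is the kind of experiment I have been running. `OPENBLAS_NUM_THREADS` and `JULIA_NUM_THREADS` are real environment variables; whether they actually shrink the resident set by a meaningful amount is exactly what I'm unsure about, so treat this as a sketch rather than a known fix:

```shell
# Limit OpenBLAS to a single thread, so it only allocates scratch
# buffers for one worker instead of one per core.
export OPENBLAS_NUM_THREADS=1

# Keep Julia itself single-threaded (matches our one-job-per-process
# deployment model anyway).
export JULIA_NUM_THREADS=1

# Then re-measure the resident set with the same command as above:
#   /usr/bin/time -v julia -e 1
# and compare "Maximum resident set size" against the ~158612 kB baseline.
```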