Julia GC, heap fragmentation, out of memory, push!/append!

Nice. I didn’t know that. Guess I’m too old school :wink:
Although I deploy on Linux, I develop on Windows, and there you can still get into the situation where RAM usage hits 100%, the Julia GC frantically tries to free memory, and Julia's memory use fluctuates by several GB but never stabilizes as long as you keep working (serving requests).

Thanks for clarifying; I'm not sure how often the address space gets (very) fragmented in practice, and I may have downplayed the risk too much. I guess there's a reason Ruby looked into a compacting GC (e.g. for long-running web servers); that may or may not apply as much to more typical HPC use of Julia. Anyway, there's a possible solution with MESH that I posted in my latest post. I think you can do that right now without any code changes in Julia, and maybe there's already some benefit (or none). It may need the same simplification of Julia code I mentioned for mimalloc, so that it sees all the frees that would happen.
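For what it's worth, the "no code changes" route is just interposing the allocator when launching Julia. A minimal sketch, assuming MESH is built as libmesh.so (the library path and script name below are made up):

```julia
# Sketch: start a Julia process with the MESH allocator preloaded so its
# malloc/free calls are interposed by libmesh.so (Linux/glibc only).
# The library path and script name are assumptions; adjust to your setup.
cmd = addenv(`julia --project=. myserver.jl`,
             "LD_PRELOAD" => "/usr/local/lib/libmesh.so")
run(cmd)
```

Equivalently, you can set LD_PRELOAD in the shell before starting Julia; nothing in the Julia code itself has to change.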

This assumes the OS doesn't do anything clever with the holes in the virtual address space, and I suppose it can't, because those pages were allocated as memory (and maybe used previously) and the OS is never told that they are free/garbage by now. I suppose it could do better, but it would need help from libc. Or just allocate a very large swap file, which is cheaper than RAM. Or use MESH.

Incidentally, it’s very likely this issue bit me recently…

Many people are being hurt by this problem; I am facing the same problem.

I have a computer with 128 GB of RAM, but because of this problem, the large RAM turns out to be useless.

I really think so, even more so now. FYI: Python 3.13 beta 1 is out and it has:

  • A modified version of mimalloc is now included, optional but enabled by default if supported by the platform, and required for the free-threaded build mode.

I didn’t check which mimalloc version they include:

Latest release tag: v2.1.6 (2024-05-13).
Latest v1 tag: v1.8.6 (2024-05-13).

Mimalloc was the best malloc some time back when I checked, and I doubt that has changed. Do we need the standard one, or the same modified version Python uses? As with OpenSSL, I'm wondering whether there might be a conflict: I'm not sure you can use different mallocs in the same program, or in general different .so versions, except by workarounds (as discussed in Julia's OpenSSL issue).

Off-topic, but Python 3.13 is going to be a very intriguing and important release, with:

I’m not sure it was worth reviving the thread.

An arena allocation scheme partly addresses the original issue.
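To make "arena allocation" concrete, here is a minimal, hypothetical sketch: one big buffer allocated up front, requests carved out of it, and the whole thing reset between batches, so the heap never accumulates lots of small short-lived allocations (alignment, thread safety, and error handling are ignored for brevity):

```julia
# Hypothetical toy arena: carve requests out of one preallocated buffer and
# reset it wholesale, instead of letting push!/append! grow many separate
# arrays that fragment the heap.
mutable struct Arena
    buf::Vector{UInt8}
    offset::Int
end
Arena(nbytes::Integer) = Arena(Vector{UInt8}(undef, nbytes), 0)

function alloc!(a::Arena, ::Type{T}, n::Integer) where {T}
    nbytes = n * sizeof(T)
    a.offset + nbytes <= length(a.buf) || error("arena exhausted")
    ptr = Ptr{T}(pointer(a.buf) + a.offset)   # ignores alignment for simplicity
    a.offset += nbytes
    # The returned array aliases the arena's buffer and does not keep it
    # alive; keep the Arena itself alive while any slice is in use.
    return unsafe_wrap(Array, ptr, n; own = false)
end

reset!(a::Arena) = (a.offset = 0; a)

arena = Arena(100_000_000)            # ~100 MB up front
xs = alloc!(arena, Float64, 10_000)   # no separate heap allocation per request
reset!(arena)                         # reuse the same memory for the next batch
```

This only partly addresses the issue because it fits workloads where lifetimes are batch-shaped; it helps less for a long-running server whose live set grows and shrinks unpredictably.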

As for distinct allocators, I’m not sure if that is a particular issue as long as you don’t try to free something you did not allocate.

For example, I played with integrating alternative allocators in ArrayAllocators.jl:

The basic idea was to use unsafe_wrap with own = false to tell Julia not to free that memory by itself.

The one problem with this is that I was not able to tell the Julia GC how much memory I had allocated outside of it, while still having to rely on finalization for cleanup.
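A rough sketch of that pattern (using plain Libc.malloc as a stand-in for whatever external allocator is wired up; ArrayAllocators.jl itself differs in the details):

```julia
# Sketch: hand memory from an external allocator to Julia as an Array
# without letting the GC free it itself (own = false), releasing it via a
# finalizer instead. Libc.malloc/Libc.free stand in for e.g. a mimalloc binding.
function external_vector(::Type{T}, n::Integer) where {T}
    ptr = Libc.malloc(n * sizeof(T))
    ptr == C_NULL && throw(OutOfMemoryError())
    arr = unsafe_wrap(Array, Ptr{T}(ptr), n; own = false)
    # The GC never learns these bytes exist, so it feels no memory pressure
    # from them; cleanup happens only when the finalizer eventually runs.
    finalizer(a -> Libc.free(pointer(a)), arr)
    return arr
end

v = external_vector(Float64, 1_000_000)
fill!(v, 0.0)
```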


Perhaps, but I tried running a Genie webserver for a survey over a few weeks, and the memory usage kept growing and growing. After a few server OOMs, I set it to restart daily (and reload the data it previously had in memory); that worked, but led to ~5 minutes of outage every day (which people hit, since I had thousands of respondents). I suspect this is an instance of the same issue, and harder to solve by "just using an allocation arena for your array".

As such, I’m rather interested in the prospect of trying something like Mesh in Julia, but I see that Windows support is still a WIP there.

Was this with Julia 1.10?

1.8, this was a while ago.

The GC improved a lot in 1.10; I have had no more OOM events since this version…

That’s good. I do still wonder whether the Mesh allocator might be beneficial, though.