Simulate manual memory management to reduce RAM usage?

Julia can avoid unnecessary GC by allocating immutable objects on the stack, but I’m wondering if it’s possible to push further: for heap-allocated local objects that do not “escape” the function (i.e. are not returned or passed to another function), is it possible for Julia’s compiler to deallocate the memory as soon as the object goes out of scope, i.e. at the end of the function, similar to what happens in C++? If not, is there a reason why this is undesirable in Julia (and GC’ed languages in general)?
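To make the question concrete, here is a minimal sketch (the `Point` type and `f` are hypothetical examples, not from any package): `p` is a mutable, heap-allocated object that never escapes `f`, so in a C++-style model it could be freed when `f` returns.

```julia
# `Point` is mutable, so `p` is nominally heap-allocated (unless the
# compiler elides the allocation entirely).
mutable struct Point
    x::Float64
    y::Float64
end

function f()
    p = Point(1.0, 2.0)   # never returned or passed anywhere
    return p.x + p.y      # only a Float64 escapes, not `p` itself
end

@assert f() == 3.0
```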

Moreover, is it possible to reduce RAM usage by manually deallocating an array a by calling resize!(a,0) and sizehint!(a,0), before it goes out of scope? Will Julia free the memory immediately following these two commands?


It’s not that this is impossible; it’s just an optimization that’s not yet fully done for e.g. arrays. Mutable types that don’t escape already have a similar optimization (though it’s not always applicable and not a guarantee), which is why immutable types containing references (e.g. views) usually don’t allocate anymore. It was a big deal when this was done in 1.5. There is talk about more optimizations of that sort though, especially for arrays.
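A small sketch of the 1.5 change (the `colsum` function is just an illustrative example): a `SubArray` is an immutable struct holding a reference to the parent array, and when it doesn’t escape, creating it typically no longer allocates.

```julia
# Summing columns via non-escaping views; on Julia >= 1.5 the views
# themselves usually don't cause heap allocations.
function colsum(A)
    s = 0.0
    for j in axes(A, 2)
        v = @view A[:, j]   # immutable SubArray wrapping a reference to A
        s += sum(v)
    end
    return s
end

A = rand(100, 100)
colsum(A)                       # warm up (compile)
alloc = @allocated colsum(A)    # typically 0 on recent Julia versions
```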

No. resize!ing to 0 doesn’t necessarily deallocate the array. You may even inadvertently make it hang around for longer, because you interacted with it. Last I checked, Julia’s GC is a generational mark-and-sweep collector, so touching an object more often can keep it alive longer, because longer-lived objects are checked less often for liveness.
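Concretely, the two calls from the question only change the array’s logical length and give the runtime a hint; neither promises that the buffer is returned to the OS at that point:

```julia
a = rand(10^6)     # ~8 MB buffer
resize!(a, 0)      # length is now 0, but the buffer may remain
sizehint!(a, 0)    # a hint that the buffer can shrink; not a guarantee
@assert isempty(a)
# The underlying memory is reclaimed whenever the GC decides to run,
# not at these calls and not when `a` goes out of scope.
```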

You can try explicitly inserting GC.gc() to trigger a collection, but this is not guaranteed to free memory. The best way to avoid high RAM usage is to not allocate in the first place: use mutating functions that don’t allocate, etc.
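A sketch of the “don’t allocate in the first place” advice (the `step!` helper is hypothetical, just following the `!`-for-mutation convention): write results into a preallocated buffer instead of creating a fresh array on every call.

```julia
# In-place version: writes into `out` instead of allocating a new array.
function step!(out, x)
    @. out = 2x + 1      # fused in-place broadcast
    return out
end

x   = rand(1000)
out = similar(x)
step!(out, x)                            # warm up (compile)
@assert @allocated(step!(out, x)) == 0   # no per-call allocation

# As a last resort you can request a collection, but it's only a hint:
GC.gc()
```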


Yes, there’s a brand-new package for that, Bumper.jl; see:

I would try it (in about a week, in case there are bugs). It’s not just for small objects. It’s faster, but actually doesn’t reduce “RAM usage”, just allocation overhead.
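Roughly, usage looks like this (a sketch based on Bumper.jl’s documented `@no_escape`/`@alloc` macros; the `work` function is a made-up example): allocations inside the block come from a bump buffer that is reset when the block ends, so nothing allocated there may escape it.

```julia
using Bumper

function work(n)
    @no_escape begin
        tmp = @alloc(Float64, n)   # lives on the bump buffer, not the GC heap
        tmp .= 1.0
        sum(tmp)                   # the result must not reference `tmp`
    end                            # buffer is reset here
end

work(1000)
```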

Actually, allocating many small non-escaping vectors with Julia’s GC will in fact increase memory usage relative to what Bumper.jl is doing, because Julia’s GC will allocate a bunch of them and then wait until a sufficiently high level of GC pressure accumulates before freeing them.

So instead of just re-using the same memory buffer for each subsequent object like Bumper can do, your memory usage will grow and grow and grow until the GC runs.
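The same buffer-reuse idea can be sketched without Bumper, using a plain preallocated vector (both functions here are illustrative examples): the first variant piles up garbage until the GC runs, the second touches one buffer the whole time.

```julia
# Fresh allocation per iteration: each `ones(n)` becomes garbage
# that sits around until the GC collects it.
function total_fresh(n, k)
    s = 0.0
    for _ in 1:k
        tmp = ones(n)
        s += sum(tmp)
    end
    return s
end

# One buffer, reused every iteration: memory usage stays flat.
function total_reused(n, k)
    s = 0.0
    buf = Vector{Float64}(undef, n)
    for _ in 1:k
        buf .= 1.0
        s += sum(buf)
    end
    return s
end

@assert total_fresh(100, 10) == total_reused(100, 10) == 1000.0
```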
