I am so happy and cannot help sharing this here. I have a multithreaded script that reads tons of CSV files, processes them, and saves each back to disk. Before 1.10, I saw memory usage grow continuously, insanely eating up all 2 TB of memory and eventually crashing with an out-of-memory error. Now with 1.10, memory usage stabilizes at only 50 GB. What an enormous improvement!
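For reference, a minimal sketch of what a workload like the one described above might look like; the file list, output directory, and the `process` step are placeholders, not the poster's actual code:

```julia
using CSV, DataFrames

# placeholder transformation; the real work would go here
process(df::DataFrame) = df

function run_pipeline(files::Vector{String}, outdir::String)
    Threads.@threads for file in files
        df = CSV.read(file, DataFrame)              # allocation-heavy per file
        out = process(df)
        CSV.write(joinpath(outdir, basename(file)), out)
        # each iteration produces a lot of short-lived garbage across threads,
        # which is the kind of pattern the post above is describing
    end
end
```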
Great. Let’s hope that libuv issue also gets fixed soon and the multithreading pain goes away forever.
I think some of the memory fixes may be backported to 1.9. Our GC algorithm was previously somewhat “artisanal” and could make some pretty dumb decisions.
Thank you for posting this. It gives me hope that it might fix a problem we have been having (multi-threading with OpenModelica via OMJulia): memory consumption creeps up continuously until it exhausts the system memory of 128 GB (not TB like you, but still).
Looking forward to this. I, too, frequently have memory problems with multi-threaded code, possibly related to GC not doing its job.
If 1.10 might take some time to become stable, then a backport would be very appreciated!
I am wondering what has caused this dramatic improvement.
The short answer is that GC tuning is a complicated tradeoff between time and space. Running GC more often keeps memory usage lower, but takes more time. To make things more complicated, the runtime only knows when objects are allocated, not when they stop being used. As a result, the algorithms that decide when to trigger GC are a pile of heuristics; get them wrong and you end up either using way too much RAM or spending 90% of your time in GC.
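To make the tradeoff concrete, here is a toy sketch of a heap-growth trigger heuristic. This is not the actual algorithm in the Julia runtime; the struct, field names, and growth factor are illustrative only. It just shows how a trigger that only sees allocations has to guess, and how one knob trades RAM for GC time:

```julia
# Toy model of a GC trigger: collect once allocations since the last
# collection exceed some multiple of the live set that survived it.
mutable struct GCState
    live_after_last_gc::Int   # bytes surviving the previous collection
    allocated_since_gc::Int   # bytes allocated since then
    growth_factor::Float64    # knob: bigger => less time in GC, more RAM used
end

# The runtime only observes allocations, not when objects become unused,
# so this is the only signal the trigger has to work with.
note_allocation!(st::GCState, nbytes::Int) = (st.allocated_since_gc += nbytes)

should_collect(st::GCState) =
    st.allocated_since_gc > st.growth_factor * max(st.live_after_last_gc, 1)

function maybe_collect!(st::GCState, collect!::Function)
    if should_collect(st)
        st.live_after_last_gc = collect!()   # collect! returns surviving bytes
        st.allocated_since_gc = 0
    end
end
```

With a growth factor that is too large, memory balloons between collections; too small, and the program spends most of its time collecting, which is exactly the balance the heuristics have to strike.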