If I serialize a model to a file and then restart Julia and read that model in again, the memory usage is much lower, so I get the impression that the GC is non-compacting, i.e. memory becomes fragmented and a lot of memory is wasted. Is this correct?
If so are there any plans for a better GC?
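For reference, this is roughly the round-trip I mean, using the Serialization standard library (`model` and `"model.jls"` are placeholder names):

```julia
using Serialization

serialize("model.jls", model)   # write the structure to disk

# ...restart Julia, then read it back in a fresh session...
using Serialization
model = deserialize("model.jls")
```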
Afaiu, the GC will never ever become compacting, for reasons of C interoperability.
Consider pulling a `deepcopy` of your data. This is, by the way, not just for fragmentation but will also allocate such that any operation that traverses your structure in `deepcopy`-order is more cache-friendly.
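A minimal sketch of that suggestion, assuming your large structure is bound to a variable called `model` (a hypothetical name):

```julia
# Rebind to a fresh, contiguously allocated copy; the old, fragmented
# copy becomes unreachable and can be reclaimed by the GC.
model = deepcopy(model)
GC.gc()   # encourage the GC to release the memory backing the old copy
```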
This is not entirely true. C escaping is something that could be handled with moving, if the need really came up. Implementing that should only come after implementing all the other tricks available, though, and the Julia GC is very far from that.
Not as much. Size-segregated pools help a lot here.
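A toy sketch of the idea (purely illustrative, not how Julia's GC is actually implemented): objects of a given size class are allocated from a pool of fixed-size slots, so any freed slot can be reused by the next object of that class and oddly sized holes cannot accumulate.

```julia
struct Pool
    slot_size::Int
    free_slots::Vector{Int}   # indices of slots available for reuse
end

# Pick the pool whose slot size is the smallest one that fits the request.
function pool_for(pools::Vector{Pool}, nbytes::Int)
    for p in pools
        p.slot_size >= nbytes && return p
    end
    error("large allocation: handled outside the pools")
end
```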
I’d love to see the GC design documented. Right now I couldn’t find anything. Java has several GCs and the workings of each one are documented in detail.
`gc.c` is rather well-documented (in the code).