Is this a bug in Base.summarysize, or is SharedArray doing something very inefficient? A 1D SharedArray only has a small overhead, but a 2D SharedArray appears to use twice as much memory?
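For concreteness, something like the following reproduces the comparison I mean (a sketch; exact numbers will depend on the Julia version and platform):

```julia
using SharedArrays  # SharedArray lives in the SharedArrays stdlib on recent Julia

a1 = SharedArray{Float64}(10^6)          # 1D: 8 * 10^6 bytes of element data
a2 = SharedArray{Float64}(10^3, 10^3)    # 2D: the same amount of element data

Base.summarysize(a1)   # close to the ~8 MB of data, plus a small overhead
Base.summarysize(a2)   # reported at roughly twice the data size
```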
I’d take anything that summarysize says with a large grain of salt.
It doesn’t count the header (40 bytes on a 64-bit platform), nor the actual amount allocated; it just reports the sizeof of the vector (what is actually in use).
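For a plain Vector you can see the distinction between the two calls (a rough sketch; the exact summarysize figure varies across Julia versions):

```julia
v = zeros(Float64, 1000)
sizeof(v)             # 8000: just the bytes of element data in use
Base.summarysize(v)   # recursive estimate of reachable memory; exact value varies by Julia version
```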
Thanks for the insight. Do you know of a good way to accurately measure how much memory it is using? I have a use case where I could load up to 100 GiB into shared memory, and it would be a bummer if it failed miserably because of my lack of understanding.
I think it’s just double counting, because of how summarysize behaves. It basically recursively counts how much memory is used by every reachable object. A SharedArray has a field loc_subarr_1d and another field s. The latter holds the whole array, while the former holds a 1D view of the same array. Mutating one mutates the other, so the array is simply counted twice. summarysize is probably doing something like:
mysizeof(f) = sum((sizeof(f), (mysizeof(getfield(f,i)) for i in 1:nfields(f))...))
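One way to see the aliasing directly (field names taken from the SharedArrays internals discussed above; this assumes a session with no worker processes added, so the local view covers the whole array):

```julia
using SharedArrays

S = SharedArray{Float64}(10^3, 10^3)
fieldnames(typeof(S))      # includes :s (the full 2-d array) and :loc_subarr_1d (a 1-d view)

S.s[1] = 42.0
S.loc_subarr_1d[1]         # 42.0: both fields are backed by the same shared memory
```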
But it’s weird that it’s not double counting in the 1D case. Anyway, I think this deserves some attention from the devs.
Maybe adding a bug keyword or something will get it attention more quickly. In the meantime, I think the equivalent of Task Manager on Windows or System Monitor on Linux will give you a rough idea of the memory used, at least for large enough data.
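If you’d rather get that same rough number from inside Julia, something like this works on Linux (a sketch that just reads the resident set size from /proc; it’s approximate, and shared-memory accounting has its own quirks):

```julia
# Linux-only: resident set size of the current Julia process, in KiB,
# roughly the figure the system monitor shows for the process.
function rss_kib()
    for line in eachline("/proc/self/status")
        if startswith(line, "VmRSS:")
            return parse(Int, split(line)[2])   # the VmRSS value is reported in kB
        end
    end
    return -1   # line not found (e.g. non-Linux system)
end

rss_kib()
```

If I remember correctly, SharedArrays backs the data with POSIX shared memory, so on Linux you can also eyeball the segments with `ls -lh /dev/shm`.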