In-memory and large memory compute - and useful Julia features?

Following the discussion on posits, I would like to explore another topic. Perhaps there is a better place for this, but I do find the community here helpful and ‘sparky’.
I have probably also said similar things in other posts.
In the past, when working on meshing large models and partitioning them to run CFD simulations, it struck me that software developers tend to start with a small model and allocate arrays to contain the whole model in memory. Out in the wild, engineers always want to push the boundaries and will demand smaller cell sizes and larger models. This leads to systems with more and more RAM; terabyte-sized memory systems are available these days, but they still cost a great deal. DDR memory also draws power, since it is continually refreshed; just look at the heatsinks on your RAM modules.
We now have a new generation of storage-class memory (Intel Optane) and NVMe storage, which add new layers to the storage hierarchy.

I should ask a Julia-specific question here: what features of Julia can help developers think beyond “dimension an array big enough to hold everything in RAM”?

I will kick off by mentioning the reshape function. I guess this is not unique to Julia,
but the concept that you can form a ‘new’ array with a different shape yet still be using the same data block is pretty cool, at least to me (*)
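To illustrate, here is a minimal sketch (the variable names are just my own) showing that reshape shares storage with the original array rather than copying it:

```julia
A = collect(1:12)                # 12-element Vector{Int64}
B = reshape(A, 3, 4)             # 3×4 matrix over the same data block

B[1, 1] = 99                     # mutate through the reshaped array...
@show A[1]                       # ...and the change is visible in A

@show pointer(A) == pointer(B)   # true: both refer to the same memory
```

Since no copy is made, this works just as well on a very large array.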

(*) I used to program in FORTRAN with COMMON blocks. I think similar things were achievable with them. But main memory was WAY more limited than today, and any ‘tricks’ to save memory were gratefully received. 4MB of memory in a mainframe, anyone?