I realize you're joking, but that wouldn't solve your problem in practice (only in theory). To put the number in perspective:
Even if you could install that much RAM (you can't), scanning that much memory even once would take 178 days on a single CPU, assuming 819.2 GB/s for HBM3 memory, and much longer for commodity memory. So you would need many CPUs, in a cluster and/or a grid, and then you would actually be working at network speed, not RAM speed.
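To make the bandwidth arithmetic explicit, here is a back-of-the-envelope sketch in Python (the 819.2 GB/s figure is the HBM3 bandwidth assumed above; the data size is simply what a 178-day scan at that speed implies):

```python
def scan_days(n_bytes: float, bw_bytes_per_s: float = 819.2e9) -> float:
    """Days for one CPU to stream n_bytes once at the given bandwidth."""
    return n_bytes / bw_bytes_per_s / 86400  # 86,400 seconds per day

# 178 days at 819.2 GB/s works out to roughly
# 178 * 86400 * 819.2e9 ~= 1.26e19 bytes (about 11 EiB):
print(scan_days(1.26e19))  # ~178 days
```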
I couldn't immediately find how much RAM or flash memory is manufactured per year versus what you would need, so just buying it, or waiting for it to be manufactured, is nontrivial.
64-bit CPUs might seem like they can address enough memory, but they can't actually address that much, not 2^64 bytes. RISC-V has a 128-bit variant defined, but I'm not sure it has been manufactured yet, and I ignore it here since its practical addressing limits are likely similar to those of 64-bit RISC-V and other CPUs.
"Current AMD64 processors support a physical address space of up to 2^48 bytes of RAM, or 256 TiB.[18] However, as of 2020, there were no known x86-64 motherboards that support 256 TiB of RAM"
I'm not sure how much memory you can install; I have 128 GB in my desktop (and had 512 GB in a server I ran years back). But you need to be able to address all of that memory, which is a problem, or work around it in (parallel/MPI) software, slowing things down even more.
"Intel has implemented a scheme with a 5-level page table, which allows Intel 64 processors to support a 57-bit virtual address space"
That's only 128 PiB you can address, so a single computer is not enough. You need a cluster of four such large computers, or more smaller ones, likely a grid. So you will be working at network speed.
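The quoted limits work out as follows (a quick check of the arithmetic, using only the address-bit counts from the quotes above):

```python
# Physical limit quoted for AMD64: 48 address bits.
print(2**48 / 2**40)  # 256.0 -> 256 TiB of physical address space

# Virtual limit quoted for Intel's 5-level paging: 57 address bits.
print(2**57 / 2**50)  # 128.0 -> 128 PiB of virtual address space
```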
I'm not sure if something I just discovered changes anything regarding addressing:
"In 2020, through a cross-vendor collaboration, a few microarchitecture levels were defined, x86-64-v2, x86-64-v3 and x86-64-v4.[38]"
Fugaku, the most powerful supercomputer, has:
"1.6 TB NVMe SSD/16 nodes (L1)
150 PB shared Lustre FS (L2)[1]
Cloud storage services (L3)"
so you would need four such machines, ignoring the cloud tier, working off much slower disks. Or a grid of ordinary (desktop) computers.
I'm not sure what the capacity of BOINC, the largest grid, is (in RAM or storage). But assuming a generous 128 GB of RAM per machine (and not using the disks), you would only need a grid of 408,000 such desktop computers.
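For scale, here is what that grid's aggregate RAM comes to (a sketch; the 128 GB per node is the generous assumption above):

```python
# Aggregate RAM of the grid sketched above.
nodes = 408_000
ram_per_node = 128e9                 # 128 GB per desktop, as assumed above
print(nodes * ram_per_node / 1e15)   # ~52.2 -> about 52 PB of total RAM
```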
IBM mainframes had the most memory in a single address space, last time I checked, and until recently (before Apple's M1) IBM claimed the fastest CPUs for both single-threaded and multi-threaded work (maybe not for floating-point, which is also not relevant here). For the z15:
the "L4 Cache increased from 672MB to 960MB, or +43%" with the new add-on System Controller (SC) chip, an SCM. Both it and all cache levels in the main processor, from level 1 up, use eDRAM instead of the traditionally used SRAM. "A five-CPC drawer system has 4800 MB (5 x 960 MB) of shared L4 cache."
The L4 cache is tiny for this purpose, so it is ineffective. As I wrote, you'll be running at RAM speed, at best.