A lot of (Monte Carlo) simulations can be run simultaneously and independently on many nodes of an HPC cluster, each generating a large solution for later analysis. However, a straight
`pmap` will try to gather one giant output on the calling node, which likely won’t fit in memory and will just crash the job.
Are there distributed databases I could write the results to instead, or some way to serialize the solutions separately on each worker and later concatenate them into one big data file?
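To illustrate the second option, here is a minimal sketch of the "serialize separately, gather only paths" pattern using Julia's standard `Distributed` and `Serialization` libraries. `run_simulation` and the output directory are hypothetical placeholders for whatever each Monte Carlo run actually does:

```julia
using Distributed, Serialization

# Each worker solves one realization and writes it to its own file,
# returning only the (tiny) file path instead of the full solution.
@everywhere using Serialization
@everywhere function solve_and_save(i, outdir)
    sol = run_simulation(i)               # hypothetical: one Monte Carlo run
    path = joinpath(outdir, "sol_$i.jls")
    open(io -> serialize(io, sol), path, "w")
    return path                           # no giant array is gathered
end

paths = pmap(i -> solve_and_save(i, "/scratch/results"), 1:10_000)

# Later, stream the files back one at a time for analysis:
# for p in paths
#     sol = open(deserialize, p)
#     # ... analyze sol, then let it be garbage-collected
# end
```

This assumes all workers can see a shared filesystem (e.g. a scratch directory); on clusters without one, each worker would write node-locally and the files would be collected afterwards.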
Related to this is another post discussing what to actually save:
and the DiffEq issue tracking the updates: