I am running a process that uses a lot of memory.
My idea was to request two nodes via Slurm and then send the memory-intensive part to the second node.
My process looks like:
salloc --nodes 2 -C haswell -q interactive -t 00:05:00
Now I want to launch the workers on the second node specifically, but if I do
using Distributed, ClusterManagers
addprocs(SlurmManager(6))
I have no control over where the workers are launched.
Is there a way to control where the workers are going to be launched?
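For context, here is a sketch of the kind of thing I am hoping for. As far as I understand, `SlurmManager` forwards extra keyword arguments to `srun` as `--key=value` flags, so something like passing `nodelist` might pin the workers to one node. The hostname `nid00002` below is just a placeholder for whatever the second node in my allocation turns out to be, and I have not verified that this works on my system:

```julia
using Distributed, ClusterManagers

# Hypothetical: find the hostnames of the nodes in this allocation
# (assumes we are inside the salloc session, so srun sees the job's nodes).
hosts = sort(unique(split(read(`srun hostname`, String))))

# Hypothetical: keyword args to SlurmManager's addprocs are passed to srun
# as command-line flags, so this would become `srun --nodelist=nid00002 ...`,
# launching all 6 workers on the second node only.
addprocs(SlurmManager(6); nodelist = hosts[2])
```

This requires a live Slurm allocation, so I cannot easily test it outside the cluster. Is this the right approach, or is there a more standard way to control worker placement?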