I have run into this a few times: my code, running interactively on an LSF cluster, reaches the memory limit I requested for the job. Because this limit is below the total memory of the node, Julia does not throw an
OutOfMemoryError(), as it would if the whole node were allocated or the code were running on a desktop machine; instead, the job scheduler kills the job for excessive memory allocation.
Is there a way to tell Julia about a maximum memory budget, e.g. via an environment variable such as
JULIA_MAX_MEMORY, in analogy to
JULIA_NUM_THREADS, so that the job is not terminated?
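To illustrate, this is roughly the invocation I have in mind for an interactive LSF job. The `JULIA_MAX_MEMORY` variable is hypothetical and does not exist; the closest real mechanism I am aware of is the `--heap-size-hint` flag (available since Julia 1.9), which asks the GC to collect more aggressively as the given limit is approached:

```shell
# Request memory from LSF for an interactive job
# (the unit of -M depends on the site's LSF_UNIT_FOR_LIMITS setting)
bsub -Is -M 16G \
    env JULIA_MAX_MEMORY=15G \   # hypothetical variable, does not exist
    julia --heap-size-hint=15G script.jl   # real flag, Julia >= 1.9
```

Setting the hint slightly below the scheduler's limit leaves headroom for memory the GC cannot reclaim immediately.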
Thanks for any hints.