Hi everyone,
I’ve run into this a few times: my code, running interactively on an LSF cluster, reaches the maximum of the memory I requested for the job. Since this limit is below the node’s total memory, Julia does not throw an OutOfMemoryError() as it would if the whole node were allocated or the code were running on a desktop machine; instead, the job scheduler terminates the job for excessive memory allocation.
Is there a way to give Julia a maximum memory, e.g. via an environment variable JULIA_MAX_MEMORY in analogy to JULIA_NUM_THREADS, so that the job is not terminated?
Wait, I found a mistake in the reference I linked: you need to close the file in the referenced function. Otherwise you may get an error like “too many open files” if you call that function very often.
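For convenience, here is a repaired version as a minimal sketch, assuming a Linux system where the resident set size (RSS) is read from `/proc/self/statm`; the file, the parsing, and the name `memcheck` are illustrative and may differ from the referenced function. The `open(path) do io ... end` form guarantees the file handle is closed even if parsing throws:

```julia
# Minimal sketch (assumptions: Linux with /proc available; names are illustrative).
# Reads the resident set size (RSS) of the current process from /proc/self/statm.
# The open(...) do block closes the file even on error, avoiding the
# "too many open files" problem.
function memcheck(limit_bytes::Integer)
    pagesize = ccall(:getpagesize, Cint, ())  # POSIX page size in bytes
    rss_pages = open("/proc/self/statm") do io
        parse(Int, split(read(io, String))[2])  # field 2 = resident pages
    end
    return rss_pages * pagesize > limit_bytes
end
```

You can then call e.g. `memcheck(mylimit) && throw(OutOfMemoryError())` at convenient points in your code, so the job fails with a normal Julia error instead of being killed by the scheduler.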
I use this mainly in callbacks in JuMP.jl. Of course, you cannot avoid every OutOfMemoryError() with this method if you set your “Julia memlimit” too close to your physical limit, but for me it works fine and I get these errors very rarely now.
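For illustration, a hypothetical JuMP lazy-constraint callback guarded this way could look like the sketch below; the solver, the model setup, and the MEMLIMIT value are placeholders, not the setup from my actual code:

```julia
using JuMP
import MathOptInterface as MOI

const MEMLIMIT = 8 * 2^30  # hypothetical limit: a bit below the LSF allocation

# Abort with a normal Julia OutOfMemoryError before the scheduler's hard
# limit is reached; `memcheck` is the sketch from above.
function guarded_callback(cb_data)
    memcheck(MEMLIMIT) && throw(OutOfMemoryError())
    # ... add lazy constraints as usual ...
end

# model = Model(SomeSolver.Optimizer)  # placeholder solver
# MOI.set(model, MOI.LazyConstraintCallback(), guarded_callback)
```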