Parallel computation with variable memory consumption

Hi all,

I have a batch job that I’d like to run in parallel so it uses all CPU cores. However, each task in the batch may consume a different amount of memory, some large and some small. Naively, I would write a for-loop with the @parallel macro, but that ends up hitting the memory limit whenever too many big tasks happen to run at the same time; the machine then starts swapping, which defeats the performance gain (or crashes once swap space runs out).
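
Simplified, my current approach looks roughly like the following (Julia 0.6-style @parallel; `run_task` and `task_inputs` just stand in for my real batch job):

```julia
addprocs(Sys.CPU_CORES)            # one worker per core

@everywhere function run_task(x)
    # the real work allocates anywhere from a few MB to many GB
    return sum(rand(x))
end

task_inputs = rand(10_000:50_000_000, 64)

results = @parallel (vcat) for x in task_inputs
    [run_task(x)]
end
```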

It would be nice if the scheduler could pause and not dispatch any new task until a certain amount of free memory is available. I can pre-estimate the likely memory allocation of each task, so the scheduler would only need to wait for other tasks to finish and free their memory before going further. A rough sketch of what I have in mind is below.
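
Something along these lines is what I'm imagining: treat memory as a budget that each task reserves before it is dispatched and releases when it finishes. This is an untested sketch; `run_task`, `estimate_mem`, and the budget value are placeholders (and on Julia 1.0+ it would also need `using Distributed`):

```julia
const MEM_BUDGET = 48 * 2^30               # bytes to keep below physical RAM

function run_with_memory_budget(run_task, estimate_mem, tasks)
    pool    = WorkerPool(workers())
    free    = MEM_BUDGET
    freed   = Condition()
    results = Dict{Int,Any}()

    @sync for (i, t) in enumerate(tasks)
        need = estimate_mem(t)
        while free < need                  # hold off dispatching until enough budget
            wait(freed)
        end
        free -= need
        @async begin
            results[i] = remotecall_fetch(run_task, pool, t)
            free += need                   # give the budget back
            notify(freed)
        end
    end
    return [results[i] for i in 1:length(tasks)]
end
```

The WorkerPool would still cap concurrency at the number of workers, while the budget would cap the total estimated memory in flight. But maybe there is already a package or a more idiomatic way to do this?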

Has anyone encountered a similar problem before? How did you solve it?
