You can check available memory with functions like CUDA.total_memory() and CUDA.available_memory(), just as Base lets you check CPU memory (where, similarly, you can't launch an unbounded number of tasks without running into OOM).
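For instance, a minimal sketch of such a check before a large allocation (the 1 GiB buffer size and the guard logic are just illustrative, not a recommended pattern):

```julia
using CUDA

# Query GPU memory (values in bytes).
gpu_total = CUDA.total_memory()
gpu_free  = CUDA.available_memory()
println("GPU: $(Base.format_bytes(gpu_free)) free of $(Base.format_bytes(gpu_total))")

# The CPU-side analogue in Base:
println("CPU: $(Base.format_bytes(Sys.free_memory())) free of $(Base.format_bytes(Sys.total_memory()))")

# Only allocate if the buffer plausibly fits (1 GiB is an arbitrary example size).
nbytes = 2^30
if nbytes < gpu_free
    buf = CUDA.zeros(Float32, nbytes ÷ sizeof(Float32))
else
    @warn "Not enough free GPU memory for this allocation"
end
```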