CUDA: How to limit the available GPU memory?

Yes, you can just split up the device and pick one partition using CUDA_VISIBLE_DEVICES. CUDA.jl should support these directly as well, i.e. you can use CUDA.device! to select one instead of limiting which devices CUDA.jl can see via CUDA_VISIBLE_DEVICES.
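For reference, a minimal sketch of the two approaches, assuming at least two visible devices; the device indices (0 and 1) are placeholders for whichever partition or GPU you want the process to use:

```julia
# Option 1: restrict what CUDA.jl can see at all. This must happen
# before CUDA is initialized, e.g. when launching Julia from the shell:
#   CUDA_VISIBLE_DEVICES=0 julia script.jl

# Option 2: leave everything visible and pick a device explicitly.
using CUDA

for dev in CUDA.devices()   # list all devices visible to this process
    println(dev)
end

CUDA.device!(1)             # switch the current task to device 1
@show CUDA.device()         # confirm which device is now active
```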
