CUDA: how to limit the GPU memory available?

Hello,

Together with other lab members, I share a JupyterHub server with a single GPU (an A30).

The problem is that when someone uses FFT or CuArray operations, they fill the entire GPU memory.
When one of us runs code with only low-level kernels (avoiding array operations), the memory usage stays constant at around 1–2 GB, which is fine since we have 23 GB.
Unfortunately I don't want to rewrite the FFT operator, so I was wondering: is there a way to limit the amount of memory a simulation can use? (The code runs fine on another GPU with only 2 GB of memory.)
I saw this:

CUDA.usage_limit[] = 2^10

but I think it no longer works (usage_limit is not defined).

Thank you for your help,

NVIDIA removed this capability from their stream-ordered allocator. I've filed an issue, and they are considering adding it back. In the meantime it might be possible to implement an alternative; see Re-instate memory limit · Issue #1670 · JuliaGPU/CUDA.jl · GitHub.

Thank you for your answer!

But I have to admit that I don't understand the alternative.
I use Julia mainly for hydrodynamics/non-linear-physics simulations and data analysis, so I am completely lost.
Can I set a ratio of memory used to memory reserved?
Or can I change CU_MEMPOOL_ATTR_RELEASE_THRESHOLD (something like the sketch below)?
Or do I have to wait for changes from NVIDIA, or install a previous version of CUDA.jl?
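
Something like this, maybe? (A minimal sketch, assuming CUDA.jl exposes the driver's memory-pool API as memory_pool/attribute! with the attribute named CUDA.MEMPOOL_ATTR_RELEASE_THRESHOLD; as far as I understand, this threshold only controls when cached memory is released back to the driver, not a hard cap on usage.)

using CUDA

# Grab the stream-ordered memory pool that CUDA.jl allocates from.
pool = memory_pool(device())

# Release cached memory back to the driver once the pool holds more
# than 1 GiB (the default threshold is 0, i.e. release eagerly at
# every synchronization). This does NOT cap how much can be allocated.
attribute!(pool, CUDA.MEMPOOL_ATTR_RELEASE_THRESHOLD, UInt64(1 << 30))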

Thank you again :)

It's currently not supported. We could add a workaround to CUDA.jl, but somebody needs to do the work (hence the issue I linked to). Alternatively, you could wait for NVIDIA to add the feature back.
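
In the meantime, one thing you can already do is free cached memory eagerly, so other users of the GPU can allocate it. This is not a limit, just eager freeing using existing CUDA.jl calls:

using CUDA

GC.gc(true)      # collect unreachable CuArrays first
CUDA.reclaim()   # then return the pool's cached memory to the driver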

I am sorry, I was not familiar with (and am still learning about) "collaborative development": packages, GitHub, issues, etc.
I'll wait for NVIDIA :crossed_fingers:
Thank you,

Would it be possible to use MIG (Multi-Instance GPU) with CUDA.jl?

Yes, you can just partition the device and pick an instance using CUDA_VISIBLE_DEVICES. CUDA.jl should support MIG instances directly as well, i.e. selecting them with CUDA.device! instead of limiting which devices CUDA.jl can see via CUDA_VISIBLE_DEVICES.
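
A minimal sketch of the CUDA_VISIBLE_DEVICES route (the UUID below is a placeholder; after partitioning the GPU with nvidia-smi, nvidia-smi -L prints the real MIG UUIDs):

# Select one MIG instance; this must happen before CUDA.jl initializes.
ENV["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

using CUDA

device()              # the MIG slice appears as an ordinary device
CUDA.total_memory()   # e.g. ~6 GB for a 1g.6gb slice of an A30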

Nice! I'll try that! Thank you!