Is there a way to explicitly free GPU memory?

Hi folks,
I’m doing some CUDA coding using CuArrays, CUDAnative, and CUDAdrv in my Jupyter notebooks. Since I’m developing, I write and test functions constantly. But every now and then I have to restart the kernel because I get messages of the form

Out of GPU memory trying to allocate 23.438 GiB
Effective GPU memory usage: 10.71% (432.750 MiB/3.945 GiB)
CuArrays GPU memory usage: 169.346 MiB
BinnedPool usage: 1.346 MiB (1.346 MiB allocated, 0 bytes cached)
BinnedPool efficiency: 99.97% (169.289 MiB requested, 169.346 MiB allocated)

as if the memory is not being cleared after execution, with no garbage collection happening at all.
So what would be a simple way to make the system free that memory?

Thanks,

Ferran.


I used to have the same problem, specifically with Knet. I’d constantly have to kill NVIDIA processes, restart kernels, etc. Flux wasn’t a problem, though.

I could be wrong, but I think that if you delete the variable, or set it to a 0.0 float or nothing, it will be freed, or at least make memory usage far more manageable. That said, it’s been a long time since I worked on a GPU in Julia, so this could all be deprecated knowledge by now.
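A minimal sketch of that drop-the-reference idea (GC.gc() is plain Julia; the array size is just for illustration):

using CuArrays

x = CuArray(zeros(Float32, 10_000, 10_000))  # roughly 400 MB on the device

x = nothing  # drop the last reference to the buffer
GC.gc()      # let Julia’s GC run so the pool can recycle or release it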

You’re trying to allocate 23 GiB on a GPU with less than 4 GiB of memory and wonder why it goes OOM? 🙂

Anyway, in the other cases you need to make sure you no longer hold references to the GPU arrays, which is the same way Julia manages memory on the CPU. Setting them to nothing as @anon92994695 mentions would work, or make sure you only allocate in e.g. a function and don’t assign to global variables.
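A minimal sketch of the function-scoped approach; GC.gc() is plain Julia, but CuArrays.rand and CuArrays.reclaim() are assumptions about the installed CuArrays version, so check the names against your release:

using CuArrays

# Arrays allocated inside the function stay local: once it returns,
# nothing references them and the GC can give them back to the pool.
function gpu_work(n)
    a = CuArrays.rand(Float32, n, n)
    b = a .* 2f0
    return sum(b)   # only a scalar escapes to the caller
end

gpu_work(4096)

GC.gc()             # collect any unreachable GPU arrays
CuArrays.reclaim()  # (assumed name) hand cached pool memory back to the driver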

Well, the 23.438 GiB allocation was of course a problem with too large an array, and probably not a good example. Still, what I wanted to point out is that I often find the GPU’s memory is not freed, and I wanted to know a way to free it by hand. Now I probably know (though I’ll have to try it).
Thanks a lot,
Ferran.