I’m doing some CUDA coding with CuArrays, CUDAnative, and CUDAdrv in my Jupyter notebooks. Since I’m developing, I write and test functions constantly. But every now and then I have to restart the kernel because I get messages of the form
```
Out of GPU memory trying to allocate 23.438 GiB
Effective GPU memory usage: 10.71% (432.750 MiB/3.945 GiB)
CuArrays GPU memory usage: 169.346 MiB
BinnedPool usage: 1.346 MiB (1.346 MiB allocated, 0 bytes cached)
BinnedPool efficiency: 99.97% (169.289 MiB requested, 169.346 MiB allocated)
```
as if the memory is not being cleared between executions — no garbage collection whatsoever.
So what would be a simple way to make the system free that memory?
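For reference, here is a minimal sketch of what I’d expect to work, based on my reading of the docs (I’m assuming `GC.gc()`, `finalize`, and `CuArrays.reclaim()` are the relevant calls here — please correct me if that’s wrong):

```julia
using CuArrays

# Allocate a large array on the GPU
x = CuArrays.rand(Float32, 10_000, 10_000)

# ... use x ...

# Drop the reference and ask Julia's GC to collect it,
# then ask CuArrays to return cached memory to the driver.
x = nothing
GC.gc()
CuArrays.reclaim()

# Alternatively, eagerly finalize a specific array without
# waiting for a full GC pass:
y = CuArrays.rand(Float32, 1_000, 1_000)
finalize(y)
```

Is calling `GC.gc()` plus `CuArrays.reclaim()` after each cell the intended workflow, or is there something lighter-weight for notebook use?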