Using Unified Memory with oversubscription

Hi,
I recently learnt of the existence of Unified Memory in CUDA, and I'm trying to explore what is possible to do with it in Julia.
I would like to use it with oversubscription, for instance by allocating an array that is too large for my GPU memory but which fits in system RAM.
This works fine, and I can easily perform some GPU operations on it by using views of slices that fit in GPU memory. However, I cannot process the whole array slice by slice, because the memory mapped onto the GPU by the previous operations on the unified buffer is never freed (or unmapped, I don't know the exact term to use).
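Here is roughly what I am trying to do (a minimal sketch; the array and slice sizes are arbitrary, and the `unified = true` keyword to `cu` reflects my reading of the CUDA.jl docs, so please correct me if that is not the right way to request a unified buffer):

```julia
using CUDA

# Allocate a unified array that fits in system RAM but not in GPU memory
# (adjust `n` to your setup); the data is backed by host memory and paged
# onto the device on demand.
n = 2^31                                   # ~8 GiB of Float32, for example
A = cu(zeros(Float32, n); unified = true)

# Process the array slice by slice, each slice small enough for device memory.
chunk = 2^27
for lo in 1:chunk:n
    hi = min(lo + chunk - 1, n)
    v = view(A, lo:hi)
    v .= v .+ 1f0                          # any GPU operation on the slice
end
synchronize()
```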
So my question is: are there some basic operations that could synchronize the unified buffer with the system RAM and forget about the previously mapped chunks on the GPU?
I'm not sure the terminology I use is totally correct, but I hope you understand my question.
Thank you,
Nicolas

The driver should unmap memory automatically; there are no API calls for this that I know of. What kind of errors or issues are you encountering?

Hi,
Thank you for your reply. I'm sorry, I realize I can't reproduce the problem; now everything seems to work as you mention… Before, I had an error after accessing too many slices, but I don't know where it came from. It seems that if CUDA crashes for another reason, then it's better to start from a fresh REPL.
Sorry again.

Yes, CUDA has plenty of unrecoverable errors. Once you run into, say, an illegal memory access, or certain launch failures, it's better to restart Julia.