I encountered the issue described in this thread: Allocator very slow to reclaim memory after running for sufficiently long · Issue #137 · JuliaGPU/CUDA.jl · GitHub. For a machine learning workload, I noticed that if I don’t manually disable the GC before GPU work and re-enable it once the GPU is done, the code slows down significantly.
Is this safe with regard to memory leaks and segfaults? The documentation (Memory management · CUDA.jl) says manual memory management isn’t needed, so I was hoping I wouldn’t have to code at a lower level.
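For reference, this is the pattern I mean — a minimal sketch, where `model`, `data`, and `train_step!` are hypothetical placeholders for my actual training loop:

```julia
using CUDA  # assumes CUDA.jl is installed and a GPU is available

function timed_epoch!(model, data)
    GC.enable(false)                  # pause Julia's garbage collector
    try
        for batch in data
            train_step!(model, batch) # GPU-heavy work (placeholder)
        end
    finally
        GC.enable(true)               # always re-enable, even on error
        GC.gc()                       # force a collection now
        CUDA.reclaim()                # ask CUDA.jl to return cached GPU memory
    end
end
```

The `try`/`finally` is just to make sure the GC is re-enabled even if the loop throws; I'm not sure whether the `GC.gc()` + `CUDA.reclaim()` at the end is the recommended way to hand memory back.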
A related question: which libraries support reverse-mode automatic differentiation of a cost function that itself involves a derivative obtained via another AD library or via finite differencing? From the threads I’ve read, nested AD may not be supported right now.
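As a concrete example of the finite-differencing variant, here is a sketch of what I'd like to do (using Zygote.jl for the outer reverse-mode pass; the functions `f`, `fd_derivative`, and `cost` are my own illustrative names). Since the inner derivative is plain arithmetic, I'd expect the outer AD to differentiate through it:

```julia
using Zygote  # assumes Zygote.jl; other reverse-mode AD packages may work similarly

f(x) = sin(x^2)

# Inner derivative via central finite differences (plain arithmetic,
# so the outer reverse-mode AD can in principle trace through it).
fd_derivative(g, x; h = 1e-6) = (g(x + h) - g(x - h)) / (2h)

# Cost function involving the inner derivative
cost(x) = fd_derivative(f, x)^2

# Outer reverse-mode gradient of the cost
grad = Zygote.gradient(cost, 1.0)[1]
```

What I'm less sure about is the true nested-AD case, where the inner derivative also comes from an AD library rather than finite differences.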