I implemented a complex neural net using Knet/AutoGrad. Then I realized that the RAM usage of my experiment grows slowly but steadily during training, even though it is a GPU experiment. With simpler models I don't have this kind of problem (memory usage becomes constant and consistent after a while). I have identified the section that causes the problem, but I still need to debug it to find out what I am doing wrong and fix it. I have tried the following so far (with no luck yet):
- called `gc()` somewhere relevant
- used valgrind to check whether there is any leak
- called `whos()` periodically and analyzed the memory usage (a rough sketch of how I do this is below)
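
For reference, this is roughly how I call `gc()` and `whos()` inside the training loop. It is only a minimal sketch, not my real code: `train_step!`, `model`, and `data` are placeholders for my actual model and data pipeline, not Knet functions.

```julia
# Minimal sketch (placeholders, not my real training code):
# only gc(), whos(), and Base.gc_bytes() are actual Base functions here.
function train!(model, data; log_every = 100)
    for (i, batch) in enumerate(data)
        train_step!(model, batch)          # placeholder: forward/backward/update
        if i % log_every == 0
            gc()                           # force a full garbage collection
            println("iter $i  gc_bytes = ", Base.gc_bytes())
            whos()                         # list sizes of globals to spot what grows
        end
    end
end
```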
I just need some ideas or tools for debugging this kind of memory issue in Julia in general. Thanks a lot!