Hi everyone,
I want to know whether using an in-place version of the Lux layers is better. I noticed a related pull request (https://github.com/LuxDL/Lux.jl/pull/463), but I don't know why it was closed.
Since GPU memory usage keeps increasing while I train a Lux model, I think in-place layers would reduce the memory allocations.
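For context, here is a minimal sketch of how I currently call a layer (hypothetical layer size and input; this uses Lux's standard out-of-place API, where each forward pass allocates a fresh output array):

```julia
using Lux, Random

rng = Random.default_rng()
model = Dense(4 => 8)            # hypothetical small layer
ps, st = Lux.setup(rng, model)   # parameters and state

x = rand(Float32, 4, 16)         # hypothetical input batch
# Out-of-place call: `y` is a newly allocated array on every forward pass,
# which is the allocation I was hoping an in-place layer could avoid.
y, st = model(x, ps, st)
```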