Hi,
What are the current options for upsampling layers in Flux that run on the GPU? I am using ConvTranspose, but I would like to try something else. I found Upsample(scale, mode) in the documentation, but it is extremely slow.
Thanks in advance,
What mode are you using for upsampling? Some will be more efficient than others because they essentially do less work. I would also not expect any of them to be slower than the equivalent operation in other libraries, if that is your concern.
The code I am trying to replicate in Julia uses bilinear upsampling, so I hoped that it would work, but unfortunately there are some scalar operations in that function that make it extremely slow on the GPU. Anyway, thanks for your reply.
Model Reference · Flux — Upsample shouldn't be doing any scalar operations; there's a fully GPU-optimized kernel in NNlibCUDA.jl. Can you confirm you're using Upsample(:bilinear, ...) or NNlib.upsample_bilinear?
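For reference, a minimal sketch of bilinear upsampling on the GPU with Flux (assuming recent versions of Flux and CUDA.jl, and a CUDA-capable device; the sizes here are just illustrative):

```julia
using Flux, CUDA

# Bilinear upsampling layer; on a CuArray this dispatches
# to the GPU kernel in NNlibCUDA.jl, with no scalar indexing.
up = Upsample(:bilinear, scale = 2)

x = CUDA.rand(Float32, 16, 16, 3, 1)  # WHCN layout: width, height, channels, batch
y = up(x)                             # upsampled to (32, 32, 3, 1)
```

If scalar indexing is still a concern, `CUDA.allowscalar(false)` will turn any accidental scalar operation into an error instead of a silent slowdown.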
Hi, so the problem was actually with old versions of the packages. I feel bad for having missed something so obvious. It works smoothly with the newest versions. Thanks for your help.