Creating deconvolution layers in Flux compared to PyTorch

Hello,
I am trying to port a model written in PyTorch to Flux. As expected, I am encountering some differences between the two libraries, and I wonder how to imitate the original behavior.

When creating deconvolution layers in PyTorch, the number of output channels of the previous layer can be smaller than the current layer’s number of input channels. When I try to do the same thing in Flux, it is not permitted. Is there any way to imitate this behavior in Flux?

B.R.

Looking at the code you linked, I think you are skipping a concatenation.

        # upsampling/decoding 
        u0 = self.upsample0(d4)
        # skip-connection
        c0 = torch.cat((u0, d3), dim=1)

The skip connection concatenates the output of d3 (90 channels) with u0 (90 channels) along the channel dimension to provide the 180-channel input to u1.

cat works much the same way in Julia, so this should be a straightforward translation. Just note that Flux stores feature maps in WHCN order (width, height, channels, batch), so the channel dimension is 3 rather than PyTorch’s dim=1.
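The difference in layout can be sketched with plain arrays (numpy here as a stand-in for the actual tensors; the shapes and names follow the thread, and the layouts are PyTorch's NCHW versus Flux's WHCN):

```python
import numpy as np

# PyTorch uses NCHW layout: channels are dimension 1 (0-based).
u0_pt = np.zeros((2, 90, 10, 27))  # (batch, channels, H, W)
d3_pt = np.zeros((2, 90, 10, 27))
c0_pt = np.concatenate((u0_pt, d3_pt), axis=1)  # torch.cat((u0, d3), dim=1)
print(c0_pt.shape)  # (2, 180, 10, 27)

# Flux uses WHCN layout: channels are dimension 3 (1-based in Julia),
# so the equivalent Julia call is cat(u0, d3; dims=3).
u0_fl = np.zeros((27, 10, 90, 2))  # (W, H, channels, batch)
d3_fl = np.zeros((27, 10, 90, 2))
c0_fl = np.concatenate((u0_fl, d3_fl), axis=2)  # 0-based axis 2 == Julia dims=3
print(c0_fl.shape)  # (27, 10, 180, 2)
```

In both layouts the concatenation only succeeds when the non-channel dimensions already match, which is why the spatial sizes matter below.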

Thanks for the feedback. I was trying to use the SkipConnection layer to do that. Right now, the problem I am facing is that u0 and d3 have different sizes: u0 is (27, 10, 90, 2) and d3 is (28, 18, 90, 2). How can I concatenate them to get 180 channels?

Any suggestions ?

You can set pad=SamePad() when creating the conv layer to apply padding so that the output feature maps have the same size as the input maps.
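The arithmetic behind this can be sketched with the standard one-dimensional convolution output-size formula (a plain-Python illustration, not Flux's actual implementation of SamePad):

```python
def conv_out_size(n, k, stride=1, pad=0):
    """Output size of a convolution along one spatial dimension."""
    return (n + 2 * pad - k) // stride + 1

# Without padding, a size-3 kernel shrinks each spatial dimension by 2:
print(conv_out_size(28, 3))          # 26

# With "same" padding p = (k - 1) // 2 and stride 1, the size is preserved,
# so the encoder and decoder branches stay aligned for the skip connection:
p = (3 - 1) // 2
print(conv_out_size(28, 3, pad=p))   # 28
```

With pad=SamePad(), Flux picks that padding for you, so the feature maps on the two sides of the skip connection keep matching spatial sizes and the channel-wise cat goes through.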