This is a very different error. If you compare the two stack traces, they show completely different things.
I think this points to the source of the new error:
```
[10] conv(::CuArray{Float32,4,Nothing}, ::Array{Float32,4}, ::DenseConvDims{2,(3, 3),3,32,(1, 1),(1, 1,
```
Your model is attempting to do a convolution between an array on the GPU (the `CuArray`) and one which is still on the CPU (the `Array`). I think it would have been nicer if this generated a clear error message right away, but I guess the library doesn't want to be prejudicial and attempts the operation anyway, leading to a low-level error.
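Here is a minimal sketch of that mismatch, assuming a CUDA-capable setup (on older Flux versions the GPU package is `CuArrays` rather than `CUDA`; the layer sizes here are just illustrative):

```julia
using Flux, CUDA   # on older Flux versions this was `using CuArrays`

m = Chain(Conv((3, 3), 3 => 32, relu)) |> gpu   # weights become CuArrays
x = rand(Float32, 28, 28, 3, 1)                 # input is still a CPU Array

# m(x)           # CuArray weights vs. CPU input -> obscure low-level error
y = m(x |> gpu)  # move the input to the GPU as well and it works
```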
Looking at this line:
```
[14] Conv at C:\Users\CCL\.julia\packages\Flux\8xjH2\src\layers\conv.jl:137 [inlined]
```
I see that the second argument to the `conv` function is the weights.
It seems that despite the fact that you did `model = FCN() |> gpu`, the weights of the first conv layer (and possibly others too) were not transferred to the GPU.
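As a quick check (this assumes the `Chain` is directly reachable as `model.layers`; field names can vary between Flux versions):

```julia
typeof(model.layers[1].weight)
# CuArray{Float32,4,Nothing} -> the weights made it to the GPU
# Array{Float32,4}           -> still on the CPU, matching your stack trace
```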
The `gpu` function is only half-magical, so if you happen to have wrapped the `Chain` or any of its layers in other functions or structs (like the `test` struct in your MWE), you need to make sure the function `Flux.functor` is defined for each one of those structs. See here and here.
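A minimal sketch of the fix, using the `Flux.@functor` macro (which defines `functor` for you); the `FCN` struct and its field name are assumptions based on your MWE:

```julia
using Flux

struct FCN                 # hypothetical wrapper, like the `test` struct in your MWE
    chain::Chain
end
(m::FCN)(x) = m.chain(x)   # forward calls to the inner Chain

Flux.@functor FCN          # lets `gpu`/`cpu` recurse into the wrapper's fields

model = FCN(Chain(Conv((3, 3), 3 => 32, relu))) |> gpu
# now the Conv weights inside model.chain are on the GPU too
```

Without the `@functor` line, `gpu` returns the wrapper unchanged, which is exactly the half-transferred state your stack trace shows.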