Let `chain` be a `Flux.Chain` of FCNs. I want to execute each of the chained models independently on the same input array, then concatenate all the outputs as the model result. I did the following:

```julia
using Flux: Chain, @functor, Conv, relu

struct Mymodel
    chain
    N
end

function Mymodel(ch_in, ch_out, N)
    models = [Conv((1, 1), ch_in => ch_out, relu) for i in 1:N]
    chain = Chain(models...)
    return Mymodel(chain, N)
end

function (m::Mymodel)(x)
    res = map(i -> m.chain[i](x), 1:m.N)
    return cat(res..., dims=3)
end

@functor Mymodel
```
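For reference, the same fan-out-and-concatenate wiring can also be expressed with Flux's built-in `Parallel` layer, which applies every wrapped layer to the same input and reduces the outputs with a connection function. A minimal sketch, assuming hypothetical channel sizes and the same `(1,1)` convolutions and `dims=3` concatenation as above:

```julia
using Flux

# Hypothetical sizes, for illustration only
ch_in, ch_out, N = 4, 8, 3

# Parallel(connection, layers...) feeds the same input to each layer
# and combines the outputs with `connection`.
model = Flux.Parallel(
    (ys...) -> cat(ys...; dims=3),   # concatenate along the channel dimension
    [Flux.Conv((1, 1), ch_in => ch_out, Flux.relu) for _ in 1:N]...,
)

x = rand(Float32, 16, 16, ch_in, 1) # W × H × C × batch
y = model(x)                         # size: 16 × 16 × N*ch_out × 1
```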

The model trains and executes as I expected, except for an unusually long training time that appears to come from the `map` line. I would appreciate it if someone could shed light on the root cause and perhaps suggest a more efficient way to build this logic.

Thanks in advance.