Would this work for that? The input is an array of inputs to the layers, and the output is the concatenation of the layers’ outputs. Creating it would be something like Chain(Concat([Chain(A1, B1), Chain(A2, B2, C2)]), D, E) — note that Concat takes a single array of layers, so the branches need to be wrapped in brackets.
using Flux
using Flux: @treelike  # on newer Flux versions, use Flux.@functor instead

# Applies layers[i] to inputs[i] and concatenates the results
# along the feature (first) dimension.
struct Concat
    layers::Vector
end

@treelike Concat

function (c::Concat)(inputs::Array)
    outputs = [c.layers[i](inputs[i]) for i in 1:length(c.layers)]
    # reduce(vcat, ...) keeps element types and also handles batches
    # stored as matrices; append! into [] would build an untyped
    # Vector{Any} and flatten matrix outputs column by column
    reduce(vcat, outputs)
end
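If that fits, usage might look like this (a sketch; the layer types and sizes are invented for illustration):

branch1 = Chain(Dense(10, 8), Dense(8, 4))                   # A1, B1
branch2 = Chain(Dense(20, 16), Dense(16, 8), Dense(8, 4))    # A2, B2, C2
model = Chain(Concat([branch1, branch2]), Dense(8, 4), Dense(4, 2))  # D, E

model([rand(10), rand(20)])  # the two branch outputs (4 + 4) are concatenated before D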
The channel dimension is typically synonymous with “features”: we assume a network is learning features, so combining multiple networks for further processing usually means combining their features in another network.
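To make that concrete, Flux stores convolutional activations in WHCN order (width × height × channels × batch), so combining two networks’ features means concatenating along dims = 3. A minimal sketch, with assumed shapes:

# two hypothetical feature maps: 32×32, with 16 and 8 channels, batch of 4
y1 = rand(Float32, 32, 32, 16, 4)
y2 = rand(Float32, 32, 32, 8, 4)

cat(y1, y2; dims=3)  # 32×32×24×4: the feature maps stacked along channels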
That’s because the previous respondents for some reason assumed that the layers were convolutional and the input two-dimensional (images). Dense, on the other hand, takes in and returns a vector (or a batch of vectors, stored as a matrix). This will work; a sketch reusing the Concat layer from above (the sizes are made up):
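# sizes (10→4, 20→6, then 10→2) are assumptions, just for illustration
model = Chain(Concat([Dense(10, 4), Dense(20, 6)]), Dense(10, 2))

model([rand(10), rand(20)])           # single vectors
model([rand(10, 5), rand(20, 5)])     # batches of 5, stored as matrices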