I need to apply different activation functions to different outputs of a Flux layer, which requires a custom layer. I can modify an existing custom layer so that its only job is to apply a per-output activation function:
```julia
function (a::Nonneg)(x::AbstractArray)
    x_out = x  # or a.W * x .+ a.b
    return vcat(softplus.(x[1:1, :]), σ.(x[2:2, :]), σ.(x[3:3, :]))
end
```
Here `σ` is the activation function stored in the layer's struct, and `softplus` is hard-coded to a particular output to enforce non-negativity. But I would really like not to hard-code any functions, or the (multiple) indices at which they are applied. What's a good approach to this?
There was a similar topic a few months back that used a struct to apply an activation function only to a (hard-coded) subset of the outputs, but it reportedly didn't work for arrays of dimension greater than one (as I've confirmed). There might be a simple fix. If so, maybe one could replace the struct's `activationfn` field with an array of activation functions?
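To make the idea concrete, here is a rough sketch of what I'm imagining: a callable struct that stores one activation function per output row, so nothing is hard-coded. The names `PerRowActivation` and `fns` are hypothetical, and `softplus`/`σ` are defined inline here only to keep the sketch self-contained (in practice they would come from Flux/NNlib):

```julia
# Stand-ins for the Flux/NNlib activations, so the sketch runs on its own:
softplus(x) = log1p(exp(x))
σ(x) = 1 / (1 + exp(-x))

# Hypothetical layer: fns[i] is applied elementwise to row i of the input.
struct PerRowActivation{T<:Tuple}
    fns::T
end

function (a::PerRowActivation)(x::AbstractMatrix)
    # Broadcast each function over its row; vcat reassembles the matrix,
    # so this should work for 2-D arrays (batches), not just vectors.
    vcat((a.fns[i].(x[i:i, :]) for i in 1:length(a.fns))...)
end

layer = PerRowActivation((softplus, σ, σ))
y = layer(randn(3, 4))  # row 1 non-negative, rows 2–3 in (0, 1)
```

Storing the functions in a `Tuple` rather than a `Vector` would presumably also help type stability, but I don't know whether this plays nicely with Flux's gradient machinery.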