Gradient of NN not changing with different inputs

While playing around with some neural networks (NNs), I noticed the following behavior:

using Flux
using Random
using Zygote
using ForwardDiff


n = 1
m = 5
hidden = 10

x, y = rand(m), rand(n) # some data
model = Flux.Chain(Flux.Dense(m, hidden), Flux.Dense(hidden, n))

# Gradient with ForwardDiff
g = z -> ForwardDiff.gradient(w -> model(w)[1], z)
# Getting the weights of the model as an array
ps, re = Flux.destructure(model)

display(g(x)) # Checking with original data
display(g(ps[1:m])) # Checking with weights

# Gradient with Zygote
gs = Zygote.gradient(w -> model(w)[1], rand(m)) # Notice a different random vector
gs = Zygote.gradient(w -> model(w)[1], zeros(m)) # Now with zeros

Now, the result is always the same:

5-element Array{Float64,1}:
5-element Array{Float32,1}:
(Float32[-0.7097915, -0.14294693, -0.04312831, 0.28663906, -0.40465975],)
(Float32[-0.7097915, -0.14294693, -0.04312831, 0.28663906, -0.40465975],)

Is there a reason why this is the case? I am inclined to believe that the input is not
being evaluated at all.
Does this mean that the gradient is taken with respect to the weights of the model?
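To be clear, by "gradient with respect to the weights" I mean something like the sketch below, using the re function returned by Flux.destructure to rebuild the model from a flat parameter vector (untested beyond this minimal setup):

```julia
using Flux
using Zygote

m, hidden, n = 5, 10, 1
model = Flux.Chain(Flux.Dense(m, hidden), Flux.Dense(hidden, n))
x = rand(Float32, m)

# destructure returns the flat parameter vector and a rebuilder `re`
ps, re = Flux.destructure(model)

# Differentiate with respect to the parameters, holding the input x fixed
grad_w = Zygote.gradient(p -> re(p)(x)[1], ps)[1]
```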


Your model is linear, and a linear model has the same gradient everywhere.

To make the model nonlinear, pass an activation function to Dense as the third argument.
Otherwise it defaults to identity, and you get a linear model.
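A minimal sketch of the fix, assuming Flux's Dense(in, out, σ) signature and using tanh as an example activation; with a nonlinearity, the gradient now depends on the input:

```julia
using Flux
using Zygote

m, hidden, n = 5, 10, 1
# tanh as the third argument makes the first layer nonlinear
model = Flux.Chain(Flux.Dense(m, hidden, tanh), Flux.Dense(hidden, n))

g(z) = Zygote.gradient(w -> model(w)[1], z)[1]

# Two different inputs now give two different gradients
g1 = g(rand(Float32, m))
g2 = g(rand(Float32, m))
```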


That’s it. How could I have missed that? Thanks a lot!