How to stop taking gradient of neural network in Flux.jl?

This is my neural network:

using Flux

net = Chain(Dense(6, 16, relu), Dense(16, 16, relu), Dense(16, 8))

Consider that I have a function like:

f(x) = sum(net(x))

If I want to stop taking the gradient of f(x) with respect to x, I only need to run this line of code:

Zygote.@nograd f
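
For reference, this is roughly what that gives me (a minimal sketch; the random input is just a placeholder, and the exact nothing/zero behaviour may depend on the Zygote version):

using Flux, Zygote

net = Chain(Dense(6, 16, relu), Dense(16, 16, relu), Dense(16, 8))
f(x) = sum(net(x))
Zygote.@nograd f          # Zygote now treats calls to f as non-differentiable

x = rand(Float32, 6)
gradient(f, x)            # (nothing,) -- no gradient flows back to x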

How could I stop taking the gradient of f(x) with respect to the weights and biases of net?

Just don’t include the net params in your call to gradient? Are there other parameters that should still be differentiated? An MWE (or minimal intended example, in this case) would be appreciated.
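
For example, something like this (a minimal sketch, run in a fresh session without the @nograd above; x and the random input are just placeholders):

using Flux, Zygote

net = Chain(Dense(6, 16, relu), Dense(16, 16, relu), Dense(16, 8))
f(x) = sum(net(x))

x = rand(Float32, 6)

# Explicit style: only the argument x is differentiated; net's weights and
# biases are captured as constants because they are not arguments to gradient.
dx, = gradient(f, x)

# Implicit-parameter style: put only x in Params and leave out Flux.params(net),
# so no gradients are accumulated for the network's weights and biases.
gs = gradient(() -> f(x), Flux.params(x))
gs[x]   # gradient with respect to x only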
