Stop_gradient/detach equivalent in Flux

Hi,

Is there an equivalent to TensorFlow's stop_gradient / PyTorch's detach in Flux?

Thanks!

Rasmus

Hi,

I don't know if there is anything built in, but this issue has a good tip in the comments on how to do it yourself: https://github.com/FluxML/Zygote.jl/issues/175

Summary:

stop_gradient(f) = f()
Zygote.@nograd stop_gradient

# Now you can use it like this (note: does not work in REPL due to global scoping rules):
a = 3
b = 4
c = stop_gradient() do
     a + b
end
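
To see the trick in action, here is a minimal sketch (assuming Zygote is loaded; note that `Zygote.@nograd` is the API from the linked issue, and newer Zygote versions favor `ChainRulesCore.@non_differentiable`). The detached term is ignored when computing the gradient:

```julia
using Zygote

# Identity wrapper whose call is marked non-differentiable,
# so no gradient flows through its argument (or its closure).
stop_gradient(f) = f()
Zygote.@nograd stop_gradient

# y = x^2 + x^2, but the second term is detached:
# only the first x^2 contributes to the gradient.
f(x) = x^2 + stop_gradient() do
    x^2
end

g = Zygote.gradient(f, 3.0)[1]  # 2x = 6.0, not 4x = 12.0
```

Without the `stop_gradient` wrapper, the gradient at `x = 3.0` would be `4x = 12.0`; the `@nograd` marker makes Zygote treat the wrapped computation as a constant.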

Great! Thanks a lot, it made my code work 🙂 Example over here: Autoencoder for telecommunication (Constellation shaping)