Which function in Flux.jl is similar to "detach()" in PyTorch?

When I train a network with PyTorch, sometimes I do not want a tensor to participate in the backward gradient pass. However, I cannot find a function similar to “detach()” in Flux.jl. When I train a network in Julia, how can I keep a tensor from participating in gradient backpropagation? :worried:

`ignore_derivatives` from ChainRulesCore is probably what you are looking for.
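For what it's worth, here is a minimal sketch of how it can be used inside a Flux loss function; the model, data, and loss below are made up purely for illustration:

```julia
using Flux
using ChainRulesCore: ignore_derivatives

# Toy model and data, just for the example.
model = Dense(4 => 2)
x = rand(Float32, 4)
y = rand(Float32, 2)

function loss(m, x, y)
    # Everything computed inside this block is treated as a constant
    # by the AD system, analogous to calling tensor.detach() in PyTorch:
    # `scale` is used in the forward pass, but no gradient flows through it.
    scale = ignore_derivatives() do
        sum(abs, m(x))
    end
    sum(abs2, scale .* m(x) .- y)
end

grads = Flux.gradient(m -> loss(m, x, y), model)
```

There is also a macro form, `ChainRulesCore.@ignore_derivatives expr`, if you prefer that over the do-block.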


Yes, that is exactly the function I need! Thank you very much! :smiling_face_with_three_hearts: