My basic task is to get the derivatives of a function that outputs an array, with respect to each of the input parameters. Below I copy a toy MWE to illustrate the point. The desired result is the derivative of each element of the output vector with respect to the input (effectively a Jacobian). So far, the best solution I've found is to take the gradient of each vector element one by one, as shown in the example.
Is there a better way to do this?
The real goal is to include the complicated 2D equivalent of toy() as part of a neural net in Flux, so I'd like the derivatives to just work automatically behind the scenes. Maybe I need to code a special implementation of the loss function and its derivatives instead? (A sketch of what I mean is below the MWE.)
```julia
using Zygote

function toy(x)
    t = [x * i for i in 1:10]
    return t
end

# check the function output if one wishes
toy(2)

# evaluate the gradient of each vector component wrt x, at x = 2:
toy_grads = [Zygote.gradient(x -> toy(x)[i], 2) for i in 1:10]
# here toy_grads will have the expected values
```
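For comparison, Zygote.jacobian (available in Zygote 0.6 and later) seems to compute all of the per-element derivatives in a single call; here's a minimal sketch against the toy function above, where the stated shape and values are my own assumption:

```julia
using Zygote

toy(x) = [x * i for i in 1:10]

# jacobian returns a tuple with one Jacobian per argument;
# for a length-10 output and a scalar input it should be a 10×1 matrix.
J, = Zygote.jacobian(toy, 2)
# Expect J[i, 1] == i, matching the one-by-one gradients above.
```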
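And for the Flux part of the question: my understanding is that once toy() feeds into a scalar loss, reverse-mode AD differentiates through it automatically, so the per-element gradients may not be needed at all. A minimal sketch, where the trainable parameter p and the target y are made up purely for illustration:

```julia
using Flux

toy(x) = [x * i for i in 1:10]

# Hypothetical trainable parameter and target, standing in for a real model.
p = [2.0]
y = collect(2.0:2.0:20.0)

# Scalar loss through toy(); Flux/Zygote handle the vector output internally.
loss(p) = sum(abs2, toy(p[1]) .- y)

grads = Flux.gradient(loss, p)   # tuple holding the gradient wrt p
```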