Zygote gradients for functions with Array output

My basic task is to get the derivatives of a function that outputs an array, with respect to each of the input parameters. Below is a toy MWE to illustrate the point. The desired result is the derivative of each vector element. So far, the best solution I’ve found is to take the gradient of each vector element one by one, as shown in the example.
Is there a better way to do this?
The real goal is to include the complicated 2D equivalent of toy() as part of a neural net in Flux, so I’d like the derivatives to just work automatically behind the scenes, but maybe I need to code a special implementation of the loss function and its derivatives.

using Zygote

function toy(x)
    t = [x * i for i in 1:10]
    return t
end
# check the function output if one wishes
toy(2);
# evaluate the gradient of each vector component wrt x, at x = 2:
toy_grads = [Zygote.gradient(x -> toy(x)[i], 2) for i in 1:10]
# here toy_grads will have the expected values (each element is a 1-tuple holding the derivative)
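
For reference, newer releases of Zygote export a jacobian helper that computes all of these element-wise derivatives in one call; a minimal sketch, assuming a Zygote version that provides it (the exact return shape is worth double-checking against the docs):

using Zygote

# For a scalar argument, Zygote.jacobian returns a tuple whose first entry is the
# vector of ∂toy(x)[k]/∂x; for the toy example it should come out as [1.0, 2.0, ..., 10.0]
J, = Zygote.jacobian(toy, 2.0)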

This sounds more like a Jacobian than a gradient, and a quick check suggests that Zygote has some problems with Jacobian calculations.

If the length of the output vector is greater than the length of the input, forward-mode differentiation may be more useful.
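
A minimal sketch of that idea, using ForwardDiff.jl as the forward-mode tool (the specific package is my assumption; only forward-mode differentiation in general was mentioned):

using ForwardDiff

# ForwardDiff.derivative handles scalar-input, array-output functions and returns the
# elementwise derivatives; for toy this should again be [1.0, 2.0, ..., 10.0]
ForwardDiff.derivative(toy, 2.0)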

I agree it is more like a Jacobian; there was nothing in the Zygote documentation that seemed to help with that approach, though, perhaps for the reasons you cite.

In my real-life application, the function of interest generates an array of time series (hence the 2D aspect), and in the spirit of ∂P I’d like to generate the required derivatives directly with a simple call to a Zygote function, since Zygote seems to be where all the action is these days.
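
To make the 2D case concrete, the element-by-element approach from the MWE carries over directly; a minimal sketch, with toy2d as a made-up stand-in for the real time-series generator:

using Zygote

# hypothetical stand-in: a 5×3 array whose entry (i, j) is x * i * j
toy2d(x) = x .* ((1:5) * (1:3)')

# derivative of each matrix element wrt x, evaluated at x = 2
toy2d_grads = [Zygote.gradient(x -> toy2d(x)[i, j], 2)[1] for i in 1:5, j in 1:3]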

And I’d like to be able to incorporate such derivative calculations into Flux tasks eventually.

I also wanted to check whether there was something I was missing conceptually.