Simple optimization problem via flux

I have a prediction function called NTCP, which takes several fixed input parameters: dpf, nf, ab. It also takes the parameters I am looking to optimize (n, TD, Y) as well as my input data dvh_all. The fixed parameters and the input data are 1xL arrays, and the parameters to optimize are single float constants. So I call this function as

NTCP.(dvh_all, dpf, nf, ab, n, TD, Y)
which returns a 1xL array of outputs.
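For concreteness, here is the call shape with a stand-in NTCP body (the real model is more involved; ntcp_stub and the random inputs below are just placeholders to show the broadcasting over the 1xL arrays):

```julia
# Stand-in for the real NTCP model: maps its inputs to a probability in (0, 1).
# dvh, dpf, nf, ab come from the 1xL arrays; n, TD, Y are scalar parameters.
ntcp_stub(dvh, dpf, nf, ab, n, TD, Y) = 1 / (1 + exp(-(dvh * n + dpf * nf * ab - TD) / Y))

dvh_all = rand(1, 5)                          # example 1x5 input data
dpf, nf, ab = rand(1, 5), rand(1, 5), rand(1, 5)
n, TD, Y = 0.5, 0.8, 1.0                      # scalar parameters to fit

# Broadcasting the scalar parameters against the 1xL arrays gives a 1xL output.
probs = ntcp_stub.(dvh_all, dpf, nf, ab, n, TD, Y)
```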

This function returns the probability of a binary outcome (1 for certainly yes, 0 for certainly no). I also have some training data, y, and I am looking to find the parameters n, TD, Y that minimize the cross-entropy loss. I am confused about the syntax for doing this.

In Flux, I know that for a model m (e.g. a neural net) it is easy to specify the trainable parameters as just params(m), but how would I do this for a plain function?

Currently I am trying something like

loss(dvh_all, y) = crossentropy(NTCP.(dvh_all, dpf, nf, ab, n, TD, Y), y)

followed by

train!(loss, [n,TD,Y], dvh_all, opt; cb)

but I’m quite sure this syntax is incorrect. I’m not sure where Flux takes in my training data, or how to write this so that Flux actually trains only the parameters I want according to this loss function.
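For reference, my full (non-working) attempt looks roughly like this; the choice of ADAM and the callback are just my guesses at the remaining pieces train! expects, not things I know to be right:

```julia
using Flux
using Flux: crossentropy, train!

# Scalar parameters I want Flux to optimize (starting guesses):
n, TD, Y = 0.5, 50.0, 1.0

loss(dvh_all, y) = crossentropy(NTCP.(dvh_all, dpf, nf, ab, n, TD, Y), y)

opt = ADAM()

# This is the part I'm unsure about: I doubt a plain vector of scalars is
# what train! wants as its parameter argument, and I don't see where the
# training pair (dvh_all, y) is supposed to go.
train!(loss, [n, TD, Y], dvh_all, opt; cb = () -> @show loss(dvh_all, y))
```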

Can anyone explain how, in this case, I should specify the parameters and the data when calling Flux.train!? I feel like there is a one-line solution that I’m missing, as I’m new to Julia.