Efficient Way of Taking First and Second Order Derivatives of a Neural Net

Hey everyone, hope you’re all doing well and staying safe! I’ll preface this by saying that I am still solidly a novice when it comes to both machine learning and Julia, so please bear with me here. I am also using Flux for my neural nets.

I’m working on a project that requires taking a whole bunch of derivatives of a simple feedforward neural network:

  • I need the first-order derivative with respect to every weight and bias of the FNN
  • I need the second-order derivative with respect to each input of the FNN

I’ve done a lot of digging and I’m aware of the `gradient` function, which I think might cover my first point, but I’m not entirely sure. Please let me know if that’s the case.
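
Something like this is what I had in mind for the first point. It’s just a rough sketch: the model, data, and loss are made up for illustration, and I’m using the implicit-parameters style (`Flux.params`), which I gather newer Flux versions replace with explicit `gradient(m -> ..., model)` calls.

```julia
using Flux

# Hypothetical toy network, inputs, and targets, just for illustration
model = Chain(Dense(2, 16, tanh), Dense(16, 1))
x = rand(Float32, 2, 10)   # 10 samples with 2 features each
y = rand(Float32, 1, 10)   # 10 targets

# Simple squared-error loss, defined by hand to keep the example self-contained
loss(x, y) = sum(abs2, model(x) .- y)

# Gradient of the loss with respect to every weight and bias of the model
ps = Flux.params(model)
gs = Flux.gradient(() -> loss(x, y), ps)

# gs[p] is the derivative array matching each parameter array p
for p in ps
    println(size(p), " => ", size(gs[p]))
end
```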

As for my second point, I’m stumped. I haven’t been able to find a particularly efficient way to do this, and it’s the main point of confusion I have with this project. Any suggestions would be greatly appreciated!

Thanks in advance for your time and patience!

I don’t think there’s anything particular about the second derivatives: you can still use backpropagation (i.e. the chain rule) with respect to the loss function you define for the network…
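
For example, something along these lines might work for the second derivatives with respect to the inputs. This is only a sketch with a made-up scalar-output model, and I’m assuming `Zygote.hessian` here, which nests forward-mode differentiation over the reverse-mode gradient:

```julia
using Flux, Zygote

# Hypothetical scalar-output network and a single input point, for illustration
model = Chain(Dense(2, 16, tanh), Dense(16, 1))
x = rand(Float32, 2)

# Wrap the network so it returns a scalar, which makes the Hessian well defined
f(v) = sum(model(v))

# Full Hessian of the output with respect to the input vector
# (Zygote.hessian differentiates the reverse-mode gradient with forward mode)
H = Zygote.hessian(f, x)

# Second derivative with respect to each individual input component
second_derivatives = [H[i, i] for i in eachindex(x)]
```

If you only need the diagonal (∂²f/∂xᵢ² for each input) rather than the full Hessian, you’d loop or batch this over your input points; computing the whole Hessian just to take its diagonal is the simple-but-not-optimal way to start.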
