I recently landed again on Chris Olah's post from 2015, and it reminded me of the blog post discussed here:
http://colah.github.io/posts/2015-09-NN-Types-FP/
At present, three narratives are competing to be the way we understand deep learning. There’s the neuroscience narrative, drawing analogies to biology. There’s the representations narrative, centered on transformations of data and the manifold hypothesis. Finally, there’s a probabilistic narrative, which interprets neural networks as finding latent variables. These narratives aren’t mutually exclusive, but they do present very different ways of thinking about deep learning.
This essay extends the representations narrative to a new answer: deep learning studies a connection between optimization and functional programming.
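To make that claim concrete, the linked post maps familiar architectures onto functional-programming combinators; for instance, it identifies an encoding recurrent network with a fold over the input sequence. Here is a minimal Haskell sketch of that correspondence. The cell below is a hypothetical toy standing in for a learned, parameterized function, not anything from a real network library:

    -- Illustrative stand-ins for the hidden state and one timestep of input.
    type State = [Double]
    type Input = [Double]

    -- A toy RNN cell: consume the current state and one input, produce the
    -- next state. In a real network this would be a differentiable function
    -- whose parameters are found by optimization.
    rnnCell :: State -> Input -> State
    rnnCell state input = zipWith (\s x -> tanh (s + x)) state input

    -- Encoding a sequence into a single representation is exactly a left fold.
    encode :: State -> [Input] -> State
    encode = foldl rnnCell

The post continues the pattern from there: a generating RNN has the shape of an unfold, and tree-structured networks generalize folds over trees.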