Simple NN from scratch

I just published simpleNN.jl, a simple neural-network example script from the original post: macOS Python with numpy faster than Julia in training neural network.

The original answer from @ChrisRackauckas contains a lot of optimization tips, which are implemented in the first version of the script. Versions 2-4 show additional sequential changes and improvements. The key point: when I switch to matrix multiplication (v3), a training epoch becomes about 10x faster.
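To illustrate the kind of change that gives v3 its speedup, here is a minimal sketch of a dense-layer forward pass done per sample versus as one whole-batch matrix multiplication. The layer sizes and function names are hypothetical, not taken from simpleNN.jl:

```julia
using LinearAlgebra

W = randn(32, 784)   # weights: 32 hidden units, 784 inputs (assumed sizes)
b = randn(32)        # bias
X = randn(784, 1000) # batch of 1000 samples stored as columns

# v1/v2 style: process one sample at a time
function forward_loop(W, b, X)
    H = similar(X, size(W, 1), size(X, 2))
    for j in 1:size(X, 2)
        H[:, j] = W * X[:, j] .+ b
    end
    return H
end

# v3 style: one matrix multiplication for the whole batch
forward_matmul(W, b, X) = W * X .+ b

forward_loop(W, b, X) ≈ forward_matmul(W, b, X)  # same result
```

Both versions compute the same activations, but the batched form makes a single call into optimized BLAS instead of many small matrix-vector products, which is where the large per-epoch speedup comes from.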


Does anyone know how the same problem coded in Flux would compare in terms of time/efficiency?

This is the closest example (with quite different architecture):

However, I didn't test it with batch processing.
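For anyone who wants to try the comparison, here is a rough sketch of what a Flux version of a small MLP with batched training might look like. The layer sizes, optimiser, and hyperparameters are assumptions for illustration, not the architecture from the original post; Flux batches over the second dimension, so it gets the same matrix-multiplication benefit as v3:

```julia
using Flux

# Hypothetical architecture: 784 -> 32 -> 10 classifier
model = Chain(
    Dense(784 => 32, relu),
    Dense(32 => 10),
    softmax,
)

X = randn(Float32, 784, 64)            # one mini-batch of 64 samples
Y = Flux.onehotbatch(rand(0:9, 64), 0:9)

loss(m, x, y) = Flux.crossentropy(m(x), y)
opt = Flux.setup(Adam(1f-3), model)

for epoch in 1:10
    grads = Flux.gradient(m -> loss(m, X, Y), model)
    Flux.update!(opt, model, grads[1])
end
```

Timing this loop with `@time` (after a warm-up call to exclude compilation) against the hand-written versions would answer the question above.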
