I’m working on a new theory of spiking neural networks (SNNs) in Julia; it’s still a work in progress and not released yet.
The problem with SNNs is that the field hasn’t reached a consensus the way conventional NNs have, so there is no standard framework for people to use…
As far as I know, people working with SNNs differ widely in their datasets, network structures, training algorithms, computational models, and goals.
Some are trying to build neural simulators that are as realistic as possible, to mimic the activity patterns seen in the brain.
Some train a conventional NN with standard backpropagation and then convert it into an SNN.
In terms of computation, there are event-based SNNs, time-step-based models, and rate-based models; some people also solve the dynamics with ODE solvers. Most implementations are CPU-only, and most frameworks can only simulate a very small number of neurons on a single computer.
In terms of learning algorithms, some are experimenting with variants of STDP or Hebb’s rule, on spike trains recorded either from devices or from biological brain data. Others apply STDP learning to deep-learning datasets like MNIST, and still others are trying to get backpropagation to work on SNNs.
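To make the STDP family concrete, here is a minimal sketch of a pair-based STDP update with exponential traces. All names, the trace formulation, and the parameter values are my illustrative assumptions, not code from any particular framework or from my project:

```julia
# Minimal sketch of pair-based STDP with exponential pre/post traces.
# Parameter names and values are illustrative assumptions.
function stdp_step!(w, pre_trace, post_trace, pre_spike, post_spike;
                    a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0)
    # decay the activity traces, then add this step's spikes
    decay = exp(-dt / tau)
    @. pre_trace  = pre_trace  * decay + pre_spike
    @. post_trace = post_trace * decay + post_spike
    # pre-before-post pairs potentiate; post-before-pre pairs depress
    # (w is a post × pre weight matrix, so updates are outer products)
    w .+= a_plus  .* (post_spike * pre_trace')
    w .-= a_minus .* (post_trace * pre_spike')
    return w
end

# one pre neuron and one post neuron: pre spikes at t=1, post at t=5
w = zeros(1, 1)
pre_trace, post_trace = zeros(1), zeros(1)
for t in 1:5
    stdp_step!(w, pre_trace, post_trace, [t == 1], [t == 5])
end
# pre fired before post, so the weight is potentiated (w[1,1] > 0)
```

Because the whole update is expressed as array broadcasts and outer products, the same rule maps naturally onto GPU arrays, which is part of why a time-step formulation is attractive.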
My personal goal for this project is a general theory of SNN learning, implemented with CUDA.jl for GPU acceleration. It’s a time-step-based model, so it’s not as realistic as the event-based or ODE-based models and has its limitations. But it’s fast, and it lets me run experiments at a much larger neuron scale; I believe the more interesting phenomena only show up once the neuron count is large enough.
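For readers unfamiliar with the time-step approach: each step advances every neuron's membrane potential by one Euler update and records which neurons crossed threshold. Below is a minimal sketch of such a leaky integrate-and-fire (LIF) step; every name, constant, and the dense weight matrix is an illustrative assumption rather than my project's actual code, and on the GPU the same array-style code would run on `CuArray`s via CUDA.jl largely unchanged:

```julia
# Minimal sketch of a time-step leaky integrate-and-fire (LIF) network
# update. Names and constants are illustrative assumptions.
function lif_step!(v, spiked, w, i_ext; dt=1.0, tau=20.0, v_th=1.0, v_reset=0.0)
    # recurrent input from last step's spikes, plus external drive
    input = w * spiked .+ i_ext
    # forward-Euler update of the membrane potential: dv/dt = (-v + input)/tau
    @. v += dt * (-v + input) / tau
    # threshold crossing emits a spike; spiking neurons are reset
    @. spiked = v >= v_th
    @. v = ifelse(spiked, v_reset, v)
    return spiked
end

# tiny usage example: a 3-neuron chain 1→2→3, only neuron 1 driven externally
v = zeros(3)
spiked = falses(3)
w = [0.0 0.0 0.0; 25.0 0.0 0.0; 0.0 25.0 0.0]  # strong feed-forward weights
i_ext = [2.0, 0.0, 0.0]
spike_counts = zeros(Int, 3)
for t in 1:200
    lif_step!(v, spiked, w, i_ext)
    spike_counts .+= spiked
end
# activity propagates down the chain, so all three neurons fire
```

The appeal of this formulation is that one step is just a matrix–vector product plus elementwise broadcasts over the whole population, which is exactly the shape of computation GPUs are good at.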
But as an independent researcher, I’m making rather slow progress…