Spiking Neural Networks

It will. And technically you could do it right now; it's just not convenient yet, and the component interfaces need documentation. Initially the focus has been on small models with conductance-based Hodgkin–Huxley (HH) dynamics, but the type system in Conductor.jl is intentionally set up to allow substitution of arbitrary (e.g. simplified/artificial) dynamics. Going in the other direction, we'll be extending to spatial models (e.g. branched cable models of neurons), alternative ion channel model types (e.g. Markov/jump systems), etc.

1 Like

Cool!

On a related note, I recently tried to simulate a network of simple escape-noise neurons with DiffEq. Each neuron is modelled with an input current I(t) and a membrane potential u(t) that evolve according to

\begin{align}
\tau_s \dot I &= -I(t)\\
\tau_m \dot u &= -(u(t) - u_0) + R\,I(t)
\end{align}

The neurons spike stochastically with rate \rho(t) = f(u(t) - \vartheta). At every spike of neuron i, its membrane potential is reset to u_\mathrm{reset} and the synaptic current of each postsynaptic neuron j is increased by w_{ji}.

For the following proof of principle I used \tau_s = 2, \tau_m = 10, u_0 = R = 1, u_\mathrm{reset} = 0, and \rho(t) = 0.1\exp(u(t) - 1). This is the code I used:

using DifferentialEquations

# State layout: u[1:N] holds the synaptic currents I, u[N+1:2N] the membrane potentials.
function update(du, u, p, t)
    du[1:p.N] .= -0.5 .* u[1:p.N]                               # dI/dt = -I/τ_s with τ_s = 2
    du[p.N+1:2*p.N] .= 0.1 .* (1 .- u[p.N+1:2*p.N] .+ u[1:p.N]) # du/dt = (u_0 - u + R I)/τ_m with τ_m = 10
end
rate(i) = (u, p, t) -> 0.1 * exp(u[p.N+i] - 1)           # escape rate ρ_i = 0.1 exp(u_i - 1), based on the membrane potential
spike(i) = function (integrator)
    integrator.u[i+integrator.p.N] = 0                   # reset membrane potential of neuron i
    integrator.u[1:integrator.p.N] .+= integrator.p.W[i] # increment postsynaptic currents by w_{ji}
end

N = 50 # number of neurons
p = (N = N, W = [0.1 * randn(N) for _ in 1:N]) # W[i]: weights from neuron i to all neurons
prob = JumpProblem(ODEProblem(update, 0.1 * rand(2N), (0.0, 1000.0), p),
                   [VariableRateJump(rate(i), spike(i)) for i in 1:N]...)
sol = solve(prob)

It works, but it doesn’t scale: already for N = 100 neurons, creating the JumpProblem takes a really long time (I aborted after a few minutes). Is there a better way of doing this?

VariableRateJumps are super duper expensive. I would use any other noise form over them if you need to scale.

1 Like

Well, I think that mathematically it is a variable-rate jump process. Is there another way to simulate this with DifferentialEquations without using VariableRateJump? (In practice we usually use some hand-crafted version of Euler–Maruyama, but I wanted to give DifferentialEquations a try here.)

Crude SDE approximations will be a lot faster. We can make VariableRateJump a lot faster, but it is always going to be a computationally hard mechanism. Modeling those as regular jumps with a tau-leaping approximation could also be better.
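
For reference, here is a minimal sketch of the kind of hand-crafted fixed-time-step scheme mentioned above, applied to the same escape-noise network: a plain forward-Euler discretization where each neuron fires within a step with probability ρ(u)·dt (valid for ρ·dt ≪ 1). The function name simulate, the step size dt, and the keyword defaults are illustrative assumptions rather than any package API; it trades the exactness of the jump process for a per-step cost that is linear in N.

# Minimal sketch: forward-Euler integration with Bernoulli spiking per step.
function simulate(; N = 50, T = 1000.0, dt = 0.1,
                  τs = 2.0, τm = 10.0, u0 = 1.0, R = 1.0, ureset = 0.0,
                  W = 0.1 * randn(N, N))        # W[j, i]: weight from neuron i to neuron j
    I = 0.1 * rand(N)                           # synaptic currents
    u = 0.1 * rand(N)                           # membrane potentials
    spikes = Tuple{Float64,Int}[]               # recorded (time, neuron) pairs
    for t in 0.0:dt:T
        I .+= dt .* (-I ./ τs)                  # Euler step for the currents
        u .+= dt .* ((u0 .- u .+ R .* I) ./ τm) # Euler step for the potentials
        ρ = 0.1 .* exp.(u .- 1)                 # escape rates
        for i in findall(rand(N) .< ρ .* dt)    # neurons that fire in this step
            u[i] = ureset                       # reset membrane potential
            I .+= W[:, i]                       # increment postsynaptic currents
            push!(spikes, (t, i))
        end
    end
    return spikes
end

spikes = simulate(N = 100)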

1 Like

Hi,

Do you have a working version of this network by any chance? I am planning to write it but if one is already available… :wink:

I don’t have implementations of the networks from the Bellec et al. paper. But I used this approach with toy networks to compare to other fitting methods in the presence of hidden neurons (this might end up in a follow-up to Fitting summary statistics of neural data with a differentiable spiking network simulator). What exactly are you interested in?

The code (Python) that comes with that paper is fairly slow. I am using it for some research, and I wanted to try a Julia version to see if there is any speed gain.

Hello,

I am adding to this discussion because it seems like the right place.

Would any of you be interested in preparing a Birds of a Feather / interest group session on SNNs at the next JuliaCon?

I am about to propose it, but I would be happy to prepare it in collaboration with someone. If you are interested, contact me and I will forward you the short text I am about to submit for the conference.

Best,

1 Like

I would attend a BoF on computational neuroscience.

I’ve submitted a JuliaCon talk proposal titled Training Spiking Neural Networks in pure Julia. We plan to make the corresponding code public in the next month or so, after we submit a manuscript.

3 Likes

Your project sounds very interesting, but it is indeed a talk, and quite specific to one type of network and supervised learning scheme.
Instead, I was thinking about a broader discussion that reflects the opening of this thread. The structure of the opening talk would look like this:

  • What an SNN simulator in Julia is supposed to achieve (experimenting with new neuron models and protocols versus large-scale simulations): can Conductor.jl cover the whole spectrum of simulation types, or would a different, compatible package better do the job?

  • What would such a simulator look like? (I personally really like the ideas behind AStupidBear/SNN [2].) How do you instantiate new models and define the equations so that it stays flexible and fast?

  • And how do you achieve it? Is DifferentialEquations.jl support the best choice for all models, or, when we move towards more abstract types of neurons, are we better off with time-step-based models? And can we use shared-memory parallelism or CUDA in that case?

I think there are several people more competent than me to open this discussion (like many of the people who have answered in this thread), but I would really like to build it up and see whether we can reach a community consensus on what is worth working on.

So I will wrap up the discussion that has been going on here, and I hope that the same people will join during the BoF. I will keep this thread posted on the talk I intend to prepare to open the discussion, so that all the relevant discussion points stay collected in one place.

Best,
Alessio

[1] W. Nicola and C. Clopath, ‘Supervised learning in spiking neural networks with FORCE training’, Nature Communications, vol. 8, no. 1, p. 2208, Dec. 2017, doi: 10/gcr4j2.
[2] AStupidBear/SpikingNeuralNetworks.jl: Julia Spiking Neural Network Simulator, GitHub.

I have proposed this BoF for JuliaCon 2022.

As said in previous posts, I would be happy to collaborate with others in building up the introductory talk.
I will keep you updated on the content of the introductory talk in the coming months.

4 Likes