[ANN] NeuronBuilder.jl: A differentiable neuronal simulator

NeuronBuilder.jl is a lightweight package for building small networks of detailed, conductance-based neurons out of ion channels and synapses. The idea is that it’s easy to use it as a template and add your own ion channels and synapses, with your choice of dynamics.

What’s the point of this package?

  • It’s very natural to create neurons from ion channels. You can do it here without specifying large systems of complicated differential equations, just by concatenating sets of ion channels with a soma.

  • Parameter estimation usually relies on zero-order optimisation algorithms, which scale very poorly with the number of parameters.

  • Having a neuronal simulation that’s differentiable makes it possible to do sensitivity analysis and to construct cost functions (e.g. minimising certain features of interest, see the animation below :arrow_down:) with which to do parameter estimation.

  • NeuronBuilder uses ModelingToolkit to build a symbolic ODE system. This can then be solved with OrdinaryDiffEq or DifferentialEquations, taking advantage of the fast solvers they offer :+1:

  • It can also interface with other packages like ForwardDiff, making it suitable for differentiation! :star2: (See the sketch after this list for the general solve-and-differentiate workflow.)

  • Simulating single neurons with NeuronBuilder takes no more than 0.1 seconds, and differentiating in Julia is, as we all know, also fast :wink:
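
To make those last few bullets concrete, here is a minimal, generic sketch of the solve-and-differentiate workflow. It is not NeuronBuilder’s actual API: the model is a hand-written single-compartment Morris-Lecar neuron solved with OrdinaryDiffEq, with a trajectory summary differentiated by ForwardDiff with respect to the maximal conductances. The equations, parameter values, and the mean-voltage cost are illustrative stand-ins; NeuronBuilder’s job is to assemble the equivalent ODE system for you, symbolically, from channel components via ModelingToolkit.

```julia
using OrdinaryDiffEq, ForwardDiff, Statistics

# Morris-Lecar single compartment: leak, Ca, and K currents.
# p = (gCa, gK, gL) are the maximal conductances we differentiate with respect to.
function ml!(du, u, p, t)
    V, w = u
    gCa, gK, gL = p
    C, EL, ECa, EK, phi, Iext = 20.0, -60.0, 120.0, -84.0, 0.04, 90.0
    minf = 0.5 * (1 + tanh((V + 1.2) / 18))   # instantaneous Ca activation
    winf = 0.5 * (1 + tanh((V - 2) / 30))     # steady-state K activation
    tauw = 1 / cosh((V - 2) / 60)             # K activation time constant
    du[1] = (Iext - gL * (V - EL) - gCa * minf * (V - ECa) - gK * w * (V - EK)) / C
    du[2] = phi * (winf - w) / tauw
end

u0, tspan = [-60.0, 0.0], (0.0, 200.0)

# A simple differentiable summary of the trajectory (a stand-in for e.g. average calcium).
function loss(g)
    prob = ODEProblem(ml!, eltype(g).(u0), tspan, (g[1], g[2], g[3]))
    sol = solve(prob, Tsit5(); saveat = 0.5, abstol = 1e-8, reltol = 1e-8)
    mean(sol[1, :])                           # mean membrane potential
end

g0 = [4.0, 8.0, 2.0]                          # gCa, gK, gL
∇g = ForwardDiff.gradient(loss, g0)           # sensitivity of the cost to each conductance
```

The `eltype(g).(u0)` conversion is what lets ForwardDiff’s dual numbers propagate through the solver state.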

Example use case:

  • Because the neuron is differentiable we can interface this with MinimallyDisruptiveCurves.jl to find changes in maximal conductances that maintain average intracellular calcium over the trajectory timespan.
  • The parameters are the 7 maximal conductances of each ion channel.

[Animation: changes in the 7 maximal conductances, found with MinimallyDisruptiveCurves.jl, that preserve average intracellular calcium.]
Code coming soon here: MinimallyDisruptiveCurves.jl


Note:

If you want a more flexible platform for building neuron models from basic components, check out the more comprehensive package Conductor.jl; for an awesome C++/MATLAB neuronal simulator, see Xolotl. Both of these were valuable resources for the development of our new package.

15 Likes

How do you differentiate through the spikes? That seems really difficult.

1 Like

Biological neuronal spikes are really just very sharp curves when simulated at small timesteps, so they are differentiable (although many simulators do not treat them as such); I assume this is what @AndreaRH is doing here?

Very cool package! I love seeing this work being done in Julia; the promise of efficiently simulating biologically-plausible neurons is what drew me away from MATLAB to Julia in the first place :smile:

2 Likes

Amazing to see!

Using differentiation with https://diffeqflux.sciml.ai/dev/examples/prediction_error_method/ would likely be helpful in this case.

1 Like

Like @jpsamaroo said, technically, having higher accuracy in your ODE solution gives you a more accurate gradient. Here’s a relevant link about the numerics:

But for parameter estimation and MinimallyDisruptiveCurves, accurate gradients might not be necessary, as long as you have a differentiable cost function that’s meaningful.

Thanks! :slight_smile:

Yes, the longer-term reason for having this package was to explore which types of cost functions differentiably express behaviours we are scientifically interested in, such as bursting, spiking at a particular frequency, etc.

The PEM method is indeed really relevant. We actually have a person in our group working on an implementation of a specialised PEM method that takes advantage of the regularities in conductance-based neural models (see Feedback identification of conductance-based models - ScienceDirect).

The loss functions of e.g. L2 error on neural models have the same features (multiple spurious local minima, a very sharp global minimum) as in the pendulum example from the link you shared, and using PEM seems to prevent that from happening, for reasons explained more rigorously in the paper I linked.
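
As a toy illustration of that landscape (my own sketch, with a sinusoid standing in for a periodically spiking trace, and not showing PEM itself): the plain L2 loss over a frequency parameter oscillates, with many shallow local minima away from a sharp global one.

```julia
# L2 loss between a "data" trace at frequency ω0 and a model trace at frequency ω.
t = 0:0.01:50
ω0 = 2.0
data = sin.(ω0 .* t)
l2loss(ω) = sum(abs2, sin.(ω .* t) .- data)

# Sweep ω: many shallow local minima, one sharp dip at ω = ω0.
ωs = 1.0:0.001:3.0
losses = l2loss.(ωs)
ωs[argmin(losses)]   # ≈ 2.0, but a local optimiser started far away stalls in a side minimum
```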

They are. Without accurate gradients you tend to get stuck in worse local minima and such. More accurate gradients are very helpful in parameter estimation, and in fact I usually find that one needs to lower tolerances when doing estimation.

That’s interesting; it runs counter to my intuition. I would have thought some noise in the gradient means you bounce out of a bad local minimum, whereas a perfect gradient might get you stuck in it (ignoring second-order methods for now). For neural networks, stochastic gradient descent works pretty well.

I haven’t (yet) tried much parameter estimation where the objective function involves the solution of a differential equation. I guess there must be something different about the shape of the ‘typical’ loss functions, and/or the statistics of the errors from inaccurate gradients, that explains your comment. I don’t have an intuition as to what these could be, though… any relevant resources or intuitions you could provide? Thanks :slight_smile:

One bad gradient can be pretty disruptive. It can shoot you right out of the parameter space by accidentally being really large. This happens because when the tolerance is too loose the loss surface becomes bumpy, and bumps have very large gradients. So even with local minima, accurate gradients are necessary to ensure you don’t completely fly out, and then things like ADAM can help something with “accurate enough” gradients converge well. I don’t know of any good papers on this, but we have lots of examples (especially in clinpharm from Pumas) demonstrating this phenomenon.
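
A quick way to see this effect (a generic sketch, not code from the thread): compute a ForwardDiff gradient of a trajectory-based loss at a loose and at a tight solver tolerance and compare. On oscillatory or spiky models the two can differ noticeably, because the solver error at loose tolerances is what makes the loss surface bumpy.

```julia
using OrdinaryDiffEq, ForwardDiff, Statistics

# Van der Pol oscillator; p[1] is the nonlinearity parameter μ.
vdp!(du, u, p, t) = (du[1] = u[2]; du[2] = p[1] * (1 - u[1]^2) * u[2] - u[1])

# Loss = mean squared first component over the trajectory, solved at a given tolerance.
function loss(p; tol)
    prob = ODEProblem(vdp!, eltype(p).([2.0, 0.0]), (0.0, 30.0), p)
    sol = solve(prob, Tsit5(); saveat = 0.1, abstol = tol, reltol = tol)
    mean(abs2, sol[1, :])
end

g_loose = ForwardDiff.gradient(p -> loss(p; tol = 1e-2), [3.0])
g_tight = ForwardDiff.gradient(p -> loss(p; tol = 1e-10), [3.0])
# g_loose and g_tight generally disagree; the tight-tolerance gradient is the one to trust.
```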

1 Like

Gradient clipping (Gradient Clipping Explained | Papers With Code) is a popular technique to avoid being flung far away when encountering a large gradient.
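
For reference, here is a minimal sketch of norm-based clipping inside a hand-rolled gradient descent loop (generic code, not tied to any package mentioned above): the update direction is rescaled whenever its norm exceeds a threshold, so one wild gradient can’t launch the parameters out of the region of interest.

```julia
using LinearAlgebra

# Rescale g so that its Euclidean norm is at most maxnorm.
clip(g, maxnorm) = (n = norm(g); n > maxnorm ? g .* (maxnorm / n) : g)

# Toy usage: gradient descent on a quadratic with clipped steps.
grad(θ) = 2 .* (θ .- [1.0, -2.0])     # gradient of ‖θ - θ*‖², with θ* = [1, -2]
θ = [10.0, 10.0]
for _ in 1:200
    θ .-= 0.1 .* clip(grad(θ), 1.0)   # each step moves at most 0.1 in parameter space
end
θ   # ≈ [1.0, -2.0]
```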

1 Like

Thanks for the intuition, both!