Hello there! I’m somewhat new to Julia. I’m scoping out a spiking neural network/ML project, and I’ve been using the Brian 2 simulator a lot in Python. The project would consist of training recurrent spiking neural networks to perform a task and then experimenting on them. I’m wondering if it would be worth it to train my networks (probably hundreds of times for different tasks) in Julia before transferring the weights over to a Brian network to do the rest of the tricky stuff already implemented there. My question is: would this be worth it over sticking with Brian the whole time? Brian compiles code with Cython beforehand, and the best solvers it uses appear to be GSL (Numerical integration — Brian 2 2.5.1 documentation). I’ve seen the benchmarks showing a giant improvement over SciPy/Numba, but I’m wondering how Cython/GSL might fare, since GSL isn’t in those benchmarks. All I’ve found is a blog post by Chris Rackauckas which suggests GSL just wasn’t worth including in those benchmarks.
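To make the question concrete, the handoff I’m imagining is just dumping the trained weights to disk and rebuilding the connectivity in Brian 2. Here’s a rough Python sketch of what I mean (the weight matrix is fake, and the Brian side is only in comments since it depends on the rest of the model):

```python
import numpy as np

# Stand-in for a trained weight matrix coming from the Julia side
# (there it could be written out with e.g. NPZ.jl or DelimitedFiles).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(100, 100))   # rows = presynaptic, cols = postsynaptic
W[rng.random(W.shape) > 0.1] = 0.0           # keep ~10% connectivity

# Brian 2's Synapses.connect takes explicit (i, j) index arrays,
# so flatten the dense matrix into sparse (pre, post, weight) triplets.
pre_idx, post_idx = np.nonzero(W)
weights = W[pre_idx, post_idx]

np.savez("trained_weights.npz", i=pre_idx, j=post_idx, w=weights)

# On the Brian side (sketch only, not run here):
# from brian2 import *
# data = np.load("trained_weights.npz")
# S = Synapses(G, G, model="w : 1", on_pre="v += w")
# S.connect(i=data["i"], j=data["j"])
# S.w = data["w"]
```

So the transfer itself seems mechanical; my question is really about the training loop on the Julia side.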
Since Brian already uses Cython, the function evaluation itself will have somewhat similar performance (inlining aside). But the Julia DiffEq stack is state of the art, so you can probably get some level of speedup; the only way to know for sure is to try it out.
If it turns out to be slower, that would also be interesting, because then it’s an area where we can find speedups.
The gap won’t be as large as against SciPy/Numba, because Cython avoids the overhead of the function calls, but you’d still be comparing gsl_rk8pd (which is not Dormand-Prince 8(5,3) but actually an older tableau than that, so less efficient by about 4x than even dop853 or DP8) to things like Vern9. The only reason anyone still uses that tableau, I’d guess, is that the Verner methods just literally didn’t exist back when that software was being written (and dop853 doesn’t fit nicely into most tableau-based implementations because of its odd error estimator, so most “generic” RK stepper implementations just left that one out entirely).
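To get a rough feel for how much the choice of method matters, here’s a minimal SciPy sketch (no GSL or Vern9 involved, just SciPy’s own RK45 vs DOP853) counting right-hand-side evaluations on a smooth test problem at a tight tolerance; the higher-order tableau needs far fewer:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Smooth, non-stiff test problem: simple harmonic oscillator y'' = -y.
def f(t, y):
    return np.array([y[1], -y[0]])

y0 = [1.0, 0.0]
nfev = {}
for method in ("RK45", "DOP853"):
    sol = solve_ivp(f, (0.0, 100.0), y0, method=method,
                    rtol=1e-10, atol=1e-12)
    nfev[method] = sol.nfev
    print(method, sol.nfev)  # DOP853 uses far fewer RHS calls here
```

The same ordering, with higher-order methods winning at tight tolerances on smooth non-stiff problems, is what the work-precision benchmarks referenced above show for the Verner methods against the Dormand-Prince family.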
FWIW, I think the bigger issue with Brian is it doesn’t make use of acausal modeling to eliminate and pre-tear the system, so we’re building a ModelingToolkit-based neuronal simulator to demonstrate the advantages there.
But yeah, it won’t be fully ready for quite some time, so unless you’re looking to help develop it I’d say use Brian for what you do until the paper is out on this.
Thank you Chris, this is really helpful. Sounds like Julia could be substantially faster than Brian, so I’ll consider it if the spiking networks aren’t too hard to code by hand or with Conductor in its current state.
Also, Conductor does seem like a really good idea. The Brian team has implemented by hand in Python some really hard things that, from what I understand, come much more easily in Julia: compilation and unit checking, for example.