using Printf

# This is approximately (but not entirely)
# what the default `show(::Float64)` does.
const default_float_fmt = "%.16g"

const float_fmt = Ref(default_float_fmt)

function set_float_print_fmt(fmt_str)
    float_fmt[] = fmt_str
    fmt = Printf.Format(fmt_str)
    Main.eval(:(
        Base.show(io::IO, x::Float64) = Printf.format(io, $fmt, x)
    ))
    return nothing
end
# We don't specify `::MIME"text/plain"` in `show`, so
# that we also get compact floats in composite types
# (like when displaying a NamedTuple).
# The disadvantage is that we cannot use `show(x)`
# to see the full float repr anymore.

set_print_precision(digits::Integer = 3) = set_float_print_fmt("%.$(digits)G")
set_print_precision(digits_and_type)     = set_float_print_fmt("%.$(digits_and_type)")

# Something like `with_print_precision(3) do … end` wouldn't work:
# it can't be a function, must be a macro.
macro with_print_precision(p, expr)
    quote
        local oldfmt = float_fmt[]       # read the current format at runtime, not at macro-expansion time
        set_print_precision($(esc(p)))
        local result = $(esc(expr))
        set_float_print_fmt(oldfmt)      # restore the previous format
        result
    end
end
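For reference, a quick sketch of intended usage (the REPL output below is what I'd expect from the %.3G and %.8G formats, not a verified transcript):

julia> set_print_precision(3)        # i.e. "%.3G"

julia> 1 / 3
0.333

julia> (a = 1/3, b = 2/7)            # compact inside composite types too
(a = 0.333, b = 0.286)

julia> @with_print_precision 8 println(1 / 3)
0.33333333

julia> 1 / 3                         # the previous format is restored
0.333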
It’s missing documentation.
And I’m sure Julia experts could come up with something better: `eval`ing type-pirating `show` methods in `Main` is not ideal.
What do you mean? Quarto+reveal has grids. I’ve been using it a bit recently and I’m liking it. Of course some things are harder than in normal PowerPoint but overall I think it’s saving me time. With a little extra web-dev CSS knowledge, it’s easier to make custom formatting consistent too (e.g., make a CSS class and stick it on divs/spans)
@tfiers You say you switched to Julia to use units in neuron simulations…but there’s already Brian in Python. Is there a reason you don’t use Brian? I’ve toyed with the idea of using Julia if I want to do something big that would be slow with Brian, but then I’d have to implement a bunch of things from scratch. Any thoughts?
I love Brian and the people that made it!
It’s high quality software and extremely well documented.
I don’t personally use it for my own work because:
I wanted to develop a spiking neural network simulator myself, to understand how they work (cue Feynman’s “what I cannot create, I do not understand”). It’s also excellent procrastination bait while doing a PhD.
Brian’s DSL (for specifying e.g. differential equations) takes its input as strings, i.e. it is not syntax-highlighted. The equations and functions you specify also cannot be easily tested and re-used on their own. Not a big issue, but one I still found important. With fast native Julia functions (and, optionally, macros) you can specify your model in ‘real’ code (a small sketch of what I mean follows below). Compare Hodgkin-Huxley in Brian and this Julia HH specification
Btw, I first tried to use DifferentialEquations.jl, and also the neuron simulator Conductor.jl that is built on it (with ModelingToolkit.jl). But it seems those libraries are not designed with very many discrete events (spikes) in mind.
My implementation with callbacks was slow, even after reading and watching a bunch of SciML tutorials on performance. (Plus, there are the long-ish package load times, and the frequent long re-precompiles of the big SciML dependency tree – neither of which you have with a small bespoke package.)
A dumb handwritten Euler integration loop (similar to what Brian does) gave much faster results that were good enough.
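For illustration, here’s a minimal sketch of what I mean by both points (this is not my actual simulator code, just the standard Izhikevich model written as plain Julia plus a fixed-step Euler loop with the discontinuous spike reset):

# Izhikevich neuron dynamics as an ordinary Julia function
# (v: membrane potential in mV, u: recovery variable, I: input current).
izh(v, u, I) = (0.04v^2 + 5v + 140 - u + I,    # dv/dt
                0.02 * (0.2v - u))             # du/dt

function simulate(; I = 10.0, dt = 0.1, T = 1000.0)
    nsteps = round(Int, T / dt)
    v, u = -65.0, -13.0
    vs = Vector{Float64}(undef, nsteps)
    spiketimes = Float64[]
    for i in 1:nsteps
        dv, du = izh(v, u, I)
        v += dt * dv                # forward Euler step
        u += dt * du
        if v ≥ 30.0                 # spike threshold: apply the artificial, discontinuous reset
            push!(spiketimes, i * dt)
            v = -65.0               # Izhikevich’s `c` parameter
            u += 8.0                # Izhikevich’s `d` parameter
        end
        vs[i] = v
    end
    return vs, spiketimes
end

vs, spiketimes = simulate()

The equations live in a normal, syntax-highlighted function, so they can be unit-tested and reused outside the simulation loop.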
What I’m after is more like being able to put text, figure, image, callout, or any other block anywhere at any size. If you used Slides, you may see how it can be achieved there in Reveal.js.
It is not doable programmatically, so I think 99% of the effect would be achieved using grids (like the guides in PowerPoint, etc.). There is the ability to use Reveal.js guides, but the problem is the size: you can’t choose arbitrary sizes for blocks.
Interesting…I was going to ask about this. I was excited about ModelingToolkit/Conductor.jl, so that’s disappointing. I can relate to the PhD procrastination…if I were ruthless in my time management, I probably wouldn’t be dabbling with Julia at all.
If my Brian work gets cumbersome enough, I may reach out about Firework.jl. Out of curiosity, how much faster has it usually been than Brian?
Spiking neuron models are stiff, right? You’d probably benefit from using an implicit integration method. And 2nd- and higher-order integrators are a pretty marginal increase of complexity.
I have ultra-simple 2nd, 3rd, and 4th-order implicit and explicit Runge-Kutta integrators I wrote for teaching if you want them.
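For reference, the classic explicit 4th-order Runge-Kutta step fits in a few lines; this is a generic sketch rather than the exact code I use for teaching:

# One step of the classic explicit 4th-order Runge-Kutta method
# for du/dt = f(u, t), with step size h.
function rk4_step(f, u, t, h)
    k1 = f(u,            t)
    k2 = f(u + h/2 * k1, t + h/2)
    k3 = f(u + h/2 * k2, t + h/2)
    k4 = f(u + h * k3,   t + h)
    return u + h/6 * (k1 + 2k2 + 2k3 + k4)
end

# Quick check on du/dt = -u (exact solution exp(-t)):
f(u, t) = -u
u1 = rk4_step(f, 1.0, 0.0, 0.1)
abs(u1 - exp(-0.1))   # on the order of 1e-7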
Hi John, thank you for your input. That sounds interesting!
I am no differential equations expert. But I think only Hodgkin-Huxley-type models are really stiff? (Namely, during a spike; they are quite linear when below the spiking threshold).
Simpler models ‘fake’ the spike. Two common 2D neuron models are AdEx and the Izhikevich neuron (implemented above). They both run away to ∞ above their spiking threshold, modelling the spike upstroke. But an artificial reset is introduced at a certain point: the voltage is reset, discontinuously, to well below the threshold again.
The simplest and most common neuron model (the leaky-integrate-and-fire or LIF neuron) does not even model the upstroke, and is completely linear.
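For concreteness, the textbook LIF form (notation varies between sources) is linear below threshold, with only the reset rule being discontinuous:

$$ \tau_m \frac{dV}{dt} = -(V - E_L) + R\,I(t), \qquad V \ge V_\theta \;\Rightarrow\; V \leftarrow V_\text{reset} $$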
@kjohnsen, I haven’t done a performance comparison yet. Would be interesting to see!
Also, if you are modelling small neuron networks, definitely try Conductor.jl!
Thanks for the info and links. I’m interested and will try my hand at simulating these, if only to expand my range of applications in teaching!
FWIW, I often find myself opting for roll-my-own implementations of algorithms: because oddities in the problem (like discontinuities in the ODE) challenge library algorithms, because I need to share algorithm internals back to the calling function, or because learning a library’s general framework and conventions is harder than implementing a small subset myself.
But I do far less rolling-my-own since switching from C++ to Julia, due to Julia’s vast improvement in interoperability of libraries.
(getting a bit far from the OP’s intent, perhaps this should be split off)
That right there is the crux of it. When you say “A dumb handwritten Euler integration loop (similar to what Brian does) gave much faster results that were good enough”, I’m sure that’s correct. That doesn’t contradict DifferentialEquations.jl being efficient, of course; the issue is that any definition of efficiency has some implicit definition of accuracy. If one’s definition of accuracy is loose, then a simple Euler implementation will win every single time, because it takes the fewest calculations.
But in a lot of cases for spiking neural networks, 2 digits of accuracy isn’t even required. “It looks vaguely reasonable when I plot it”, the eyeball test, is often the test for accuracy, and if that’s the case then yeah, you cannot do better than Euler.
So it’s more of a modeling choice.
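To make that concrete with a toy example (plain Julia, no libraries): forward Euler on du/dt = -u over [0, 1]. Being a first-order method, you pay roughly 10× the work for every extra digit:

# Forward Euler on du/dt = -u over t ∈ [0, 1]; the exact answer is exp(-1).
function euler_error(dt)
    u = 1.0
    for _ in 1:round(Int, 1 / dt)
        u += dt * (-u)
    end
    return abs(u - exp(-1.0))
end

euler_error(0.1)    # ≈ 0.019  (about 2 digits of accuracy)
euler_error(0.01)   # ≈ 0.0018 (10× the steps for ~10× less error)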
These are now example tutorials in the docs:
And that said…
What’s going on with ModelingToolkit.jl/Conductor.jl is that indeed there is a lot to offer once you get to this part of it. The nonlinear tearing can improve the solve in some nice ways. We’re creating new primitive problems, like the ImplicitDiscreteProblem for f(u_{n+1}, u_n, p, t_n) = 0, which is then a good primitive for building a lot of the simple methods in a way that can use all of the implicit-integrator tricks. That and Rosenbrock23 are a nice combo, so there will be more to say here soon.
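(As a concrete illustration of that form: for du/dt = g(u, p, t), backward Euler with step size h is one such residual, solved for u_{n+1} at each step.)

$$ f(u_{n+1}, u_n, p, t_n) \;=\; u_{n+1} - u_n - h\, g(u_{n+1}, p, t_n + h) \;=\; 0 $$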
BTW, you might want to check out the SimpleDiffEq.jl code. It’s a set of ultra-low-overhead implementations, but also good for teaching. See for example the GPUATsit5 (GPU-kernel compatible adaptive Tsit5):
Of course it’s missing a lot (for example, no callbacks), but it’s a nice way to see a full, correct implementation with adaptivity and all. There’s an RK4 too:
We should probably add a SimpleEuler, for completeness and because sometimes a 0-overhead Euler can be useful.
Anyways, there’s still a lot more libraries can offer by offering new primitives and new forms.
On accuracy: very true that ‘the eyeball test’ is often the only requirement.
Something more involved that’s often done is to try and reproduce the voltage trace of a real neuron that’s injected with a known current signal.
Would be interesting to see how different integration schemes compare here.
(I’m sure there’s literature about this somewhere – but that’s not my PhD topic alas so I have no refs to provide here).
EDIT: the more obvious comparison is a ‘ground truth’ integration with an advanced algorithm and minuscule timestep, I suppose
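Roughly something like this, I imagine (a sketch only; the solver choice and tolerances are just my stand-in for ‘advanced algorithm, minuscule error’, and I use the subthreshold Izhikevich dynamics without the reset to keep the comparison simple):

using DifferentialEquations

# Subthreshold Izhikevich dynamics (no input current, no reset).
g(u, p, t) = [0.04u[1]^2 + 5u[1] + 140 - u[2],
              0.02 * (0.2u[1] - u[2])]

u0    = [-65.0, -14.0]
tspan = (0.0, 100.0)
prob  = ODEProblem(g, u0, tspan)

# 'Ground truth': a high-order adaptive method with tiny tolerances.
ref = solve(prob, Vern9(); abstol = 1e-12, reltol = 1e-12)

# Dumb fixed-step forward Euler, as in my hand-rolled simulator.
function euler(g, u0, tspan, dt)
    u, t = copy(u0), tspan[1]
    for _ in 1:round(Int, (tspan[2] - tspan[1]) / dt)
        u += dt * g(u, nothing, t)
        t += dt
    end
    return u
end

u_euler = euler(g, u0, tspan, 0.1)
maximum(abs.(u_euler .- ref(tspan[2])))   # worst-case error at the final time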
This is exciting. Looking forward!
Agree. (Maybe from @kjohnsen’s initial question about Brian)