Julia for Scientists: summary slides

I wrote some slightly monstrous code a while back (listed below), to have such a print-precision ‘context manager’.

Example usage:

julia> ℯ^99
9.889030319346946e+42

julia> @with_print_precision 3 begin
           display(ℯ^99)
       end
9.89E+42

julia> ℯ^99
9.889030319346946e+42

You can also set the format globally:

julia> set_print_precision("3f")

julia> ℯ^-9
0.000

The code behind the above:

using Printf

const default_float_fmt = "%.16g"
# This is approximately (but not entirely)
# what the default `show(::Float64)` does.

const float_fmt = Ref(default_float_fmt)

function set_float_print_fmt(fmt_str)
    float_fmt[] = fmt_str
    fmt = Printf.Format(fmt_str)
    Main.eval(:( 
        Base.show(io::IO, x::Float64) = Printf.format(io, $fmt, x) 
    ))
    return nothing
end
# We don't specify `::MIME"text/plain"` in `show`, so 
# that we also get compact floats in composite types
# (like when displaying a NamedTuple).
# The disadvantage is that we cannot use `show(x)`
# to see the full float repr anymore.

set_print_precision(digits::Integer = 3) = set_float_print_fmt("%.$(digits)G")
set_print_precision(digits_and_type)     = set_float_print_fmt("%.$(digits_and_type)")
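# E.g. `set_print_precision(3)` sets the format to "%.3G",
# and `set_print_precision("3f")` sets it to "%.3f".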

# Something like `with_print_precision(3) do … end` wouldn't work:
# it can't be a function, it must be a macro. (The `eval`'d `show` method
# wouldn't be visible inside an already-running function, due to world age.)
macro with_print_precision(p, expr)
    return quote
        oldfmt = float_fmt[]              # capture the current format at run time
        set_print_precision($(esc(p)))
        try
            $(esc(expr))
        finally
            set_float_print_fmt(oldfmt)   # restore it, even if the block errors
        end
    end
end

It’s missing documentation.
And I’m sure Julia experts could come up with something better.
`eval`-ing type-pirating `show` methods in Main is not ideal :stuck_out_tongue:

2 Likes

What did you use to generate the slides?

Good old PowerPoint.
The title slide font is Alegreya, by Huerta Tipográfica.

Code is copy-pasted from VS Code – which amazingly keeps the syntax highlighting.

I see your point.
The ease of layout in PowerPoint is unbeatable. This is where I struggle with Quarto and Reveal.js.

Though Slides gives you the PowerPoint experience with Reveal.JS.

2 Likes

Yeah. Though the declarativity, semanticity, and ease of use (when just adding text) of Beamer and browser-based slides are very attractive too.

I dream of a combination of both
Similar to

Indeed.
I think if Quarto had a few grids to work with, and a better way to control the dimensions of elements, it would be halfway there.

1 Like

I thought this was kind of cool, using the Makie layout API to make slides: GitHub - fatteneder/MakieSlides.jl

2 Likes

What do you mean? Quarto+reveal has grids. I’ve been using it a bit recently and I’m liking it. Of course some things are harder than in normal PowerPoint but overall I think it’s saving me time. With a little extra web-dev CSS knowledge, it’s easier to make custom formatting consistent too (e.g., make a CSS class and stick it on divs/spans)

1 Like

@tfiers You say you switched to Julia to use units in neuron simulations…but there’s already Brian in Python. Is there a reason you don’t use Brian? I’ve toyed with the idea of using Julia if I want to do something big that would be slow with Brian, but then I’d have to implement a bunch of things from scratch. Any thoughts?

I love Brian and the people that made it!
It’s high quality software and extremely well documented.
I don’t personally use it for my own work because:

  • I wanted to develop a spiking neural network simulator myself, to understand how they work (cue Feynman’s “what I cannot create, I do not understand”). It’s also excellent procrastination bait while doing a PhD
  • Brian’s DSL (for specifying e.g. differential equations) is input via strings, i.e. it is not syntax-highlighted. The equations and functions you specify also can’t easily be tested and re-used on their own. Not a big issue, but I still found it important. With fast native Julia functions (and, optionally, macros) you can specify your model in ‘real’ code (see the sketch below). Compare Hodgkin-Huxley in Brian and this Julia HH specification
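
For a rough idea of what I mean, here is a minimal, made-up sketch (not Brian’s API, and not the code in the linked file): the model equation is just an ordinary Julia function, so it can be called, tested, and re-used on its own.

# Hypothetical sketch; parameter names and values are illustrative only.
# Membrane equation of a leaky integrate-and-fire neuron:
# dV/dt as a plain function of the voltage V and the input current I.
dVdt(V, I; gₗ = 10.0, Eₗ = -65.0, C = 200.0) = (-gₗ * (V - Eₗ) + I) / C

# Because it is just a function, it can be unit-tested directly:
@assert dVdt(-65.0, 0.0) == 0.0   # at rest, with no input, V doesn't change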

Coincidentally, that last linked file is part of a spiking neural network lib I’m developing. Absolutely not ready for use yet, but feel free to check it out or get in touch if interested (I wanna know people’s use cases and struggles). GitHub - tfiers/Firework.jl: Helping you build spiking neural network simulations in Julia

1 Like

I’m away from computer and can’t check it, but I have reasons to think these lines https://github.com/tfiers/Firework.jl/blob/512da62d8b45d80fbce1dea00c1d495c3bebd895/test/HH.jl#L37-L39 may not be doing what you believe they do.

Ah yes very true.
That file is not actually live code yet (it’s more of a sketch of what an end-user API could look like)

This sibling file does run: Firework.jl/Nto1sim.jl at main · tfiers/Firework.jl · GitHub

I tried, btw, to use DifferentialEquations.jl first, and also the neuron simulator Conductor.jl that is built on it (with ModelingToolkit.jl). But it seems those libraries are not designed with very many discrete events (spikes) in mind.
My implementation with callbacks was slow, even after reading and watching a bunch of SciML tutorials on performance. (Plus, there are the longish package load times, and the frequent long re-precompiles of the big SciML dependency tree – neither of which you have with a small bespoke package.)
A dumb handwritten Euler integration loop (similar to what Brian does) gave much faster results that were good enough.
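
To give an idea of what I mean by that, here is a made-up sketch (not the actual code in Nto1sim.jl; parameter values are illustrative only):

# Fixed-timestep forward-Euler loop for a leaky integrate-and-fire neuron,
# with the spike handled as a discrete event (threshold crossing + reset).
function simulate(I; Δt = 0.1, T = 1000.0,            # ms
                  gₗ = 10.0, Eₗ = -65.0, C = 200.0,   # nS, mV, pF
                  Vthr = -50.0, Vreset = -70.0)        # mV
    nsteps = round(Int, T / Δt)
    V = Eₗ
    Vtrace = Vector{Float64}(undef, nsteps)
    spiketimes = Float64[]
    for i in 1:nsteps
        V += Δt * (-gₗ * (V - Eₗ) + I) / C   # Euler update of the membrane eq.
        if V ≥ Vthr                           # discrete event: a spike
            push!(spiketimes, i * Δt)
            V = Vreset                        # discontinuous reset
        end
        Vtrace[i] = V
    end
    return Vtrace, spiketimes
end

Vtrace, spikes = simulate(300.0)   # constant 300 pA input current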

2 Likes

What I’m after is more like being able to put text, a figure, an image, a callout, or any other block anywhere, at any size. If you’ve used Slides, you may have seen how this can be achieved there with Reveal.js.
It is not doable programmatically, so I think 99% of the effect would come from using grids (like the guides in PowerPoint, etc.). There is the ability to use Reveal.js guides, but the problem is the size: you can’t choose an arbitrary size for blocks.

I hope I made what I’m after more clear.

1 Like

Interesting…I was going to ask about this. I was excited about ModelingToolkit/Conductor.jl, so that’s disappointing. I can relate to the PhD procrastination…if I were ruthless in my time management I probably wouldn’t be dabbling with Julia at all :sweat_smile:

If my Brian work gets cumbersome enough, I may reach out about Firework.jl. Out of curiosity, how much faster has it usually been than Brian?

1 Like

Spiking neuron models are stiff, right? You’d probably benefit from using an implicit integration method. And 2nd- and higher-order integrators are a pretty marginal increase of complexity.

I have ultra-simple 2nd, 3rd, and 4th-order implicit and explicit Runge-Kutta integrators I wrote for teaching if you want them.
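
(For anyone following along: a classical explicit 4th-order Runge–Kutta step really is only a handful of lines more than Euler. A generic sketch, not my exact teaching code:)

# One classical RK4 step for du/dt = f(u, t), with step size h.
function rk4_step(f, u, t, h)
    k1 = f(u, t)
    k2 = f(u + h/2 * k1, t + h/2)
    k3 = f(u + h/2 * k2, t + h/2)
    k4 = f(u + h * k3, t + h)
    return u + h/6 * (k1 + 2k2 + 2k3 + k4)
end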

2 Likes

Hi John, thank you for your input. That sounds interesting!

I am no differential equations expert. But I think only Hodgkin-Huxley-type models are really stiff? (Namely, during a spike; they are quite linear when below the spiking threshold).

Simpler models ‘fake’ the spike. Two common 2D neuron models are AdEx and the Izhikevich neuron (implemented above). They both run away to ∞ above their spiking threshold, modelling the spike upstroke. But an artificial reset is introduced at a certain point: the voltage is reset, discontinuously, to well below the threshold again.
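
For concreteness, the standard form of the Izhikevich model shows both parts: the quadratic term drives v towards ∞, and a separate reset rule terminates the spike:

dv/dt = 0.04 v² + 5v + 140 − u + I
du/dt = a (b v − u)
if v ≥ 30 mV:   v ← c,   u ← u + d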

The simplest and most common neuron model (the leaky-integrate-and-fire or LIF neuron) does not even model the upstroke, and is completely linear.


@kjohnsen, I haven’t done a performance comparison yet. Would be interesting to see!
Also, if you are modelling small neuron networks, definitely try Conductor.jl!

1 Like

Thanks for the info and links. I’m interested and will try my hand at simulating these, if only to expand my range of applications in teaching!

FWIW, I often find myself rolling my own implementations of algorithms because oddities in the problem (like discontinuities in the ODE) trip up library algorithms. Or because sharing algorithm internals back to the calling function, or learning a library’s general framework and conventions, is harder than implementing a small subset myself.

But I do far less rolling-my-own since switching from C++ to Julia, thanks to Julia’s much better interoperability between libraries.

(getting a bit far from OP intent, perhaps should split off :slight_smile: )

1 Like

That right there is the crux of it. When you say “A dumb handwritten Euler integration loop (similar to what Brian does) gave much faster results that were good enough”, I’m sure that’s correct. Of course, that doesn’t contradict DifferentialEquations.jl being efficient; the issue is that any definition of efficiency has some implicit definition of accuracy. If one’s definition of accuracy is loose, then a simple Euler implementation will win every single time, because it takes the fewest calculations.

When I say you shouldn’t use Euler’s method (Why you shouldn't use Euler's method to solve ODEs - Nextjournal), it’s because it has a very difficult time hitting even two digits of accuracy on simple ODEs:

(plot from the linked article, showing Euler’s method struggling to reach two digits of accuracy)

But in a lot of cases for spiking neural networks, 2 digits of accuracy isn’t even required. “It looks vaguely reasonable when I plot it” (the eyeball test) is often the only test for accuracy, and if that’s the case then yeah, you cannot do better than Euler.
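
As a tiny self-contained illustration of why accuracy gets expensive with Euler (a toy example, not tied to any library): the error only shrinks linearly with the step size, so each extra digit of accuracy costs roughly 10× more steps.

# Euler's method on du/dt = -u, u(0) = 1, integrated to t = 1.
function euler_solve(f, u0, tspan, n)
    t0, t1 = tspan
    h = (t1 - t0) / n
    u, t = u0, t0
    for _ in 1:n
        u += h * f(u, t)
        t += h
    end
    return u
end

f(u, t) = -u
exact = exp(-1.0)
for n in (10, 100, 1000)
    err = abs(euler_solve(f, 1.0, (0.0, 1.0), n) - exact)
    println("n = $n   abs. error = $err")   # error drops ~10x per 10x more steps
end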

So it’s more of a modeling choice.

These are now example tutorials in the docs:

And that said…

What’s going on with ModelingToolkit.jl/Conductor.jl is that indeed there is a lot to offer once you get to this part of it. The nonlinear tearing can improve the solve in some nice ways. We’re creating new primitive problems, like the ImplicitDiscreteProblem for f(u_{n+1}, u_n, p, t_n) = 0, which is then a good primitive for building a lot of the simple methods in a way that can use all of the implicit-integrator tricks. That and Rosenbrock23 are a nice combo, so there will be more to say here soon.

BTW, you might want to check out the SimpleDiffEq.jl code. It has ultra-low-overhead implementations, which are also good for teaching. See for example the GPUATsit5 (GPU-kernel-compatible adaptive Tsit5):

Of course it’s missing a lot (for example, no callbacks), but it’s a nice way to see a full, correct implementation with adaptivity and all. There’s an RK4 too:

We should probably add a SimpleEuler, for completeness and because sometimes a 0-overhead Euler can be useful.

Anyways, there’s still a lot more that libraries can offer, by providing new primitives and new forms.

8 Likes

This is great!
The EPFL book (Table of contents | Neuronal Dynamics online book) is indeed a good resource. (It uses LaTeXML btw (LaTeX-to-HTML conversion), same as what ar5iv uses.)

On accuracy: very true that ‘the eyeball test’ is often the only requirement.
Something more involved that’s often done is to try and reproduce the voltage trace of a real neuron that’s injected with a known current signal.
Would be interesting to see how different integration schemes compare here.
(I’m sure there’s literature about this somewhere – but that’s not my PhD topic alas so I have no refs to provide here).

EDIT: the more obvious comparison is against a ‘ground truth’ integration with an advanced algorithm and a minuscule timestep, I suppose.

This is exciting. Looking forward!


Agree. (Maybe split it off from @kjohnsen’s initial question about Brian.)

1 Like