Legend [1] has it that we might never have discovered the black hole at the center of our galaxy if Karl Jansky had not heard static while listening for weirdness in radio waves. The sound of a Geiger counter clicking wildly, or of a handheld metal detector sliding up and down in pitch as it passes over a hidden metallic object, is etched into our cultural memory. So why don't we scientists sonify data?
Visualization remains a barrier to blind scientists. Imagine being asked to review a paper on a topic you specialize in and having to decline because you can't see the graphs. It happens [2].
Accessibility is not the whole story. We perceive a much wider band of pitch than of color: the audible range spans roughly ten octaves, while visible light covers less than one. We also resolve sound in time far better than we resolve images; film gets away with about 24 frames per second, while our ears track events on the scale of milliseconds. (For spatial localization, we are much better off with vision.) Our ears are living signal-processing machines.
There’s some really beautiful sonification of telescope images of galaxies and star systems [3], but it is mostly aimed at science outreach. To do actual science, you need something that accurately maps data onto how we perceive sound, without too many layers of imposed cultural meaning. We need to turn data into sound, not music. Some people do that: they sonify earthquake vibrations [4], and others sonify gravitational waves [5].
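The "data, not music" idea is easy to prototype. Here is a minimal parameter-mapping sketch (in Python for illustration; the names and parameter choices are mine, not anyone's published API): each data value is mapped linearly onto a pitch range, and each pitch becomes a short sine segment.

```python
import math

def map_to_pitch(values, f_lo=220.0, f_hi=880.0):
    """Linearly map data values onto a frequency range in Hz."""
    v_lo, v_hi = min(values), max(values)
    span = (v_hi - v_lo) or 1.0  # avoid division by zero for flat data
    return [f_lo + (v - v_lo) / span * (f_hi - f_lo) for v in values]

def synthesize(freqs, sr=8000, seg=0.05):
    """One short sine segment per data point; amplitudes in [-1, 1]."""
    samples = []
    for f in freqs:
        n = int(sr * seg)
        samples.extend(math.sin(2 * math.pi * f * i / sr) for i in range(n))
    return samples

data = [0.0, 0.5, 1.0, 0.5, 0.0]
freqs = map_to_pitch(data)   # [220.0, 550.0, 880.0, 550.0, 220.0]
audio = synthesize(freqs)    # raw samples, ready to write to a WAV file
```

The point is the directness of the mapping: no scales, no rhythm, no harmony imposed on the data, just a monotonic data-to-pitch function.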
The problem is that everybody seems to be developing their own tools for sonification. They are domain-specific and often buggy. What we need is an equivalent of the grammar of graphics: a set of semantics that is agnostic to scientific domain and programming language.
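A grammar-of-graphics analogue might look like a declarative spec that binds data fields to auditory channels (pitch, loudness, pan) the way `aes()` binds fields to x, y, and color. A toy sketch in Python; every name here is hypothetical, not from any existing library:

```python
def encode(spec, row):
    """Resolve one data row to synthesis parameters via the spec."""
    return {channel: scale(row[field])
            for channel, (field, scale) in spec.items()}

# Declarative bindings: field name -> auditory channel, plus a scale
# function, analogous to a scale in the grammar of graphics.
spec = {
    "pitch":    ("magnitude", lambda m: 220.0 * 2 ** m),   # octaves above A3
    "loudness": ("depth_km",  lambda d: max(0.0, 1.0 - d / 700.0)),
    "pan":      ("longitude", lambda lon: lon / 180.0),    # -1 (left) .. 1
}

row = {"magnitude": 1.0, "depth_km": 70.0, "longitude": 36.0}
params = encode(spec, row)
# params["pitch"] == 440.0 (one octave above 220 Hz)
```

The spec carries no domain knowledge and no synthesis code, which is exactly what would make such semantics portable across fields and languages.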
I’ve been working on that, the semantics of sonics, for the last few years. The heart of it is implemented in Julia. (The source code is not open. Not yet? Never?) I can take solutions from OrdinaryDiffEq.jl and plug them right in (the visuals use Makie):
Or I can read data from the 2023 Turkey-Syria earthquake using CSV.jl and plug it in to get this:
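Earthquake audification typically works by time compression: seismic energy lives well below the audible band, so playing the record back hundreds of times faster shifts it into hearing range. A back-of-envelope sketch in Python (the sample rates and speedup factor are illustrative assumptions, not taken from the post):

```python
def playback_rate(native_sr, speedup):
    """Audification: replay samples at speedup times the native rate."""
    return native_sr * speedup

# A broadband seismometer might sample at 100 Hz. A 200x speedup
# shifts a 0.1-10 Hz signal up to 20-2000 Hz, squarely audible,
# and compresses an hour of shaking into 18 seconds.
native_sr = 100                                 # samples/s (assumed)
speedup = 200                                   # assumed factor
audio_sr = playback_rate(native_sr, speedup)    # output rate in Hz
duration = 3600 / speedup                       # seconds of audio per hour
```

Unlike parameter mapping, audification keeps the waveform itself intact; only the timescale changes, so resonances and echoes in the data become directly audible.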
Or I can add annotations using PythonCall.jl to call a text-to-speech library (this is a bit broken right now) and let you experience how we discovered the COVID-19 vaccine:
I wanted to ask you this:
- What weird, quirky, niche fields do you work in where sonification might have a place?
- As scientists, what would it take for you to start seriously considering sonification (Edit: other than sonification software being open source)?
[1] Jessica Manning Lovett corroborates the legend in their PhD thesis, The Sound Culture of Space Science.
[2] “Accessibility in astronomy for the visually impaired” by Jake Noel-Storr and Michelle Willebrands
[3] https://www.youtube.com/watch?v=NqBfQeJqkfU