Julia for audio live coding?

Hi all, I’m new to Julia and trying to gauge its feasibility for audio live coding.
What I would essentially like to do is use the REPL or the “Run code” function in VS Code to create audio callbacks that generate sound using synthesis techniques and stay running in the background while I write more code.
Upon evaluating new code I would replace the callback function with another one on the fly, with no pause or glitching (assuming I make sure the audio stream has no discontinuities, etc.).

I was thinking of doing something like using PortAudio.jl and setting up a background thread with an audio streaming function. The streaming function would have a reference to another function inside it and I would update that reference with a new one when I want to change the sound generation.

Replacing the audio gen function with a new one would not need to be instantaneous - I would in fact want to schedule it ahead of time for the next beat or the next bar, which should give time to JIT compile it before replacing the reference.
In practice I would replace the graph in advance, with some kind of fade-in to avoid pops, so that the new sound starts at the right time.
I might also need some kind of thread synchronisation to do this safely, and would probably also want to disable the GC.
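
Roughly, the shape I have in mind is something like this (an untested sketch; the actual PortAudio output is left out and the generator signature is just an example):

    # A slot holding the current per-sample generator (time in seconds -> sample).
    const generator = Ref{Function}(t -> 0.0)

    # Background streaming loop: fills a buffer by calling whatever is currently in the slot.
    function stream_loop(; buffer_len = 256, sr = 48_000.0)
        buf = zeros(Float32, buffer_len)
        t = 0.0
        while true
            gen = generator[]                        # read the reference once per buffer
            for i in eachindex(buf)
                # invokelatest so generators defined after this loop started are callable
                buf[i] = Float32(Base.invokelatest(gen, t))
                t += 1 / sr
            end
            # ... hand `buf` to the audio backend here ...
            sleep(buffer_len / sr)                   # stand-in for the backend's own pacing
        end
    end

    audio_task = Threads.@spawn stream_loop()

    # Later, from the REPL: swap in a new sound, e.g. a 220 Hz sine.
    generator[] = t -> sin(2pi * 220.0 * t)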

Before I set myself off on a wild goose chase: does this all sound doable without major hacking and hair pulling?
This is all very similar to what Gibber does in JavaScript, for those who know it, but being able to use all the power of Julia would be incredible.

Neither of these is live coding. In short, you evaluate the exact lines you typed when you press Enter in a REPL, but a live coding environment automatically evaluates your edits and reevaluates other affected parts of your source code (though it may not be equivalent to rerunning the whole thing, depending on the implementation). I suggest the paper “Supporting exploratory data analysis with live programming” by Robert DeLine and Danyel Fisher for a better comparison of REPLs and live programming. Jupyter, Revise.jl, and especially Pluto.jl get closer to live programming.

The documentation is scant and there aren’t Julia examples or tutorials for many of the things people would do with PortAudio. I myself had to dig into source code to tell another user how they can play stereo audio from 2 vectors. The only thing in the docs about callbacks is a call_back argument that only supports C_NULL, and there is an open issue about supporting callbacks. Risking undue criticism, I don’t think you’ll have an easy time using this package.

Really iffy on whether this is feasible, since there’s a whole C side of PortAudio. In pure Julia, this can also be problematic depending on what you mean by “reference to another function” and how you try to update that while streaming. On a broad level, you need to make sure that the stream doesn’t bake in the function at the start, and that you can communicate the current function at every frame, whether you changed it or not. For a simple example, x = 1; loopprint(x) would run loopprint(1) even if you change x = 2 later, because loopprint only got to access x once at the call site. For a more relevant example, a Task runs with the methods as they existed when it was scheduled (running in a world age, formally speaking), so even if you manage to edit methods after scheduling but before the Task runs, it won’t reflect the changes.
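
To make the loopprint example concrete (loopprint is just a made-up helper):

    # 1) The value is baked in at the call: once the loop is running with x's value,
    #    rebinding the global x has no effect on it.
    function loopprint(x)
        for _ in 1:3
            println(x)
            sleep(0.5)
        end
    end

    x = 1
    t = Threads.@spawn loopprint($x)   # $ copies the current value (1) into the task
    x = 2                              # the running loop still prints 1
    wait(t)

    # 2) A loop that re-reads a shared container every iteration does see updates
    #    (ignoring proper synchronization, which real code would need):
    const slot = Ref{Any}(1)
    t2 = Threads.@spawn for _ in 1:3
        println(slot[])
        sleep(0.5)
    end
    sleep(0.6)
    slot[] = 2                         # later iterations print 2
    wait(t2)

    # (Method redefinitions are the separate world-age issue: a long-running call is
    #  pinned to the world age at entry, so calling newly defined methods from inside
    #  it needs Base.invokelatest.)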

Being able to change functions at runtime means the compiler can’t infer the return type at some level, so you could help out with type annotations of the return values or FunctionWrappers.jl. I’m not sure if these can fully remove the type instability’s overhead; you’d have to look out for the associated allocations.
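
A minimal sketch of the FunctionWrappers.jl route (the Float64 -> Float64 “phase in, sample out” signature is just an assumption about what your generator looks like):

    using FunctionWrappers: FunctionWrapper

    # A concretely-typed slot for "some function taking a Float64 phase and returning a Float64 sample".
    const gen = Ref(FunctionWrapper{Float64,Tuple{Float64}}(sin))

    sample = gen[](0.25)    # the call is type-stable: always returns Float64

    # Swap in a different generator later; the slot's type doesn't change.
    gen[] = FunctionWrapper{Float64,Tuple{Float64}}(x -> 2.0 * (x - floor(x)) - 1.0)  # naive saw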

Generally JIT compilation is triggered upon the first call, so you’ll have to figure out how you can make sure something is precompiled before you throw it in the stream. Precompilation has been changed in a recent version to be more complete for packages, specifically caching the endpoint native code rather than intermediate code, so I’d check if that more complete precompilation can be used interactively.
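
For explicitly compiling ahead of the first call, Base’s precompile can be pointed at a concrete signature (mygen here is just a stand-in):

    # Compile the method of `mygen` specialized for a Float64 argument, before it is ever called.
    mygen(phase::Float64) = sin(2pi * 220.0 * phase)
    precompile(mygen, (Float64,))   # returns true if compilation succeeded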

If you’re relying on JIT compilation, you’d want to keep the GC around to handle the garbage. I know you’re trying to avoid stop-the-world jitters, but there’s no known ideal solution in any language. Incremental GCs try to split the work across frames, but there are still complaints of dropped frames, since the GC does the same overall work plus overhead and can thus fall behind. Background GCs put a lot of the work on a background thread, but there are still pauses. AFAIK Julia’s GC does not address soft realtime latency like that, but an upcoming version’s GC can run on multiple threads to save time.

Any development will have hair pulling. Even if the pieces are there, putting together a puzzle is hard and has many setbacks.


I wonder if this could be split into a dynamic process that does all the fancy frontend stuff and a static process that just takes serialized instructions and does non-allocating audio generation?

Like, Clojure’s Overtone generating instructions, sending them to C++ SuperCollider. But I wonder if actually both could be in separate Julia processes.
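
Very roughly, something like this (the port and the (:freq, value) message shape are invented for the sketch; the audio process’s render loop would read freq[] each buffer):

    using Sockets, Serialization

    # "Static" audio process: accept connections and apply serialized instructions.
    function instruction_listener(port = 5555)
        server = listen(port)
        freq = Ref(440.0)
        @async while true
            sock = accept(server)
            try
                while !eof(sock)
                    msg = deserialize(sock)            # e.g. (:freq, 220.0)
                    msg[1] === :freq && (freq[] = Float64(msg[2]))
                end
            catch err
                @warn "connection dropped" err
            end
        end
        return freq
    end

    # "Dynamic" frontend process: serialize an instruction and send it over.
    function set_freq(f; port = 5555)
        sock = connect(port)
        serialize(sock, (:freq, f))
        close(sock)
    end

The frontend would then call set_freq(220.0), or send whole serialized expressions, whenever new code is evaluated.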


Thank you very much for the detailed answers !
Some interesting points to ponder.
With regard to live coding, I see what you are saying, but I think it might be splitting hairs a bit: the two best-known implementations of the concept, Extempore and Gibber, employ an approach similar to what I described.

And indeed an alternative approach like SuperCollider’s, where you precompile nodes (UGens) ahead of time and construct a graph with them on the fly, solves a lot of these problems.
But it’s also a lot less flexible, and I prefer the ‘all in one’ approach.

With regard to the GC: I can live with a lot of garbage and memory leaks as long as it stays contained (i.e. not multiple gigabytes), given that a ‘live session’ does not need to run for more than an hour. My intuition is that it would be OK, but I could be wrong.

As to hair pulling: yes, I write mathematical software for a good part of my day job, so I’m quite familiar with the idea :slight_smile:
It’s more a question of: is it doomed from the start, or does it have a chance of working?
Not being able to force compilation (or most of it) ahead of execution, and the thread safety problem, do sound like they could be showstoppers.


I have no experience working with audio, real-time or otherwise, but I would imagine the biggest factor is how large your buffer is. If you can accept a 1+ second buffer, then that should be enough to cover just about any GC pause.

I would expect that live code changes you make shouldn’t trigger too much heavy compilation. You’re likely making parameter or small logic changes, not bringing in new large packages (which hopefully you can load/precompile ahead of time).

I’ve played around with synthesizing using PortAudio before. It definitely involved hair-pulling at the time: getting a ring buffer to work without a C shim library, among other things. I think something about interacting with threads from C has changed in Julia since then, possibly making the callback logic easier. At the time, I had to take a lot of care never ever to allocate anything, or I’d get an immediate segfault, which was not nice for the development experience :wink: As for live updates, I think that when Julia compiles, it completely stops everything across all threads, so you are likely to get glitches that way. Not sure if it can be completely avoided if the C thread does its own thing and continues playing from the buffer while Julia halts.

It was still fun though; even though I tried to take all the dirty shortcuts I could, I did manage to get a polyphonic synth playable via MIDI, if I remember correctly. You could have a look here: GitHub - jkrumbiegel/PortAudioSynth.jl


If you write a Julia equivalent of Sonic Pi, I’ll be fascinated to see it!

I tried controlling Sonic Pi from Julia (a Pluto notebook), and, although it was easy, I didn’t find it an experiment worth pursuing; I might just as well write the Ruby directly…

(Screen recording attached: 2023-07-14 at 10.15.23)

This is very cool! But I agree there is limited upside in making yet another SuperCollider client; there are already many good ones.


Not sure if it can be completely avoided if the C thread does its own thing and continues playing from the buffer while Julia halts.

I think you could just have a separate process that plays the audio from a shared memory buffer.
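
For instance, the shared buffer could be a memory-mapped file that both processes open (a bare sketch; the read/write index handshaking, which is the actually hard part, is left out, and the path and size are made up):

    using Mmap

    # Producer process: map a file as a Float32 buffer the player process also maps.
    io = open("/tmp/audio_ring.bin", "w+")
    ring = Mmap.mmap(io, Vector{Float32}, (48_000,))   # ~1 second at 48 kHz

    # Write one block of samples into the shared buffer.
    ring[1:256] .= Float32.(sin.(2pi .* 220.0 .* (0:255) ./ 48_000))
    Mmap.sync!(ring)

    # Player process (separately):
    # ring = Mmap.mmap(open("/tmp/audio_ring.bin"), Vector{Float32}, (48_000,))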


This is a good point actually; that would be a good way to air-gap any issues with PortAudio or C code from the rest of the execution.

Does it? If compilation or anything else like GC halts all threads, it probably should be written down in the multithreading docs.

Looking at the code, it looks like you put a fixed callback in a lower-level wrapper of a PortAudio Stream, and all it really does is pull samples via a Ref{AudioBuffer} userData; the processing is elsewhere in a Timer. How feasible do you think it would be for a separate thread to write to the userData with arbitrary function calls when prompted by PortAudio’s asynchronous callbacks, the way robsmith11 and JTriggerFish are thinking? The prompting is important because otherwise it’s just a write that can get into a race condition with a C thread, and PortAudio.jl already seems to have implemented PortAudio’s blocking write.

Now that I’ve read more of the docs, is it possible to get away with a conditional write loop?

If you want to avoid blocking you can query the amount of available read or write space using Pa_GetStreamReadAvailable() or Pa_GetStreamWriteAvailable() and use the returned values to limit the amount of data you read or write.

If the buffers are too full, then we could opt not to write that frame, and perhaps adjust how many samples to process or how long to wait before writing processed data? Really speculating here, though.

In my testing at the time, I couldn’t get Julia to do anything “prompted” by the C thread. There was only this option to wait on an async condition that gets notified from C, but this was too slow and dropped events as well. That’s why I said I think nowadays there are different options that might be worth trying out.

Here’s my current attempt. I had some early success with SDL’s audio queuing model, which possibly alleviates some of the issues mentioned above.
The current version errors at runtime, and I need to spend some more time debugging it, but in case anyone is interested or has comments (I still don’t really know what I am doing when coding in Julia):


If you’re changing everything about the SineWave, it might be simpler to have it be immutable and just replace it with a new one. Accessors.jl should make that easy.
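
For example, something along these lines (the field names are just a guess at what your SineWave looks like):

    using Accessors

    struct SineWave            # immutable; fields are hypothetical
        freq::Float64
        amp::Float64
        phase::Float64
    end

    w = SineWave(220.0, 0.5, 0.0)
    w2 = @set w.freq = 440.0   # a new SineWave with freq replaced, everything else copied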

Also

        if index > length(sine_waves) || index < 1
            println("Invalid index.")
        else
            wave = sine_waves[index]

seems a little unusual. Are you trying to avoid getting an exception here? See checkbounds and @inbounds.
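
For instance (reusing your names; checkbounds with Bool returns false instead of throwing):

    if checkbounds(Bool, sine_waves, index)
        wave = sine_waves[index]
        # ...
    else
        println("Invalid index.")
    end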


Thanks, that looks quite useful, though I really want to avoid any allocations, so I’ll have to check what it does behind the scenes.
And yes, I want to avoid exceptions too, though there are probably better ways to do this in the language.

Dude this is sick


Here’s one without allocations.

    julia> using BangBang

    julia> struct S
               x::Float64
           end

    julia> let
               ss = [S(1)]
               @time @set!! ss[1].x = 99
           end
      0.000001 seconds
    1-element Vector{S}:
     S(99.0)

I got it sort of working.

Observations:

  • I can’t get the buffer size much lower than 1024 without all sorts of crackling. My hunch is that the thread locking is quite slow, but my threading implementation may also be quite suboptimal; suggestions welcome here.
  • It can play on the order of 200-300 sines implemented in a very naive and inefficient way, which is not too bad. I don’t want to optimize the memory layout and access, because ultimately I want to evaluate a whole compute graph for each sample, so there will be very little cache coherency. However, any other performance tips are welcome again. I have been quite careless with types and casts; I don’t know how much that matters in Julia.
  • The very basic code interpreter based on readline() and eval() works (roughly the loop sketched after this list). I can type “supersaw(110)” to instantiate a couple of naive saw waves and not notice any audio glitch.
    However, if a syntax error throws an exception, the first one will glitch the audio; subsequent errors are fine. I’m guessing the JIT compiler does some work the first time; not sure how to avoid that.
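
The interpreter loop is roughly this (a simplified sketch of what I mean, not the exact code):

    # Minimal readline()/eval() loop: each line typed is parsed and evaluated in Main.
    while true
        print("live> ")
        line = readline()
        isempty(strip(line)) && continue
        line == "quit" && break
        try
            result = Core.eval(Main, Meta.parse(line))
            result === nothing || println(result)
        catch err
            @warn "evaluation failed" err   # the first exception triggers some compilation
        end
    end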

All in all it seems somewhat promising, but there are many questions left.
Also, is there a simple way to get more REPL-like input, with completion, syntax highlighting, etc.?

Type stability is very important to performance. You can use JET.@report_opt to identify these problems.
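
For example (a made-up function, but the kind of pattern that tends to show up in audio code):

    using JET

    # An abstractly-typed container of generator functions forces runtime dispatch in the loop.
    function mix(gens::Vector{Function}, t)
        s = 0.0
        for g in gens
            s += g(t)           # JET flags the dynamic call here
        end
        return s
    end

    JET.@report_opt mix(Function[sin, cos], 0.1)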


Thanks a lot! The types are constant but there may be a few casts here and there, good to know in any case and I will check.

EDIT: @jar1 it turned out to be mostly fine, but it helped fix a couple of interesting things, and indirectly made me realise that immutable structs can be stack allocated, so replacing them with a new one is indeed the best solution, as you suggested.

Seems like it can play about 1000 naive sines at a buffer size of 512 now.
I also spent a lot of time debugging some annoying crackles until I realised that SDL2 doesn’t seem to like a sampling rate other than 44.1 kHz at all; there must be some bug in the sample rate conversion.
