A little project I have in mind requires generating data in real time (well, maybe 100 times a second at most) that is used to output music, also in real time. Details are scant because I haven’t really done anything yet; I’m trying to see what my options are and would love suggestions from the more experienced. So far I envision two vague approaches:
- Computing a MIDI stream that an existing software synthesizer can turn into music. I like this idea because of functionality (especially audio effects like reverb) that I honestly have no idea how to implement from scratch. However, I haven’t been able to find any way to stream MIDI from a Julia script to a software synthesizer. JuliaMusic has some options for playing MIDI files, but I don’t know whether that applies at all to MIDI data I’m making on the fly. I did find pyfluidsynth, a Python binding for FluidSynth, and it made me wonder whether I need a Julia binding for a synthesizer to do this from Julia (I’ve sketched what a raw `ccall` approach might look like after this list).
- Computing a waveform from periodic pitch and volume data, and streaming samples of it regularly. I pulled this off with PyAudio’s callback mode a few years ago. I’m not stoked about this option because, again, I would rather use existing software than reinvent the wheel. Still, if I must, I need a way to play WAV-like audio data. So far I’ve found PortAudio.jl (though `Pkg.add` isn’t working for it; I’m not sure why). If the first approach is infeasible, I’m open to suggestions for this one (see the second sketch below).
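
For the first approach, since FluidSynth exposes a plain C API, I imagine something like the following could work even without a dedicated wrapper package. This is an untested sketch: the library name assumes libfluidsynth is on the loader path, and the SoundFont path is a guess for a typical Linux install.

```julia
# Rough sketch of driving FluidSynth directly from Julia via ccall, with no
# wrapper package. Untested: the library name assumes libfluidsynth is on the
# loader path, and the SoundFont path is a guess for a typical Linux install.
const fluidlib = "libfluidsynth"

settings = ccall((:new_fluid_settings, fluidlib), Ptr{Cvoid}, ())
synth    = ccall((:new_fluid_synth, fluidlib), Ptr{Cvoid}, (Ptr{Cvoid},), settings)
driver   = ccall((:new_fluid_audio_driver, fluidlib), Ptr{Cvoid},
                 (Ptr{Cvoid}, Ptr{Cvoid}), settings, synth)

# Load a General MIDI SoundFont (path is an assumption)
ccall((:fluid_synth_sfload, fluidlib), Cint, (Ptr{Cvoid}, Cstring, Cint),
      synth, "/usr/share/sounds/sf2/FluidR3_GM.sf2", 1)

# Middle C on channel 0 at velocity 100, held for one second
ccall((:fluid_synth_noteon, fluidlib), Cint, (Ptr{Cvoid}, Cint, Cint, Cint),
      synth, 0, 60, 100)
sleep(1.0)
ccall((:fluid_synth_noteoff, fluidlib), Cint, (Ptr{Cvoid}, Cint, Cint),
      synth, 0, 60)

# Tear down in reverse order of creation
ccall((:delete_fluid_audio_driver, fluidlib), Cvoid, (Ptr{Cvoid},), driver)
ccall((:delete_fluid_synth, fluidlib), Cvoid, (Ptr{Cvoid},), synth)
ccall((:delete_fluid_settings, fluidlib), Cvoid, (Ptr{Cvoid},), settings)
```

If that’s viable, my 100-updates-per-second control loop would just issue noteon/noteoff calls on a timer instead of the fixed `sleep` above.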
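
For the second approach, here is the kind of blocking write loop I have in mind, assuming PortAudio.jl actually installs and finds a default output device (which, per the above, it currently doesn’t for me). The 100 Hz control rate maps naturally onto 10 ms blocks:

```julia
# Sketch: generate a sine tone in 10 ms blocks and push it to the sound card.
# Assumes PortAudio.jl installs cleanly and a default output device exists.
using PortAudio

const fs = 44100                               # sample rate in Hz
stream = PortAudioStream(0, 1; samplerate=fs)  # 0 inputs, 1 output channel

phase = 0.0
freq = 440.0          # pitch; in the real app this would update ~100x/s
blocksize = 441       # 10 ms blocks, matching a 100 Hz control rate

for _ in 1:200        # play ~2 seconds
    block = Float32[sin(phase + 2π * freq * n / fs) for n in 0:blocksize-1]
    # advance the phase so blocks join without clicks
    global phase = mod(phase + 2π * freq * blocksize / fs, 2π)
    write(stream, block)   # blocks until the device consumes the samples
end

close(stream)
```

Blocking writes keep this simple; as far as I can tell, a callback-style design like PyAudio’s would instead mean filling buffers from a separate task.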