Reading and processing multiple very large Wav files

New job, new problems! :slight_smile:
I’m currently trying to figure out the best way to read a folder of very large WAV sound files and process them. Reading an entire file at once causes me to run out of memory. Thankfully, the WAV package allows me to read smaller chunks at a time, so I can work around that. I was wondering, though: is there a well-thought-through way of doing this?

I need to:

  1. Read each file in the folder.
  2. Process each file (calculate spectrograms).
  3. Downsample the results somehow.
  4. Save the results.

For now, I assume that each file can be processed independently, but it would be nice to have an approach that would allow for treating all files as one distributed file.
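A minimal sketch of that per-file, chunked workflow. Only the chunk-range helper below is plain, runnable Julia; the loop in the comments assumes WAV.jl's `subrange` keyword for reading a range of frames, and `total_frames` and `process_chunk!` are hypothetical placeholders, not real API:

```julia
# Compute the frame ranges needed to read a file one chunk at a time.
chunk_ranges(nframes, chunklen) =
    [i:min(i + chunklen - 1, nframes) for i in 1:chunklen:nframes]

# Intended use with WAV.jl (assumed API, not run here):
# using WAV
# for f in readdir(folder; join=true)
#     for r in chunk_ranges(total_frames(f), 2^20)  # total_frames: hypothetical
#         x, fs = wavread(f; subrange=r)
#         process_chunk!(x, fs)                     # hypothetical processing step
#     end
# end
```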

I have so far been considering mmap, but it seems to only work if all the data is already in a single file. Is there perhaps something like a distributed mmap?
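For reference, Julia's standard-library Mmap does make mapping a single file cheap, though it maps exactly one file at a time. A minimal self-contained sketch using raw Float64 data as a stand-in for audio samples:

```julia
using Mmap

# write a small demo file of raw Float64 samples (stand-in for audio data)
path = tempname()
open(io -> write(io, collect(1.0:8.0)), path, "w")

# map the file's contents as a Vector{Float64}; pages are loaded on demand
A = open(io -> Mmap.mmap(io, Vector{Float64}, 8), path)
```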

it’s not exactly clear what you are doing here.

if the files really need to be processed separately, then you are doing the right thing. open them and work on them in chunks.

a spectrogram is a “chunked” FFT (a windowed FFT per frame), so chunked processing is what you have to do there anyway.
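To illustrate that “chunked FFT” structure: split the signal into overlapping frames, then FFT each frame. The framing function below is plain Julia; the FFT step is sketched only as comments, since it needs FFTW.jl or DSP.jl (assumed packages, and `window` is a hypothetical window vector):

```julia
# Split a signal into overlapping frames of length n with the given hop.
frames(x::AbstractVector, n::Int, hop::Int) =
    [view(x, s:s+n-1) for s in 1:hop:(length(x) - n + 1)]

# One spectrogram column per frame (assumed APIs, not run here):
# using FFTW
# S = [abs2.(fft(f .* window)) for f in frames(x, n, hop)]
# or, with DSP.jl, roughly: spectrogram(x, n, noverlap; fs=fs)
```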

as for the downsampling, the Julia DSP library has filtering and downsampling that preserve state, i.e. they can be used in a streaming fashion so that you can read a few samples at a time and process them.
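To show why preserving state across chunks matters, here is a toy keep-every-Nth decimator in plain Julia. This is only an illustration of the idea; DSP.jl's stateful filters do the real (anti-aliased) job:

```julia
# A decimator that remembers its phase across chunks, so processing a
# signal chunk-by-chunk gives the same result as processing it whole.
mutable struct Decimator
    ratio::Int
    phase::Int   # samples to skip before the next kept sample
end
Decimator(ratio) = Decimator(ratio, 0)

function process!(d::Decimator, x::AbstractVector)
    out = eltype(x)[]
    i = 1 + d.phase
    while i <= length(x)
        push!(out, x[i])
        i += d.ratio
    end
    d.phase = i - length(x) - 1   # carry the leftover phase into the next chunk
    out
end
```

Feeding the signal in two chunks picks the same samples (every 3rd, starting at index 1) as feeding it whole, because the phase is carried over.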

since your result will not fit in memory you’ll have to stream the output to an open file.
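A minimal sketch of streaming results to an open file as raw Float64 chunks, so the full output never lives in memory at once:

```julia
path = tempname()

# write each processed chunk as soon as it is ready
open(path, "w") do io
    for chunk in ([1.0, 2.0], [3.0, 4.0])   # stand-ins for processed chunks
        write(io, chunk)
    end
end

# read the stream back to confirm it round-trips
out = open(io -> read!(io, Vector{Float64}(undef, 4)), path)
```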

it seems like you are taking the correct approach.

if the real problem is that all of those large files are really sections of a still larger data-set, then it should be relatively simple to write something that queues up the data files and manages the chunks as they transition from one file to the next.

Yeah, your summary pretty much agrees with what I am doing. I was mostly asking to see if there was a smooth method implemented somewhere, where I only need to specify a folder and say that I would like to treat all the files within it as one large memory-mapped array.

The downsampling I’m doing is such that the result will fit in memory. If the results would not fit, it seems HDF5 supports appending to already existing files, as well as serving as the backend for a memory-mapped array.

oh, ok, I get what you are saying now. I definitely don’t know of any way to do that, in julia or otherwise.

I really like your HDF5 idea! I haven’t tried using it yet, but I have some work that might benefit from that idea.


The relevant docs for HDF5.jl are slightly hard to spot; you can find them here


I ended up creating a package for lazy, distributed WAV files acting as AbstractArrays.
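The core of such a package can be sketched as a read-only AbstractVector that lazily chains several backing arrays. Plain vectors stand in here for memory-mapped or chunk-read WAV files, and all names are illustrative, not the actual package's API:

```julia
# A lazy, read-only concatenation of several vectors into one AbstractVector.
struct ChainedVector{T,V<:AbstractVector{T}} <: AbstractVector{T}
    parts::Vector{V}
    offsets::Vector{Int}   # offsets[i] = total length before part i
end

function ChainedVector(parts::Vector{V}) where {T,V<:AbstractVector{T}}
    offsets = cumsum([0; length.(parts)[1:end-1]])
    ChainedVector{T,V}(parts, offsets)
end

Base.size(c::ChainedVector) = (sum(length, c.parts),)

function Base.getindex(c::ChainedVector, i::Int)
    p = searchsortedlast(c.offsets, i - 1)   # which part holds index i
    c.parts[p][i - c.offsets[p]]
end
```

Indexing only touches the one part that holds the requested element, so if each part is memory-mapped or read on demand, nothing is loaded until it is actually accessed.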