Raspberry Pi: record video to file and pipe frames

I’m trying to record video to a file while simultaneously extracting frames from the stream, on a Raspberry Pi with a Pi Cam.
If anyone has a better way of doing this, please let me know.
Here’s what I have now:

w, h = 640, 480 # image dimensions
bytesperframe = w * h # number of bytes in one raw grayscale frame
camera_cmd = `raspivid -w $w -h $h --output - --timeout 0 --nopreview` # raspivid writes the H.264 stream to stdout
tee_cmd = `tee test_video.h264` # split the stream in two: one copy is saved to file
frame_cmd = `ffmpeg -f h264 -i pipe: -r 1 -f rawvideo -pix_fmt gray pipe:1` # the other is decoded to raw grayscale frames (rawvideo rather than image2pipe, so each frame is exactly w*h bytes)
io = open(pipeline(camera_cmd, tee_cmd, frame_cmd)) # start the whole chain
bytes = read(io, bytesperframe) # read one frame's worth of bytes
img = reshape(bytes, w, h) # view it as a w×h image
read!(io, img) # for later frames, overwrite the buffer in place (faster, no allocation)
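
To keep pulling frames, the same buffer can be reused in a loop; a minimal sketch, where process_frame is a hypothetical stand-in for the per-frame analysis:

while !eof(io) # runs until the camera pipeline ends
    read!(io, img) # overwrite img with the next raw frame, no new allocation
    process_frame(img) # hypothetical per-frame processing
end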

The problem is that whenever I close(io) I get broken pipe errors, which makes sense: how could this chain of piped processes stop gracefully? As a consequence, the video file sometimes ends up corrupt or only partially saved.
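
A possible workaround is to stop the producer instead of closing the consumer: send SIGINT to raspivid (the same signal Ctrl-C delivers), so that tee and ffmpeg see a normal end-of-stream and can flush their output before exiting. A minimal sketch, assuming open(pipeline(...)) returns a Base.ProcessChain whose processes field holds one Process per command in pipeline order:

procs = io.processes # assumed order: [raspivid, tee, ffmpeg]
kill(procs[1], 2) # 2 == SIGINT; raspivid stops as if interrupted from the terminal
foreach(wait, procs) # let tee and ffmpeg drain the remaining data and exit cleanly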

My goal is to have a video file recording while at the same time accessing frames from that stream in “real time” (by that I mean within a second or so) so that I can process them. The results of that processing are then served with JSServe as diagnostics for users to view, again in “real time”.

Thanks in advance!

The solution seems to be VideoIO.appendencode, with which I can grab images off the camera, do whatever I want with them, and encode them to a video file at the same time. Brilliant! But… the encoding process on the RPi is about 60 times slower than on a PC, resulting in a frame rate of about 2 FPS. More details here:
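
For reference, here is roughly what that pattern looks like with VideoIO’s current writer API (open_video_out / write / close_video_out!, the successor of appendencode); grab_frame, process, and recording are hypothetical stand-ins for the camera read, the per-frame analysis, and the stop condition:

using VideoIO

first_frame = grab_frame() # hypothetical camera read, e.g. a Matrix{Gray{N0f8}}
writer = open_video_out("test_video.mp4", first_frame; framerate = 30)
try
    while recording() # hypothetical stop condition
        frame = grab_frame()
        process(frame) # real-time diagnostics
        write(writer, frame) # encode and append; this is the slow step on the Pi
    end
finally
    close_video_out!(writer) # finalize the file so it is playable
end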

I ran into the same problem, and have been wondering whether it is possible to just stream the raw video data to a server (e.g. on the same WLAN) that does the encoding, but I have not figured this out yet.
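
A minimal sketch of that idea, shipping raw grayscale frames over a plain TCP socket with Julia’s Sockets stdlib (the host and port are placeholders, and io/img are the camera pipeline and frame buffer from above); on the receiving machine, something like netcat piped into ffmpeg’s rawvideo demuxer (-f rawvideo -pixel_format gray -video_size 640x480) could then do the actual encoding:

using Sockets

sock = connect("192.168.1.42", 5000) # hypothetical encoding server on the same WLAN
while !eof(io)
    read!(io, img) # raw frame from the camera pipeline above
    write(sock, img) # ship the raw bytes; the server does the heavy encoding
end
close(sock)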