Stream help in Genie

Hi everyone!

For some background, I am trying to set up a little site using the Genie Framework where I can watch livestreams of my 3D printer, start timelapses, and review and download the timelapse videos. I am using the Picamera2 Python package on a Raspberry Pi to control the camera, and I can take timelapses with PyCall relatively easily.

Where I am getting confused is that in this example they write the frame to the client over and over in the HTTP server to implement the livestream as a simple image in the browser. Any ideas how I would begin to mirror this in Genie? Would I just be better off doing something like the server-sent events example from HTTP.jl? If so, how would I set that up with route?
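
To illustrate what I mean, the pattern from that example would look roughly like this in plain HTTP.jl (just a sketch, not Genie’s route; latest_jpeg is a made-up placeholder for whatever returns the newest frame as JPEG bytes):

using HTTP

# one long-lived multipart/x-mixed-replace response: the browser keeps the <img>
# open and swaps in each new JPEG part as it arrives
HTTP.listen("0.0.0.0", 8001) do http::HTTP.Stream
    HTTP.setheader(http, "Content-Type" => "multipart/x-mixed-replace; boundary=frame")
    HTTP.startwrite(http)
    while true                          # ends when the client disconnects and write throws
        write(http, "--frame\r\nContent-Type: image/jpeg\r\n\r\n")
        write(http, latest_jpeg())      # placeholder: newest frame as a Vector{UInt8}
        write(http, "\r\n")
        sleep(1/15)                     # roughly 15 frames per second
    end
end

A server-sent events route would have the same shape, just with Content-Type: text/event-stream and data: lines instead of JPEG parts.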

Any help would be greatly appreciated, as well as any general direction I should go.


We’ve had someone share an example of capturing images from a webcam with Genie. I don’t know how it’d work on the Pi, but perhaps it could be useful. Here’s the code

The link Pere pasted here is one good way to view a live feed of a webcam.

But to get frames from the RPi camera you’ll need to use something other than VideoIO (at least as of the last time I tried). You can either use PyCall, or shell out to libcamera (which is what I use).

Thank you for your response. For some reason, the link is not taking me to a text channel in discord. Perhaps I am not part of the right server? Could you possibly paste the code in this thread?

Yakir,

Interesting, perhaps using libcamera would be better than trying to mess with PyCall. Could you include a minimal example of how you shell out to libcamera to grab frames from the RPI camera?

The following is the code from that post (though this uses VideoIO):

using JpegTurbo, VideoIO
using GenieFramework
@genietools

# a convenience function to convert pixel matrices to jpegs
to_frame = String ∘ jpeg_encode

# open the camera
cam = opencamera()

# use `img` as a container for the frames
const img = Ref(read(cam))

# use `frame` as a container for the jpeg
const frame = Ref(to_frame(img[]))

# fetch fresh frames from the camera as quickly as possible
Threads.@spawn while isopen(cam)
    read!(cam, img[])
    frame[] = to_frame(img[]) # this typically takes less than the camera's 1/fps
    yield()
end

# avoid writing to disk, when the user asks for a frame they get the latest one
route("/frame") do
    frame[]
end

@app Webcam begin 
    @out imageurl = "/frame"
end myhandlers

global model = init(Webcam, debounce = 0) |> myhandlers

# add an (invalid) anchor to the imagepath in order to trigger a reload in the Quasar/Vue backend
Stipple.js_methods(model::Webcam) = """updateimage: async function () { this.imageurl = "frame#" + new Date().getTime() }"""

# have the client update the image every 33 milliseconds (should be changed to the camera's actual 1000/fps or less)
Stipple.js_created(model::Webcam) = "setInterval(this.updateimage, 33)"

# set the image style to basic to avoid the loading wheel etc 
ui() = [imageview(src=:imageurl, basic=true)]

route("/") do 
    page(model, ui) |> html
end

Server.up()

and here is how I use libcamera:

function get_buffer_img(w, h)
    w2 = 64ceil(Int, w/64) # dimension adjustments to hardware restrictions
    nb = Int(w2*h*3/2) # total number of bytes per img
    buff = Vector{UInt8}(undef, nb)
    i1 = (w - h) ÷ 2
    i2 = i1 + h - 1
    img = view(reshape(view(buff, 1:w2*h), w2, h), i1:i2, h:-1:1)
    return buff, img
end

struct Camera
    mode::CamMode
    buff::Vector{UInt8}
    img::SubArray{UInt8, 2, Base.ReshapedArray{UInt8, 2, SubArray{UInt8, 1, Vector{UInt8}, Tuple{UnitRange{Int64}}, true}, Tuple{}}, Tuple{UnitRange{Int64}, StepRange{Int64, Int64}}, false}
    proc::Base.Process

    function Camera(cm::CamMode)
        w, h, fps = camera_settings(cm)
        buff, img = get_buffer_img(w, h)
        proc = open(`rpicam-vid --denoise cdn_off -n --framerate $fps --width $w --height $h --timeout 0 --codec yuv420 -o -`)
        eof(proc)
        if cm == cmoff
            kill(proc)
        end
        new(cm, buff, img, proc)
    end
end

function Base.close(cam::Camera)
    kill(cam.proc) # stop the rpicam-vid process
end

function detect(cam::Camera) 
    read!(cam.proc, cam.buff)
    return cam.img
end
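
To wire this into the /frame route from the Genie example above, something like the following should work (an untested sketch: cmon stands in for whatever CamMode value you use, and since img here is just the luma plane I encode it as a grayscale JPEG):

using JpegTurbo, ColorTypes, FixedPointNumbers

cam = Camera(cmon)                      # cmon: placeholder for your CamMode

# encode the luma (Y) plane view as a grayscale JPEG string
to_frame(img) = String(jpeg_encode(Gray.(reinterpret.(N0f8, img))))

const frame = Ref(to_frame(detect(cam)))

Threads.@spawn while process_running(cam.proc)
    frame[] = to_frame(detect(cam))     # detect blocks until the next raw frame is read
    yield()
end

route("/frame") do
    frame[]
end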

Thank you!

If you want to give Bonito a try, you can do:

using FFMPEG_jll, Bonito
# Somehow I can't get VideoIO to work, but FFMPEG works fine...
ffmpeg_cmd = `$(FFMPEG_jll.ffmpeg()) -f dshow -i video="DroidCam Source 3" -preset veryfast -loglevel quiet -vframes 1 -f image2pipe -c:v png -`

App() do session
    bytes = read(ffmpeg_cmd)
    asset = Observable(Bonito.BinaryAsset(bytes, "image/png"))
    @async while true
        bytes = read(ffmpeg_cmd)
        asset[] = Bonito.BinaryAsset(bytes, "image/png")
        yield()
        session.status == Bonito.CLOSED && break
    end
    return DOM.img(src=asset)
end

It should work quite similarly with read(video_io_stream).
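
For example, something like this (untested, assuming VideoIO.opencamera() can open your camera):

using Bonito, VideoIO, JpegTurbo

cam = VideoIO.opencamera()

App() do session
    # same pattern as above, but reading frames with VideoIO instead of shelling out
    asset = Observable(Bonito.BinaryAsset(jpeg_encode(read(cam)), "image/jpeg"))
    @async while session.status != Bonito.CLOSED
        asset[] = Bonito.BinaryAsset(jpeg_encode(read(cam)), "image/jpeg")
        yield()
    end
    return DOM.img(src=asset)
end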

There should be smarter options for streaming an MPEG video, but it’s more complicated ^^


I believe I found your post from 2022 on the Raspberry Pi forum! Cool!

I am a little new to this. You are reading the raw YUV420 pixel format from the RPi camera; do you have to convert that at all before imageview would be able to show it in a browser window? How would you manipulate the image matrix to facilitate this?

On another note, I noticed that if I used 1920x1080 as my width and height, the resulting img view from your get_buffer_img function is 1080x1080. Is that purely a coincidence of how the YUV420 pixel format is structured, since the U and V planes are subsampled? I want to make sure I understand this completely.
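
To show what I mean, here is the arithmetic I traced through get_buffer_img for 1920x1080:

w2 = 64 * ceil(Int, 1920 / 64)   # 1920, already a multiple of 64
nb = Int(w2 * 1080 * 3 / 2)      # 3_110_400 bytes: full-res Y plane plus quarter-res U and V planes
i1 = (1920 - 1080) ÷ 2           # 420
i2 = i1 + 1080 - 1               # 1499, so rows 420:1499 of the Y plane give 1080 rows
# the column range h:-1:1 keeps all 1080 columns (reversed), hence the 1080x1080 view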

You’re right, the square shape of the frame is just a deviation I needed for my application. You can change that.
But I really just copy pasted my specific setup without any explanation or help… sorry about that. I hope I’ll have some time to expand on it next week.

No need to explain the shape of the image; I was able to figure that part out, and I have the YUV “frames” going into an image now.

Did you use base64encode to get your image to show in the imageview call? Or did you use the String(jpeg_encode(img)) method from above? What rates were you getting encoding your image to show in your Genie page? Did you have to make your images smaller?

Awesome!

Did you use base64encode to get your image to show in the imageview call?

No.

Or did you use the String(jpeg_encode(img)) method from above?

Yeah, I used the jpeg_encode, but I never tested the difference between that and base64encode. It might be the case that the difference depends on the image size and even complexity…?
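
If you want to check it on your own frames, a rough comparison would be something like this (just a sketch with a stand-in frame; the data: URL variant is what you would embed directly instead of serving a route):

using JpegTurbo, Base64, BenchmarkTools
using ColorTypes, FixedPointNumbers

img = Gray{N0f8}.(rand(300, 300))   # stand-in for a 300x300 camera frame

@btime String(jpeg_encode($img))                                      # binary JPEG served from a route
@btime "data:image/jpeg;base64," * base64encode(jpeg_encode($img))    # base64 data URL variant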

What rates were you getting encoding your image to show in your Genie page?

I didn’t benchmark it all too carefully, but 15 FPS worked fine on WiFi.

Did you have to make your images smaller?

Absolutely. In my use-case, 300x300 was enough for the image to be useful (even smaller would have worked for me, but it was quick enough, so…).

I’m sure that @sdanisch is right: encoding a stream should be better in all respects; I just don’t know how to do that. I also plot a bunch of stuff on the frame (markers, object detection, etc.), so I would need to fetch frames, plot things, and then stream that out to the client.