I’m using the picamera library to grab an image from a Pi Camera on a Raspberry Pi. The main Python code in question is (found here):
import io
import time
import picamera
from PIL import Image
# Create the in-memory stream
stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.capture(stream, format='jpeg')
# "Rewind" the stream to the beginning so we can read its content
stream.seek(0)
image = Image.open(stream)
I’ve gotten this far:
# import everything
using PyCall    # pyimport
using Images    # RGB, colorview, normedview
picamera = pyimport("picamera")
io = pyimport("io")
numpy = pyimport("numpy")
pilimage = pyimport("PIL.Image")
# some constants
framerate = 30
w = 268
h = 268
# init and set the camera
camera = picamera.PiCamera()
camera.resolution = (w, h)
camera.framerate = framerate
# my implementation
stream = io.BytesIO()
camera.capture(stream, resize=(w, h), format="jpeg", use_video_port=true)
stream.seek(0)
img = numpy.array(pilimage.open(stream))
# reinterpret the h×w×3 UInt8 array as an RGB image
# (note: the (3, 2, 1) permutation also transposes the image;
# (3, 1, 2) would keep the original orientation)
collect(colorview(RGB, normedview(PermutedDimsArray(img, (3, 2, 1)))))
While I can’t get away from using the picamera library in Python (because I need to save a high-quality video at the same time as I extract low-res images from the stream), I suspect there is a way to avoid using Python’s io, numpy, and PIL…
Basically the idea is that we can already load jpeg-formatted images from byte streams (using ImageMagick under the hood), so there’s no need to do that loading in NumPy+PIL.
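For example, something like this should work (a sketch assuming FileIO.jl with the ImageMagick.jl backend installed; "test.jpg" is a placeholder path standing in for any source of JPEG-encoded bytes):

```julia
using FileIO          # load, Stream, @format_str
using ImageMagick     # the backend that actually decodes the JPEG

# Any Vector{UInt8} holding JPEG-encoded data will do; here the
# bytes happen to come from a file on disk.
bytes = read("test.jpg")

# Wrap the bytes in an in-memory IO and decode them with
# ImageMagick -- no numpy or PIL anywhere.
img = load(Stream{format"JPEG"}(IOBuffer(bytes)))
```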
Ah, ok. I think the basic process is right, but using codeunits(stream.read()) is wrong. The general idea is that you need to get the raw bytes out of the BytesIO as a Vector{UInt8}, construct an IOBuffer from that vector, and then load from it.
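Concretely, the capture step might then look like this (an untested sketch, since it needs the camera hardware; it assumes FileIO.jl and ImageMagick.jl for the decoding, and uses pycall with an explicit Vector{UInt8} return type to pull the raw bytes out of the BytesIO):

```julia
using PyCall
using FileIO, ImageMagick   # decode the JPEG bytes on the Julia side

picamera = pyimport("picamera")
io = pyimport("io")

w, h = 268, 268

camera = picamera.PiCamera()
camera.resolution = (w, h)
camera.framerate = 30

# capture a low-res JPEG into an in-memory Python stream
stream = io.BytesIO()
camera.capture(stream, resize=(w, h), format="jpeg", use_video_port=true)

# Get the buffer's contents as raw bytes. Asking pycall for a
# Vector{UInt8} return type sidesteps any intermediate String
# conversion (the reason codeunits(stream.read()) looked necessary).
bytes = pycall(stream.getvalue, Vector{UInt8})

# Wrap the bytes in an IOBuffer and load directly: no numpy, no PIL.
img = load(Stream{format"JPEG"}(IOBuffer(bytes)))
```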