Dear fellow Julians,
I’m currently working on a project that uses computer vision to monitor vast areas for wildfires. We currently have a working version of the software implemented in Python. The code runs on our server, which receives video streams from various cameras and feeds them to our fire-detection code, which is based on the YOLO (You Only Look Once) algorithm.
One of the goals of the project is to scale this to hundreds or even thousands of cameras monitoring many areas of interest. In order to do that, I’m working on a concurrent version of the algorithm in Julia. Since I’m pretty new to concurrent / parallel programming, I’m not sure I’m approaching it the best way. Thus, I would like to share what I have right now in the hope I could get some useful feedback.
After reading the Parallel Computing section of the Julia manual, this is what I came up with so far:
Functions:

| Function | Description |
|---|---|
| `poll_camera()` | Polls a particular camera |
| `detect_features()` | Detects features on a single video frame |
| `notify_detection()` | Sends a notification to the appropriate service |
Channels:

| Channel | Description |
|---|---|
| `frame_channel` | Used by `poll_camera()` to send frames to `detect_features()` |
| `notifier_channel` | Used by `detect_features()` to send YOLO results to `notify_detection()` |
The system actually has many more functions besides the ones listed above, but for the sake of this message the ones above are enough.
As far as I understand, my scenario is one of “task parallelism”, as opposed to “data parallelism”. The code is shown below:
using VideoIO

# Channels
# (frame_channel carries raw frames, so Channel{Tuple} would reject them;
#  concretely typed channels would be better still for performance)
const frame_channel = Channel{Any}(128)
const notifier_channel = Channel{Tuple}(128)

function poll_camera(camera_id)
    # Open the camera stream
    strm = VideoIO.openvideo(camera_id)
    # Read the first frame to allocate the frame buffer
    video_frame = read(strm)
    put!(frame_channel, copy(video_frame))
    while !eof(strm)
        read!(strm, video_frame)
        # Send a copy: 'video_frame' is overwritten by the next read!
        put!(frame_channel, copy(video_frame))
    end
    close(strm)
end
function detect_features()
    # Loop over items in 'frame_channel'
    for video_frame in frame_channel
        # Run the model
        yolo_results = yolo_model(...)
        # If a detection occurs in the current frame, we send the yolo results to the notify_detection() function
        if !isempty(yolo_results)
            put!(notifier_channel, (video_frame, yolo_results))
        end
    end
end
function notify_detection()
    # Loop over items in 'notifier_channel'
    for item in notifier_channel
        # Code comes here...
    end
end
After defining the functions above, I spawn multiple tasks for each of them as shown below:
using Base.Threads: @spawn

# Spawn one poll_camera() task per camera
# ('camera_ids' stands for whatever collection holds the camera identifiers)
for camera_id in camera_ids
    @spawn poll_camera(camera_id)
end

# Spawn 'n_tasks_2' detect_features() tasks
for i in 1:n_tasks_2
    @spawn detect_features()
end

# Spawn 'n_tasks_3' notify_detection() tasks
for i in 1:n_tasks_3
    @spawn notify_detection()
end
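To convince myself the wiring works end to end before plugging in VideoIO and the actual YOLO model, I also wrote a runnable toy version. The `*_stub` functions and the fake frame type below are stand-ins I invented, not the real code; the shutdown order (wait for producers, close `frame_channel`, wait for detectors, close `notifier_channel`) is the part I’d like checked:

```julia
using Base.Threads: @spawn

# Toy pipeline mirroring the real one; a "frame" is just a Vector{Int} here.
frame_channel    = Channel{Vector{Int}}(128)
notifier_channel = Channel{Tuple{Vector{Int},Int}}(128)

# Stand-in for poll_camera(): emits five fake frames per camera.
function poll_camera_stub(camera_id)
    for i in 1:5
        put!(frame_channel, [camera_id, i])
    end
end

# Stand-in for detect_features(): "detects" something on even frame numbers.
function detect_features_stub()
    for frame in frame_channel
        iseven(frame[2]) && put!(notifier_channel, (frame, frame[2]))
    end
end

# Stand-in for notify_detection(): just counts notifications.
function notify_detection_stub(counter)
    for _ in notifier_channel
        counter[] += 1
    end
end

hits = Ref(0)
detectors = [@spawn detect_features_stub() for _ in 1:3]
notifier  = @spawn notify_detection_stub(hits)

@sync for camera_id in 1:4            # four fake cameras
    @spawn poll_camera_stub(camera_id)
end
close(frame_channel)                  # producers done -> detectors drain and exit
foreach(wait, detectors)
close(notifier_channel)               # detectors done -> the notifier exits
wait(notifier)
println(hits[])                       # 4 cameras x 2 even frames each = 8
```

Closing each channel once all of its producers have finished is what lets the `for x in channel` loops downstream terminate instead of blocking forever.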
When searching this forum for parallel programming posts, I stumbled upon the following tutorial written by @tkf:
https://juliafolds.github.io/data-parallelism/tutorials/concurrency-patterns/
Reading this tutorial made me wonder if I’m doing the right thing. Of the patterns listed in the tutorial, it seems to me that what I’m doing is most similar to the “Task farm” pattern, although I’m not sure (I noticed, for example, that my implementation is missing the `@sync` construct in the outer loop).
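For what it’s worth, here is my reading of the task-farm idea in miniature (a self-contained sketch I wrote while studying the tutorial, not code taken from it): the `@sync` block waits for every spawned task, and closing the `jobs` channel is what lets the workers’ loops terminate:

```julia
using Base.Threads: @spawn

jobs    = Channel{Int}(32)
results = Channel{Int}(32)

@sync begin
    for _ in 1:4                      # four farm workers
        @spawn for j in jobs
            put!(results, j^2)        # the "work" is just squaring
        end
    end
    @spawn begin                      # a single producer
        foreach(j -> put!(jobs, j), 1:10)
        close(jobs)                   # closing ends the workers' loops
    end
end
close(results)
total = sum(collect(results))
println(total)                        # sum of 1^2..10^2 = 385
```

If this is the right picture, then my missing `@sync` mainly means I have no point in the program where I know all tasks have finished, which matters for closing the channels cleanly.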
I would really appreciate it if someone could give me feedback regarding my implementation. Am I going in the right direction? Are there any obvious mistakes in the way I’m approaching this? Any pointers would be greatly appreciated.
Thanks a lot!