EdgeCameras.jl: Imaging around corners

package
announcement

#1

Hi all! I just finished up a Julia implementation of a cool algorithm for my computational photography class project, and I thought it would be worth sharing. The idea is called an “edge camera”, in which an edge (like the corner of a wall) acts as a partial 1-D pinhole camera, creating a (very faint) image of the scene around the corner. By looking at subtle variations in the shadow cast by the corner on the ground, we can actually infer, for example, the number of people on the other side of the wall and how those people are moving.

You can see more about the original algorithm from the authors of the paper here: https://people.csail.mit.edu/klbouman/cornercameras.html

And you can see my Julia implementation here: https://github.com/rdeits/EdgeCameras.jl/blob/gh-pages/notebooks/demo.ipynb

I’ve tried to keep the implementation as simple as possible, but I snuck in a few Julia-specific goodies, like the fact that the “images” that I reconstruct are actually AxisArrays with Unitful dimensions on their axes, so you can do things like ask for the slice of an image between two times by doing `im[:, 5u"s"..10u"s"]`.
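As a rough sketch of what that slicing looks like (assuming AxisArrays.jl and Unitful.jl are installed; the axis names and frame rate here are made up for illustration, not necessarily what EdgeCameras.jl uses):

```julia
using AxisArrays, Unitful

# A fake "image": 8 angular bins sampled at 30 fps for 15 seconds.
# The :time axis carries physical units, so slices can use real times.
nframes = 450
data = rand(8, nframes)
im = AxisArray(data,
               Axis{:angle}(1:8),
               Axis{:time}((1:nframes) ./ 30 .* 1u"s"))

# Every angular bin, restricted to the frames between t = 5 s and t = 10 s:
window = im[:, 5u"s" .. 10u"s"]
```

The interval indexing (`..`) comes from IntervalSets.jl via AxisArrays, and Unitful makes sure you can't accidentally index the time axis with a unitless number.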

Many thanks to the developers of Images.jl, VideoIO.jl, AxisArrays.jl, Unitful.jl, and CoordinateTransformations.jl, without whom I would have had to do my final project in (shudder) C++.


#2

That looks like an interesting project!
Would you be able to use a live video camera, instead of reading a video file as in your code?
I guess you have to place the four markers around the area of interest, which might not be easy with a live feed.

Then again, with a live feed would you be able to just subtract the static background and keep only the moving parts, which are what you are interested in?
And why can't you just use the whole scene as the area of interest anyway - what function do the four markers serve?


#3

Thanks!

Yes, live video should absolutely be possible, and I think you’ve correctly identified the challenges. Live video from a fixed camera should be pretty easy, since you can just take a moving average of the last N frames as an estimate of the background. Live video from a moving camera would be harder, since you still need to estimate the background of the scene, which would probably require aligning and averaging the previous frames from the camera.
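A minimal sketch of that fixed-camera background estimate, in plain Julia (the `BackgroundEstimator` type and its ring buffer are hypothetical, not part of EdgeCameras.jl; frames are plain matrices here rather than real video frames):

```julia
# Running background estimate: the elementwise mean of the last N frames.
mutable struct BackgroundEstimator
    buffer::Vector{Matrix{Float64}}  # ring buffer of the last N frames
    capacity::Int
    next::Int                        # index to overwrite next
end

BackgroundEstimator(N::Int) = BackgroundEstimator(Matrix{Float64}[], N, 1)

function update!(bg::BackgroundEstimator, frame::Matrix{Float64})
    if length(bg.buffer) < bg.capacity
        push!(bg.buffer, copy(frame))
    else
        bg.buffer[bg.next] = copy(frame)
        bg.next = mod1(bg.next + 1, bg.capacity)
    end
    return bg
end

# Subtracting the background from a new frame leaves only the moving parts.
background(bg::BackgroundEstimator) = sum(bg.buffer) ./ length(bg.buffer)

bg = BackgroundEstimator(3)
for k in 1:5
    update!(bg, fill(Float64(k), 2, 2))  # constant frames 1.0 .. 5.0
end
# With capacity 3, the buffer now holds frames 3, 4, and 5,
# so the background is a 2×2 matrix of 4.0.
foreground = fill(5.0, 2, 2) .- background(bg)
```

For real video you'd want an exponentially-weighted average instead, which avoids storing N full frames.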

The four markers are necessary because the particular geometry of the edge camera requires identifying which pixels lie on the ground around the edge and, for each of those pixels, what its angle with respect to the edge is. The four markers are just an easy way to do that identification. There are lots of other ways of identifying the floor and estimating the relative angle from pixels on the ground to the edge, but hand-labeling four points was by far the easiest for me to implement.
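For concreteness, here's a toy version of that angle computation in ground-plane coordinates (the function name and conventions are hypothetical; the actual package works on image pixels, using a homography estimated from the four hand-labeled markers to map them onto the ground plane):

```julia
# Signed angle of a ground point relative to the corner edge, measured
# from the wall direction: points along the wall are at angle 0, points
# straight out from the wall are at pi/2. `corner` is the position of the
# edge and `wall` a unit vector along the occluding wall.
function edge_angle(corner::NTuple{2,Float64}, wall::NTuple{2,Float64},
                    point::NTuple{2,Float64})
    dx = point[1] - corner[1]
    dy = point[2] - corner[2]
    # atan(cross, dot) gives the signed angle between the wall direction
    # and the corner-to-point vector.
    atan(wall[1] * dy - wall[2] * dx, wall[1] * dx + wall[2] * dy)
end

corner = (0.0, 0.0)
wall = (1.0, 0.0)                     # wall runs along the +x axis
edge_angle(corner, wall, (1.0, 1.0))  # pi/4: 45 degrees around the corner
```

Binning ground pixels by this angle is what turns the corner's penumbra into a 1-D image of the hidden scene.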