Hi all,
Thought I’d ask for some image analysis advice:
I want to detect the cross points in images of a grid (I marked a couple of these in red, just as an example):
(a clean, full-resolution version of this image is available here)
I need to detect the points where the green ropes intersect. I then use those locations to spatially calibrate the image: each such cross point is on a 5 cm x 5 cm grid in real life (it just looks warped because of the camera lens and other effects).
There are a number of ways to accomplish this:
- FFT: the barrel distortion obscures the regular grid pattern, and the frequencies are not clean enough for that.
- Morphological operations: the images of this grid will have a large variation of backgrounds, so any simple operation (dilate, top-hat, etc.) might not work in the long run (though they can be useful for pre-processing the image).
- Template matching: not sure how to do that.
- TensorFlow: no idea how to do that.
- Hough transform: AFAIK, doesn’t exist in Julia.
I would really appreciate any advice/input you might have.
Thanks in advance!
Great find on the Hough transform! I missed that. The corner stuff feels very fickle with respect to the size of the corners, but I should put more time into that.
Any half-decent corner detection algorithm should be able to find the intersections; you rarely get more ideal input than this. Yes, you may need to get the scale about right but it shouldn’t be too sensitive.
I’m doubtful about the Hough transform since those lines are not exactly straight. Maybe they are locally straight enough if you apply it to smaller patches.
Thanks for the corner encouragement, I’ll play around with that instead of the Hough. The way I thought about the Hough transform is exactly as you said: all I need is to get “enough lines” intersecting at the crossing points, and then I have the intersections.
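If the Hough route does pan out, the final step, intersecting two detected lines given in (ρ, θ) normal form, is just a 2×2 linear solve. A minimal sketch (the (ρ, θ) values in the example are invented, not taken from the image):

```julia
using LinearAlgebra

# Intersect two lines given in Hough normal form: x*cos(θ) + y*sin(θ) = ρ.
# Returns (x, y), or `nothing` if the lines are (near-)parallel.
function hough_intersection(ρ1, θ1, ρ2, θ2; tol = 1e-9)
    A = [cos(θ1) sin(θ1); cos(θ2) sin(θ2)]
    abs(det(A)) < tol && return nothing
    x, y = A \ [ρ1, ρ2]
    return (x, y)
end

# A vertical line x = 3 (θ = 0) crossed with a horizontal line y = 4
# (θ = π/2) intersects at (3, 4).
hough_intersection(3.0, 0.0, 4.0, π / 2)
```

Applying the transform to small patches, as suggested, just means running each patch’s (ρ, θ) pairs through this and adding the patch offset back to the result.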
It can be surprisingly difficult to detect these kinds of ‘corners’ well. Here’s my attempt:
# Using Julia 0.6
using Images
using ImageProjectiveGeometry
using PyPlot
img = load("DSC00030.jpg");
gimg = Float64.(Gray.(img)); # Get a Float64 array
figure(1)
PyPlot.imshow(gimg, cmap = ColorMap("gray"))
# Detect corners using Noble's variant of the Harris detector with a
# large smoothing scale of 5.
cimg = ImageProjectiveGeometry.noble(gimg, 5);
figure(2)
PyPlot.imshow(cimg, cmap = ColorMap("gray"));
title("Corner feature response")
# Get the 600 strongest corner responses and overlay on input image.
(r, c) = ImageProjectiveGeometry.nonmaxsuppts(cimg, radius = 5, N = 600, img = gimg, subpixel = true, fig = 3);
title("Detected Corners")
The localization of the grid points is not as good as one would like; the subpixel flag in nonmaxsuppts does not really help. Most so-called corner detectors do not localize well on things that a human would call a corner. There are a number of corner detectors in Images and in ImageProjectiveGeometry; noble() seemed to give the best results for me. If you have a checkerboard pattern, rather than your grid of lines, looking for minima in the response of ImageProjectiveGeometry.hessianfeatures is the way to go.
Thank you for your input! Wow, the great @peterkovesi! Since I have you here, let me just say thank you for all the content you put out there and your amazing work in image analysis: it is mostly thanks to you that my colleagues and I could use image analysis in our research. Thank you.
A checkerboard would have been better for sure, but we can’t use a flat surface like a printed checkerboard: We want to calibrate a flat patch of earth in the wild. These patches have sparse grass and plants growing on them and we don’t want to crush and flatten these by placing a board on them. So this grid of ropes seems ideal… But maybe we should add small reflective balls to the intersection points… Hmm…
Just a thought: regardless of the method you end up working with, removing that barrel distortion may help significantly. It should be easy with, for example, LensFun, especially if it already contains your lens, otherwise you can calibrate and add it.
I thought so too, but I find that the success of the FFT (which is what I assume you’re getting at) depends heavily on how good the barrel correction is. So it’s very sensitive…
I meant also for methods other than FFT. I use these corrections for photography, they are very fast and have little setup cost (there are command line options which you can easily run from Julia).
Ok cool. Do you mean like ImageMagick? I thought you meant that for an FFT to result in a clear signal in the frequency domain, a signal I can use, I’d first need to align the lines better.
No, I mean LensFun, which I even linked above. ImageMagick can do barrel correction too, but I find LensFun (which I usually use via darktable, though there is a command-line version; most Linux distros have it packaged) very nice.
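For reference, ImageMagick’s barrel correction is a one-liner; the four coefficients below are placeholders, not a real lens profile — you would need values fitted to your own lens:

```shell
# -distort Barrel takes coefficients "A B C D" of the radial model
# r_src = r * (A*r^3 + B*r^2 + C*r + D); the numbers here are made up.
convert DSC00030.jpg -distort Barrel "0.0 0.0 -0.05 1.05" corrected.jpg
```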
OK, I checked out LensFun, very cool. But:
I need to spatially calibrate videos in order to track moving animals, converting pixel coordinates to real-world coordinates. The animals can move only in two dimensions (thankfully), so I don’t really need the fancy and awesome Camera Calibration Toolbox for Matlab. LensFun would remove any distortions from the images, but what I need is the spatial transformation that converts pixel coordinates to real-world coordinates. Or can I use LensFun for that purpose? I should check whether the video camera and wide-lens combination exists in their database.
Sorry, this strayed off from being Julia-related… But I have to say, Interpolations.jl, CoordinateTransformations.jl, and ImageTransformations.jl could be combined with @peterkovesi’s ImageProjectiveGeometry.jl to build a camera calibration package (there is even a container for it already: CameraGeometry.jl)…
I used and read your code before on numerous occasions and it helped me greatly in my first steps in projective geometry.
Thank you!!
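Since the animals move in a plane, the pixel-to-world map is a plane-to-plane homography, which can be fit directly from point correspondences (e.g. the detected grid intersections) with the direct linear transform (DLT), no full camera model needed. A minimal sketch using only LinearAlgebra; the correspondences in any real use would come from the detected grid:

```julia
using LinearAlgebra

# Fit a 3×3 homography H mapping pixel coords (x, y) to world coords
# (X, Y) from n ≥ 4 point correspondences via the direct linear
# transform: stack two equations per point into A*h = 0 and take the
# right singular vector belonging to the smallest singular value.
function fit_homography(pix, world)
    n = length(pix)
    A = zeros(2n, 9)
    for i in 1:n
        x, y = pix[i]
        X, Y = world[i]
        A[2i - 1, :] = [-x, -y, -1, 0, 0, 0, x * X, y * X, X]
        A[2i, :]     = [0, 0, 0, -x, -y, -1, x * Y, y * Y, Y]
    end
    h = svd(A).V[:, end]
    return reshape(h, 3, 3)'        # h holds H in row-major order
end

# Apply H to a pixel coordinate, dividing out the projective scale.
function apply_homography(H, p)
    v = H * [p[1], p[2], 1.0]
    return (v[1] / v[3], v[2] / v[3])
end
```

With the 5 cm spacing, the world coordinate of each detected intersection is just (column × 5, row × 5), so one fit gives the whole pixel-to-cm mapping.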
@yakir12
From what I understand, you are filming a flat surface about 75x75 cm. You don’t have any preferred point of reference (origin and orientation), but you do want to be able to convert pixels to cm on the flat surface in a consistent way.
If that is so, I would try the following procedure:
- Grayscale and downsample the image for a faster and easier workflow.
- LoG-filter the image with a large enough kernel; this enhances areas with edges on the order of magnitude of the LoG kernel and suppresses uniform areas.
- Choose a large enough patch somewhere in the middle and use it as the kernel to filter the image. Since you have a repeating pattern, the local maxima will have the same pattern as the corners, even though they are not localized on the corners.
- Since there is distortion in the image, a pattern far away from the original template may not match it; to overcome this, you’ll have to take a new template centered around the last found maximum every now and then.
- Once you have the grid mapped out and you know it is 5 cm, you can interpolate pixels to “real-world” coordinates, and you don’t need any distortion correction or camera calibration, which would just be a more compact mathematical model describing the grid you just found.
Attached below is my quick go at the procedure above, without the more complicated detail of re-initializing the template.
using Images
using PyPlot

A = load("DSC00030.jpg")          # imread is deprecated; use load
sz = size(A) .÷ 4                 # downsample by 4 for speed
A = imresize(A, sz)
G = Float64.(Gray.(A))            # grayscale as a Float64 array
G = imfilter(G, Kernel.LoG(3))    # enhance structure at the line scale
imshow(G)
T = G[300:399, 300:399]           # template patch from the middle
res = imfilter(G, T)              # correlate template over the image (was imfilter(B, T); B was undefined)
imshow(res)
Good idea, using autocorrelation (right?), but I fear I’ve been spooked by @peterkovesi’s cautionary words. I think I’ll put more effort into constructing a calibration board that has a lot more contrast. I’d rather put the time and effort into that phase so that the automatic calibration process later on is completely hands-free…
“We want to calibrate a flat patch of earth in the wild”
What you really want to calibrate is the camera itself, i.e. the lens distortion. You don’t even need to use a physical grid in the field; just the frame is sufficient, specifically at least 3 corners of it. Knowing their positions in the rectified image, you will be able to project a virtual grid without disturbing the plants with a physical grid.
As mentioned earlier, a checkerboard pattern is the standard camera-calibration target. OpenCV has functions to perform the calibration, or you can use a standalone application.
OK! I’ll try to test this ASAP. That would make our lives a lot easier…! Thanks. If I have the 3 points I can just use CoordinateTransformations.jl: I’m not interested in unwarping the images, just in the coordinates tracked from the videos.
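For the 3-corner case, three correspondences determine an affine pixel-to-cm map exactly (on a plane, after distortion removal); a minimal sketch of solving for it directly, with made-up corner coordinates in the test, before reaching for CoordinateTransformations.jl:

```julia
using LinearAlgebra

# Solve world = A*pix + b from exactly three point correspondences:
# six linear equations in the six unknowns of A (2×2) and b (2-vector).
function fit_affine(pix, world)
    M = zeros(6, 6)
    rhs = zeros(6)
    for i in 1:3
        x, y = pix[i]
        M[2i - 1, :] = [x, y, 0, 0, 1, 0]
        M[2i, :]     = [0, 0, x, y, 0, 1]
        rhs[2i - 1], rhs[2i] = world[i]
    end
    p = M \ rhs
    return [p[1] p[2]; p[3] p[4]], [p[5], p[6]]
end

# Convert a tracked pixel coordinate to cm on the ground plane.
to_world(A, b, pix) = A * [pix[1], pix[2]] + b
```

An affine map can’t absorb perspective foreshortening, so if the camera is far from perpendicular to the ground, a fourth corner and a full homography would be the safer choice.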