Detecting crosses in grids

OK, I tested it with this much distortion:
[image: tmp2]
The differences are bigger now, with a mean of 1.43 cm. Still, I feel we can try to avoid that much barrel distortion (by filming from a little further away) and live with an error of about 1 cm. I also suspect that if I had been able to use the bottom inner corners (rather than the upper ones, as until now), the accuracy would increase.

To summarize: I’m sure you are right that the projective transform cannot account for both the wide-angle lens distortion and the perspective of the camera. But it seems this biologist can live with the marginal errors it produces.

You’ll probably find in @peterkovesi’s code a function to calculate the homography given your control points.

Yep, and there’s MATLAB code to fiddle with. I started on it, but there’s more to be done.
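In case it helps, here is a minimal sketch of estimating a homography from control points with the standard DLT (direct linear transform), in plain Julia using only `LinearAlgebra`. The names `fit_homography` and `apply_homography` are made up for illustration, and for real data you would want Hartley-style coordinate normalization on top of this:

```julia
using LinearAlgebra

# Direct linear transform: estimate the 3×3 homography H that maps
# each point ps[k] to qs[k] (both vectors of (x, y) tuples).
# Needs at least four correspondences, no three collinear.
function fit_homography(ps, qs)
    A = zeros(2 * length(ps), 9)
    for (k, ((x, y), (u, v))) in enumerate(zip(ps, qs))
        A[2k - 1, :] = [-x, -y, -1, 0, 0, 0, u * x, u * y, u]
        A[2k,     :] = [0, 0, 0, -x, -y, -1, v * x, v * y, v]
    end
    h = svd(A).V[:, end]            # least-squares null vector of A
    return copy(reshape(h, 3, 3)')  # rows of H are h[1:3], h[4:6], h[7:9]
end

# Map a point through H (homogeneous coordinates, then dehomogenize).
function apply_homography(H, (x, y))
    p = H * [x, y, 1.0]
    return (p[1] / p[3], p[2] / p[3])
end
```

With exactly four control points the fit is exact; with more, it is a least-squares solution in the algebraic error.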

This discussion has diverged quite a bit from the original question, but going back to it: what you might be looking for is a symmetry detector. Here is a crude implementation.

using Images

# Keep only local maxima: a pixel survives unless some 8-neighbor is strictly larger.
function nonmax_suppression(x::Matrix{T}) where {T}
    I, J = size(x)
    y = copy(x)
    for j = 1:J
        for i = 1:I
            c = x[i, j]
            # The comma-joined loop is a single nest, so `break` exits both ranges.
            for j2 = max(j - 1, 1):min(j + 1, J),
                i2 = max(i - 1, 1):min(i + 1, I)
                if x[i2, j2] > c
                    y[i, j] = zero(T)
                    break
                end
            end
        end
    end
    return y
end

x = green(load("DSC00030.jpg"))                      # green channel of the photo
x = imfilter(x, KernelFactors.gaussian((1.2, 1.2)))  # pre-smoothing
g1, g2 = imgradients(x, KernelFactors.ando3)         # gradients (recent Images.jl needs the kernel argument)
α1 = g1 .* g1 - g2 .* g2   # double-angle representation of the
α2 = 2 * g1 .* g2          # gradient orientation
α1 = imfilter(α1, KernelFactors.gaussian((1.2, 1.2)))
α2 = imfilter(α2, KernelFactors.gaussian((1.2, 1.2)))
# Matched filters for a cross-like (second-order rotational symmetry) pattern.
h1 = [(i^2 - j^2) / max(1, i^2 + j^2) for i = -15:15, j = -15:15]
h2 = [2 * i * j / max(1, i^2 + j^2) for i = -15:15, j = -15:15]
r = imfilter(α2, centered(h2)) - imfilter(α1, centered(h1))
r = nonmax_suppression(r)

using ImageView
imshow((r .> 0) + x)   # overlay the detections on the image
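If you want the coordinates of the detected crosses rather than an overlay image, the maxima that survive suppression can be pulled out of `r` with plain `Base.findall`. A toy sketch (the threshold 0.5 is arbitrary, and `>(t)` needs Julia ≥ 1.2):

```julia
# Toy response matrix standing in for `r` after non-max suppression.
r = zeros(5, 5)
r[2, 3] = 0.9
r[4, 4] = 0.4

# Indices of sufficiently strong responses.
peaks = findall(>(0.5), r)
# peaks == [CartesianIndex(2, 3)]
```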

[image: result, the detections overlaid on the photo]

Edit: Removed square roots from the denominators of h1 and h2. Those mistakenly remained when I restructured the code. Makes marginal difference to the result in this case.


My god! That’s incredible!!!

Wow…

@GunnarFarneback I am curious to learn more about this method: did you come up with it, or does it map to one of the commonly used edge detector schemes?

This simple but crude variation is my own construction, but it’s based on theory that was well known in the lab where I did my PhD. You can, for example, read about it in chapter 4 of Björn Johansson’s PhD thesis, Low Level Operations and Learning in Computer Vision, with citations of earlier research in section 4.7.
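For anyone reading along: the `α1`, `α2` pair in the code above is the double-angle representation of the gradient orientation, which is what lets the matched filters `h1`, `h2` pick out cross-like (second-order rotational symmetry) patterns. A quick numeric sanity check of that identity, assuming a unit gradient at an arbitrary angle θ:

```julia
θ = 0.7                  # arbitrary gradient direction
g1, g2 = cos(θ), sin(θ)  # unit gradient components
α1 = g1^2 - g2^2         # equals cos(2θ)
α2 = 2 * g1 * g2         # equals sin(2θ)
(α1 ≈ cos(2θ), α2 ≈ sin(2θ))
# → (true, true)
```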


Thank you @GunnarFarneback; I really appreciate you taking the time to point me in the right direction. I hope we can get something like this into scikit-image for more people to use; it’s a neat concept.


I think that’s kinda where things like SciKit break down… Sure, you can bake quite a few generally useful things into a library, but you can’t bake in all useful things in general. Does that make sense? At a certain point these techniques just aren’t “methods” anymore, and the flexibility of the lower-level operations, and understanding when and why to use them, matters more. Opinion: I think it’s time we revisit the line of thinking that led to things like SciKit (sklearn especially) and consider improvements.

Awesome solution, Gunnar! I’d throw a solution into the ring, but I’ve got to go focus on getting some job-talk slides made. I ran into a problem similar to this years back; now I’m trying to remember what I did.


Well, sure, by that argument you also can’t write a fully generic language, operating system, etc. But it’s useful to bundle commonly used functionality, and to properly test and document it. It would be absurd to expect researchers and practitioners to implement every solution themselves. So I politely have to disagree that the library model “breaks down”.


I think it is the idea of libraries as prepackaged black boxes that breaks down; composable, modular implementations of building blocks may be more useful in some contexts (this may have been what @anon92994695 meant). Multiple dispatch, and Julia in general, is a great match for the latter.


Exactly, Tamas. Not to say that all things can be abstracted into their “elements” and have those elements still maintain good value. But the ability to abstract methods into flexible, generic, simple components, to tailor operations to specific problems (not generic problems), has for me been quite valuable. A large part of my workflow can hinge on swapping assumptions or techniques in one (or more) areas of a preexisting something-or-other to tailor an algorithm to a situation.

Sorry if I came off as abrasive, Stefan; that wasn’t my intention. Sometimes I do that in text, but if we were talking face-to-face you’d know I’m just spitballing ideas, and my intent is simply to explore different things with different people and have fun.

I have pedagogical gripes with a lot of the “kits” for doing technical things. I’ve noticed trends among kit users that go something like: a large number of people misunderstanding a field while overestimating their skill, or, worse yet, assuming the toolbox is the field. That being said, if I know I need something exotic as a baseline and a kit has it, you bet I’m using it, after reading the code! If you ever want to brainstorm what the future could look like, let me know; I’m an open book!


Thanks, @anon92994695. I admit it felt a bit strange to make my first post on a Julia forum and be told that the whole ecosystem I’ve invested 15 years of my life in is essentially broken 🙂 That said, I totally accept that in an in-person conversation your comment would have come across very differently.

My goal when I started scikit-image was certainly not to empower black-box use of algorithms (I very strongly advocate against that style of work), but to build a library of easily composable image-processing primitives that provide researchers with well-tested and documented building blocks for their research.

We think that having code that is easy to read makes it more likely that users will end up looking “inside the box” and contribute back. It is true that Julia makes composition easier, and that for Python, performance came as an afterthought in the form of, e.g., Cython and Numba.

Speaking from a Python scientific community perspective, we’ve worked with Julia since the beginning to make sure that there are good inter-accessibility bridges, and we’ve enjoyed hosting Julia developers at the annual SciPy conference too; I don’t see it as a competitive zero-sum game. All tools and communities have their limitations: we operate within those, trying to learn from others, and trying to make connections where necessary.

I hope I can learn from you how to write better libraries that will ultimately serve to improve open and transparent image processing and computer vision research.


Just to be clear, I never said anything against Python; I just think we now have a new tool and can reconsider the design of certain libraries and use cases rather than repeating them. I appreciate your contributions to the field; I have used skimage in a few projects way back when and enjoyed its convenience. I had no idea you were a contributor to that library, and again I had no intention of offending you or your work. I do believe that by using Julia we could modernize and improve a lot of the “SK”-type “toolboxes”. I have a few ideas for SKLearn but cannot touch them due to my current noncompete (I think?), but I could probably contribute some ideas to the bones of an SKImage-type effort in Julia; the Julia image ecosystem is awesome.
