You’ll want to look at something like a circle Hough transform (which is implemented in ImageFeatures.jl). The idea is to set up a discretized parameter space of possible circle radii for a binary image (where true is a pixel that’s likely to be on the circle, and false is not) which is then surveyed by brute force. That’ll only find objects that are nearly circular, though, so you may want to use a Hough transform as a seed for an ellipse parameter-fitting optimization procedure.
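To make the brute-force survey concrete, here is a minimal sketch of a circle Hough transform in Python/NumPy (the thread's actual route would be ImageFeatures.jl's built-in routine; the `hough_circle` helper below is written from scratch for illustration):

```python
import numpy as np

def hough_circle(binary, radii):
    """Minimal circle Hough transform: each true pixel votes for every
    circle center it could lie on, one accumulator plane per radius."""
    h, w = binary.shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    ys, xs = np.nonzero(binary)
    thetas = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    for k, r in enumerate(radii):
        # a pixel on a circle of radius r lies at distance r from its center
        cy = np.round(ys[:, None] - r * np.sin(thetas)).astype(int)
        cx = np.round(xs[:, None] - r * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc[k], (cy[ok], cx[ok]), 1)   # brute-force voting
    return acc

# demo: binary image with a circle of radius 20 centered at (50, 40)
img = np.zeros((100, 100), dtype=bool)
t = np.linspace(0, 2 * np.pi, 400)
img[np.round(50 + 20 * np.sin(t)).astype(int),
    np.round(40 + 20 * np.cos(t)).astype(int)] = True

radii = np.arange(15, 26)
acc = hough_circle(img, radii)
k, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(radii[k], cy, cx)   # peak should land at or very near 20, 50, 40
```

The accumulator's global maximum is the best (radius, center) triple; in practice you would also smooth the accumulator and look for multiple peaks.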

I would look into the theory for “circular symmetry detection” or “rotational symmetry detection”. It may be more difficult to find an out-of-the-box implementation than for the circle Hough transform, but I’d expect the results to be more robust.

If you can generate a lot of training data, a convolutional neural network could probably solve it quite easily as well.

For the circular case, since the parameter space is only three-dimensional (center x, center y, and radius), a grid search might not be a terrible idea. In a reasonable amount of time, you could probably iterate over every combination of center and radius and see which has the lowest least-squares error.
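A sketch of that brute-force search (Python/NumPy; `grid_search_circle` is an illustrative name, and the error is taken as the intensity-weighted squared distance from each bright pixel to the candidate circle):

```python
import numpy as np

def grid_search_circle(img, centers_y, centers_x, radii):
    """Brute force over (center_y, center_x, radius): the error is the
    intensity-weighted squared distance of each bright pixel to the circle."""
    ys, xs = np.nonzero(img > 0)
    w = img[ys, xs]
    best, best_err = None, np.inf
    for cy in centers_y:
        for cx in centers_x:
            d = np.hypot(ys - cy, xs - cx)   # pixel distances to this center
            for r in radii:
                err = np.sum(w * (d - r) ** 2)
                if err < best_err:
                    best, best_err = (cy, cx, r), err
    return best

# demo: ring of radius 15 centered at (40, 40)
img = np.zeros((80, 80))
t = np.linspace(0, 2 * np.pi, 300)
img[np.round(40 + 15 * np.sin(t)).astype(int),
    np.round(40 + 15 * np.cos(t)).astype(int)] = 1.0

best = grid_search_circle(img, range(35, 46), range(35, 46), range(10, 21))
print(best)   # expected (40, 40, 15)
```

Note that distances to all candidate centers are vectorised once per center, so only the two outer loops are explicit Python.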

I imagine you could design a simple algorithm like the following…

Find the brightest point
Draw a small circle of subpixel radius
Calculate fitness (define an objective function: intensity / pixels touched?)
Expand the circle and adjust the radius by some measure of error, e.g. via a derivative
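The steps above could be sketched like this (all names hypothetical; the derivative step is replaced by a plain scan over radii for simplicity, and it assumes the pattern's brightest pixel sits near its center):

```python
import numpy as np

def ring_score(img, cy, cx, r, n=360):
    """Objective: mean intensity sampled along the circle's perimeter."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ys = np.clip(np.round(cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs].mean()

def grow_circle(img, max_r=30):
    """Seed at the brightest pixel, then expand the circle and keep the
    best-scoring radius (a plain scan instead of a derivative step)."""
    cy, cx = np.unravel_index(img.argmax(), img.shape)
    scores = [ring_score(img, cy, cx, r) for r in range(1, max_r)]
    return int(cy), int(cx), 1 + int(np.argmax(scores))

# demo: ring of radius 12 whose brightest pixel is the center (40, 40)
img = np.zeros((80, 80))
t = np.linspace(0, 2 * np.pi, 400)
img[np.round(40 + 12 * np.sin(t)).astype(int),
    np.round(40 + 12 * np.cos(t)).astype(int)] = 1.0
img[40, 40] = 2.0

print(grow_circle(img))   # should find radius 12 at (40, 40)
```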

I don’t think this would be too hard. If you are really stuck this could be fun to work on, but I’d need some data to play with.

@ckneal Thanks for the idea. I barely know anything about image processing, but I’m very happy to learn. I just didn’t know what a good starting point was.
This is kind of a side project, so I didn’t want to put that much time into it (doing random background reading), and I asked for directions here instead. Still, it would be very helpful to do it and learn the basics of image processing in the process.

Anyway, here’s some sample data in case someone gets it working before I do:

I don’t know how to plot the data in PyPlot to get the density plot from the x, y, and z columns, so a piece of advice would be helpful here as well.
However, the data is ordered such that I can just reshape it into a square matrix and that is the density.

I’m reading it with this:

using DelimitedFiles

function readImage(path::String)
    image = readdlm(path, Int64)
    z = reshape(image[:, 3], (301, 301))  # data is ordered, so the z column reshapes straight into the density
    return z
end

and plotting it like this:

const PrecArray64 = Union{Array{Int64, 2}, Array{Float64, 2}}

using PyPlot
function showImage(image::PrecArray64)
    clf()
    imshow(image, aspect = "equal", extent = (-150, 150, -150, 150))
    colorbar()
    gcf()
end
showImage(readImage("img.dat"))

A paper that you may find useful is “Direct Least Square Fitting of Ellipses”, Andrew Fitzgibbon, Maurizio Pilu, and Robert B. Fisher, IEEE PAMI, Vol. 21, No. 5, 1999.

You would need to perform some preprocessing on your image, along the lines of smoothing followed by non-maximal suppression, in order to localise the ridge of the ellipse.
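For reference, the fitting method in that paper reduces to a small eigenvalue problem. A NumPy sketch, following the numerically stable formulation later given by Halir & Flusser (the function names are mine; input is the x and y coordinates of the ridge pixels):

```python
import numpy as np

def fit_ellipse(x, y):
    """Direct least-squares ellipse fit (Fitzgibbon, Pilu & Fisher 1999),
    in the numerically stable form of Halir & Flusser. Returns conic
    coefficients (A, B, C, D, E, F) of A x^2 + B x y + C y^2 + D x + E y + F = 0."""
    D1 = np.column_stack([x**2, x*y, y**2])        # quadratic part
    D2 = np.column_stack([x, y, np.ones_like(x)])  # linear part
    S1, S2, S3 = D1.T @ D1, D1.T @ D2, D2.T @ D2
    T = -np.linalg.solve(S3, S2.T)                 # eliminates the linear part
    M = S1 + S2 @ T
    M = np.array([M[2] / 2, -M[1], M[0] / 2])      # apply inverse constraint matrix
    _, v = np.linalg.eig(M)
    v = np.real(v)
    cond = 4 * v[0] * v[2] - v[1]**2               # ellipse constraint 4AC - B^2 > 0
    a1 = v[:, cond > 0][:, 0]
    return np.concatenate([a1, T @ a1])

def ellipse_center(coeffs):
    """Center of the conic, from setting its gradient to zero."""
    A, B, C, D, E, F = coeffs
    det = 4 * A * C - B**2
    return (B * E - 2 * C * D) / det, (B * D - 2 * A * E) / det

# demo: exact points on an ellipse centered at (3, -2)
t = np.linspace(0, 2 * np.pi, 100)
coeffs = fit_ellipse(3 + 5 * np.cos(t), -2 + 2 * np.sin(t))
cx0, cy0 = ellipse_center(coeffs)
print(cx0, cy0)   # close to 3.0 -2.0
```

The ellipse-specific constraint is what makes this “direct”: no iteration, and the fit is always an ellipse rather than some other conic.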

A book you may find (very) useful is “The Image Processing Handbook” by J. C. Russ. For fitting that kind of data, I would personally start by interactively playing with different algorithms in ImageJ.

This came from imfilter(img, Kernel.gaussian(8)), and I’d say it gave the best edge detection with the Canny algorithm from Images.jl:

However, this doesn’t seem clean enough for the Fitzgibbon algorithm suggested by @peterkovesi.

I got a non-maximum-suppressed image by running the first few steps of the Canny edge detection here, up to thin_edges_nonmaxsup(), but this seems like it would be harder to fit than the edges image above.

I’ve also realised that I can theoretically figure out the angle at which the ellipse is oriented based on some experimental constants, so I’m currently also looking for a way to constrain the ellipse fitting to a particular angle.

Did you consider creating a cost function by using the equation for a general ellipse to define a distance, and the pixel intensities as weights? Then you minimize the cost function with respect to the ellipse parameters.

You might need the absolute (or perhaps square is better) value of the ellipse equation though, to avoid negative distance values for points inside the ellipse.
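A sketch of that cost function (Python/NumPy, hypothetical names; the ellipse is parameterised by center, semi-axes, and orientation, and the implicit equation is squared as suggested):

```python
import numpy as np

def ellipse_cost(img, cx, cy, a, b, phi):
    """Intensity-weighted cost: every bright pixel contributes its intensity
    times the squared implicit ellipse equation (zero on the ellipse itself)."""
    ys, xs = np.nonzero(img > 0)
    w = img[ys, xs]
    dx, dy = xs - cx, ys - cy
    # rotate into the ellipse's axis-aligned frame
    u = dx * np.cos(phi) + dy * np.sin(phi)
    v = -dx * np.sin(phi) + dy * np.cos(phi)
    e = (u / a) ** 2 + (v / b) ** 2 - 1.0   # implicit equation
    return np.sum(w * e ** 2)               # squared, so inside/outside both count positive

# demo: ring with center (50, 45), semi-axes (20, 10), orientation 0.4 rad
t = np.linspace(0, 2 * np.pi, 500)
u0, v0 = 20 * np.cos(t), 10 * np.sin(t)
img = np.zeros((100, 100))
img[np.round(45 + u0 * np.sin(0.4) + v0 * np.cos(0.4)).astype(int),
    np.round(50 + u0 * np.cos(0.4) - v0 * np.sin(0.4)).astype(int)] = 1.0

c_true = ellipse_cost(img, 50, 45, 20, 10, 0.4)
c_off = ellipse_cost(img, 55, 45, 20, 10, 0.4)
print(c_true, c_off)   # the true parameters should score much lower
```

From here, the natural step would be to hand this to a general-purpose minimiser (e.g. Optim.jl on the Julia side, or scipy.optimize.minimize) over the five parameters; fixing phi to the experimentally known angle is then just a matter of not optimising over it.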

Ridge detection may be a better approach than edge detection for this sort of thing, since ellipse-fitting is complicated by the fact that you effectively have two ellipsoidal edges.

This is what I can get with some form of ridge detection (basically hessian_matrix_eigvals from skimage):

This was done on the already gaussian blurred image and it’s the cleanest I could get it in a few tries.
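To make the idea concrete, the same kind of response can be computed by hand: build the Hessian from repeated gradients and take the negated smaller eigenvalue, which is large on bright ridges. A NumPy sketch along the lines of what hessian_matrix_eigvals does (without skimage's Gaussian smoothing, so blur the image first in practice):

```python
import numpy as np

def ridge_response(img):
    """Ridge measure from the Hessian: across a bright ridge the second
    derivative is strongly negative, so the negated smallest Hessian
    eigenvalue highlights ridge pixels."""
    gy, gx = np.gradient(img)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # eigenvalues of [[hxx, hxy], [hxy, hyy]] in closed form
    tr = hxx + hyy
    det = hxx * hyy - hxy * hxy
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0))
    lam_min = tr / 2 - disc
    return -lam_min

# demo: a 1-pixel-wide horizontal "ridge"
img = np.zeros((21, 21))
img[10, :] = 1.0
resp = ridge_response(img)
print(resp[10, 5], resp[5, 5])   # strong on the ridge, zero off it
```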

Removing the edge artifacts:

This looks promising (i.e. just one ellipse), but I don’t know how to binarize it, as it still has some trash lines in the centre. I want to binarize it so I can run an ellipse fitting algorithm (currently Fitzgibbon et al.) on the set of x and y indices of the ellipse.

I really think this is the best way forward. This way you won’t need any edge/ridge detection, you just work with the image directly. Although I’d recommend some lower cutoff, where any pixel values lower than some percentile should be discarded.

I wasn’t getting anything reasonable with ridge/edge detection and fitting, so I decided to go brute force since the problem is quite constrained from the beginning.

I created a method to generate (rather large ~230x230) kernels from ellipse equations (where I have ones for points on the ellipse and zeroes otherwise). I then convolved them with a 3x3 gaussian kernel to make it a bit thicker. One result looks like this:

I’m then convolving the initial gaussian blurred images with the kernels via imfilter from ImageFiltering.jl. I then look at the maximum intensity in the resulting image and get the corresponding coordinates. You can then use these to center the image (which I needed to do anyway).

Theoretically, the angle is known, and the ellipses don’t vary that greatly in size, so it’s reasonable to try a range of kernels varying the width and height of the ellipse by ~50 pixels.
I then have a list of the maximum intensity for every kernel, and assume that the largest element of this list corresponds to the best fit ellipse.
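A condensed sketch of that whole search (Python/NumPy, hypothetical names; instead of materialising full kernels and convolving via imfilter, it sums the image intensities directly under each shifted ellipse outline, which amounts to the same correlation):

```python
import numpy as np

def ellipse_points(a, b, phi, n=400):
    """Integer pixel offsets tracing an ellipse outline with semi-axes
    (a, b), rotated by phi, centered at the origin."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u, v = a * np.cos(t), b * np.sin(t)
    x = np.round(u * np.cos(phi) - v * np.sin(phi)).astype(int)
    y = np.round(u * np.sin(phi) + v * np.cos(phi)).astype(int)
    return np.unique(np.column_stack([y, x]), axis=0)

def best_ellipse(img, a_range, b_range, phi, centers):
    """Try every (a, b) kernel at every candidate center; the score is the
    mean image intensity under the outline, and the maximum wins."""
    best, best_score = None, -np.inf
    h, w = img.shape
    for a in a_range:
        for b in b_range:
            pts = ellipse_points(a, b, phi)
            for cy, cx in centers:
                ys, xs = pts[:, 0] + cy, pts[:, 1] + cx
                ok = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
                score = img[ys[ok], xs[ok]].sum() / len(pts)  # normalise by outline size
                if score > best_score:
                    best, best_score = (a, b, cy, cx), score
    return best

# demo: ellipse with a=15, b=8, phi=0.3, centered at (40, 40)
img = np.zeros((80, 80))
pts = ellipse_points(15, 8, 0.3)
img[pts[:, 0] + 40, pts[:, 1] + 40] = 1.0

centers = [(cy, cx) for cy in range(38, 43) for cx in range(38, 43)]
best = best_ellipse(img, range(13, 18), range(6, 11), 0.3, centers)
print(best)   # should recover (15, 8, 40, 40)
```

Normalising by the outline size matters: otherwise larger kernels win simply by covering more pixels.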

For now, this is doing the required job quite well, but I wouldn’t mind finding out if there are nicer ways to go about this.

P.S. I’m looking for a better way to thicken the original 1-pixel-wide ellipse kernel (i.e. before convolving with the 3x3 gaussian kernel). After generating a bunch of points on a particular ellipse, I round their coordinates to integers and use those to place the ones in the initial ellipse kernel.
A current candidate idea would be to look at the ellipse point coordinates before rounding and see which pixel each float coordinate falls into, then look at the 3x3 mask centred on that pixel and weigh the “importance” of the 9 pixels in the mask by the inverses of their distances to the float coordinate.
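That weighting scheme might look like this (a sketch, hypothetical names; the distance is offset by 1 so that a sample landing exactly on a pixel center doesn't dominate everything after normalisation):

```python
import numpy as np

def splat_ellipse(shape, cx, cy, a, b, phi, n=600):
    """Ellipse kernel with soft edges: each float sample on the outline
    spreads weight over the 3x3 pixels around it, weighted by the inverse
    of (1 + distance) from the sample to each pixel center."""
    kern = np.zeros(shape)
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u, v = a * np.cos(t), b * np.sin(t)
    xs = cx + u * np.cos(phi) - v * np.sin(phi)
    ys = cy + u * np.sin(phi) + v * np.cos(phi)
    for x, y in zip(xs, ys):
        px, py = int(np.round(x)), int(np.round(y))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                qy, qx = py + dy, px + dx
                if 0 <= qy < shape[0] and 0 <= qx < shape[1]:
                    d = np.hypot(x - qx, y - qy)
                    kern[qy, qx] += 1.0 / (1.0 + d)
    return kern / kern.max()

kern = splat_ellipse((32, 32), 16, 16, 8, 5, 0.0)
print(kern[16, 24], kern[16, 16])   # outline pixel is bright, center stays zero
```

The more standard alternative is bilinear splatting: distribute each sample over only the 4 surrounding pixels with weights given by the fractional coordinates, which sums to exactly 1 per sample.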