Hey everyone. I saw Julia has the libraries Clustering.jl and ImageSegmentation.jl for data segmentation. I was mainly looking to do some grayscale image segmentation, and I am not seeing k-means for images (e.g. https://www.mathworks.com/help/images/ref/imsegkmeans.html) or superpixel segmentation. I wanted to ask whether these just aren't implemented or whether I am missing something.
It does look like the Python wrapper for OpenCV has these functions; however, a quick look at OpenCV.jl didn't turn up any bindings.
You should be able to use Clustering.jl’s k-means on images. Images are just arrays.
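For example, a grayscale image can be flattened into the 1×n matrix that `kmeans` expects. A minimal sketch (the random array stands in for a loaded image):

```julia
using Clustering

# Placeholder: a 2-D array of grayscale intensities.
# Substitute the image you actually loaded.
img = rand(100, 100)

X = reshape(img, 1, :)   # Clustering.jl wants d×n; here d = 1 (just intensity)
result = kmeans(X, 3)    # cluster the intensities into 3 groups
labels = reshape(assignments(result), size(img))  # label map, same shape as img
```

Each pixel ends up with a cluster label in `1:3`, which you can use directly as a segmentation mask.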
There are also fuzzy C-means and various superpixel/region-growing algorithms, like felzenszwalb. See the documentation here.
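For instance, `felzenszwalb` can be called on an image directly. A sketch (the random image is a stand-in):

```julia
using Images, ImageSegmentation

img = rand(Gray{N0f8}, 100, 100)   # placeholder grayscale image
segments = felzenszwalb(img, 10)   # second argument controls segment granularity
labels = labels_map(segments)      # per-pixel segment labels
means = segment_mean(segments)     # mean intensity of each segment
```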
Thanks. I didn't know the k-means algorithm from Clustering.jl works on images, since MATLAB breaks them up into two different functions (kmeans and imsegkmeans). And I don't recall there being an example of it being used on images (maybe I can make a PR).
I'll have to take a closer look at the other algorithms from ImageSegmentation.jl. I wasn't too familiar with felzenszwalb, so I'll have to do some reading.
I'm unfamiliar with what imsegkmeans is really doing under the hood (and the help page you linked isn't very explicit). A common use for k-means clustering in images is clustering by color, but I guess if you're doing grayscale that's unlikely to be interesting. So it's possible it will be different from what MATLAB is doing. (See https://github.com/JuliaImages/ImageSegmentation.jl/blob/master/src/clustering.jl for a wrapper that makes it easy.)
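To illustrate the clustering-by-color case: for an RGB image you can feed the channels to `kmeans` as a 3×n matrix. A sketch, with a random image standing in:

```julia
using Images, Clustering

img = rand(RGB{Float64}, 60, 80)        # placeholder color image
data = reshape(channelview(img), 3, :)  # 3×n matrix: one RGB column per pixel
result = kmeans(data, 5)                # 5 color clusters
labels = reshape(assignments(result), size(img))  # per-pixel cluster labels
```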
Let us know how it works out, and consider contributing something if it's important missing functionality. The nice thing about Julia is that you can code up many of these algorithms yourself, whereas you'd be unlikely to get the performance you need if you tried that in MATLAB. Check out the src directory in ImageSegmentation; you'll see most of these are not enormous (though not trivial either).
Looking a bit closer at kmeans from Clustering.jl, it seems that it wants a d×n matrix, where d is the dimension and n is the number of points. But when I read in an image, I just get a two-dimensional matrix of intensity values, not the layout it seems to want.
Then the function you linked to seems to break the image up into different channels? Have you had success with that function and kmeans on color images? Or was it just something in the repo?
My guess as to what MATLAB is doing with imsegkmeans (at least with grayscale) is that it picks k random intensities and goes from there, i.e. assigning each pixel to the nearest intensity and iterating. But I am just learning about this stuff for the first time ha.
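That guess, sketched as plain Lloyd's iteration on raw intensities (`intensity_kmeans` is a made-up name for illustration, not what MATLAB actually documents):

```julia
# A rough sketch of the guessed approach: Lloyd's algorithm on raw intensities.
# `intensity_kmeans` is a hypothetical name, not a library function.
function intensity_kmeans(img::AbstractMatrix{<:Real}, k::Integer; iters::Int=20)
    lo, hi = extrema(img)
    centers = lo .+ rand(k) .* (hi - lo)      # k random starting intensities
    labels = zeros(Int, size(img))
    for _ in 1:iters
        for i in eachindex(img)               # assign each pixel to nearest center
            labels[i] = argmin(abs.(centers .- img[i]))
        end
        for c in 1:k                          # move each center to its cluster mean
            mask = labels .== c
            any(mask) && (centers[c] = sum(img[mask]) / count(mask))
        end
    end
    return labels, centers
end
```

This is O(pixels × k) per iteration; real implementations presumably do something smarter (e.g. working on the 256-bin histogram instead of every pixel).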