Hi, I’m trying to find the best match for one image from an image set. Currently I use the assess_ssim function to do it. For example, img2 is clearly the best match for img1, but assess_ssim gives very similar scores for img2–5. Can anyone suggest a better way to do this? Thanks
Just general similarity as what we can judge by eye.
img1 is a cross-section of a volume. img2–5 are cross-sections of the volume after deformation (the img2 cross-section corresponds to img1). I want some code to pick img2 out automatically.
It is tricky to define the proper measure. For example, consider the following images:
Which image is more similar to A? B or C? C has higher overlap with A, but B is exactly like A only shifted.
Anyway, your images consist mostly of background (the black pixels). If you compute any pixel-wise measure directly, you will get high values for any pair of images simply because img1[i, j] == img2[i, j] == black for most pixels.
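To see how much the background dominates, here is a tiny synthetic demo (plain Julia, no packages; the images and the `agreement` measure are made up for illustration): two images whose foregrounds do not overlap at all still agree on 98% of their pixels.

```julia
# Two 100×100 "images": mostly black, with small, non-overlapping foregrounds
a = zeros(100, 100); a[10:19, 10:19] .= 1.0
b = zeros(100, 100); b[80:89, 80:89] .= 1.0

# Fraction of pixels that agree exactly
agreement(x, y) = count(x .== y) / length(x)

agreement(a, b)   # 0.98: high "similarity" driven purely by the shared background
```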
How about omitting pixels that are black in both images? Something like
```julia
function foreground_mask(img; threshold = 0)
    # Threshold img to get the foreground pixels and return the mask
    return img .> threshold
end

img_ref = ...  # the reference image (img1)
mask_ref = foreground_mask(img_ref)
for img in images
    mask = foreground_mask(img)
    # Pixels that are foreground in at least one image
    combined_mask = mask_ref .|| mask
    measure = some_measure(img_ref[combined_mask], img[combined_mask])
end
```
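For some_measure itself, one simple choice (my suggestion, not something prescribed above) is the Pearson correlation of the masked pixel values, which is insensitive to a global brightness/contrast change between the two images:

```julia
using Statistics

# One concrete option for `some_measure`: Pearson correlation of the masked
# pixel values. 1.0 means identical up to a linear brightness/contrast change.
some_measure(x, y) = cor(float.(vec(x)), float.(vec(y)))
```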
These images have been polished, and all black pixels are exactly zero. If I understand you correctly, you’re suggesting combining the foreground pixels of the two images to compute their overlap and using that to determine similarity, right? I’m afraid that’s not very useful here: the volume (a polycrystalline metal) deforms inhomogeneously, with more or less extension along x and y. The patterns look similar to the eye, but they actually overlap only slightly.
Not an expert, just played around a bit with related topics. Anyway, if you know that only a certain kind of transformation can occur, you can try to factor it out before computing a pixel-wise distance.
Basic idea: Procrustes analysis - Wikipedia.
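If you can extract matching landmark points from both cross-sections (e.g. particle centroids), a minimal Procrustes distance needs only the standard library. This is a sketch under two assumptions not stated in the thread: point correspondences are known, and the deformation is well approximated by translation + uniform scale + rotation; the function name is mine.

```julia
using LinearAlgebra

# Procrustes distance between two point sets (rows are corresponding points):
# remove translation, scale, and rotation, then measure the residual.
function procrustes_distance(A, B)
    # Remove translation: center both point sets
    A0 = A .- sum(A, dims = 1) ./ size(A, 1)
    B0 = B .- sum(B, dims = 1) ./ size(B, 1)
    # Remove scale: normalize to unit Frobenius norm
    A0 ./= norm(A0)
    B0 ./= norm(B0)
    # Optimal rotation from the SVD of the cross-covariance (Kabsch method)
    F = svd(A0' * B0)
    R = F.V * F.U'
    return norm(A0 .- B0 * R)
end
```

By construction the distance is (near) zero whenever A is B translated, uniformly scaled, and rotated.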
If you have many images and speed is an issue, you can try ideas from: Perceptual hashing - Wikipedia
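As a toy illustration of the perceptual-hashing idea, here is a package-free average hash (aHash) sketch: block-average the image down to 8×8, threshold at the mean, and compare the 64-bit fingerprints with a Hamming distance. Real perceptual-hash implementations do more (pHash, for instance, uses a DCT), and this sketch assumes the image dimensions are divisible by 8.

```julia
# Minimal average hash (aHash): downsample to n×n by block averaging,
# threshold at the mean, and compare hashes with the Hamming distance.
function ahash(img::AbstractMatrix{<:Real}; n = 8)
    h, w = size(img)
    bh, bw = h ÷ n, w ÷ n
    small = [sum(@view img[(i-1)*bh+1:i*bh, (j-1)*bw+1:j*bw]) / (bh * bw)
             for i in 1:n, j in 1:n]
    return vec(small .> sum(small) / n^2)
end

hamming(h1, h2) = count(h1 .!= h2)
```

Since the threshold is the image's own mean, the hash is invariant to a global brightness rescaling, and comparing many images costs only 64-bit comparisons each.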
Depending on how bad the deformation is, you might be able to register the images before comparing them pixelwise. RegisterQD supports affine deformations.
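Roughly, the workflow would look like the following. This is an unverified sketch from memory — check the RegisterQD README for the exact signatures and keyword arguments before relying on it; `fixed`, `moving`, and the shift bound are placeholders.

```julia
using RegisterQD, ImageTransformations

# Sketch (unverified API): search for an affine transform aligning `moving`
# to `fixed`, allowing shifts up to ±20 pixels, then resample and compare.
tfm, mismatch = qd_affine(fixed, moving, (20, 20))
registered = warp(moving, tfm)
```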
BTW, my ultimate goal is actually to register the particles between two X-ray CT volumes (before and after deformation). My current strategy is to first find the same cross-section (as with img1 and img2) and then register the particles on those cross-section images, thereby turning one 3D problem into two 2D problems. Do you have any suggestions for direct registration of particle markers in 3D? Thanks
Procrustes analysis seems to fit my case well.
I’m just an experimentalist and not that good at math; I can only look to see whether anyone has already written a package for it. Anyway, thanks for your help.