Cross-correlation function in Julia vs. Python

Hi there,

I am new to Julia. I want to compute the cross-correlation of two 2D arrays (both 5×5). crosscor() from StatsBase.jl returns a 9×5×5 array. What I would like is the full cross-correlation matrix with M+N-1 entries per dimension, like the output of scipy.signal.correlate2d() in Python, i.e. a 9×9 matrix.
Here is my code:

using StatsBase

a = ones(5, 5)
b = ones(5, 5)
crosscor(a, b, demean=false)

Output:
9×5×5 Array{Float64,3}:

What kind of cross-correlation is implemented here? I am having a hard time grasping this.
If I use a 2D convolution with a reversed array, I do get what I want:
conv2(a, b[end:-1:1, end:-1:1])  # conv2 lived in Base up to Julia 0.6; DSP.jl's conv is the present-day equivalent
Output:
9×9 Array{Float64,2}

but the implementation is FFT-based. I would like a direct convolution, but I cannot find one in Julia.

Hope you can help me clarify this! Thanks!

See the documentation by doing ?crosscor in the REPL.

I agree that the documentation in this case is not as clear as it could be. The dimension of length 9 shows you expectation values at different “lags”; i.e., I think what you are seeing here in the vector case would be
\langle (x_i - \langle x \rangle)(y_{i+j} - \langle y \rangle) \rangle
for a range of j (given in the documentation), but it would be really nice if the exact expression were written in the documentation. The remaining two indices show you these values for each pair of columns.
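A quick way to check this concretely (my sketch; I believe the slices line up like so):

using StatsBase

a, b = rand(5, 5), rand(5, 5)
c = crosscor(a, b, demean=false)   # 9×5×5; lags -4:4 run along the first dimension
# Slice [:, i, j] should be the 1D cross-correlation of column i of a
# with column j of b at those lags (expected to return true):
c[:, 2, 3] ≈ crosscor(a[:, 2], b[:, 3], -4:4, demean=false)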


This can be solved with imfilter and padarray from the ImageFiltering package, but if your arrays are small you will likely get perfectly decent performance by implementing the function yourself, just writing the nested loops involved.
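For example, a minimal nested-loop sketch (just an illustration, untested against scipy's exact conventions) of the full cross-correlation with output size M+N-1:

function direct_crosscor2d(a::AbstractMatrix, b::AbstractMatrix)
    ma, na = size(a)
    mb, nb = size(b)
    c = zeros(promote_type(eltype(a), eltype(b)), ma + mb - 1, na + nb - 1)
    for j in 1:na, i in 1:ma, l in 1:nb, k in 1:mb
        # a[i, j] overlaps b[k, l] at displacement (i - k, j - l)
        c[i - k + mb, j - l + nb] += a[i, j] * b[k, l]
    end
    return c
end

direct_crosscor2d(ones(5, 5), ones(5, 5))  # 9×9 matrix with 25 at the center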


Thank you for the answers so far. I see that the concept of cross-correlation here is quite different from what I need. I would like to see whether one image a is part of another image b, so I would slide one image over the other and see for which displacement I get the highest cross-correlation value.

Python gives the following output:
import numpy as np
import scipy.signal as ss

a = np.ones((3, 3))
b = np.ones((3, 3))
c = ss.correlate2d(a, b)

array([[1., 2., 3., 2., 1.],
       [2., 4., 6., 4., 2.],
       [3., 6., 9., 6., 3.],
       [2., 4., 6., 4., 2.],
       [1., 2., 3., 2., 1.]])

So the peak of my correlation matrix is 9, at its center, meaning that the two images are most “similar” with no displacement.
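In code, a hypothetical helper like this (my own sketch) would read the displacement off the full correlation matrix:

# The offset of the peak from the center of the (M+N-1)-sized correlation
# matrix is the displacement at which the two images match best
function peak_displacement(c::AbstractMatrix)
    peak = argmax(c)                                # CartesianIndex of the maximum
    center = CartesianIndex(div.(size(c) .+ 1, 2))  # center of the full output
    return Tuple(peak - center)                     # (0, 0) means no displacement
end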

Julia on the other hand gives me the following:

a = ones(3, 3)
b = ones(3, 3)
crosscor(a, b, demean=false) * 3

5×3×3 Array{Float64,3}:
[:, :, 1] =
 1.0  1.0  1.0
 2.0  2.0  2.0
 3.0  3.0  3.0
 2.0  2.0  2.0
 1.0  1.0  1.0

[:, :, 2] =
 1.0  1.0  1.0
 2.0  2.0  2.0
 3.0  3.0  3.0
 2.0  2.0  2.0
 1.0  1.0  1.0

And so on… so it does not tell me what I need.
I might implement a function resembling the Python output; I would just like to understand the difference between the Python cross-correlation and the Julia one.

I’m definitely not an image processing person by any means, but it might be worth thinking about whether this is really an effective measure of similarity between images. I’d be doubtful about how far you can take the concept of “lag” or displacement here, especially since there is no sense in which x_{i+1} follows x_{i} in a sequence when they lie on the boundary of a row of pixels.

Just thinking out loud here: one thing you could do is subtract the images from each other and perform some sort of test to see whether the difference is “random noise”. Of course this is easier said than done, and perhaps it would be a good place for some sort of dimensionality reduction.

Having said all that, I would think that the conceptually simplest way to do what you are trying to do is to just vectorize the images. Then you can use crosscor and know exactly what you are looking at.
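For instance (a minimal sketch of that idea):

using StatsBase

a, b = rand(5, 5), rand(5, 5)
# Flatten both images and compute the normalized cross-correlation at lag 0
crosscor(vec(a), vec(b), [0])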

You also might want to check out the various packages here.


This cross-correlation principle is commonly used for image similarity (for example in particle image velocimetry, which is what I ultimately want to implement in Julia). But I could certainly also look into other “similarity”, i.e. “dissimilarity”, measures like the subtraction you proposed.
Also, I will look more into the vectorization and then try it out.

Thanks for the ideas and the link!

Like I said, I’m a layman, so I defer to your judgment. The reason I questioned it is that it is far from obvious to me what the cross-correlation should be expected to look like in the general case.

It’s a standard image processing technique, except that you need to do some normalization to avoid concluding that a white area in the image is the best match for your template. E.g. like this:

using Images

function correlate_image(img, template)
    # Raw cross-correlation of the image with the template
    # (imfilter correlates by default; it does not flip the kernel)
    z = imfilter(Float64, img, centered(template), Algorithm.FIR())
    # Local L2 norm of the image over a template-sized window
    q = sqrt.(imfilter(Float64, img.^2, centered(ones(size(template))),
                       Algorithm.FIR()))
    # L2 norm of the template
    m = sqrt(sum(template.^2))
    # Normalized cross-correlation; eps() guards against division by zero
    corr = z ./ (m * max.(eps(), q))
    return corr
end

For illustration:

julia> x=rand(1:9, 8, 8)
8×8 Array{Int64,2}:
 7  2  2  9  7  2  4  7
 3  7  9  5  3  2  9  5
 1  9  1  4  1  4  2  8
 7  6  3  1  8  8  6  7
 4  9  4  4  1  6  6  2
 4  8  7  1  8  2  1  8
 5  4  2  5  5  3  4  1
 3  8  3  8  4  5  7  8

julia> y = x[2:4, 5:7]
3×3 Array{Int64,2}:
 3  2  9
 1  4  2
 8  8  6

julia> c = correlate_image(x,y)
8×8 Array{Float64,2}:
 0.661888  0.737724  0.908773  0.822907  0.602351  0.699639  0.862974  0.875247
 0.573246  0.637993  0.802854  0.615533  0.602121  0.626725  0.815854  0.789346
 0.884582  0.892484  0.633597  0.678266  0.740834  1.0       0.81138   0.83531 
 0.827208  0.685541  0.795446  0.605092  0.714524  0.644814  0.824203  0.741265
 0.772589  0.803458  0.695693  0.8739    0.736878  0.742672  0.715778  0.796131
 0.870059  0.706784  0.678821  0.599637  0.903347  0.756111  0.604329  0.58481 
 0.847023  0.830675  0.713355  0.892543  0.76089   0.728849  0.960917  0.8947  
 0.74215   0.778543  0.781013  0.849     0.773851  0.82505   0.754464  0.763592

julia> c[3,6]
1.0

Notes:

  • You can substitute ImageFiltering for Images if you don’t want the whole package.
  • You want to force spatial filtering rather than FFT if your image contains a sufficiently large area of zeros; otherwise you have to figure out exactly how to deal with zeros being corrupted into epsilons of varying signs.
  • Never mind the variable names. That code was never meant to go public and I won’t make the effort to come up with better ones.

If your goal is to implement PIV, I guess that the two input arrays are of the same size and that you want to perform the correlation a large number of times for similar inputs.

I implemented a simple multipass PIV code for fun a year ago, and found that the performance could be increased significantly by writing a custom correlation function using FFT. Something along these lines might work for you (no guarantees of correctness!):

using FFTW

# The size of your interrogation area (assumed here to be square)
interrogationArea = 32

# Dummy data
a = rand(interrogationArea, interrogationArea)
b = rand(interrogationArea, interrogationArea)

# Plan the forward and inverse transforms for better performance
p = plan_fft(rand(Float64, interrogationArea, interrogationArea), flags = FFTW.MEASURE)
ip = plan_ifft(rand(ComplexF64, interrogationArea, interrogationArea), flags = FFTW.MEASURE)

function correlate_fft(p, ip, a, b)
    # Correlation theorem: corr(a, b) = ifft(conj(fft(a)) .* fft(b))
    fftshift(real(ip * (conj(p * a) .* (p * b))))
end
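E.g., applying it to the dummy data above gives the 32×32 circular cross-correlation map:

c = correlate_fft(p, ip, a, b)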

A quick benchmark shows a speedup of roughly 50x compared to the correlate_image function proposed above, but this solution is of course much less general.


Thanks for the code and your reply!

Yes, it will be a lot of calculations. I’ll be going for some GPU computing for this.

I want to implement both the direct and the faster methods, but would like to start with the more general one and test my PIV performance with all the different nuances.

Thank you very much for the code contribution!

It is very close to what I want. The resulting matrix should be 10×10 (8 + 3 - 1), but I will play around with the code and make the adjustments.

I am trying to follow your approach to calculating cross-correlations using FFT. In my case, I have millions of images to compare, so the function you provided allocates way too much memory and the GC would be very busy: every step allocates a temporary array. I tried to eliminate the allocations as much as possible.

using FFTW, LinearAlgebra

# Allocating reference version (same as correlate_fft above)
function corr_fft(P, IP, A, B)
    fftshift(real.(IP * (conj(P * A) .* (P * B))))
end

# In-place fftshift into a pre-allocated buffer
fftshift!(y, x) = circshift!(y, x, div.(size(x), 2))

function corr_fft1!(cc, P, IP, A, B, XC)
    FA = P * A          # the forward transforms still allocate
    conj!(FA)
    FB = P * B
    FA .*= FB
    mul!(FB, IP, FA)    # inverse transform into an existing buffer
    XC .= real.(FB)
    fftshift!(cc, XC)
end

In this version, cc and XC are pre-allocated real arrays outside of the function. But I still have to allocate FA and FB, two complex arrays, for each call. Unfortunately, if one wants to use mul!(Y, plan, X), the input image X has to be a complex array as well. But if the number of calls to this function is greater than the number of images, it might be worth converting all images to complex arrays first and using the following version.

function corr_fft2!(cc, P, IP, AC, BC, FA, FB, XC)
    mul!(FA, P, AC)     # forward transforms into pre-allocated buffers
    conj!(FA)
    mul!(FB, P, BC)
    FA .*= FB
    mul!(FB, IP, FA)
    XC .= real.(FB)
    fftshift!(cc, XC)
end

Here, cc and XC are pre-allocated real arrays, AC and BC are the images converted to complex arrays, and FA and FB are pre-allocated complex arrays. For 100×100-pixel images, I benchmarked the three versions on my MacBook Pro.

julia> @btime corr_fft($P, $IP, $A, $B)
  204.048 μs (22 allocations: 625.95 KiB)

julia> @btime corr_fft1!($cc, $P, $IP, $A, $B, $XC)
  193.094 μs (11 allocations: 313.03 KiB)

julia> @btime corr_fft2!($cc, $P, $IP, $AC, $BC, $FA, $FB, $XC)
  177.746 μs (3 allocations: 224 bytes)

The last version is much faster than the first two when looping through millions of images. Now, if only there were a way to apply mul!(Y, plan, X) directly without first converting all my images to complex arrays, that would be super! I wonder if there is any method in the FFTW package to do that; I didn't find any in the manual.
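Perhaps FFTW's real-input plans could avoid the conversion entirely; a minimal sketch (not benchmarked):

using FFTW

A = rand(100, 100); B = rand(100, 100)

rp  = plan_rfft(A)                    # real-to-complex forward plan
irp = plan_irfft(rp * A, size(A, 1))  # complex-to-real inverse plan

# Correlate via the half-spectrum; the inverse real plan returns a real
# array directly, so the input images stay Float64 throughout
cc = fftshift(irp * (conj.(rp * A) .* (rp * B)))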

FYI, our lab has an optimized 2D implementation. It has already been used to compare millions of images.
https://github.com/seung-lab/Alembic/blob/master/src/math/convolve_inplace.jl#L520
