Image Processing Question

I am converting my filtered back-projection algorithm (Tomographic reconstruction - Wikipedia) from Python 3 to Julia. I've already seen major improvements in the projection part of the algorithm (it was 20 minutes with Python and is now 30 seconds with Julia). However, I am having issues rotating the projections and adding them up. In Python I can simply use NumPy's pad to adjust where the image center is, then use SciPy's rotate to rotate the projections and add them together.

In Julia, though, you need to create a RotMatrix(…) and then use warp to apply the rotation, which, while odd coming from Python, isn't difficult. However, I am not seeing a flag like reshape=false for warp, so when I rotate the image the projections don't all keep the same shape. Is there just something I am missing, or is there another way to go about this?

I’m not familiar with the particulars of the algorithm, and it’s hard to know what’s going on without a concrete example. Can you share your Python code, or your Julia code with pseudocode for the missing part of the routine?

Sure. Here is the Python 3 code:

    for channel in range(num_sensors):
        print("Channel: {} of {}".format(channel + 1, num_sensors))
        ti = time.time()

        # Pad so the rotation center lands at the image center
        temp_im = np.pad(mesh_sect_results[channel], [padY, padX],
                         'constant', constant_values=(0,))
        # Rotate without changing the array shape, then accumulate
        temp_im = sim.rotate(temp_im, theta[channel], reshape=False)
        final_im += temp_im

Basically I am taking a square matrix, rotating it while keeping its shape, and adding it to another matrix of the same shape. In Julia this seems to be a little more difficult, since I have to create a rotation matrix with RotMatrix(…) and then use warp(…). However, warp(…) doesn't seem to have an argument to stop the matrix from resizing.

Just specify the indices you want to warp to, i.e. the indices of final_im:

help?> warp

  warp(img, tform, [indices], [degree = Linear()], [fill = NaN]) -> imgw

using Images, CoordinateTransformations

xx = 0:0.05:pi
img = Gray.([sin(x)*cos(2y) for x in xx, y in xx])
tfm = recenter(RotMatrix(pi/8), center(img))

imgw = warp(img, tfm, size(img), 0)
img += imgw

I am not sure it is really a good idea to perform an explicit rotation at all. Here is some code that I wrote some time ago (Julia 0.2):

function h(radon::Array, filterfunc::Function)
  D, M = size(radon)
  filter = filterfunc(-D/2:(D/2-1))
  P = fftshift(fft(fftshift(radon,1),1),1)
  P .*= filter
  result = real(ifftshift(ifft(ifftshift(P,1),1),1))
  return result
end

function f(itp,x::Real,y::Real, D, M)
  result = zero(eltype(itp))
  for θ = 1:M
    ξ=x*cos(θ*π/M)+y*sin(θ*π/M)
    Ξ = (ξ/√2+0.5)*D
    result += itp[Ξ,θ]
  end
  return result*M/π
end

function filteredbp(radon::Array; N =501, Interpolation=Linear())
  hγ = h(radon, abs)
  x = -0.5:1/(N-1):0.5
  y = -0.5:1/(N-1):0.5
  fxy = zeros(eltype(hγ), length(x), length(y))

  itp = interpolate(hγ, (BSpline(Interpolation), NoInterp()), OnGrid())
  D,M = size(hγ)
  for i=1:length(x)
    for j=1:length(y)
      fxy[i,j] = f(itp,x[i],y[j], D, M)
    end
  end
  return fxy
end

This is/was for academic use in teaching, so do not expect a generic FBP.
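For readers coming from Python, the filtering step h above translates fairly directly to NumPy. This is only an untested sketch (the name ramp_filter is mine, not from my original code):

```python
import numpy as np

def ramp_filter(radon):
    # radon is D x M: D detector bins per projection, M projection angles.
    D, M = radon.shape
    filt = np.abs(np.arange(-D / 2, D / 2))  # |f| ramp, zero at the center bin
    # Same fftshift/fft/fftshift pipeline as the Julia h() above,
    # applied along the detector axis (axis 0)
    P = np.fft.fftshift(np.fft.fft(np.fft.fftshift(radon, axes=0), axis=0), axes=0)
    P = P * filt[:, None]                    # filter each projection column
    return np.real(np.fft.ifftshift(np.fft.ifft(np.fft.ifftshift(P, axes=0), axis=0), axes=0))
```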

Thank you. I must’ve missed that in the docs. I’ll post again once I give it a try.

Edit: Well in a short test it worked! Thanks again. I’ll integrate it to my actual program later and post the results.

Do you mean explicit rotation with a back-projection algorithm in general, or that RotMatrix along with warp aren't good ways to do that?

I mean explicit rotation in the back-projection phase. You want a non-allocating algorithm during that phase. In my code you can also see that only 1D interpolation on the detector is needed; no 2D interpolation (which your rotation requires) is necessary. You can run my code, and it would be interesting to see some numbers. 30 seconds for an FBP sounds extremely long; I would expect something in the ms range unless you are using very large matrix sizes.
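To make the point concrete, here is a rough NumPy sketch of that back-projection phase: one 1D interpolation per angle, no image rotation. The names, index mapping, and normalization mirror my Julia f() above and are illustrative, not a drop-in replacement:

```python
import numpy as np

def backproject(filtered, N=64):
    # Back-project a filtered D x M sinogram onto an N x N grid using
    # only 1D interpolation along the detector axis -- no image rotation.
    D, M = filtered.shape
    coords = np.linspace(-0.5, 0.5, N)
    x, y = np.meshgrid(coords, coords, indexing="ij")
    image = np.zeros((N, N))
    detector = np.arange(D)
    for m in range(1, M + 1):
        theta = m * np.pi / M
        xi = x * np.cos(theta) + y * np.sin(theta)  # position along the detector
        Xi = (xi / np.sqrt(2) + 0.5) * (D - 1)      # map to detector bin index
        image += np.interp(Xi, detector, filtered[:, m - 1])  # 1D interpolation only
    return image * M / np.pi
```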

I’ll take a look at your code to see what I can adopt or use to improve my own. Our non-simulated data tends to be a 1x1500 array from each of 256 sensors (a 256x1500 matrix), so I am not sure what you consider a large data set.

However, for my research I am working on photoacoustic tomography (PAT) reconstruction, which is a little different from CT reconstruction (I am assuming your code is meant for that). In PAT reconstruction we have point sensors (assumed in the algorithm) that make up a full-ring array, so we have to take each signal and smear it across the sensor's assumed view angle. To do this I create matrices of x and y coordinates by converting from their polar counterparts:

x = zeros(Int64, size(θ)[1], size(radius)[1])
y = zeros(Int64, size(θ)[1], size(radius)[1])

for δr = 1:size(radius)[1]
    x[:,δr] = ceil.(1 .+ radius[δr]*cos.(θ))
    y[:,δr] = ceil.(1 .+ radius[δr]*sin.(θ))
end

Then I assign the signal's value in the projection using the corresponding x and y coordinates. This is where most of the computation takes place. The actual back-projection (filling in the image space) doesn't take all that long, since it is mostly just rotation and element-by-element addition.
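In NumPy terms, that coordinate setup is just an outer product. A quick sketch (polar_indices is an illustrative name, not from my actual code):

```python
import numpy as np

def polar_indices(theta, radius):
    # Precompute Cartesian pixel indices for every (angle, radius) pair,
    # as in the Julia loop above; ceil keeps indices >= 1 for a 1-based grid.
    x = np.ceil(1 + np.outer(np.cos(theta), radius)).astype(int)
    y = np.ceil(1 + np.outer(np.sin(theta), radius)).astype(int)
    return x, y
```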

Most of what I know is from papers and other students, so I haven't had a chance to try clever tricks to optimize the reconstruction, such as your 1D interpolation.