Help implement Kalman filter

Hi all!
I have a specific problem that I think lends itself to a Kalman filter. Since I’ve never used a Kalman filter before and I’m not sure how exactly I could apply it to my data, I thought I’d reach out here.

My data

I have time-lapse images of roots growing. As they grow they emit a bit of light from their tip. I want to track the trajectory of the root tips.

[image: time-lapse frame of a growing root tip emitting light]

The image of the root tip looks like a blurry point source – the distribution of the light intensities sometimes looks almost like a Gaussian (albeit not in the example below).

[image: light-intensity distribution around a root tip]

I can get an OK estimate of the location of the start of a root tip, so I know where to begin in the image. I also have good guesses about the tip’s:

  1. speed
  2. angular speed (how quick it can change directions)
  3. range of possible directions (the root doesn’t grow “up”, it only grows downwards with gravity)
  4. intensity change (how the tip brightness degrades with time)

How

I read this excellent explanation of Kalman filters and checked out the Kalman, StateSpace, DataAssim, and StateSpaceRoutines packages, and I figured I should be able to:

  1. Start with an initial tip location
  2. Guess the location of the tip in the next frame
  3. Extract a large enough region of interest (ROI) from that next frame centered on the guess
  4. Use the light distribution in that ROI to feed the Kalman filter with an estimation of the new location

Implementation

I’m a bit lost… Does anyone here have a good understanding of how I could use one of the available tools in Julia to accomplish this?

Thanks!

Example data

This will generate some data to play with:

using ImageFiltering

sz = (100, 100)
nframe = 10
imgs = rand(sz..., nframe);              # noisy background
t = 1:nframe
x = 1.1 .* t .+ sz[2]/2 .+ rand(nframe)  # true column of the tip in each frame
y = 7.8 .* t .+ 1 .+ rand(nframe)        # true row of the tip in each frame
i = [CartesianIndex(round(Int, y[frame]), round(Int, x[frame]), frame) for frame in 1:nframe]
imgs[i] .= 5e3                           # a bright tip pixel in each frame
for frame in 1:nframe
    # blur each frame so the tip looks like a point source
    imgs[:,:,frame] = imfilter(imgs[:,:,frame], Kernel.gaussian(2))
end

Your data is not in state-space form (that I could easily imagine as applicable for the Kalman filter). E.g. your state could be the (x, y) coordinates of the root tips, and then perhaps you could use a random walk or an I(1) process (“velocity” changes randomly), but those coordinates would need to be extracted from the pixels first. AFAICT that is orthogonal to the application of the Kalman filter. (BTW, amazing stuff.)
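
For concreteness, such a model could look like the sketch below (state = position plus a random-walk velocity). This is only an illustration; the noise magnitudes in Q and R are made-up placeholders you would have to tune.

using LinearAlgebra

# Sketch: state s = [x, y, vx, vy], where the velocity performs a random walk (the I(1) idea).
dt = 1.0                                    # one frame per step
A = [1 0 dt 0; 0 1 0 dt; 0 0 1 0; 0 0 0 1]  # position integrates velocity, velocity drifts
H = [1 0 0 0; 0 1 0 0]                      # only the (x, y) position is observed
Q = Diagonal([0.01, 0.01, 0.5, 0.5])        # process noise: how much the velocity can change per frame
R = Diagonal([2.0, 2.0])                    # measurement noise of the extracted pixel coordinates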

I’ve successfully followed the tip by just finding the closest and brightest pixel below the previous one. So I could generate an approximation for where the tips might be.

Yes, amazing pictures. To make this more robust, instead of taking the brightest pixel you can do the following: given the current position, apply a window/kernel and compute the intensity-weighted mean position (the centroid of the light distribution) within that window.

ij(x, y) = round(Int, y), round(Int, x)   # (x, y) -> integer (row, column)

# initial guess x y
x = ...
y = ...
M = imgs[:,:,1]

kernel(x, y, σ) = 1/(2*π*σ^2)*exp(-hypot(x,y)^2/(2*σ^2))   # isotropic Gaussian weight

function track(M, x, y, σ = 2, n = 7)
    # Gaussian-weighted centroid of the intensities in a (2n+1)×(2n+1) window around (x, y)
    si = 0.0
    sj = 0.0
    s = 0.0
    i0, j0 = ij(x, y)
    for i in i0-n:i0+n
        for j in j0-n:j0+n
            im = mod1(i, size(M,1))   # wrap around at the image borders
            jm = mod1(j, size(M,2))
            si += i*M[im,jm]*kernel(y - i, x - j, σ)
            sj += j*M[im,jm]*kernel(y - i, x - j, σ)
            s  += M[im,jm]*kernel(y - i, x - j, σ)
        end
    end
    sj/s, si/s   # returns (x, y)
end

xx = zeros(nframe)
yy = zeros(nframe)

for frame in 1:nframe
    global x, y   # update the running position estimate (needed when run as a script)
    x, y = track(imgs[:, :, frame], x, y, 10, 16)
    xx[frame] = x
    yy[frame] = y
end


[xx yy]

In the example, after playing a bit with the window parameters, say σ = 10 and n = 16, I get

 x        xtrue     y        ytrue
 51.9744  51.8124   9.98118   9.76172
 52.9121  53.0537  16.3462   17.1948
 53.8929  53.8186  24.1064   24.813
 54.88    54.799   32.098    33.1901
 55.8867  56.2031  40.0934   40.5572
 57.7943  57.5816  47.2083   48.399
 58.8921  58.6591  55.0959   55.9413
 58.9889  58.9721  63.0704   64.3214
 59.8979  60.299   71.0971   71.7049
 61.7865  61.6383  79.0698   79.9845

That seems to work nicely. You can also use track to obtain a velocity vector.
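
For example, a crude velocity estimate is just the frame-to-frame difference of the tracked positions:

# Rough per-frame velocity estimates (pixels/frame) from the tracked positions.
vx = diff(xx)
vy = diff(yy)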


This video looks amazing! Is there any hope that you could publish the data set somewhere in the open so that others could play around with it?


Thanks!

Cool, but could you please explain what the function ij is or does (my only guess was round(Int, x), but that doesn’t seem to work)?

Ah, sorry, you almost had it.

ij(x,y) =  round(Int, y), round(Int, x)

I’ll edit.

Wow! Awesome!
So what do you mean by “You can also use track to obtain a velocity vector”?

track is just the function name. If you take

xnew, ynew = track(imgs[:, :, frame], xpred, ypred, 10, 16)
dx = xcur - xnew
dy = ycur - ynew

you can use that in filtering.

Right, right.

So, I played around with it, and it seems that if I reduce the tip’s brightness (and hence the signal-to-noise ratio) by an order of magnitude:

imgs[i] .= 5e2

then it really breaks down. I might need to play with the window size and kernel variance…
A similar solution I’ve used before is to calculate the weighted mean of the window, using the pixel brightness as the weights. Not sure how your solution really differs from that…

While this is cool, I can’t help thinking if there is a way to use online Kalman filtering here…

As @Tamas_Papp said, Kalman filtering is mostly orthogonal: if you use a function like track to obtain a noisy position signal, you can then use a Kalman filter to update the position x, y, taking into account a linear model for the dynamics of the root location and the current position.
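
To make that concrete, below is a minimal hand-rolled sketch (not using any of the packages mentioned above) of a constant-velocity Kalman filter fed with the xx, yy positions from track. The noise levels in Q and R are made-up placeholders, not tuned values.

using LinearAlgebra

# Minimal Kalman filter over the (xx, yy) positions returned by track.
function kalman_track(xx, yy; dt = 1.0)
    A = [1 0 dt 0; 0 1 0 dt; 0 0 1 0; 0 0 0 1]   # constant-velocity dynamics for s = [x, y, vx, vy]
    H = [1.0 0 0 0; 0 1 0 0]                     # we only observe (x, y)
    Q = Matrix(Diagonal([0.01, 0.01, 0.5, 0.5])) # process noise (placeholder)
    R = Matrix(Diagonal([2.0, 2.0]))             # measurement noise of track (placeholder)
    s = [xx[1], yy[1], 0.0, 0.0]                 # start at the first measurement, zero velocity
    P = Matrix(Diagonal([1.0, 1.0, 10.0, 10.0])) # initial state covariance
    xf = similar(xx); yf = similar(yy)           # filtered positions
    for k in eachindex(xx)
        s = A * s                                # predict
        P = A * P * A' + Q
        z = [xx[k], yy[k]]                       # update with the measured position
        K = P * H' / (H * P * H' + R)            # Kalman gain
        s = s + K * (z - H * s)
        P = (I - K * H) * P
        xf[k], yf[k] = s[1], s[2]
    end
    return xf, yf
end

xf, yf = kalman_track(xx, yy)   # xf, yf are the filtered tip positions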

OK, so I understand that I need a step between the time-lapse images and the Kalman filter, namely your track function or something similar that spits out an estimate of the coordinates. I can then use similar estimators that spit out the speed, angular speed, direction, and intensity change, but how do I (if at all) feed all of that into a Kalman filter?

Please have at it, and please let me know if you find some robust way of getting the tip’s trajectories:
dropbox link to a zip file with 25 UInt16 TIF images (2 MB each), temporally sorted (named 1, 2, 3, …, 25). As a starter, there is a tip in image 1.TIF at column 798 and row 636. Not to be negative, but keep in mind that while you could easily follow this tip by detecting the brightest point below it in each consecutive image, it would be better if we could find a more robust and accurate way of doing this.
Thanks in advance!


I think the following is a pretty decent way of getting an approximation of the tip’s location in the next frame (p is the current tip as a CartesianIndex, img is the next frame):

using Statistics, StatsBase

W = 10                                        # half-width of the window around the current tip
cols = [col for row in -W:W for col in -W:W]
rows = [row for row in -W:W for col in -W:W]
win = CartesianIndex.(rows, cols)
w = Float64.(img[Ref(p) .+ win])              # window intensities (Ref keeps p from being broadcast over)
w .-= mean(w)                                 # subtract the background level
clamp!(w, 0, 1)                               # keep only above-background weights
yx = Float64.([rows cols])
μ = mean(yx, StatsBase.weights(w); dims = 1)  # intensity-weighted mean offset (row, column)
p = CartesianIndex(round.(Int, μ)...) + p     # updated tip location
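
Roughly, looping this over the 25 TIFs could look like the sketch below; the step_tip wrapper just repeats the step above (with the weighted mean computed by hand), and the files are assumed to load with FileIO/Images as 1.TIF…25.TIF.

using FileIO, Images, Statistics

# Same window/weighted-mean step as above, wrapped in a function.
function step_tip(img, p; W = 10)
    rows = [row for row in -W:W for col in -W:W]
    cols = [col for row in -W:W for col in -W:W]
    win  = CartesianIndex.(rows, cols)
    w = Float64.(img[Ref(p) .+ win])    # intensities in the window around p
    w .-= mean(w)                       # remove the background level
    clamp!(w, 0, 1)                     # keep only above-background weights
    μrow = sum(rows .* w) / sum(w)      # intensity-weighted mean offset
    μcol = sum(cols .* w) / sum(w)
    return p + CartesianIndex(round(Int, μrow), round(Int, μcol))
end

p = CartesianIndex(636, 798)            # starting tip in 1.TIF (row 636, column 798)
positions = [p]
for k in 2:25
    global p = step_tip(load("$k.TIF"), p)
    push!(positions, p)
end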

So now I have a list of estimated locations. I should therefore be able to use a Kalman filter on that list in order to improve on these estimates. Since this next step is separate from acquiring the tip estimates, I’ll start a new topic just for that.

That is about what I found as well when working on your example data. I applied my solution to the normalized frame, but normalizing and thresholding the window works even better, I suppose.


Cool. Well, now all that remains, unless I misunderstood something, is to refine that with the Kalman filter (see here).

[Offtopic]: This looks impressive! Can you please post a link (or two) to the science behind the light emission?