Feature Detection and Tracking

Kanade-Tomasi Algorithm for Feature Detection

The feature detection algorithm that we have used is the one presented by Kanade and Tomasi ([1], [2]). The algorithm performs the following steps:

- Gaussian blur of the image.

- Computation of first derivatives (these first two steps can be carried out in one pass using Sobel derivatives).

- Compute the matrix of squared derivatives over the neighborhood of each pixel.

- Compute the minimal eigenvalue of the matrix corresponding to each pixel.

- Reject pixels with negligible minimal eigenvalue.

- Reject the pixels for which the minimal eigenvalue is not a local maximum.

The pixels that remain are considered as the detected features.

This set of operations is implemented by the OpenCV function cvGoodFeaturesToTrack( ).
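For illustration, the detection steps above can be sketched directly in NumPy. This is a hedged, simplified version: the window size and quality fraction are illustrative parameters, and the initial Gaussian blur is omitted for brevity.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def min_eigen_map(img, win=1):
    """Per-pixel minimal eigenvalue of the 2x2 matrix of squared
    derivatives, summed over a (2*win+1)^2 neighborhood.
    (Central differences stand in for the Sobel derivatives;
    the Gaussian blur step is omitted here.)"""
    Ix = np.gradient(img, axis=1)
    Iy = np.gradient(img, axis=0)
    k = 2 * win + 1

    def window_sum(a):
        return sliding_window_view(a, (k, k)).sum(axis=(-1, -2))

    Sxx = window_sum(Ix * Ix)
    Syy = window_sum(Iy * Iy)
    Sxy = window_sum(Ix * Iy)
    # Closed-form minimal eigenvalue of [[Sxx, Sxy], [Sxy, Syy]].
    return (Sxx + Syy) / 2 - np.sqrt(((Sxx - Syy) / 2) ** 2 + Sxy ** 2)

def detect_features(img, win=1, quality=0.01):
    """Keep pixels whose minimal eigenvalue is above a fraction of the
    global maximum (reject negligible values) and is a local maximum
    in its 3x3 neighborhood.  Returns (x, y) image coordinates."""
    lam = min_eigen_map(img, win)
    thresh = quality * lam.max()
    feats = []
    for y in range(1, lam.shape[0] - 1):
        for x in range(1, lam.shape[1] - 1):
            v = lam[y, x]
            if v > thresh and v == lam[y - 1:y + 2, x - 1:x + 2].max():
                # Offset back to full-image coordinates.
                feats.append((x + win, y + win))
    return feats
```

On a synthetic step-corner image, the only pixels surviving both rejection tests cluster around the corner, since along a straight edge the gradient is one-dimensional and the minimal eigenvalue is near zero.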

Later we need to refine the location of the features to subpixel accuracy. This is done with the OpenCV function cvFindCornerSubPix( ).
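cvFindCornerSubPix( ) refines each corner iteratively from image gradients. As a simpler, hedged illustration of the idea of subpixel refinement, the sketch below fits a one-dimensional parabola through three response samples in each direction; the function name and approach are our own, not OpenCV's internals.

```python
import numpy as np

def refine_subpixel(response, x, y):
    """Refine an integer peak (x, y) of a corner-response map to subpixel
    accuracy by fitting a parabola through the three samples in each
    direction and taking its vertex.  A simple stand-in, not the
    iterative scheme used by cvFindCornerSubPix."""
    c = response[y, x]
    dx = dy = 0.0
    # Horizontal fit through (x-1, x, x+1).
    l, r = response[y, x - 1], response[y, x + 1]
    denom = l - 2 * c + r
    if denom != 0:
        dx = 0.5 * (l - r) / denom
    # Vertical fit through (y-1, y, y+1).
    u, d = response[y - 1, x], response[y + 1, x]
    denom = u - 2 * c + d
    if denom != 0:
        dy = 0.5 * (u - d) / denom
    return x + dx, y + dy
```

For a response that is locally quadratic, the fitted vertex recovers the true peak exactly; the offsets dx and dy always lie in [-0.5, 0.5] around a discrete maximum.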

Since the motion estimation algorithm works with sequences of images, we only perform the feature detection on the first image of every sequence.

Lucas-Kanade Algorithm for Feature Tracking

At every frame except the first of a sequence, we need to find correspondences for the detected features. At any step we may have to drop features for which a good correspondence is not found; this happens in real situations when tracked features become occluded.

For tracking the features we have used the Lucas-Kanade algorithm proposed in [3] and [5]. This is a two-frame method for computing optical flow, and it works as follows:

// Let I1 be the first image and I2 the second.
// For each feature, let Q1 be its neighborhood in I1 and
// Q2 the corresponding neighborhood in I2.

for (every feature detected in I1)
{
    Set p = feature;
    Initialize the displacement vector d = 0;
    Initialize the image difference dif = INFINITY;

    while (dif > THRESHOLD)
    {
        Compute d0 = optical flow at p;
        d = d + d0;
        Warp Q1 to Q', according to d0;
        Compute the sum of squared differences dif between Q' and Q2; // See [5], p. 147, for a description of the SSD.
        if (dif < THRESHOLD)
            break;
        Update Q1 = Q';
        Update p to the center of Q1;
    }
}

In practice, we have taken advantage of the function cvCalcOpticalFlowPyrLK( ) in OpenCV to track the features. This function uses image pyramids, computing the flow hierarchically from coarse to fine, which improves running time and allows larger displacements to be tracked.
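The core update inside the loop above can be sketched as follows; this is a hedged, single-level illustration (no pyramid, no warping or iteration), with the window size as an illustrative parameter. One Lucas-Kanade step solves a 2x2 linear system built from the same matrix of squared derivatives used during detection.

```python
import numpy as np

def lk_step(I1, I2, x, y, win=3):
    """One Lucas-Kanade update at feature (x, y): solve G d = b over a
    (2*win+1)^2 window, where G is the matrix of squared derivatives and
    b accumulates the temporal difference It = I2 - I1.
    Single pyramid level, no warping; returns the displacement (dx, dy)."""
    ys = slice(y - win, y + win + 1)
    xs = slice(x - win, x + win + 1)
    Ix = np.gradient(I1, axis=1)[ys, xs]
    Iy = np.gradient(I1, axis=0)[ys, xs]
    It = (I2 - I1)[ys, xs]
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    # G must be well conditioned for the solve to be reliable -- exactly
    # why features are picked where its minimal eigenvalue is large.
    return np.linalg.solve(G, b)
```

Note that G is the same structure matrix thresholded during detection, which is why features chosen by the Kanade-Tomasi criterion are precisely the ones this tracker can follow reliably.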

Sunday, July 04, 2004
