A Feedback-Based Algorithm for Motion Analysis with Application to Object Tracking



Research Article

Shesha Shah and P. S. Sastry
Department of Electrical Engineering, Indian Institute of Science, Bangalore 560 012, India

Received 1 December 2005; Revised 30 July 2006; Accepted 14 October 2006
Recommended by Stefan Winkler

We present a motion detection algorithm which detects the direction of motion at a sufficient number of points and thus segregates the edge image into clusters of coherently moving points. Unlike most algorithms for motion analysis, we do not estimate the magnitude of velocity vectors or obtain dense motion maps. The motivation is that motion direction information at a number of points appears sufficient to evoke the perception of motion and hence should be useful in many image processing tasks requiring motion analysis. The algorithm essentially updates the motion estimate from the previous time step using the current image frame as input, in a dynamic fashion. One of the novel features of the algorithm is the use of a feedback mechanism for evidence segregation. This kind of motion analysis can identify regions in the image that are moving together coherently, and such information can suffice for many applications that utilize motion, such as segmentation, compression, and tracking. We present an algorithm for tracking objects using our motion information to demonstrate the potential of this motion detection algorithm.

Copyright © 2007 Hindawi Publishing Corporation. All rights reserved.

1. INTRODUCTION

Motion analysis is an important step in understanding a sequence of image frames. Most algorithms for motion analysis [1, 2] essentially perform motion detection using consecutive image frames as input. One can broadly categorize them as correlation-based or gradient-based methods. Correlation-based methods try to establish correspondences between object points across successive frames to estimate motion. The main problems to be solved in this approach are establishing the point correspondences and obtaining reliable velocity estimates even when the correspondences are noisy. Gradient-based methods compute velocity estimates from the spatial and temporal derivatives of the image intensity function and mostly rely on the optic flow equation (OFE) [3], I_x u + I_y v + I_t = 0, which relates these derivatives under the assumption that the intensities of moving object points do not change across successive frames. Methods that solve the OFE obtain 2D velocity vectors (relative to the camera), while those based on tracking corresponding points can, in principle, recover 3D motion. Normally, velocity estimates are obtained at a large number of points and are often noisy. Hence, in many applications, one employs postprocessing in the form of model-based smoothing of the velocity estimates

to find regions of coherent motion that correspond to objects. (See [4] for an interesting account of how local and global methods can be combined for obtaining velocity flow field.) While the two approaches mentio
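To make the gradient-based family concrete, the following is a minimal NumPy sketch of least-squares velocity estimation from the optic flow equation, I_x u + I_y v + I_t = 0. It is an illustration of the generic OFE approach discussed above, not the authors' algorithm; the function name and the single-global-translation assumption are ours.

```python
import numpy as np

def ofe_velocity(frame1, frame2):
    """Estimate one 2D velocity (u, v) from two frames by least squares
    on the optic flow equation Ix*u + Iy*v + It = 0, assuming pure
    translation and brightness constancy across the pair."""
    Ix = np.gradient(frame1, axis=1)   # spatial derivative along x
    Iy = np.gradient(frame1, axis=0)   # spatial derivative along y
    It = frame2 - frame1               # temporal derivative
    # Stack one OFE constraint per pixel and solve the overdetermined system.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check: a smooth blob shifted one pixel to the right.
x, y = np.meshgrid(np.arange(64), np.arange(64))
blob  = np.exp(-((x - 32)**2 + (y - 32)**2) / 50.0)
blob2 = np.exp(-((x - 33)**2 + (y - 32)**2) / 50.0)
u, v = ofe_velocity(blob, blob2)
print(u, v)  # expect u near 1.0 and v near 0.0
```

Note how this sketch illustrates the limitations the paragraph mentions: the estimates are noisy at low-gradient pixels, and pooling all constraints into one global velocity is itself a crude form of the model-based smoothing described above.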