FPGA Implementation of a Feature Detection and Tracking Algorithm for Real-time Applications
Abstract. An efficient algorithm to detect, correlate, and track features in a scene was implemented on an FPGA in order to obtain real-time performance. The algorithm implemented was a Harris Feature Detector combined with a correlator based on a priority queue of feature strengths that considered minimum distances between features. The remaining processing of frame-to-frame movement is completed in software to determine an affine homography including translation, rotation, and scaling. A RANSAC method is used to remove mismatched features and increase accuracy. This implementation was designed specifically for use as an onboard vision solution in determining movement of small unmanned air vehicles that have size, weight, and power limitations.
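As an illustrative sketch only (not the authors' FPGA/HDL design), the detection and strength-ordered selection described in the abstract can be mirrored in software roughly as follows; the OpenCV Harris call, the response threshold, and the window and distance parameters are assumptions chosen for clarity.

import heapq
import cv2
import numpy as np

def select_features(gray, max_features=100, min_dist=10, block=3, ksize=3, k=0.04):
    # Harris corner response for every pixel of a single-channel image
    response = cv2.cornerHarris(np.float32(gray), block, ksize, k)

    # Candidate pixels whose response clears an (assumed) relative threshold
    ys, xs = np.nonzero(response > 0.01 * response.max())

    # Priority queue keyed by corner strength (negated for a max-heap)
    heap = [(-response[y, x], int(x), int(y)) for y, x in zip(ys, xs)]
    heapq.heapify(heap)

    # Pop the strongest features first, rejecting any candidate that lies
    # within min_dist of a feature that has already been accepted
    accepted = []
    while heap and len(accepted) < max_features:
        _neg_strength, x, y = heapq.heappop(heap)
        if all((x - ax) ** 2 + (y - ay) ** 2 >= min_dist ** 2 for ax, ay in accepted):
            accepted.append((x, y))
    return accepted

In the paper, this detection and priority-queue selection is performed in hardware on the FPGA; the sketch only restates the selection logic in software for readability.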
1 Introduction
Computer vision and image processing applications are computationally demanding by nature, so real-time performance is not a trivial accomplishment. It is even more difficult when a specific application limits the size of the computing system or the amount of power that can be consumed. Small unmanned air vehicles, or micro UAVs, offer a multitude of applications for onboard image processing, but the craft's ability to maneuver is severely hindered if it must be tethered to a power supply or overloaded with the weight of large processing systems. To provide image processing solutions onboard micro UAVs, a balanced combination of software algorithms and hardware implementations needs to be determined. This paper details work to provide an onboard image processing solution that detects and tracks features in order to estimate the movement, or homography, from one frame to the next for micro UAV applications.

Feature detection forms the basis of many UAV applications and is the initial step in developing computer understanding of video sequences. Detected features can be tracked and classified as obstacle/non-obstacle to implement basic obstacle avoidance, helping a UAV safely avoid trees, power lines, and other obstacles. Features can also be identified as a desired target and then used to maintain a specified distance from that target, effectively tracking any identifiable object. Tracked features may also be combined with line detection or color segmentation to implement path or lane following, allowing the UAV to map out potential routes for ground vehicles. These features can further be used to obtain information about the UAV's surroundings, such as height above ground, pitch, roll, direction of movement, and speed. Image processing solutions involving feature detection and homography calculation currently exist for micro UAV applications, but they are all performed remotely on a ground station computer. Most of the noise introduced into image processing for micro UAV applications occurs during transmission of the video from the UAV to the ground station.
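To illustrate the software stage of the pipeline mentioned above, the sketch below fits a translation, rotation, and scale transform to matched feature positions and lets RANSAC discard mismatched pairs; the OpenCV function, threshold, and helper names are assumptions rather than the paper's actual code.

import cv2
import numpy as np

def estimate_frame_motion(prev_pts, curr_pts, reproj_thresh=3.0):
    # prev_pts, curr_pts: (N, 2) arrays of matched feature positions in
    # consecutive frames
    prev_pts = np.asarray(prev_pts, dtype=np.float32)
    curr_pts = np.asarray(curr_pts, dtype=np.float32)

    # RANSAC rejects mismatched features while fitting a 2x3 similarity
    # (translation + rotation + uniform scale) transform
    M, inlier_mask = cv2.estimateAffinePartial2D(
        prev_pts, curr_pts, method=cv2.RANSAC,
        ransacReprojThreshold=reproj_thresh)
    if M is None:
        return None, None
    return M, inlier_mask.ravel().astype(bool)

def decompose(M):
    # Recover scale, rotation angle (radians), and translation from M
    scale = float(np.hypot(M[0, 0], M[1, 0]))
    angle = float(np.arctan2(M[1, 0], M[0, 0]))
    return scale, angle, (float(M[0, 2]), float(M[1, 2]))

A full affine or projective model could be substituted if an application also needs shear or perspective, at the cost of more point correspondences per RANSAC hypothesis.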