Stereo Vision Based Self-localization of Autonomous Mobile Robots



1 NWFP University of Engineering and Technology, Peshawar, Pakistan
  {bais, yahya.khawja, usman, gmjally}@nwfpuet.edu.pk
2 Vienna University of Technology, Vienna, Austria
  [email protected]
3 Dalhousie University, Halifax, Canada
  [email protected]
4 Islamia College, Peshawar, Pakistan
  [email protected]

Abstract. This paper presents vision-based self-localization of tiny autonomous mobile robots in a known but highly dynamic environment. The problem ranges from tracking the robot position given an initial estimate to global self-localization. The algorithm enables the robot to find its initial position and to verify its location during every movement. The global position of the robot is estimated using trilateration-based techniques whenever distinct landmark features can be extracted. Distance measurements are used because they require fewer landmarks than methods based on angle measurements. However, the minimum number of features required for global position estimation is not available throughout the entire state space. Therefore, the robot position is tracked once a global position estimate is available. An extended Kalman filter is used to fuse information from multiple heterogeneous sensors. Simulation results show that combining global position estimation with tracking yields a significant performance gain.

Keywords: self-localization, stereo vision, autonomous robots, Kalman filter, soccer robots.
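As a rough sketch of the trilateration idea summarized above (an illustration only, not the paper's implementation; the landmark layout and the function name are assumed), a least-squares position fix can be computed from distances to known landmarks:

    import numpy as np

    def trilaterate(landmarks, distances):
        # Known 2D landmark positions (n x 2) and measured ranges (n,).
        # Subtracting the range equation of the last landmark from the
        # others removes the quadratic term |p|^2 and leaves the linear
        # system 2(l_i - l_n)^T p = |l_i|^2 - |l_n|^2 + d_n^2 - d_i^2.
        L = np.asarray(landmarks, dtype=float)
        d = np.asarray(distances, dtype=float)
        A = 2.0 * (L[:-1] - L[-1])
        b = (np.sum(L[:-1]**2, axis=1) - np.sum(L[-1]**2)
             + d[-1]**2 - d[:-1]**2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Example with three landmarks, the minimum for a unique 2D fix:
    L = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
    p_true = np.array([1.5, 1.0])
    d = [np.linalg.norm(p_true - np.array(l)) for l in L]
    print(trilaterate(L, d))   # approximately [1.5, 1.0]

With noisy ranges and more than three landmarks, the same least-squares formulation averages the measurement errors; with fewer landmarks than the minimum, no unique fix exists, which is why tracking is needed elsewhere in the state space.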

1 Introduction

In an application where multiple robots work autonomously on a common global task, knowledge of the position of each individual robot is a basic requirement for the successful execution of any global strategy. One solution to the position estimation problem is to start at a known location and track the robot position locally using methods such as odometry or inertial navigation [1].
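To make the local tracking step concrete, the following is a minimal dead-reckoning sketch under assumed differential-drive kinematics (the function and parameter names are illustrative, not from the paper); each update integrates a small wheel-encoder increment, which is the root of the error growth discussed next.

    import numpy as np

    def odometry_step(pose, d_left, d_right, wheel_base):
        # pose = (x, y, theta); d_left and d_right are the distances
        # travelled by the two wheels since the last update.
        x, y, theta = pose
        ds = 0.5 * (d_left + d_right)             # displacement of the centre
        dtheta = (d_right - d_left) / wheel_base  # change in heading
        # Midpoint integration of the pose; every call adds a small,
        # noisy increment, so the odometric error accumulates over time.
        x += ds * np.cos(theta + 0.5 * dtheta)
        y += ds * np.sin(theta + 0.5 * dtheta)
        return (x, y, theta + dtheta)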


These methods suffer from unbounded error growth due to the integration of small measurement increments into the final estimate [2]. Another approach is to estimate the robot position globally using external sensors [3]. To simplify global localization, the robot environment is often engineered with active beacons or other artificial landmarks such as bar code reflectors and visual patterns. If modification of the robot environment is not allowed, global localization has to be based on naturally occurring landmarks. Such methods are less accurate and demand significantly more computational power [2]. Additionally, the minimum number of features required for global self-localization is not available throughout the entire state space. Thus it is clear that a solution based only on local sensors, or one that computes the global position at every step, is unlikely to work. A hybrid approach to self-localization that combines information from the local sensors with that from the external sensor is required. Such a method is complementary and compensates for the shortcomings of the individual techniques.
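A minimal sketch of such a hybrid scheme in extended-Kalman-filter form, assuming a pose state (x, y, theta), an odometry input (ds, dtheta), and a single range measurement to a known landmark (the function names and noise models are assumptions, not the paper's code):

    import numpy as np

    def ekf_predict(x, P, u, Q):
        # Propagate the pose with the odometry increment u = (ds, dtheta).
        ds, dth = u
        th = x[2]
        x_pred = x + np.array([ds * np.cos(th), ds * np.sin(th), dth])
        F = np.array([[1.0, 0.0, -ds * np.sin(th)],   # Jacobian of the
                      [0.0, 1.0,  ds * np.cos(th)],   # motion model
                      [0.0, 0.0,  1.0]])
        return x_pred, F @ P @ F.T + Q

    def ekf_update_range(x, P, z, landmark, R):
        # Correct the pose with one range measurement z to a known
        # landmark; R is the 1x1 range-noise covariance.
        dx, dy = x[0] - landmark[0], x[1] - landmark[1]
        r = np.hypot(dx, dy)                   # predicted range
        H = np.array([[dx / r, dy / r, 0.0]])  # measurement Jacobian
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_new = x + (K @ np.array([z - r])).ravel()
        P_new = (np.eye(3) - K @ H) @ P
        return x_new, P_new

Between global fixes the filter runs prediction alone and the covariance P grows; whenever enough landmarks are visible, range updates (or a full trilateration fix) pull the accumulated odometric drift back toward the true position.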