Vision-based Localization in RoboCup Environments
Stefan Enderle (1), Marcus Ritter (1), Dieter Fox (2), Stefan Sablatnög (1), Gerhard Kraetzschmar (1), Günther Palm (1)

(1) Dept. of Neural Information Processing, University of Ulm, D-89069 Ulm, Germany
(2) Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213
Abstract. Knowing its position in an environment is an essential capability for any useful mobile robot. Monte-Carlo Localization (MCL) has become a popular framework for solving the self-localization problem in mobile robots. The known methods exploit sensor data obtained from laser range finders or sonar rings to estimate robot positions and are quite reliable and robust against noise. An open question is whether comparable localization performance can be achieved using only camera images, especially if the camera images are used both for localization and object recognition. In this paper, we discuss the problems arising from these characteristics and show experimentally that MCL nevertheless works very well under these conditions.
1 Introduction
In the recent past, Monte-Carlo localization has become a very popular framework for solving the self-localization problem in mobile robots [4, 5]. This method is very reliable and robust against noise, especially if the robots are equipped with laser range finders or sonar sensors. In some environments, however, for example in the popular RoboCup domain [6], providing a laser scanner for each robot is difficult or impossible, and sonar data is extremely noisy due to the highly dynamic environment. Thus, extending the existing localization methods so that they can use other sensory channels, such as uni- or omni-directional vision systems, is an important open problem in RoboCup. In this work, we present a vision-based MCL approach using visual features which are extracted from the robot's unidirectional camera and matched against a known model of the RoboCup environment.

2 Monte Carlo Localization
Monte Carlo localization (MCL) [5] is an efficient implementation of the general Markov localization approach (see e.g. [4]). Here, the infinite probability distribution Bel(l), expressing the robot's belief in being at location l, is represented by a set of N samples S = {s_1, ..., s_N}. Each sample s_i = ⟨l_i, p_i⟩ consists of a robot location l_i and a weight p_i. As the weights are interpreted as probabilities, we assume Σ_{i=1}^{N} p_i = 1.

The algorithm for Monte Carlo localization is adopted from the general Markov localization framework. Initially, a set of samples reflecting the initial knowledge about the robot's position is generated. During robot operation, the following two kinds of update steps are iteratively executed: As in the general Markov algorithm, a motion model P(l | l', m) is used to update the probability distribution Bel(l). In MCL, a new sample set S is generated from a previous set S' by applying the motion model as follows: For each sample ⟨l', p'

[P. Stone, T. Balch, and G. Kraetzschmar (Eds.): RoboCup 2000, LNAI 2019, pp. 291-296, 2001. © Springer-Verlag Berlin Heidelberg 2001]