From Multiple Images to a Consistent View



Abstract

The approach presented in this paper allows a team of mobile robots to cooperatively estimate their poses, i.e. positions and orientations, and the poses of other observed objects from images. The images are obtained by calibrated color cameras mounted on the robots. Model knowledge of the robots' environment, the geometry of observed objects, and the characteristics of the cameras are represented in curve functions which describe the relation between model curves in the image and the sought pose parameters. The pose parameters are estimated by minimizing the distance between model curves and actual image curves. Observations from possibly different viewpoints obtained at different times are fused by a method similar to the extended Kalman filter. In contrast to the extended Kalman filter, which is based on a linear approximation of the measurement equations, we use an iterative optimization technique which takes non-linearities into account. The approach has been successfully used in robot soccer, where it reliably maintained a joint pose estimate for the players and the ball.
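The core estimation step described above, minimizing the distance between projected model curves and observed image curves with a non-linear iterative optimizer, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a simplified planar setting with point samples along the curves, a 2-D pose (x, y, theta), and a numerical Jacobian; the function names are hypothetical.

```python
import numpy as np

def project(model_pts, pose):
    """Transform model points (object frame) into the image frame for a
    planar pose (x, y, theta). A real system would also apply the
    calibrated camera projection; this sketch stays in 2-D."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return model_pts @ R.T + np.array([x, y])

def estimate_pose(model_pts, image_pts, pose0, iters=20):
    """Gauss-Newton iteration: minimize the sum of squared distances
    between projected model curve points and observed image curve
    points, handling the non-linearity by re-linearizing each step."""
    pose = np.asarray(pose0, dtype=float)
    for _ in range(iters):
        r = (project(model_pts, pose) - image_pts).ravel()  # residuals
        # Numerical Jacobian of the residuals w.r.t. (x, y, theta)
        J = np.empty((r.size, 3))
        eps = 1e-6
        for k in range(3):
            d = np.zeros(3)
            d[k] = eps
            J[:, k] = ((project(model_pts, pose + d) - image_pts).ravel() - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        pose = pose + step
        if np.linalg.norm(step) < 1e-10:
            break
    return pose
```

Re-linearizing at every iteration, rather than once as in the extended Kalman filter's measurement update, is what lets the optimizer account for the non-linearities mentioned in the abstract.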

Introduction

1.1 Motivation and Goal of the Work

To successfully perform their tasks, most autonomous mobile robots must be able to estimate their own poses, consisting of position and orientation. Furthermore, interaction with other robots and the manipulation of objects require them to localize other, possibly moving, objects. A strong constraint is that the localization problem has to be solved in real time. The required localization accuracy often varies with the distance of the object to be localized: a robot that wants to grasp an object requires an accurate position, whereas less accurate estimates are sufficient to approach the object. Due to their lower price and weight, visual sensors are often preferred over laser range finders. In this paper, we propose a novel approach for estimating the poses of cooperating mobile robots and the poses of other objects observed by the robots.

1.2 Previous Work

The problem of pose estimation from images is frequently addressed by the robotics, computer vision, and photogrammetry communities. Due to the huge number of publications, a comprehensive review would be beyond the scope of this paper. Recently, sample-based versions of Markov localization have become very popular [2, 5, 3, 6]. Closely related to sample-based Markov localization is the Condensation algorithm [1], which is often used for object tracking. Both approaches represent the posterior distribution of the sought parameters (e.g. the pose) by samples, which allows them to approximate virtually any distribution. However, in order to achieve high accuracy, a large number of samples is usually necessary. The conditional probability density of the environment observation has to be computed for each sample pose. A good match between the observation and a sample pose leads to n

R. Hanek et al. In: P. Stone, T. Balch, and G. Kraetzschmar (Eds.): RoboCup 2000, LNAI 2019, pp. 169-177. © Springer-Verlag Berlin Heidelberg 2001
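The per-sample cost noted above can be made concrete with a minimal sketch of one measurement update of sample-based Markov localization. This illustrates the general technique the text contrasts against, not the paper's own method; the 1-D state, the Gaussian observation model, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def localize_step(particles, observe_likelihood):
    """One measurement update: weight each sample pose by the
    observation likelihood, then resample in proportion to the
    weights. Note the likelihood is evaluated once per sample,
    which is the computational cost the text refers to."""
    w = np.array([observe_likelihood(p) for p in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Hypothetical 1-D example: true position 2.0, Gaussian observation model.
true_pos = 2.0
def likelihood(p):
    return np.exp(-0.5 * ((p - true_pos) / 0.3) ** 2)

particles = rng.uniform(0.0, 10.0, size=5000)  # samples of the posterior
for _ in range(3):
    particles = localize_step(particles, likelihood)
    particles += rng.normal(0.0, 0.05, size=len(particles))  # diffusion step
estimate = particles.mean()
```

Because the samples can concentrate anywhere, the representation approximates virtually any posterior shape, but tightening the estimate requires many samples, and hence many likelihood evaluations per update.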