Mobile Robot Visual Navigation Using Multiple Features



Nick Pears, Department of Computer Science, University of York, York YO10 5DD, UK. Email: [email protected]

Bojian Liang, Department of Computer Science, University of York, York YO10 5DD, UK. Email: [email protected]

Zezhi Chen, Department of Computer Science, University of York, York YO10 5DD, UK. Email: [email protected]

Received 22 December 2003; Revised 29 July 2004

We propose a method to segment the ground plane from a mobile robot's visual field of view and then measure the height of nonground plane features above the mobile robot's ground plane. Thus a mobile robot can determine what it can drive over, what it can drive under, and what it needs to manoeuvre around. In addition to obstacle avoidance, this data could also be used for localisation and map building. All of this is possible from an uncalibrated camera (raw pixel coordinates only), but is restricted to (near) pure translation motion of the camera. The main contributions are (i) a novel reciprocal-polar (RP) image rectification, (ii) ground plane segmentation by sinusoidal model fitting in RP-space, (iii) a novel projective construction for measuring affine height, and (iv) an algorithm that can make use of a variety of visual features and therefore operate in a wide variety of visual environments.

Keywords and phrases: plane segmentation, image rectification, plane and parallax, obstacle detection, mobile robots.
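To make contribution (i) concrete, the following is a minimal illustrative sketch, not the paper's implementation: it assumes reciprocal-polar (RP) rectification maps image points into polar coordinates about a chosen origin and replaces the radius with its reciprocal; the choice of `center` (e.g. an estimate of the focus of expansion under translation) is an assumption of this sketch.

```python
import numpy as np

def reciprocal_polar(points, center):
    """Map 2D image points to reciprocal-polar (RP) coordinates (1/r, theta).

    Illustrative sketch only: the paper rectifies whole images, whereas this
    function transforms point coordinates. `center` is the assumed origin of
    the polar mapping (hypothetical parameter, e.g. the focus of expansion).
    """
    pts = np.asarray(points, dtype=float) - np.asarray(center, dtype=float)
    r = np.hypot(pts[:, 0], pts[:, 1])        # polar radius from the origin
    theta = np.arctan2(pts[:, 1], pts[:, 0])  # polar angle
    return np.column_stack([1.0 / r, theta])  # reciprocal radius, angle
```

Under this kind of mapping, features on a common plane viewed by a translating camera trace loci that can be fitted with a low-parameter (sinusoidal) model in RP-space, which is what motivates contribution (ii).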

1. INTRODUCTION

1.1. Robust, multifeature, multicue vision systems

In order to operate reliably over extended periods of time (i.e., hours/days/weeks rather than seconds/minutes), computer vision systems must make use of all the information in the image stream that is pertinent to the current task. This requires that the system can make this pertinent information explicit by employing a range of feature extractors and visual cues opportunistically, namely, as and when they are available in the image stream and deemed to provide useful constraints to solve task-related problems and resolve any ambiguities. In this way, visual interpretation and decision making can be maximally informed. We believe that this principle is particularly important in unconstrained environments, where the visual environment changes regularly and hence the disambiguating information content of the image stream is continually changing. Thus, if we rely on a single feature/cue combination, such as corners/corner-motion, the application will fail in scenes with few corners, or with corners that are poorly distributed in image space.

We have focused on mobile robot visual navigation as a challenging computer vision problem because the nature of the application suggests that the visual environment is likely to be variable as the robot moves, for example, from room to room. Indeed, we make no assumptions in the work presented here, other than having a reasonably flat floor. This makes the work applicable to mainly indoor mobile robot applications, but also outdoor applications which traverse reasonably flat man-made structures such as pe