Outdoor Mapping and Navigation Using Stereo Vision
1 Introduction

We consider the problem of autonomous navigation in an unstructured outdoor environment. The goal is for a small outdoor robot to come into a new area, learn about and map its environment, and move to a given goal at modest speeds (1 m/s). This problem is especially difficult in outdoor, off-road environments, where tall grass, shadows, deadfall, and other obstacles predominate. Not surprisingly, the biggest challenge is acquiring and using a reliable map of the new area. Although work in outdoor navigation has preferentially used laser rangefinders [14, 2, 6], we use stereo vision as the main sensor. Vision sensors allow us to use more distant objects as landmarks for navigation, and to learn and use color and texture models of the environment, looking further ahead than is possible with range sensors alone.

In this paper we show how to build a consistent, globally correct map in real time, using a combination of the following vision-based techniques (illustrative sketches of each appear after the list):

• Efficient, precise stereo algorithms. We can perform stereo analysis on 512x384 images in less than 40 ms, enabling a fast system cycle time for real-time obstacle detection and avoidance.
• Visual odometry for fine registration of robot motion and corresponding obstacle maps. We have developed techniques that run at 15 Hz on standard PC hardware, and that provide 4% error over runs of 100 m. Our method can be integrated with information from inertial (IMU) and GPS devices for robustness in difficult lighting or motion situations, and for overall global consistency.
• A fast RANSAC [3] method for finding the ground plane. The ground plane provides a basis for obstacle detection algorithms in challenging outdoor terrain, and produces high-quality obstacle maps for planning.
• Learning color models for finding paths and extended ground planes. We learn models of the ground plane and path-like areas both online and offline, using a combination of geometrical analysis and standard learning techniques.
• Sight-line analysis for longer-range inference. Stereo information on our robot is unreliable past 8 m, but it is possible to infer free space by finding "sight lines" to distant objects.
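The stereo engine used in the system is a custom real-time implementation; purely to illustrate the kind of dense disparity map the rest of the pipeline consumes, the sketch below uses OpenCV's off-the-shelf semi-global block matcher. The matcher choice and all parameter values are assumptions, not the authors' settings.

```python
# Illustrative only: dense disparity with OpenCV's semi-global block matcher.
# Inputs are rectified grayscale left/right images of the same size.
import cv2

def compute_disparity(left_gray, right_gray, num_disparities=64, block_size=9):
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disparities,   # must be a multiple of 16
        blockSize=block_size,
        P1=8 * block_size * block_size,   # smoothness penalties (assumed values)
        P2=32 * block_size * block_size,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # OpenCV returns fixed-point disparities scaled by 16; convert to pixels.
    return matcher.compute(left_gray, right_gray).astype("float32") / 16.0
```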
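The visual odometry above is the authors' own technique; the sketch below only shows one common frame-to-frame formulation (track stereo-triangulated features into the next left image, then estimate the pose with 3D-to-2D RANSAC). The function name, the OpenCV-based pipeline, and the reprojection threshold are assumptions for illustration.

```python
# Hypothetical frame-to-frame visual odometry step (not the paper's method):
# KLT tracking of previously triangulated features, then PnP with RANSAC.
import numpy as np
import cv2

def estimate_motion(prev_gray, cur_gray, prev_pts3d, prev_pts2d, K):
    """prev_pts2d: Nx2 float32 features in the previous left image.
    prev_pts3d: Nx3 float32 points triangulated from the previous stereo pair.
    K: 3x3 camera intrinsics. Returns a 4x4 transform or None on failure."""
    # Track the previous features into the current image (pyramidal KLT).
    cur_pts2d, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, prev_pts2d.reshape(-1, 1, 2), None)
    good = status.ravel() == 1
    p3d = prev_pts3d[good]
    p2d = cur_pts2d.reshape(-1, 2)[good]

    # Robust pose from 3D-2D correspondences; outliers rejected by RANSAC.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        p3d.astype(np.float64), p2d.astype(np.float64), K, None,
        reprojectionError=1.0)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()   # maps previous-frame points into the current frame
    return T
```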
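For the ground plane, a minimal RANSAC plane fit over the stereo point cloud might look like the following sketch; the iteration count and inlier distance are illustrative assumptions rather than the paper's parameters.

```python
# Minimal RANSAC plane fit: points is an (N, 3) array of 3D stereo points.
import numpy as np

def fit_ground_plane(points, iters=200, inlier_dist=0.05, rng=None):
    """Return (unit normal n, offset d) with n.p + d ~ 0 for ground points."""
    rng = np.random.default_rng() if rng is None else rng
    best_n, best_d, best_count = None, None, 0
    for _ in range(iters):
        # 1. Sample three points and form a candidate plane.
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n /= norm
        d = -n.dot(p1)
        # 2. Count points within the inlier distance of the candidate plane.
        count = np.count_nonzero(np.abs(points @ n + d) < inlier_dist)
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d
```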
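One simple way to realize the color-model idea is sketched below, assuming a single-Gaussian model fit to pixels that the geometric ground-plane test has already labeled as ground; the paper's learner and color space may differ.

```python
# Hypothetical single-Gaussian ground-color model and classifier.
import numpy as np

def fit_color_model(ground_pixels):
    """ground_pixels: (N, 3) array of color samples from ground-plane cells."""
    mean = ground_pixels.mean(axis=0)
    cov = np.cov(ground_pixels.T) + 1e-6 * np.eye(3)   # regularize
    return mean, np.linalg.inv(cov)

def ground_likelihood_mask(image, mean, inv_cov, max_dist=3.0):
    """Mark pixels within max_dist Mahalanobis units of the learned model."""
    diff = image.reshape(-1, 3).astype(np.float64) - mean
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)   # squared Mahalanobis distance
    return (d2 < max_dist ** 2).reshape(image.shape[:2])
```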
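The sight-line idea can be sketched on a 2D grid map: if a distant object is detected along a ray, the cells between the robot and that object were necessarily unobstructed and can be marked free. The grid conventions and cell labels below are assumptions.

```python
# Hypothetical sight-line marking on a 2D occupancy grid.
import numpy as np

def mark_sight_lines(grid, robot_cell, distant_hits, free=0, unknown=-1):
    """grid: 2D occupancy array; distant_hits: (row, col) cells where a distant
    object was seen. Unknown cells along each ray are relabeled as free."""
    r0, c0 = robot_cell
    for r1, c1 in distant_hits:
        steps = int(max(abs(r1 - r0), abs(c1 - c0)))
        # Walk the ray from the robot up to (but not including) the hit cell.
        for t in np.linspace(0.0, 1.0, steps, endpoint=False):
            r = int(round(r0 + t * (r1 - r0)))
            c = int(round(c0 + t * (c1 - c0)))
            if grid[r, c] == unknown:
                grid[r, c] = free
    return grid
```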
Good map-building is not sufficient for efficient robot motion. We have developed an efficient global planner based on previous gradient techniques [11], as well as a novel local controller that takes into account robot dynamics and searches a large space of robot motions. While we have made advances in many of the areas above, it is the integration of the techniques that is the biggest contribution of the research. The validity of our approach is tested in blind experiments, where we submit our code to an independent testing group that runs and validates it on an outdoor robot. In the most recent tests, we finished first out of a group of eight teams.
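A condensed sketch of a gradient-style grid planner in the spirit of [11] is given below: propagate a cost-to-goal field over the obstacle map with Dijkstra's algorithm, then follow its steepest descent from the robot's cell. The connectivity, cost handling, and helper names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative navigation-function planner on a 2D cost grid.
import heapq
import numpy as np

def navigation_function(costmap, goal):
    """costmap: 2D array of per-cell traversal costs (np.inf for obstacles);
    goal: (row, col). Returns the cost-to-goal field."""
    dist = np.full(costmap.shape, np.inf)
    dist[goal] = 0.0
    heap = [(0.0, goal)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                      # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < costmap.shape[0] and 0 <= nc < costmap.shape[1]:
                nd = d + costmap[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

def descend(dist, start):
    """Greedy steepest-descent path from start to the goal cell."""
    path, cur = [start], start
    while dist[cur] > 0.0:
        r, c = cur
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < dist.shape[0] and 0 <= c + dc < dist.shape[1]]
        nxt = min(nbrs, key=lambda p: dist[p])
        if dist[nxt] >= dist[cur]:        # stuck: goal unreachable from here
            break
        path.append(nxt)
        cur = nxt
    return path
```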
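The local controller itself is a contribution of the paper; the sketch below only conveys the general trajectory-rollout idea: forward-simulate a set of (v, w) arcs, discard those that hit obstacles, and pick the one whose endpoint scores best against the planner's cost-to-goal field. The sampling ranges, horizon, and grid conventions are assumptions.

```python
# Hypothetical arc-rollout local controller over a cost grid and navigation function.
import numpy as np

def pick_command(pose, navfn, costmap, resolution, origin,
                 v_samples=np.linspace(0.2, 1.0, 5),
                 w_samples=np.linspace(-1.0, 1.0, 11),
                 horizon=1.5, dt=0.1, obstacle_cost=np.inf):
    """pose: (x, y, theta) in map coordinates; returns a (v, w) command."""
    x0, y0, th0 = pose
    best, best_score = (0.0, 0.0), np.inf
    for v in v_samples:
        for w in w_samples:
            x, y, th = x0, y0, th0
            ok = True
            for _ in range(int(horizon / dt)):        # forward-simulate the arc
                x += v * np.cos(th) * dt
                y += v * np.sin(th) * dt
                th += w * dt
                r = int((y - origin[1]) / resolution)
                c = int((x - origin[0]) / resolution)
                if not (0 <= r < costmap.shape[0] and 0 <= c < costmap.shape[1]) \
                        or costmap[r, c] == obstacle_cost:
                    ok = False                        # arc leaves the map or hits an obstacle
                    break
            if ok and navfn[r, c] < best_score:
                best, best_score = (v, w), navfn[r, c]
    return best   # (linear, angular) command; (0, 0) if no arc was safe
```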
1.1 System Overview
This work was conducted as part of the DARPA Learning Applied to Ground Robotics (LAGR) project.