Mobile Robot Simultaneous Localization and Mapping Using Low Cost Vision Sensors
Cummins Engine Company, Columbus, IN 47201, USA
Pontifical Catholic University of Rio de Janeiro, 22453-900, Brazil
Summary. In this work, an information-based iterative algorithm is proposed to plan a mobile robot’s visual exploration strategy, enabling it to most efficiently build a graph model of its environment. The algorithm is based on determining the information present in sub-regions of a 2D panoramic image of the environment from the robot’s current location using a single camera fixed on the mobile robot. Using a metric based on Shannon’s information theory, the algorithm determines potential locations of nodes from which to further image the environment. Using a feature tracking process, the algorithm helps navigate the robot to each new node, where the imaging process is repeated. A Mellin transform and tracking process is used to guide the robot back to a previous node. The set of nodes and the images taken at each node are combined into a graph to model the environment. By tracing its path from node to node, a service robot can navigate around its environment. Experimental results show the effectiveness of this algorithm.
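The following is a minimal sketch, not the authors' implementation, of the information metric described in the summary: the 2D panoramic image is divided into sub-regions, the Shannon entropy of each sub-region's intensity histogram is computed, and the most informative regions are proposed as candidate directions for new nodes. Function names and parameters such as num_regions and top_k are illustrative assumptions, not values from the paper.

```python
# Sketch only: rank sub-regions of a panoramic image by Shannon entropy.
# Assumes a grayscale panorama as a 2D NumPy array with values in [0, 255].
import numpy as np

def region_entropy(region: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy H = -sum(p * log2 p) of a grayscale sub-region."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 255))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def candidate_nodes(panorama: np.ndarray, num_regions: int = 12, top_k: int = 3):
    """Split the panorama into vertical strips and return the indices of the
    top_k most informative (highest-entropy) strips plus all entropies."""
    strips = np.array_split(panorama, num_regions, axis=1)
    entropies = [region_entropy(s) for s in strips]
    ranked = np.argsort(entropies)[::-1]      # most informative first
    return ranked[:top_k], entropies

if __name__ == "__main__":
    # Synthetic panorama: a uniform wall plus one textured, high-entropy strip.
    rng = np.random.default_rng(0)
    pano = np.full((120, 1200), 128, dtype=np.uint8)
    pano[:, 400:500] = rng.integers(0, 256, size=(120, 100), dtype=np.uint8)
    best, ent = candidate_nodes(pano)
    print("candidate strip indices:", best)
```

In this toy example the textured strip receives the highest entropy score and would be selected as a location from which to image the environment further; the feature tracking and Mellin-transform homing steps described above are outside the scope of this sketch.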
1 Introduction

In the past decade, mobile service robots have been introduced into various non-industrial application areas such as entertainment, building services, and hospitals. The market for medical robots, underwater robots, surveillance robots, demolition robots, cleaning robots, and many other types of robots carrying out a multitude of services has grown significantly (Thrun 2003). The algorithmic complexity of personal and service robots has grown as a result of increased computational performance (Khatib 1999). This growth in algorithmic complexity has been accompanied by growth in hardware costs, a discouraging factor when aiming for large markets. Although hardware costs have declined relative to their sophistication, this economic trend still calls for the replacement of complex hardware architectures by more intelligent and cost-effective systems. Of particular interest here are the environment-sensing abilities of the robot, and algorithms must be developed to facilitate this behavior.

Mobile robot environment mapping falls into the category of Simultaneous Localization and Mapping (SLAM). In SLAM, a robot localizes itself as it maps the environment. To achieve the localization function, landmarks and their relative motions are monitored by the vision system. Although novel natural landmark selection methods have been proposed (Simhon et al. 1998), most SLAM architectures rely on identifying distinct, recognizable landmarks such as corners or edges in the environment (Taylor et al. 1998). This often limits the algorithms to well-structured environments, with poor performance in highly textured environments. These algorithms have been implemented with several different sensing modalities, such as stereo camera vision systems (Se et al. 2002) and laser range sensors (Tomatis et al. 2001).