Autonomous Mobile Robot That Can Read

Dominic Létourneau
Research Laboratory on Mobile Robotics and Intelligent Systems (LABORIUS), Department of Electrical Engineering and Computer Engineering, University of Sherbrooke, Sherbrooke, Quebec, Canada J1K 2R1
Email: [email protected]

François Michaud
Research Laboratory on Mobile Robotics and Intelligent Systems (LABORIUS), Department of Electrical Engineering and Computer Engineering, University of Sherbrooke, Sherbrooke, Quebec, Canada J1K 2R1
Email: [email protected]

Jean-Marc Valin
Research Laboratory on Mobile Robotics and Intelligent Systems (LABORIUS), Department of Electrical Engineering and Computer Engineering, University of Sherbrooke, Sherbrooke, Quebec, Canada J1K 2R1
Email: [email protected]

Received 18 January 2004; Revised 11 May 2004; Recommended for Publication by Luciano da F. Costa

The ability to read would surely contribute to the increased autonomy of mobile robots operating in the real world. The process seems fairly simple: the robot must be capable of acquiring an image of a message to read, extracting the characters, and recognizing them as symbols, characters, and words. Using an optical character recognition algorithm on a mobile robot, however, brings additional challenges: the robot has to control its position in the world and its pan-tilt-zoom camera to find textual messages to read, potentially having to compensate for its viewpoint of the message, and it must use its limited onboard processing capabilities to decode the message. The robot also has to deal with variations in lighting conditions. In this paper, we present our approach, demonstrating that it is feasible for an autonomous mobile robot to read messages of specific colors and fonts in real-world conditions. We outline the constraints under which the approach works and present results obtained using a Pioneer 2 robot equipped with a Pentium 233 MHz processor and a Sony EVI-D30 pan-tilt-zoom camera.

Keywords and phrases: character recognition, autonomous mobile robot.
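As a rough illustration of the reading pipeline sketched in the abstract (find a message of a known color, compensate for the viewpoint, decode the characters), here is a minimal Python sketch. It is not the authors' implementation: the OpenCV calls, the HSV color bounds, the largest-region heuristic, and the use of pytesseract as the recognizer are all assumptions made for illustration.

import cv2
import numpy as np
import pytesseract  # assumed stand-in for the paper's own character recognizer

def read_message(frame_bgr, lower_hsv=(100, 80, 80), upper_hsv=(130, 255, 255)):
    """Locate a message of a known color in the frame and return its text, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Segment pixels within the expected color range (here: an assumed blue message).
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Heuristic: assume the largest colored region is the message.
    rect = cv2.minAreaRect(max(contours, key=cv2.contourArea))
    w, h = int(rect[1][0]), int(rect[1][1])
    if w == 0 or h == 0:
        return None
    # Warp the (possibly skewed) region to a fronto-parallel view to
    # compensate for the robot's viewpoint of the message. The point
    # ordering below follows the usual cv2.boxPoints convention.
    src = cv2.boxPoints(rect).astype(np.float32)
    dst = np.array([[0, h - 1], [0, 0], [w - 1, 0], [w - 1, h - 1]], dtype=np.float32)
    warped = cv2.warpPerspective(frame_bgr, cv2.getPerspectiveTransform(src, dst), (w, h))
    # Binarize before decoding; Otsu thresholding gives some robustness
    # to variations in lighting conditions.
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary).strip()

In the paper's setting, a routine of this kind would run repeatedly while the robot positions itself and steers its pan-tilt-zoom camera toward candidate messages.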

1. INTRODUCTION

Giving mobile robots the ability to read textual messages is highly desirable for increasing their autonomy when navigating in the real world. Providing a map of the environment surely can help the robot localize itself in the world (e.g., [1]). However, even though we humans may use maps, we also exploit many written signs and characters to help us navigate in our cities, office buildings, and so on. Just think of road signs, street names, room numbers, exit signs, arrows giving directions, and so forth. We use maps to get a general idea of the directions to take to go somewhere, but we still rely on some form of symbolic representation to confirm our location in the world. This is especially true in dynamic and large open areas. Car travel illustrates this well: instead of only looking at a map and the vehicle's odometer, we rely on road signs for cues and indications of our progress toward our destination. Similarly, the ability to read characters, signs, and