Sensor Architecture and Data Fusion for Robotic Perception in Urban Environments at the 2007 DARPA Urban Challenge

Abstract. We will demonstrate the sensor and data fusion concept of the 2007 DARPA Urban Challenge vehicle assembled by Team CarOLO, Technische Universität Braunschweig. The perception system is based on a hybrid fusion concept, combining object-based and grid-based approaches in order to comply with the requirements of an urban environment. A variety of sensor systems and technologies is applied, providing a 360-degree view around the vehicle. Within the object-based subsystem, static and dynamic obstacles are tracked using an Extended Kalman Filter capable of tracking arbitrary contour shapes. Additionally, the grid-based subsystem extracts drivability information about the vehicle's driveway by combining the readings of laser scanners, a mono and a stereo camera system using a Dempster-Shafer-based data fusion approach. ...

1 Introduction

Robotic perception is one of the key issues in autonomous driving. While current automotive approaches commonly target specific driver assistance systems, the 2007 DARPA Urban Challenge called for a more general realization, capable of detecting the broad variety of target types common in an urban environment. In order to fulfill these requirements, the Technische Universität Braunschweig developed a vehicle (referred to in the following as Caroline) equipped with a distributed sensor network that combines radar, rotating and fixed-beam LIDAR as well as image processing principles in order to create a consistent artificial image of the vehicle's surroundings. Different data fusion concepts have been implemented in a hybrid perception approach, delivering static and dynamic obstacles in an object-based fashion, and drivability and 3-dimensional ground profile information in a grid-oriented description.
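To make the grid-based combination principle concrete, the following minimal sketch applies Dempster's rule of combination to the drivability evidence of a single grid cell. The frame of discernment {drivable, undrivable} with an explicit ignorance mass, the function name `combine`, and all mass values are illustrative assumptions; the discounting and sensor models actually used in Caroline are not reproduced here.

```python
# Minimal sketch of Dempster's rule of combination for one grid cell.
# Hypotheses: "D" (drivable), "U" (undrivable), "DU" (ignorance, i.e.
# mass assigned to the full frame). All values are illustrative only.

def combine(m1, m2):
    """Combine two mass functions over {D, U, DU} with Dempster's rule."""
    keys = ("D", "U", "DU")
    raw = {k: 0.0 for k in keys}
    conflict = 0.0
    for a in keys:
        for b in keys:
            p = m1[a] * m2[b]
            if a == b:
                raw[a] += p                      # identical focal elements
            elif "DU" in (a, b):
                raw[a if b == "DU" else b] += p  # intersection with ignorance
            else:
                conflict += p                    # D and U are disjoint
    norm = 1.0 - conflict                        # Dempster normalization
    return {k: v / norm for k, v in raw.items()}

# Hypothetical per-cell evidence: a laser scanner strongly supports
# "drivable", a camera-based classifier is less certain.
laser  = {"D": 0.7, "U": 0.1, "DU": 0.2}
camera = {"D": 0.5, "U": 0.2, "DU": 0.3}
print(combine(laser, camera))   # fused masses, here D rises to approx. 0.81
```

Agreeing evidence reinforces the drivability estimate, while the explicit ignorance mass keeps a single uncertain sensor from dominating the fused cell state.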

2 Sensor Concept

A variety of sensor types originating from the field of driver assistance systems has been chosen to provide detection of static and dynamic obstacles in the vehicle's surroundings, as depicted in Fig. 1:

Fig. 1. Sensor setup: 1: Hella IDIS LIDAR system; 2: SMS UMRR medium-range radar and SMS blind spot detectors (right and left); 3: IBEO Alasca XT (fusion cluster); 4: IBEO ML laser scanner; 5: Sick LMS 291; 6: stereo vision system; 7: IDS mono color camera

1. A stationary-beam LIDAR sensor has been placed in the front and rear section of the vehicle, providing a detection range of approx. 200 meters with an opening angle of 12 degrees. The unit is equipped with an internal preprocessing stage and thus delivers its readings in an object-oriented fashion, providing target distance, target width and relative target velocity with respect to the car-fixed sensor coordinate frame (see the sketch following this list).
2. 24 GHz radar sensors have been added to the front, rear, rear left and right side of the vehicle. While the center front and rear sensors provide a detection range of approx. 150 meters with an opening angle of 40 degrees, the rear right and left sensors oper
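The object-oriented interface of the stationary-beam LIDAR described in item 1 can be illustrated with a short sketch that projects such a reading from the sensor coordinate frame into the car-fixed frame. The data fields, the mounting pose and the function names are hypothetical, chosen only for illustration; the paper does not specify these interfaces.

```python
import math
from dataclasses import dataclass

# Hypothetical object-level reading as delivered by the stationary-beam
# LIDAR: range, bearing, target width and relative radial velocity, all
# expressed in the sensor coordinate frame. Field names are illustrative.

@dataclass
class LidarObject:
    range_m: float        # distance to target [m]
    bearing_rad: float    # azimuth within the 12-degree opening angle [rad]
    width_m: float        # estimated target width [m]
    rel_speed_mps: float  # radial velocity relative to the sensor [m/s]

# Assumed mounting pose of the front sensor in the car-fixed frame
# (x forward, y left, origin at the rear axle): 3.6 m ahead, no lateral
# offset, no yaw. These numbers are placeholders, not from the paper.
MOUNT_X, MOUNT_Y, MOUNT_YAW = 3.6, 0.0, 0.0

def to_car_frame(obj: LidarObject) -> tuple[float, float]:
    """Project a sensor-frame object position into the car-fixed frame."""
    # Polar to Cartesian in the sensor frame.
    xs = obj.range_m * math.cos(obj.bearing_rad)
    ys = obj.range_m * math.sin(obj.bearing_rad)
    # Rotate by the mounting yaw, then translate by the mounting offset.
    xc = MOUNT_X + xs * math.cos(MOUNT_YAW) - ys * math.sin(MOUNT_YAW)
    yc = MOUNT_Y + xs * math.sin(MOUNT_YAW) + ys * math.cos(MOUNT_YAW)
    return xc, yc

# A target 150 m ahead, 2 degrees off the sensor axis, closing at 4.2 m/s.
print(to_car_frame(LidarObject(150.0, math.radians(2.0), 1.8, -4.2)))
```

Every sensor in the network would carry its own mounting pose of this kind, so that all object hypotheses arrive in a common car-fixed frame before fusion.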