
Content-Based Driving Scene Retrieval Using Driving Behavior and Environmental Driving Signals

Yiyang Li, Ryo Nakagawa, Chiyomi Miyajima, Norihide Kitaoka, and Kazuya Takeda

Abstract With the increasing presence of drive recorders and advances in their technology, a large variety of driving data, including video images and sensor signals such as vehicle velocity and acceleration, can be continuously recorded and stored. Although these advances may contribute to traffic safety, the increasing amount of driving data complicates retrieval of desired information from large databases. One of our previous research projects focused on a browsing and retrieval system for driving scenes using driving behavior signals. To further its development, in this chapter we propose two driving scene retrieval systems. The first system, like its predecessor, measures similarities between driving behavior signals. Experimental results show that a retrieval accuracy of more than 95% is achieved for driving scenes involving stops, starts, and right and left turns. However, accuracy is relatively lower for scenes of right and left lane changes and of going up and down hills. The second system measures similarities between environmental driving signals, focusing on surrounding vehicles and road configuration. A subjective score from 1 to 5 is used to indicate retrieval performance, where a score of 1 means that the retrieved scene is completely dissimilar from the query scene and a score of 5 means that they are exactly the same. In a driving scene retrieval experiment, an average score of more than 3.21 is achieved for queries of driving scenes categorized as straight, curve, lane change, and traffic jam, when data from both road configuration and surroundings are employed.

Keywords Content-based retrieval • Driving data • Drive recorder • Similarity measure • Surrounding environment

Y. Li (*) • R. Nakagawa • C. Miyajima • N. Kitaoka • K. Takeda
Graduate School of Information Science, Nagoya University, Nagoya, Japan
e-mail: [email protected]

G. Schmidt et al. (eds.), Smart Mobile In-Vehicle Systems: Next Generation Advancements, DOI 10.1007/978-1-4614-9120-0_14, © Springer Science+Business Media New York 2014


14.1 Introduction

Drive recorders are used to investigate the causes of traffic accidents and to improve drivers’ safety awareness. With the increasing presence of more advanced drive recorders, a large variety of driving data, including video images and sensor signals such as vehicle velocity and acceleration, can be continuously recorded and stored. Although these advances may contribute to traffic safety, the increasing amount of driving data complicates retrieval of desired information from large databases. Some researchers have studied methods for recognizing driving events, such as lane changing and passing, using HMM-based dynamic models [1–3]. In our previous work, a similarity-based retrieval system for finding driving data was proposed [4]. However, since our method u
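The chapter's exact similarity measure is not shown in this excerpt, but similarity-based retrieval over driving behavior signals such as velocity profiles is commonly done with an alignment-based distance. As a minimal illustrative sketch (not the authors' method), the following computes a dynamic time warping (DTW) distance between two one-dimensional signals, so that the same maneuver recorded with a slight timing offset still scores as similar:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals.

    Smaller values indicate more similar driving behavior traces
    (e.g., velocity profiles), tolerating small timing differences.
    NOTE: an illustrative sketch, not the measure used in this chapter.
    """
    n, m = len(a), len(b)
    # cost[i, j] = minimal accumulated cost of aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # a[i-1] stretched
                                 cost[i, j - 1],      # b[j-1] stretched
                                 cost[i - 1, j - 1])  # one-to-one match
    return cost[n, m]

# Toy velocity profiles (m/s): a stopping scene, a time-shifted copy of it,
# and a starting scene.
stop_scene   = [10.0, 8.0, 5.0, 2.0, 0.0]
shifted_stop = [10.0, 10.0, 8.0, 5.0, 2.0, 0.0]
start_scene  = [0.0, 2.0, 5.0, 8.0, 10.0]

print(dtw_distance(stop_scene, shifted_stop))  # small: same maneuver
print(dtw_distance(stop_scene, start_scene))   # larger: different maneuver
```

In a retrieval setting, a query scene would be compared against every stored scene with such a distance and the nearest scenes returned; the scene names and signals above are invented for the example.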