Automated Vehicle Control at Freeway Lane-drops: a Deep Reinforcement Learning Approach



ORIGINAL PAPER

Automated Vehicle Control at Freeway Lane-drops: a Deep Reinforcement Learning Approach

Salaheldeen M. S. Seliman1 · Adel W. Sadek1 · Qing He1,2

Received: 3 March 2020 / Revised: 2 July 2020 / Accepted: 29 July 2020
© Springer Nature Singapore Pte Ltd. 2020

Abstract

This study develops an optimal, real-time, adaptive control algorithm for helping a Connected and Automated Vehicle (CAV) navigate a freeway lane-drop site (e.g., a work zone). The proposed traffic control strategy is based on the Deep Q-Network (DQN) Reinforcement Learning (RL) algorithm and is designed to determine the driving speed and lane-change maneuvers that enable the CAV to pass through the bottleneck with the least amount of delay. The DQN RL agent was trained using the microscopic traffic simulator VISSIM, with the learning focused on how the CAV can optimally maneuver through the lane-drop site while driving as close as possible to the freeway speed limit. VISSIM was also used to compare the performance of the DQN-controlled CAV against a human-driven vehicle with no intelligent control, in terms of the driving speed and travel time needed to traverse the lane-drop site under a congested, realistic traffic scenario. The findings demonstrate the promise of DQN RL in allowing the CAV to navigate the lane-drop site intelligently and optimally. Specifically, for the scenario on which the agent was trained, the reduction in CAV travel time was around 96 percent compared to the base case. The robustness of the RL agent was further tested on scenarios different from the training case; for those cases, the mean and standard deviation of the reductions in the DQN-controlled CAV's travel time, compared to the base case, were around 31% and 61%, respectively.

Keywords  Deep reinforcement learning · Automated or self-driving vehicles · Deep Q-network · Freeway lane-drops
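The abstract describes the controller only at a high level. The sketch below is a minimal, illustrative DQN agent with a discrete speed/lane-change action set, written in PyTorch. The state and action definitions, network size, and hyper-parameters are assumptions made here for illustration, not the authors' implementation; in the paper the agent interacts with the VISSIM simulator, which would supply the state observations and a reward tied to the CAV's delay through the bottleneck.

```python
# Minimal sketch of a DQN agent of the kind summarized in the abstract.
# All specifics (state vector, action set, reward, hyper-parameters) are
# illustrative assumptions, not the paper's implementation.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 8   # e.g., CAV speed/lane plus gaps and speeds of neighbours (assumed)
N_ACTIONS = 5   # e.g., decelerate, hold speed, accelerate, change left, change right


class QNetwork(nn.Module):
    """Small fully connected network mapping a state to one Q-value per action."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNAgent:
    def __init__(self, gamma=0.99, lr=1e-3, eps=0.1, buffer_size=10_000):
        self.q = QNetwork(STATE_DIM, N_ACTIONS)
        self.target_q = QNetwork(STATE_DIM, N_ACTIONS)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=buffer_size)
        self.gamma, self.eps = gamma, eps

    def act(self, state):
        """Epsilon-greedy choice among the discrete speed/lane-change actions."""
        if random.random() < self.eps:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            q_values = self.q(torch.tensor(state, dtype=torch.float32))
        return int(q_values.argmax().item())

    def remember(self, s, a, r, s_next, done):
        """Store one simulator transition for experience replay."""
        self.buffer.append((s, a, r, s_next, done))

    def train_step(self, batch_size=32):
        """One TD update on a sampled mini-batch, using the standard DQN target."""
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = map(
            lambda x: torch.tensor(x, dtype=torch.float32), zip(*batch)
        )
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.target_q(s2).max(1).values * (1 - d)
        loss = nn.functional.smooth_l1_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def update_target(self):
        """Periodically copy online weights into the target network."""
        self.target_q.load_state_dict(self.q.state_dict())
```

In use, the simulation loop would call act() on the current state, apply the returned speed or lane-change command to the CAV in the simulator, store the observed transition with remember(), and call train_step() (and periodically update_target()) as training proceeds.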

* Adel W. Sadek [email protected]

1 Department of Civil, Structural, and Environmental Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA

2 Department of Industrial and Systems Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA

Introduction

Recently, there has been unprecedented interest in Automated Driving Systems (ADS), also known as Connected and Automated Vehicles (CAVs) or self-driving vehicles. This is evidenced by the number of companies striving to develop self-driving capabilities, including major automotive and technology companies along with several start-ups, as well as by the number of research studies, scientific papers, conferences, pilots of the technology, and even limited commercial deployments. ADS or CAVs have the
potential to revolutionize transportation, resulting in major paradigm shifts in the way we move. Among the purported benefits of CAV technology are: (1) improved safety (by reducing crashes caused by driver error and/or incapacitation); (2) increased human pro