A Multimodal Path Planning Approach to Human Robot Interaction Based on Integrating Action Modeling



Yosuke Kawasaki 1 · Ayanori Yorozu 1 · Masaki Takahashi 2 · Enrico Pagello 3

Received: 11 December 2019 / Accepted: 5 August 2020
© Springer Nature B.V. 2020

Abstract

To complete a task consisting of a series of actions involving human-robot interaction, a robot must plan a motion that considers each action individually as well as its relation to the following action. We focus on the specific action of "approaching a group of people" in order to accurately obtain the human data needed to make tasks involving interaction with multiple people run more smoothly. The required movement depends on the characteristics of the sensors that are important for the task and on the placement of people at and around the destination. Given the variety of tasks and possible placements of people, pre-calculating destinations and paths is difficult. This paper therefore presents a navigation system that accurately obtains human data based on sensor characteristics, task content, and real-time sensor data for processes involving human-robot interaction (HRI); the method does not navigate toward a previously determined static point. Our goal is achieved by multimodal path planning based on integrated action modeling, which considers both voice and image sensing of the interacting people as well as obstacle avoidance. We experimentally verified our method using a robot in a coffee shop environment.

Keywords Robot navigation · Human-robot interaction · Action modeling · Multimodal path planning
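The core idea of the abstract, planning a path over a cost map that blends obstacle costs with per-modality "observability" costs (e.g., where a camera or microphone would sense the group well), can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: the grid, weights, and cost layers are invented for the example, and a plain Dijkstra search stands in for the paper's planner.

```python
import heapq
import numpy as np

def dijkstra(cost, start, goal):
    """Shortest path on a 4-connected grid with per-cell traversal costs.

    Cells with infinite cost (obstacles) are never entered.
    """
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[start] = 0.0
    pq = [(0.0, start)]
    came = {}
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and np.isfinite(cost[nr, nc]):
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    came[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path from goal back to start.
    path, node = [goal], goal
    while node != start:
        node = came[node]
        path.append(node)
    return path[::-1]

# Hypothetical cost layers on a 10x10 grid: an obstacle region, plus
# lower costs in regions where vision and audio sensing would work well.
base = np.ones((10, 10))
base[4:6, 4:6] = np.inf                                      # obstacle
vision_cost = np.ones((10, 10)); vision_cost[:, 7:] = 0.2    # good viewpoints
audio_cost = np.ones((10, 10)); audio_cost[7:, :] = 0.2      # good listening area
combined = base + 0.5 * vision_cost + 0.5 * audio_cost       # weighted blend

path = dijkstra(combined, (0, 0), (9, 9))
print(path[0], path[-1])  # (0, 0) (9, 9)
```

The blend weights (0.5 each) are arbitrary here; in a multimodal planner they would reflect which sensor matters most for the upcoming action, so the same map yields different paths for, say, a listening action versus a face-recognition action.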

Electronic supplementary material The online version of this article (https://doi.org/10.1007/s10846-020-01244-7) contains supplementary material, which is available to authorized users.

✉ Yosuke Kawasaki
[email protected]

1 Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522, Japan

2 Department of System Design Engineering, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522, Japan

3 Intelligent Autonomous System Lab, Department of Information Engineering, University of Padova, Padua, Italy, and IT+Robotics Srl, Vicenza, Italy

1 Introduction

Human-robot interaction (HRI) is a very active research field that focuses on communication between people and robots [1]. A robot is expected to behave like a human partner while performing an interactive task in a human environment. To realize smoother performance of

interactive tasks, robot behavior should be designed to facilitate the assigned task. A robot's tasks can be completed by the sequential execution of multiple actions, including movements, manipulations, spoken dialogue, and so on [2], as shown in Fig. 1. To complete a task sequentially, the motion should be planned by considering each action individually together with its connection to the next action. Some motion planners have considered the efficient execution of a robot's sequential actions [3–6]. We define an action modeling a