A Framework Design for Human-Robot Interaction

Abstract Multimodal human-robot interaction integrates various physical communication channels for face-to-face interaction. However, face-to-face interaction is not the only way people communicate. The proposed framework supports more flexible communication, such as communicating with others at a distance. Both intimate and loose interactions fall under ubiquitous multimodal human-robot interaction. This work therefore presents a framework design for human-robot interaction that uses ubiquitous voice control together with face localization and authentication to realize intimate and loose interactions. Simulation results demonstrate the practicality of the proposed framework.

Introduction Humanoid and human-friendly robots are useful in daily life because they can assist individuals with everyday tasks through easy, mutual communication. Face-to-face interaction is the most commonly adopted form of human-robot interaction. In addition, individuals can interact with robots simply, naturally, and efficiently without location or time constraints; such communication is called ‘‘ubiquitous human-robot interaction’’. Many investigations of multimodal human-robot interaction focus on the various communication channels used in social interaction, such as speech and visual communication [1–4]. Unfortunately, face-to-face interaction is the only form of communication in most of these systems. Ubiquitous human-robot interaction has been proposed [5] to exploit gesture and voice to realize different kinds of interaction, such as intimate and loose interactions. For loose interaction, the authors of [5] employed a camera that surveys the room to detect whether someone is requesting interaction by gesturing. However, this approach is not the most natural or efficient way to interact with robots, because individuals must wave their hands to call a robot over before they can instruct it. Speech may therefore be a better choice for interaction between humans and robots. The proposed framework develops a ubiquitous voice control subsystem that deploys six microphones in the room to detect whether someone is issuing a request by voice command for loose interaction. Furthermore, one microphone installed in each robot records close talking for intimate interaction. Alongside speech, a face localization and authentication subsystem is applied to recognize the user's face for intimate interaction.
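The paper does not give an implementation at this point, but the two-subsystem arrangement can be illustrated with a minimal sketch. The Python code below is a hypothetical illustration only: the names UbiquitousVoiceControl, FaceAuthenticator, VoiceRequest, and dispatch are ours, not from the paper, and the keyword check stands in for the actual voice-command detection. It shows the routing logic implied by the text: a request heard on the six room microphones triggers loose interaction, while a request on a robot's own microphone leads to intimate interaction only after face authentication succeeds.

```python
from dataclasses import dataclass
from enum import Enum, auto


class InteractionMode(Enum):
    LOOSE = auto()     # user calls a robot from a distance via the room microphones
    INTIMATE = auto()  # user talks closely to a robot that has verified the face


@dataclass
class VoiceRequest:
    """A detected spoken request and where it came from."""
    command: str
    microphone_id: int      # 0..5 for the six room microphones, -1 for a robot's own mic
    from_room_array: bool   # True if picked up by the room-wide array


class UbiquitousVoiceControl:
    """Stand-in for the ubiquitous voice control subsystem (six room microphones)."""

    TRIGGER_WORDS = {"robot", "come", "here"}

    def detect_request(self, transcript: str, microphone_id: int) -> VoiceRequest | None:
        # A real subsystem would run keyword spotting / speech recognition per channel;
        # here we only look for trigger words in an already-transcribed string.
        if any(word in transcript.lower() for word in self.TRIGGER_WORDS):
            return VoiceRequest(transcript, microphone_id, from_room_array=True)
        return None


class FaceAuthenticator:
    """Stand-in for the face localization and authentication subsystem."""

    def __init__(self, enrolled_users: set[str]):
        self.enrolled_users = enrolled_users

    def authenticate(self, detected_user: str | None) -> bool:
        # A real subsystem would localize a face in the camera image and match it
        # against enrolled templates; we simply check a label for illustration.
        return detected_user in self.enrolled_users


def dispatch(request: VoiceRequest, authenticator: FaceAuthenticator,
             detected_user: str | None) -> InteractionMode:
    """Route a voice request to loose or intimate interaction."""
    if request.from_room_array:
        # Heard over the room array: the user is at a distance, so the robot
        # handles this as loose interaction (e.g. it moves toward the caller).
        return InteractionMode.LOOSE
    # Heard on the robot's own microphone: require face authentication first.
    if authenticator.authenticate(detected_user):
        return InteractionMode.INTIMATE
    raise PermissionError("Face authentication failed; intimate interaction refused.")


if __name__ == "__main__":
    voice = UbiquitousVoiceControl()
    faces = FaceAuthenticator(enrolled_users={"alice"})

    # A distant user calls the robot via room microphone #3 (loose interaction).
    distant = voice.detect_request("Robot, come here please", microphone_id=3)
    print(dispatch(distant, faces, detected_user=None))

    # A close-talking user on the robot's own microphone, face already recognized.
    close = VoiceRequest("turn on the lights", microphone_id=-1, from_room_array=False)
    print(dispatch(close, faces, detected_user="alice"))
```

The split mirrors the framework's design choice: the room-wide microphones only decide that someone, somewhere, wants service, whereas the per-robot microphone and camera decide who is speaking and whether that person is authorized for close interaction.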