On Designing Expressive Robot Behavior: The Effect of Affective Cues on Interaction
ORIGINAL RESEARCH
Amir Aly · Adriana Tapus

Received: 26 February 2020 / Accepted: 19 July 2020
© Springer Nature Singapore Pte Ltd 2020
Abstract
Creating a convincing affective robot behavior is a challenging task. In this paper, we coordinate different communication modalities, namely speech, facial expressions, and gestures, so that the robot interacts with human users in an expressive manner. The proposed system uses videos to induce target emotions in the participants and to initiate interactive discussions between each participant and the robot about the content of each video. During each interaction experiment, the expressive ALICE robot generates a multimodal behavior adapted to the affective content of the video, and the participant evaluates its characteristics at the end of the experiment. This study discusses the multimodality of the robot behavior and its positive effect on the clarity of the emotional content of the interaction. Moreover, it provides personality- and gender-based evaluations of the emotional expressivity of the generated behavior in order to investigate how it was perceived by introverted versus extroverted and male versus female participants within a human–robot interaction context.

Keywords Speech synthesis · Facial expressions modelling · Gesture synthesis · Embodiment of affective robot behavior · Human perception of the robot behavior
Introduction

Robots are moving into human social spaces and collaborating with people in different tasks. An intelligent social robot is required to adapt the affective content of its generated behavior to the context of interaction and to the profile of the user, so as to increase the credibility and appropriateness of its interactive intents. Speech, facial expressions, and gestures can convey synchronized affective information that enhances behavior expressivity [18]. Gestures and facial expressions play an important role in clarifying speech, particularly when the speech signal deteriorates [28]. Different studies in the Human–Robot Interaction (HRI) and Human–Computer Interaction (HCI) literature discussed synthesizing affective speech [40, 58] and facial expressions [13, 65], in addition to generating gestures [20, 61]. Other studies investigated the effect of multimodal speech and facial expression information on emotion recognition, compared to unimodal information [17]. However, to our knowledge, these studies, among others, have not proposed a general framework that bridges affective speech (Sect. "Affective Speech Synthesis") on one side and both adaptive gestures [1, 3–5] and facial expressions (Sect. "Facial Expressivity") on the other side, as illustrated in our current study. The proposed framework allows explicit control over prosody parameters so as to better express emotion. In addition, it considers the relationship between emotion and gestures, which allows the generated robot gestural behavior to be adapted to the context of interaction.
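To make the coordination idea concrete, the following is a minimal, hypothetical Python sketch of how a single target emotion label could drive synchronized prosody, facial expression, and gesture parameters. The parameter names, values, and the generate_behavior function are illustrative assumptions for exposition only; they are not the framework, parameter set, or robot API described in this paper.

# Hypothetical sketch: driving speech prosody, facial expression, and gesture
# parameters from one shared emotion label. All names and values below are
# illustrative placeholders, not the parameters used in the paper.

from dataclasses import dataclass

@dataclass
class MultimodalBehavior:
    pitch_shift: float        # relative change in F0 (e.g., +0.3 = 30% higher)
    speech_rate: float        # relative speaking rate
    volume: float             # relative loudness
    facial_expression: str    # label sent to the face controller
    gesture_amplitude: float  # scaling of arm/head gesture extent

# Illustrative emotion-to-behavior mapping (placeholder values).
EMOTION_PROFILES = {
    "happiness": MultimodalBehavior(0.3, 1.15, 1.1, "smile", 1.2),
    "sadness":   MultimodalBehavior(-0.2, 0.85, 0.8, "frown", 0.6),
    "anger":     MultimodalBehavior(0.1, 1.1, 1.3, "brow_lowered", 1.3),
    "neutral":   MultimodalBehavior(0.0, 1.0, 1.0, "neutral", 1.0),
}

def generate_behavior(text: str, emotion: str) -> dict:
    """Return synchronized commands for the speech, face, and gesture channels."""
    profile = EMOTION_PROFILES.get(emotion, EMOTION_PROFILES["neutral"])
    return {
        "speech": {"text": text,
                   "pitch_shift": profile.pitch_shift,
                   "rate": profile.speech_rate,
                   "volume": profile.volume},
        "face": {"expression": profile.facial_expression},
        "gesture": {"amplitude": profile.gesture_amplitude},
    }

if __name__ == "__main__":
    print(generate_behavior("That video was really moving.", "sadness"))

A real system would derive such values from an affective analysis of the interaction rather than from a fixed table; the table here only serves to show how the three channels can be driven from one shared emotion label.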