APML, a Markup Language for Believable Behavior Generation
Berardina De Carolis (1), Catherine Pelachaud (2), Isabella Poggi (3), and Mark Steedman (4)

1 Dipartimento di Informatica, University of Bari, Bari, Italy
  decarolis@di.uniba.it
2 LINC - Paragraphe, IUT of Montreuil - University of Paris, Paris, France
  c.pelachaud@iut.univ-paris8.fr
3 Dipartimento di Educazione, University of Rome Three, Rome, Italy
  poggi@uniroma3.it
4 School of Informatics, University of Edinburgh, Edinburgh, UK
  steedman@informatics.ed.ac.uk
Summary. Developing an embodied conversational agent able to exhibit humanlike behavior while communicating with other virtual or human agents requires enriching the agent's dialogue with non-verbal information. Our agent, Greta, is defined as two components: a Mind and a Body. Her Mind reflects her personality, her social intelligence, and her emotional reactions to events occurring in the environment. Her Body corresponds to her physical appearance, which is able to display expressive behaviors. We designed a Mind-Body interface that takes as input a specification of a discourse plan in an XML language (DPML) and enriches this plan with the communicative meanings to be attached to it, producing input to the Body in a new XML language (APML). Moreover, we have developed a language for describing facial expressions; it combines basic facial expressions with operators to create complex facial expressions. The purpose of this chapter is to describe these languages and to illustrate our approach to generating the behavior of an agent able to act consistently with her goals and with the context of the interaction.
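To make the pipeline concrete, the following is a hypothetical sketch of the two annotation levels just described: a DPML-like discourse-plan fragment as it might enter the Mind-Body interface, and the corresponding APML-like markup the interface might emit, where the text to be uttered is wrapped with communicative-meaning tags. The specific tag and attribute names shown here are illustrative assumptions, not a verbatim reproduction of the DPML and APML specifications defined later in the chapter.

```xml
<!-- Illustrative DPML-like input: a node of the discourse plan
     (tag and attribute names are assumed for illustration) -->
<dpml>
  <node id="n1" goal="Inform(User, TakeMedicine)"/>
</dpml>

<!-- Illustrative APML-like output: the same content enriched with
     communicative meanings to drive the Body's expressive behavior -->
<apml>
  <performative type="inform">
    <affective type="sorry-for">
      You should take your medicine now.
    </affective>
  </performative>
</apml>
```

In this sketch, the Mind-Body interface is what decides that the plan node should be realized as an "inform" act colored by a "sorry-for" affective state; the Body would then map those tags onto concrete signals such as facial expression, gaze, and intonation.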
Berardina De Carolis et al., in H. Prendinger et al. (eds.), Life-Like Characters, © Springer-Verlag Berlin Heidelberg 2004

1 Introduction

Humans communicate using verbal and non-verbal signals: body posture, gestures (pointing at something, describing the dimensions of an object, etc.), facial expressions, gaze (making eye contact, looking down, or looking up at a particular object), and intonation and prosody, in combination with words and sentences. The way in which people communicate, and therefore the signals that they employ, is influenced by their personality, goals, and affective state, as well as by the context in which the conversation takes place [12]. One very active research area in the field of intelligent agents is devoted to constructing Embodied Conversational Agents (ECAs). An ECA is an agent embedded in a virtual body that interacts with another agent (a human user or another virtual agent) in a human-like and, in particular, believable manner. Believability is mostly related to the ability to express emotion [3] and to exhibit a given personality [20]. However, according to recent literature [36, 13], an agent is more believable if it can also behave in ways typical of a given culture and, finally, if it has a personal communicative style [5, 34]. Developing such a "computer conversationalist" able to exhibit these added dimensions of communication requires moving from natural language generation (NLG) to multi-modal behavior generation. One possible approach is to consider body and mind as strictly and necess