A Virtual Interpreter for the Italian Sign Language



Abstract. In this paper, we describe a software module for the animation of a virtual interpreter that translates from Italian to the Italian Sign Language (LIS). The system we describe takes a "synthetic" approach to the generation of the sign language, by composing and parametrizing pre-captured and hand-animated signs to adapt them to the context in which they occur.

Keywords: Animated agents, sign languages, animation languages.

1 Introduction

Sign languages build communication through features that involve the use of the hands, but also non-manual components, such as the head, the torso, and facial expressions. Recently, many projects have employed animated agents for the construction of interfaces that support communication with deaf people [13,3,7,4,1]. The scenarios in which these agents can be used include different communicative situations and support types, ranging from web pages and mobile devices to television broadcasting.

In this paper, we describe a system that generates the real-time animation of a virtual interpreter for the Italian Sign Language (LIS, Lingua Italiana dei Segni), as part of the ATLAS (Automatic Translation into sign LAnguageS) project for Italian-to-LIS translation.¹ A LIS sentence consists of a sequence of signs, ordered according to the LIS word order and accompanied by possible parallel constructs anchored in specific phenomena. Such phenomena include the positioning of the gesture in the space in front of the interpreter (the "signing space"), and the increase or reduction of the "size" of a sign or its repetition (e.g., for the plural). Other, more complex phenomena involve the movement of the hands through the space from and to context-dependent positions, as for the verbs "to go" and "to give" [10], possibly mixed with the use of hand-shape to indicate specific types of entities [8].
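To make the parametrization concrete, the sketch below shows one possible way to represent a contextualized occurrence of a sign. It is purely illustrative and not the ATLAS representation: all names (SignInstance, Location, amplitude, repetitions, handshape) are invented here to mirror the phenomena just listed, i.e., placement in the signing space, resizing, repetition for the plural, context-dependent start and end positions, and handshape classifiers.

    from dataclasses import dataclass, field
    from typing import Optional

    # Hypothetical point in the signing space in front of the interpreter.
    @dataclass
    class Location:
        x: float = 0.0  # left/right of the interpreter
        y: float = 0.0  # down/up
        z: float = 0.0  # near/far

    # Illustrative parametrization of one occurrence of a sign: a pre-captured
    # animation from the repository, adapted to the context of the sentence.
    @dataclass
    class SignInstance:
        gloss: str                        # identifier of the stored sign, e.g. "GIVE"
        location: Location = field(default_factory=Location)  # placement in the signing space
        amplitude: float = 1.0            # scaling of the movement (the "size" of the sign)
        repetitions: int = 1              # > 1, e.g., to mark the plural
        start: Optional[Location] = None  # context-dependent start (e.g. for "to give")
        end: Optional[Location] = None    # context-dependent end
        handshape: Optional[str] = None   # optional handshape marking the entity type

    # Example: "GIVE" moving from the position assigned to the lady
    # to the position assigned to the children in the signing space.
    give = SignInstance(gloss="GIVE", start=Location(x=-0.3), end=Location(x=0.4))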

¹ This work has been partially supported by the Converging Technologies project ATLAS, funded by Regione Piemonte, Italy.


Fig. 1. In the ATLAS project, the interpretation process is performed in two steps. First, the text ("The lady gave the ice cream to the children") is translated into an intermediate representation of its syntactic and semantic relations, called AEWLIS ("LADY THAT-BABY ICECREAM GIVE DONE"). Then, the LIS virtual interpreter animates the corresponding LIS signs.

Computer animation is a natural candidate for the development of systems that communicate by using a sign language, since the generation of a full-fledged sign language requires a "synthetic" approach to adapt the default signs (archived in a repository) to the specific context of a sentence [12,6]. So, pre-recorded animations, which include the interpreter's body and face, must be open to parametrization. The ATLAS project takes a mixed approach to the generation of signs, by using an animation language to compose and parametrize pre-captured and hand-animated signs, stored in a repository of signs.
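As a concrete, and again purely illustrative, sketch of this compositional approach, the fragment below fetches base signs from a repository by gloss and wraps them with context-dependent parameters before placing them on a timeline. It is not the ATLAS animation language, whose syntax is not shown in this excerpt; the repository contents, clip names, and functions (realize, compose) are all invented for the example.

    from typing import Dict, Iterable, List, Tuple

    # Hypothetical repository mapping glosses to pre-captured or
    # hand-animated clips; the clip names are invented.
    SIGN_REPOSITORY: Dict[str, str] = {
        "LADY": "lady.anim",
        "THAT-BABY": "that_baby.anim",
        "ICECREAM": "icecream.anim",
        "GIVE": "give.anim",
        "DONE": "done.anim",
    }

    def realize(gloss: str, **params: float) -> str:
        """Look up a base sign and 'parametrize' it. In a real system the
        parameters (placement, amplitude, repetitions, ...) would drive the
        animation engine; here they are simply recorded in a string."""
        clip = SIGN_REPOSITORY[gloss]
        if params:
            settings = ", ".join(f"{k}={v}" for k, v in sorted(params.items()))
            return f"{clip}[{settings}]"
        return clip

    def compose(glosses: Iterable[Tuple[str, Dict[str, float]]]) -> List[str]:
        """Compose a LIS sentence as an ordered timeline of parametrized signs."""
        return [realize(gloss, **params) for gloss, params in glosses]

    # The gloss sequence of Fig. 1, with context-dependent parameters for GIVE.
    timeline = compose([
        ("LADY", {"x": -0.3}),                      # place the lady on the left
        ("THAT-BABY", {"x": 0.4}),                  # place the children on the right
        ("ICECREAM", {}),
        ("GIVE", {"start_x": -0.3, "end_x": 0.4}),  # move from the lady to the children
        ("DONE", {}),
    ])
    print(timeline)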