Multimodal Object-Based Environment Representation for Assistive Robotics



Yohan Breux1 · Sebastien Druon1

Accepted: 27 September 2019
© Springer Nature B.V. 2019

Abstract
Autonomous robots are nowadays successfully used in industrial environments, where tasks follow predetermined plans and the world is a known (and closed) set of objects. The context of social robotics brings new challenges to the robot. First of all, the world is no longer closed: new objects can be introduced at any time, so it is impossible to build an exhaustive list of them or to rely on a precomputed set of descriptors. Moreover, natural interactions with a human being do not follow any precomputed graph of sequences or grammar. To deal with the complexity of such an open world, a robot can no longer rely solely on its sensor data: a compact representation for comprehending its surroundings is needed. Our approach focuses on task-independent environment representation in settings where human–robot interactions are involved. We propose a global architecture bridging the gap between the perception and semantic modalities through instances (physical realizations of semantic concepts). In this article, we describe a method for the automatic generation of an object-related ontology. Based on it, a practical formalization of the ill-defined notion of "context" is discussed. We then address human–robot interaction in our system through a description of user request processing. Finally, we illustrate the flow of our model on two showcases that demonstrate the validity of the approach.

Keywords Knowledge representation · Human–robot interaction · Natural language processing · Assistive robotics
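To make the central idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of an object-based environment representation in which semantic concepts are linked to their physical realizations (instances), and unknown objects encountered in an open world trigger growth of the concept set. All class and field names here are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Concept:
    """A semantic concept in the ontology, e.g. 'cup'."""
    name: str
    parents: List[str] = field(default_factory=list)  # e.g. ["container"]

@dataclass
class Instance:
    """A physical realization of a concept, as perceived by the robot."""
    concept: str                 # name of the concept it realizes
    pose: Tuple[float, float, float]  # (x, y, z) in the robot frame
    features: Dict[str, float] = field(default_factory=dict)  # perceptual descriptors

class Environment:
    """Object-based map bridging perception (instances) and semantics (concepts)."""
    def __init__(self):
        self.concepts: Dict[str, Concept] = {}
        self.instances: List[Instance] = []

    def add_concept(self, c: Concept) -> None:
        self.concepts[c.name] = c

    def observe(self, inst: Instance) -> None:
        # Open world: an observation of an unknown concept extends the
        # concept set rather than being rejected.
        if inst.concept not in self.concepts:
            self.add_concept(Concept(inst.concept))
        self.instances.append(inst)

    def instances_of(self, concept_name: str) -> List[Instance]:
        return [i for i in self.instances if i.concept == concept_name]

env = Environment()
env.add_concept(Concept("cup", parents=["container"]))
env.observe(Instance("cup", pose=(0.4, 0.1, 0.8), features={"red": 0.9}))
env.observe(Instance("book", pose=(0.2, -0.3, 0.8)))  # previously unknown concept

print(len(env.instances_of("cup")))  # 1
print("book" in env.concepts)        # True
```

The point of the sketch is the separation of levels: a user request expressed over concepts ("bring me a cup") can be resolved against concrete instances with poses, while new percepts can enlarge the semantic layer at any time.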

1 Introduction
In the previous decades, efforts have been made to understand and exploit the social benefits of robots in human environments [24]. In particular, some applications focus on therapeutics [17,30,41,54], education [3,19] and human–robot cooperation [4,43]. Unlike industrial applications, where the environment is controlled and interaction with the human operator is limited, such applications require robots to have a deeper understanding of their surroundings. Furthermore, these interactions are easier for the operator when performed orally. Because of an early development towards industrial applications, most research efforts in robotics are task oriented

Yohan Breux [email protected]
Sebastien Druon [email protected]

1 Laboratory of Informatics, Robotics and MicroElectronics, University of Montpellier, 161 rue Ada, 34095 Montpellier, France

and focus on action descriptions [51,52]. They use specific, predefined environment representations for the task at hand. However, these representations lack genericity and cannot compactly represent abstract knowledge about the robot's surroundings. Furthermore, as underlined above, robots and humans should share a common way to describe the world and its concepts through natural language. It is important here to define the meaning of "environment representation", as it depends on the ap