Third Point of View Augmented Reality for Robot Intentions Visualization

Abstract. Lightweight, head-up displays integrated in industrial helmets make it possible to provide contextual information in industrial scenarios such as maintenance. Moving from single-display, single-camera solutions to stereo perception and display opens new interaction possibilities. In particular, this paper addresses the case of information shared by a Baxter robot and displayed to the user overlooking the real scene. The system design and interaction ideas are presented.

A new generation of robotic systems is being introduced in working environments ranging from small workshops to large factories. Thanks to advances in actuation and perception, these robots are capable of cooperating with human workers in the execution of tasks, rather than performing their own tasks independently inside a highly structured workflow. Examples of such robotic systems are Baxter [1] from Rethink Robotics and the ABB YuMi, anticipated in the research world by many projects [2]. With the increased capability of these robots and the expected cooperative interaction, the operator needs to understand the robot's intention and current state as much as the robot needs to understand the operator's intentions: the former for supervision, the latter for safety and proactivity.

The nature of the Human-Robot Communication (HRC) between these robots and human workers needs to take into account the specificities of the working environment, which limit traditional communication channels [3]: sound levels possibly above average, and direct manipulation of touch devices hampered by gloves or by the working activity itself. A specific need for the operator is the possibility of understanding the intention of the robot contextualized in the working environment, that is, to understand whether the object chosen for manipulation, or the target location, is the correct one. There are several display options for providing this information, spanning from traditional ones, such as display panels placed in the environment, on the robot, worn by the operator or simply held in the hand, to projective solutions or eye- and head-mounted displays. In any of these cases we are interested in presenting the selection highlighted in the real world by means of the capabilities offered by Augmented Reality (AR).
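As an illustration of how such a highlight can be anchored to the real scene, the following sketch projects a 3D target point reported by the robot (e.g. the object it intends to grasp) into the operator's head-mounted camera image using a standard pinhole model. The function and parameter names, the intrinsic matrix and the transform values are illustrative assumptions, not taken from the paper.

import numpy as np

def project_target(target_world, T_world_to_cam, K):
    # Project a 3D target point (the robot's intention, e.g. a grasp target)
    # into the operator's camera image so it can be rendered as an AR highlight.
    # target_world   : (3,) point chosen by the robot, world coordinates [m]
    # T_world_to_cam : (4, 4) rigid transform from world to camera frame
    #                  (obtained from helmet tracking / extrinsic calibration)
    # K              : (3, 3) camera intrinsic matrix
    # Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    p = T_world_to_cam @ np.append(target_world, 1.0)  # to camera frame (homogeneous)
    if p[2] <= 0.0:
        return None                                    # behind the camera, nothing to draw
    uv = K @ (p[:3] / p[2])                            # pinhole projection
    return uv[0], uv[1]

# Illustrative values: intrinsics of the helmet camera and its pose in the world.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_world_to_cam = np.eye(4)
T_world_to_cam[2, 3] = 1.5             # world origin 1.5 m in front of the camera
target = np.array([0.10, 0.00, 0.00])  # hypothetical object selected by the robot
print(project_target(target, T_world_to_cam, K))  # pixel where the highlight is drawn

On a stereo head-up display of the kind considered here, the same projection would be repeated per eye with the corresponding extrinsics, so that the overlay appears at a consistent depth over the real object.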

Industrial plants commonly require specialist maintenance expertise; as a consequence, plants located in remote sites, far from where the components were produced, can be difficult to service effectively. Addressing major equipment failures often requires specialist on-site intervention, which can result in significant down-time and cost; more importantly, some maintenance and corrective procedures are so complicated or site-specific that a local engineer often cannot proceed without complex instructions. The potential of Augmented Reality and Robotic Assistance in these frequent situa