Human Joint Angle Estimation and Gesture Recognition for Assistive Robotic Vision
1 Inria GALEN & CentraleSupélec Paris, Châtenay-Malabry, France ([email protected])
2 National Technical University of Athens, Athens, Greece
3 University of Heidelberg, Heidelberg, Germany
Abstract. We explore new directions for automatic human gesture recognition and human joint angle estimation as applied to human-robot interaction, in the context of a challenging real-life assistive living task with elderly subjects. Our contributions include state-of-the-art approaches for both low- and mid-level vision, as well as for higher-level action and gesture recognition. The first direction investigates a deep learning based framework for the challenging task of human joint angle estimation on noisy real-world RGB-D images. The second direction employs dense trajectory features for online processing of videos, enabling automatic gesture recognition with real-time performance. Our approaches are evaluated both qualitatively and quantitatively on a newly acquired dataset constructed from a challenging real-life assistive living scenario with elderly subjects.
1 Introduction
The increase in the elderly population is a worldwide fact [1]. In this context, computer vision and machine learning research applied to human-robot interaction from the perspective of assistive living has both scientific interest and social benefits. In this work we focus on two prominent directions and apply the respective methods in the context of a challenging assistive living human-robot interaction scenario. This involves a robotic rollator that interacts with elderly subjects through visual sensors, assisting them in everyday activities. These directions involve the use of state-of-the-art deep learning based approaches for human joint angle estimation, with the future goal of subject stability estimation, as well as the application of action recognition methods that enable elderly subjects to interact with the robot by means of manual gestures. Herein we focus on the visual processing pipelines of this interface, and present a variety of rich applications and experiments.
There has been a flurry of activity on the pose estimation front in recent years. Pose estimation usually involves inferring the locations of landmarks or body parts, and the quality of the prediction is measured by metrics that compare the predicted and the ground-truth locations in the image plane.
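For concreteness, one widely used metric of this kind is PCK (Percentage of Correct Keypoints), which counts a predicted landmark as correct when it lies within a distance threshold of its ground-truth location. The sketch below is a minimal illustration of the idea, not necessarily the metric used in this work; in practice the threshold is usually normalized by a reference scale such as torso or head size, a detail omitted here.

```python
import numpy as np

def pck(pred, gt, threshold):
    """Percentage of Correct Keypoints (PCK), a minimal sketch.

    pred, gt: arrays of shape (num_joints, 2) holding (x, y) landmark
    locations in the image plane. A joint counts as correct when its
    prediction lies within `threshold` pixels of the ground truth.
    """
    dists = np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float), axis=1)
    return float(np.mean(dists <= threshold))
```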
In this work we address the problem of estimating human joint angles. The joint angle estimation task involves estimating the angles formed by segments of the human body at the joint landmarks in world coordinates. More specifically, we are interested in estimating (a) the knee angles, that is, the angles between the thigh and shank segments of the left and right legs, and (b) the hip angles, that is, the angles made by the torso and thigh segments.
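To make the geometry concrete, the sketch below shows one straightforward way to compute such a joint angle from three 3D landmarks via the angle between the two segment vectors. The coordinates and the vector-based formulation are illustrative assumptions, not the estimation method proposed here, which predicts angles from noisy real-world RGB-D images.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at landmark b, formed by the segments b->a and b->c.

    For the knee angle, (a, b, c) = (hip, knee, ankle), so the two segments
    correspond to the thigh and the shank; for the hip angle,
    (a, b, c) = (shoulder, hip, knee) relates the torso and the thigh.
    """
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against floating-point drift slightly outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Hypothetical world coordinates for a nearly straight leg:
hip, knee, ankle = [0.0, 1.0, 0.0], [0.0, 0.5, 0.0], [0.1, 0.0, 0.0]
print(joint_angle(hip, knee, ankle))  # ~169 degrees
```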