Generalized Hand Gesture Recognition for Wearable Devices in IoT: Application and Implementation Challenges
Abstract. The proliferation of low-power and low-cost continuous sensing technology is enabling new and innovative applications in wearables and the Internet of Things (IoT). At the same time, new applications create the challenge of maintaining real-time response on a resource-constrained device while preserving acceptable recognition performance. In this paper, we describe an IMU (Inertial Measurement Unit) sensor-based generalized hand gesture recognition system, its applications, and the challenges involved in implementing the algorithm on a resource-constrained device. We have implemented a simple algorithm for gesture spotting that substantially reduces false positives. The gesture recognition model was built using data collected from 52 unique subjects. The model was mapped onto the Intel® Quark™ SE Pattern Matching Engine and field-tested with 8 additional subjects, achieving 92% recognition performance.

Keywords: Gesture recognition · Machine learning · Pattern recognition · Feature engineering · Wearable device · Internet of Things
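This excerpt does not detail the gesture-spotting algorithm mentioned in the abstract. A common minimal approach for IMU streams is to flag segments whose accelerometer magnitude deviates from gravity by more than a threshold, which suppresses false positives from stationary periods. The sketch below illustrates that idea only; the function name, threshold, and minimum-length parameters are hypothetical and not taken from the paper.

```python
import numpy as np

def spot_gestures(accel, threshold=0.5, min_len=5):
    """Return (start, end) index pairs of candidate gesture segments.

    accel: (N, 3) array of accelerometer samples in units of g.
    A segment is reported when the magnitude deviates from 1 g
    (gravity) by more than `threshold` for at least `min_len` samples.
    """
    # Deviation of the acceleration magnitude from gravity (1 g).
    energy = np.abs(np.linalg.norm(accel, axis=1) - 1.0)
    active = energy > threshold

    segments = []
    start = None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i                      # motion begins
        elif not is_active and start is not None:
            if i - start >= min_len:       # discard very short blips
                segments.append((start, i))
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments
```

Discarding segments shorter than `min_len` is what filters out transient spikes (e.g. a bump of the wrist) that would otherwise be passed to the classifier as false positives.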
1 Introduction
Gesture recognition refers to the process of understanding and classifying meaningful movements of human body parts such as the hands, arms, face, and sometimes the head. Advances in technology, and in microelectronics more specifically, have inspired research in the field of IMU-based gesture recognition. Three-axis accelerometers are increasingly embedded in many personal electronic devices such as smartphones and the Wiimote [Akl et al., 2011; Liu et al., 2009; Liu et al., 2010]. A growing number of devices are equipped with a 3-axis gyroscope in addition to a 3-axis accelerometer. Combining the data from these two sensor types provides significantly more accurate motion information than using an accelerometer alone. Kratz et al. (2013) showed an increase in performance with the addition of gyroscope data, which allows more complex gestures to be used in mobile applications. Gesture recognition has a wide range of applications: telerobotics [Speeter, 1992], character recognition in 3D space using inertial sensors [Zhou et al., 2008; Oh et al., 2004], controlling a TV set remotely [Freeman and Weissman, 1995], enabling the hand as a 3D mouse [Bretzner and Lindeberg, 1998], using hand gestures as a control mechanism in virtual reality [Xu, 2006; Liu et al., 2009], and understanding the actions of a musical conductor [Je et al., 2007].

Previous studies have used data from a smaller number of subjects, though the variety of gestures and the number of samples per gesture were higher. Liu et al. (2010) collected a set of 8 gestures with 150 to 200 samples per gesture from a single subject. Xu et al. (2012) used a set of 7 gestures with 30 samples per gesture, with no reference to the number of users. Akl et al. (2011) collected a set of 18 gestures with total 37

© Springer International Publishing Switzerland 2016
P. Perner (Ed.): MLDM 2016, LNAI 9729, pp. 346–355, 2016.
DOI: 10.1007/978-3-319-41920-6_26
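The introduction notes that combining gyroscope and accelerometer data yields more accurate motion information than an accelerometer alone. A standard way to realize this fusion is a complementary filter, which blends the integrated gyroscope rate (smooth but drift-prone) with the accelerometer-derived tilt angle (noisy but drift-free). The sketch below illustrates that general technique; it is not the fusion method of this paper, and all names and the blend factor are illustrative assumptions.

```python
def fuse_tilt(gyro_rates, accel_angles, dt, alpha=0.98):
    """Complementary filter for a single tilt angle.

    gyro_rates:   angular rates from the gyroscope (deg/s).
    accel_angles: tilt angles derived from the accelerometer (deg).
    dt:           sample period in seconds.
    alpha:        blend factor; closer to 1 trusts the gyroscope more.
    Returns the list of fused angle estimates.
    """
    angle = accel_angles[0]  # initialize from the drift-free sensor
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        # High-pass the integrated gyro, low-pass the accelerometer.
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc_angle
        estimates.append(angle)
    return estimates
```

The accelerometer term continually pulls the estimate back toward the gravity reference, so a constant gyroscope bias produces only a small bounded offset instead of unbounded drift, which matters for the long continuous-sensing sessions typical of wearables.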