A novel muscle-computer interface for hand gesture recognition using depth vision



ORIGINAL RESEARCH

A novel muscle‑computer interface for hand gesture recognition using depth vision

Xuanyi Zhou1 · Wen Qi2 · Salih Ertug Ovur2 · Longbin Zhang3 · Yingbai Hu4 · Hang Su2 · Giancarlo Ferrigno2 · Elena De Momi2

Received: 27 October 2019 / Accepted: 20 March 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract
The muscle-computer interface (muCI), one of the most widespread human-computer interfaces, has been widely adopted for identifying hand gestures from the electrical activity of muscles. Although multi-modal theory and machine learning algorithms have made enormous progress in muCI over the last decades, collecting and labeling large data sets imposes a high workload and leads to time-consuming implementations. In this paper, a novel muCI was developed that integrates the advantages of EMG signals and depth vision, using depth vision to automatically label clusters of collected EMG data. A three-layer hierarchical k-medoids approach was designed to extract and label the clustering features of ten hand gestures, and a multi-class linear discriminant analysis algorithm was applied to build the hand gesture classifier. The results showed that the proposed algorithm achieved high accuracy and that the muCI performed well, automatically labeling the hand gestures in all experiments. The proposed muCI can be utilized for hand gesture recognition without labeling the data in advance and has potential for robot manipulation and virtual reality applications.

Keywords  Depth vision · Hand gesture recognition · Muscle computer interface · Clustering · Classification
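The clustering stage at the heart of the pipeline can be illustrated with a minimal k-medoids sketch. This is not the paper's three-layer hierarchical implementation; it is a basic single-level PAM-style alternation in pure NumPy, run on synthetic 2-D points standing in for EMG feature vectors, and the function name and data are illustrative assumptions only:

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Basic k-medoids clustering (PAM-style alternation) on the rows of X."""
    rng = np.random.default_rng(seed)
    n = len(X)
    # Precompute pairwise Euclidean distances between all samples.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Initialize medoids as k distinct random samples.
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        # Assignment step: each sample joins its nearest medoid's cluster.
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue
            # Update step: the new medoid is the member minimizing the
            # total distance to all other members of the same cluster.
            costs = D[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break  # converged: medoids no longer change
        medoids = new_medoids
    labels = np.argmin(D[:, medoids], axis=1)
    return medoids, labels

# Synthetic two-cluster data standing in for extracted EMG features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(3.0, 0.3, (20, 2))])
medoids, labels = k_medoids(X, k=2)
```

K-medoids is chosen over k-means in settings like this because the cluster center is always an actual sample, which makes the medoid directly usable as a representative feature vector and makes the method more robust to outliers in noisy EMG features.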

1 Introduction

Interaction with computing devices has become an essential part of daily human life (Rautaray and Agrawal 2015). Modern human-computer interfaces are extremely rich, incorporating multiple hand gesture applications such as robot manipulation (Su et al. 2018; Li et al. 2015), health care (Li et al. 2013b), virtual reality (De Marsico et al. 2014), sign language (Quesada et al. 2017; Almasre and Al-Nuaim 2016; Zhao et al. 2013), and computer games (Lee et al. 2017).

* Hang Su [email protected]

1 State Key Laboratory of High Performance Complex Manufacturing, Central South University, Changsha 410083, China
2 Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Piazza Leonardo da Vinci, 32, 20133 Milano, MI, Italy
3 Department of Mechanics, MoveAbility Lab, KTH Royal Institute of Technology, 100 44 Stockholm, Sweden
4 Department of Informatics, Technical University of Munich, Munich 85748, Germany

Human-computer interface (HCI) with hand gesture recognition has attracted tremendous attention in the research community (Wachs et al. 2011; Rautaray and Agrawal 2015). Recently, a new line of HCI research has focused on the muscle-computer interface (muCI) (Chowdhury et al. 2013). Electromyography (EMG) signals of the user's muscles are utilized as the inputs of the muCI while executing multiple tasks. Therefore, many devices, such as prosthe