Enhancing Representation of Deep Features for Sensor-Based Activity Recognition
Xue Li 1 & Lanshun Nie 1 & Xiandong Si 1 & Renjie Ding 1 & Dechen Zhan 1

1 Harbin Institute of Technology, Harbin, Heilongjiang, China

Accepted: 9 November 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract

Sensor-based activity recognition (AR) depends on effective feature representation and classification. However, many recent studies focus on recognition methods but largely ignore feature representation. Benefiting from the success of convolutional neural networks (CNNs) in feature extraction, we propose to improve the feature representation of activities. Specifically, we use a reversed CNN to generate significant data based on the original features, and combine the raw training data with the significant data to obtain enhanced training data. The proposed method can not only train better feature extractors but also help better understand the abstract features of sensor-based activity data. To demonstrate its effectiveness, we conduct comparative experiments with a CNN classifier and a CNN-LSTM classifier on five public datasets: UCI HAR, UniMiB SHAR, OPPORTUNITY, WISDM, and PAMAP2. In addition, we compare our method with traditional methods such as Decision Tree, Multi-layer Perceptron, Extremely Randomized Trees, Random Forest, and k-Nearest Neighbour on the WISDM dataset. The results show that our proposed method consistently outperforms the state-of-the-art methods.

Keywords: Activity recognition · Reversed CNN · Enhancing features · Significant features
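The abstract does not give implementation details, but the core idea of pairing a forward CNN with a reversed CNN to generate significant data can be sketched as follows. This is a minimal, hypothetical PyTorch sketch rather than the authors' actual architecture: the layer sizes, the use of transposed convolutions as the "reversed" CNN, and the simple concatenation of raw and generated windows are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Forward CNN: extracts feature maps from a windowed sensor signal."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )

    def forward(self, x):          # x: (batch, channels, window_length)
        return self.net(x)         # learned feature maps

class ReversedCNN(nn.Module):
    """Mirror of the extractor: maps feature maps back to the input space."""
    def __init__(self, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(64, 32, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, out_channels, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, feats):
        return self.net(feats)     # "significant data" in the input space

extractor, reverser = FeatureExtractor(), ReversedCNN()
raw = torch.randn(8, 3, 128)                 # 8 windows, 3-axis sensor, 128 samples
significant = reverser(extractor(raw))       # generated significant data
enhanced = torch.cat([raw, significant], 0)  # enhanced training data
```

In this reading, the reversed CNN plays a decoder-like role: it projects the abstract features back to the signal domain so the reconstructed "significant" windows can be appended to the raw training set before training the final classifier.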
1 Introduction

Activity recognition (AR) has attracted attention from both academia and industry with the growing popularity of pervasive computing. Activity recognition identifies an activity from information collected about target objects under certain environmental conditions. A typical activity recognition process includes data acquisition,
signal/data preprocessing and segmentation, feature extraction and selection, training, and classification. Data acquisition is usually implemented by vision-based or sensor-based devices that monitor the environment. Sensors have several advantages, such as low power consumption, small size, low cost, and convenience, which make sensor-generated data especially popular in AR applications. The data produced by sensor-based monitoring mainly reflect state changes and consist of time series with inherent local dependencies. Additionally, sensor-based activity data are generally large in volume and high in dimension. In view of these challenging characteristics, previous studies have proposed using convolutional neural networks (CNNs) to extract features. The outstanding performance of CNNs has already been proven in the image and language processing domains.
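As a concrete illustration of the segmentation step mentioned above, the following minimal sketch (assumed for illustration, not taken from the paper) splits a continuous multi-channel sensor stream into fixed-length, overlapping windows that a 1D CNN could consume; the window length, step size, and majority-label assignment are illustrative choices.

```python
import numpy as np

def sliding_windows(stream, labels, window=128, step=64):
    """stream: (T, channels) sensor readings; labels: (T,) per-sample activity ids."""
    xs, ys = [], []
    for start in range(0, len(stream) - window + 1, step):
        seg = stream[start:start + window]
        seg_labels = labels[start:start + window]
        # Assign the window the most frequent label inside it.
        ys.append(np.bincount(seg_labels).argmax())
        xs.append(seg.T)        # (channels, window) layout for a 1D CNN
    return np.stack(xs), np.array(ys)

stream = np.random.randn(10_000, 3)             # e.g. a 3-axis accelerometer trace
labels = np.random.randint(0, 6, size=10_000)   # 6 hypothetical activity classes
X, y = sliding_windows(stream, labels)
print(X.shape, y.shape)                         # (155, 3, 128) (155,)
```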