Transition activity recognition using fuzzy logic and overlapped sliding window-based convolutional neural networks



Jaewoong Kang1 · Jongmo Kim1 · Seongil Lee1 · Mye Sohn1

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Abstract
In this paper, we propose a novel approach to recognize transition activities (e.g., turning left or right, standing up, and traveling down stairs). Unlike simple activities, transition activities have unique characteristics: they change continuously and occur instantaneously. To recognize transition activities with these characteristics, we applied a convolutional neural network (CNN), which is widely adopted to recognize images, voices, and human activities. In addition, to generate input instances for the CNN model, we developed an overlapped sliding window method that can accurately recognize transition activities occurring within a short time. To increase the accuracy of activity recognition, we trained separate CNN models for simple activities and transition activities. Finally, we adopt fuzzy logic to handle ambiguous activities. All the procedures for recognizing the elderly's activities are performed using data collected by the six sensors embedded in a smartphone. The effectiveness of the proposed approach is shown through experiments, which demonstrate that our approach can improve the recognition accuracy of transition activities.

Keywords Human activity recognition · Transition activity · Convolutional neural network (CNN) · Overlapped sliding window · Fuzzy logic
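To illustrate the kind of segmentation described above, the following minimal sketch shows one possible overlapped sliding window over a multi-channel smartphone sensor stream. The window length, overlap ratio, channel count, and function name are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

def overlapped_sliding_windows(signal, window_size=128, overlap=0.5):
    """Segment a (time, channels) sensor stream into overlapping windows.

    window_size and overlap are illustrative choices, not the paper's settings.
    Returns an array of shape (num_windows, window_size, channels), where each
    window can serve as one input instance for a CNN.
    """
    step = max(1, int(window_size * (1.0 - overlap)))  # stride between window starts
    windows = []
    for start in range(0, signal.shape[0] - window_size + 1, step):
        windows.append(signal[start:start + window_size])
    if not windows:
        return np.empty((0, window_size, signal.shape[1]))
    return np.stack(windows)

# Example: 6 sensor channels (e.g., 3-axis accelerometer + 3-axis gyroscope)
stream = np.random.randn(1000, 6)            # hypothetical raw sensor stream
instances = overlapped_sliding_windows(stream)
print(instances.shape)                       # (14, 128, 6) with these settings
```

Because consecutive windows share half of their samples in this sketch, a short-lived transition activity is less likely to be split across window boundaries than with non-overlapping segmentation.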

Corresponding author: Mye Sohn, [email protected]
Jaewoong Kang, [email protected]
Jongmo Kim, [email protected]
Seongil Lee, [email protected]

1 Department of Industrial Engineering, Sungkyunkwan University, Suwon, Korea


1 Introduction

Human activity recognition (HAR) is an important research area with many applications in healthcare, smart environments, and homeland security [6, 19]. As the world population is rapidly aging, ambient-assisted living (AAL) is growing as an application field of HAR. One of the goals of AAL is to improve the quality of life of the elderly and disabled by increasing their autonomy in daily activities. To do so, many AAL systems aim to acquire accurate and timely data on human activities, especially those of the elderly and/or disabled, which is required to provide appropriate services. Traditionally, various devices and tools have been used to recognize human activities [1, 3, 6, 25, 35]. Recently, owing to the rapid spread of very small, high-performance, and inexpensive sensors, it has become very easy to collect activity data for HAR. However, it is still difficult to determine exactly which activities are implied in the collected data. To resolve these difficulties, researchers have applied supervised learning methods such as decision trees [29], instance-based learning [4], neural networks [32], fuzzy logic [25], Markov models [26], and ensemble methods [24]. Less commonly, some researchers have applied semi-supervised learning methods to so