Emotion monitoring with RFID: an experimental study



REGULAR PAPER

Qian Xu1 · Xuan Liu1,2 · Juan Luo1 · Zhenzhong Tang3

Received: 17 June 2020 / Accepted: 26 September 2020
© China Computer Federation (CCF) 2020

Abstract
Emotion recognition can be helpful in many fields, such as elderly healthcare. Existing emotion recognition approaches are usually based on wearable sensors or computer vision analysis, which are intrusive or inconvenient to use. In recent years, radio frequency identification (RFID) has been exploited to monitor physiological signs (e.g., respiration and heartbeat) of users in a contactless and convenient way. Motivated by this progress, we conduct an experimental study on recognizing users' emotions with commercial RFID devices. We propose Free-EQ, an emotion recognition framework that first extracts respiration-based and heartbeat-based features from RFID signals and then uses these features to train a classifier to recognize different emotions of a target user. Experiments on commercial RFID hardware show that Free-EQ can distinguish different emotions with relatively high accuracy.

Keywords  Emotion recognition · RFID · Vital signal · Heartbeat segmentation

1 Introduction

1.1 Motivation

Emotion recognition has a significant impact on improving the interaction between humans and machines; for example, a smart home can adaptively adjust its settings according to the user's emotions. Recently, radio frequency identification (RFID) has been used to perform contactless monitoring of people's vital signs, e.g., respiration and heartbeat (Zhao et al. 2018; Hou et al. 2017). Because emotions interact with physiological signals, emotion recognition can be achieved by analyzing features extracted from the collected respiration and average heart rate data (Adib et al. 2015).
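As a minimal illustration of the vital-sign extraction step described above, the sketch below estimates a respiration rate from an RFID phase stream by locating the dominant low-frequency spectral peak. The sampling rate, band limits, and function name are illustrative assumptions, not details taken from the paper, and the demo signal is synthetic.

```python
import numpy as np

FS = 50.0  # assumed tag read rate in Hz

def respiration_rate_bpm(phase: np.ndarray, fs: float = FS) -> float:
    """Estimate the dominant breathing frequency in breaths per minute."""
    x = phase - phase.mean()                      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))             # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # Restrict to a plausible respiration band (0.1-0.5 Hz, i.e. 6-30 bpm)
    band = (freqs >= 0.1) & (freqs <= 0.5)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Synthetic demo: a 0.25 Hz (15 bpm) breathing component plus noise
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1.0 / FS)
phase = 0.3 * np.sin(2 * np.pi * 0.25 * t) + 0.02 * rng.standard_normal(t.size)
rate = respiration_rate_bpm(phase)
```

A real pipeline would additionally unwrap the phase, suppress the much weaker heartbeat component, and feed the resulting features to a trained classifier, as the Free-EQ framework does.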

* Xuan Liu
  [email protected]

  Zhenzhong Tang
  tangzhenzhong@huge-ic.com

1 College of Computer Science and Electronic Engineering, Hunan University, Changsha, China

2 Science and Technology on Parallel and Distributed Processing Laboratory (PDL), National University of Defense Technology, Changsha 410073, China

3 Zhuhai Taixin Semiconductor Co., Ltd, Zhuhai, China

At present, many emotion recognition methods have been proposed, and some are in practical use. Among them, audio-visual analysis of emotional expression is the mainstream approach, identifying human emotions from facial expressions, body movements, and speech (Kahou et al. 2016; Cowie et al. 2001). Another line of work relies on physiological measurements, which requires users to wear special physiological sensors such as PPG (Brugarolas et al. 2016; Jia et al. 2016) or ECG devices (Kahou et al. 2016). Recently, some researchers have tried to measure heart rate through a smartphone camera (Gregoski et al. 2012; Lagido et al. 2014), which requires users to place their fingertips on the camera. Both kinds of methods have their limitations. Audio-visual