Korean video dataset for emotion recognition in the wild
Trinh Le Ba Khanh¹ · Soo-Hyung Kim¹ · Gueesang Lee¹ · Hyung-Jeong Yang¹ · Eu-Tteum Baek¹

¹ Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, South Korea
Corresponding author: Hyung-Jeong Yang, [email protected]

Received: 14 October 2019 / Revised: 2 September 2020 / Accepted: 19 October 2020
© The Author(s) 2020
Abstract
Emotion recognition is one of the most active fields in affective computing research. Recognizing emotions is an important task for facilitating communication between machines and humans. However, it remains very challenging, in part because of a lack of ethnically diverse databases. In particular, emotional expressions tend to differ considerably between Western and Eastern people, so diverse emotion databases are required for studying emotional expression. Nevertheless, the majority of well-known emotion databases focus on Western people, who exhibit different characteristics from Eastern people. In this study, we constructed a novel emotion dataset, the Korean Video Dataset for Emotion Recognition in the Wild (KVDERW), containing more than 1,200 video clips collected from Korean movies under conditions similar to the real world, with the goal of studying the emotions of Eastern people, particularly Korean people. Additionally, we developed a semi-automatic video emotion labelling tool that can be used to generate video clips and annotate the emotions in them.

Keywords Emotion recognition · Facial expression · Emotion dataset
1 Introduction

Emotion recognition systems scientifically measure and analyze complex feelings in humans, such as comfort, discomfort, convenience, and inconvenience, to guide product or environment design and improve the quality of human life. In practice, emotion recognition technology can provide emotion-based services by detecting users' emotions in entertainment, education, medicine, and other domains. It enables the analysis of immediate user reactions at the time of service, thereby improving the quality of those services [11].

Emotion recognition has attracted significant attention from the computer vision and affective computing communities over the past decade because it is an important front-end task in many applications.
The majority of existing techniques focus on classifying seven basic expressions: anger, disgust, fear, happiness, neutral, sadness, and surprise [8, 18]. A few methods follow a dimensional approach in which emotional expressions are treated as regression targets in the arousal-valence space [16, 20]. Recently, deep learning approaches have been proposed to classify emotions [1, 12, 14]. Owing to their outstanding performance, deep-learning-based emotion recognition models have achieved remarkable accuracy. However, emotion recognition is still a very difficult task for deep learning architectures. In particular, one of the major limitations of emotion recognition is the lack of appropriate emotion databases. The maj
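To make the two problem formulations above concrete, the following is a minimal, illustrative sketch (not taken from the cited works) of how a categorical seven-class head and a dimensional arousal-valence regression head could sit on top of the same feature extractor. The feature dimension of 512 and the random input features are placeholder assumptions standing in for the output of a face or video encoder.

```python
import torch
import torch.nn as nn

# Seven basic expression labels listed above (categorical formulation).
BASIC_EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

class EmotionHeads(nn.Module):
    """Toy model with two heads on a shared feature vector:
    a 7-way classifier (categorical) and a 2-D arousal-valence regressor (dimensional)."""

    def __init__(self, feature_dim: int = 512):
        super().__init__()
        self.classifier = nn.Linear(feature_dim, len(BASIC_EMOTIONS))  # logits over 7 classes
        self.regressor = nn.Linear(feature_dim, 2)                     # (arousal, valence)

    def forward(self, features: torch.Tensor):
        return self.classifier(features), self.regressor(features)

# Random "features" as a stand-in for encoder output on a batch of 4 clips.
features = torch.randn(4, 512)
model = EmotionHeads()
logits, arousal_valence = model(features)
predicted = [BASIC_EMOTIONS[int(i)] for i in logits.argmax(dim=1)]
print(predicted, arousal_valence.shape)  # e.g. ['fear', ...] torch.Size([4, 2])
```

In practice a categorical head is trained with cross-entropy over the seven labels, whereas the dimensional head is trained with a regression loss (e.g., mean squared error) on annotated arousal-valence values; the sketch only contrasts the output shapes of the two formulations.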