From perception to action using observed actions to learn gestures
Wolfgang Fuhl¹

Received: 17 October 2019 / Accepted in revised form: 8 August 2020

© The Author(s) 2020
Abstract
Pervasive computing environments offer a multitude of possibilities for human–computer interaction. Modern technologies, such as gesture control or speech recognition, allow different devices to be controlled without additional hardware. A drawback of these concepts is that the gestures and commands must first be learned. We propose a system that learns actions by observing the user. To accomplish this, we use a camera and deep learning algorithms in a self-supervised fashion. The user can either train the system directly, by showing gesture examples and performing an action, or let the system learn by itself. To evaluate the system, five experiments are carried out. In the first experiment, initial detectors are trained and used to evaluate our training procedure. The following three experiments evaluate the adaptation of our system and its applicability to new environments. In the last experiment, the online adaptation is evaluated, and adaptation times and intervals are reported.

Keywords Gestures · Supervised learning · Neural network adaptation · Neural network · Online adaptation
1 Introduction

Computers in our daily environments are versatile. There are notebooks, smartphones, desktop computers, cars, intelligent lighting, and multi-room entertainment systems, to name only a few. Each device offers a variety of interaction techniques, such as keyboard, touch, voice, mouse, gestures, or gaze Fuhl et al. (2016, 2017a, 2017b, 2018b). Each technique is consistent in itself, yet they differ in usability. As a result, the time needed to become acquainted with all the features and their proper use is often considerable, leading to errors and frustration.
* Wolfgang Fuhl, wolfgang.fuhl@uni-tuebingen.de

¹ Eberhard Karls Universität Tübingen, Sand 14, Tübingen, Germany
An example of onerous device acquaintance is gesture-based control, where the user must learn pre-programmed gestures. This approach has several disadvantages: the gestures may be unnatural for humans, making the interaction technique uncomfortable to use, and pre-programmed gesture control becomes impossible to operate if fingers or arms are injured. It also excludes people who suffer from physical limitations. In the area of voice control, dialects can be problematic (Simpson and Levine 2002). With this interaction technique, the user must likewise learn the words that control the computer and become accustomed to the commands in order to feel comfortable. Humans, in contrast, are capable of learning from observation, because the human brain is a marvel capable of the extraordinary. However, its capacity and functionality are limited. We absorb information through the sensory organs, which send signals to be processed in the sensory cortex an