Comparing head gesture, hand gesture and gamepad interfaces for answering Yes/No questions in virtual environments
ORIGINAL ARTICLE
Jingbo Zhao1,2 · Robert S. Allison2

Received: 13 November 2018 / Accepted: 27 November 2019
© Springer-Verlag London Ltd., part of Springer Nature 2019
Abstract

A potential application of gesture recognition algorithms is to use them as interfaces for interacting with virtual environments. However, the performance and user preference of such interfaces in the context of virtual reality (VR) have rarely been studied. In the present paper, we focus on answering Yes/No questions, a typical VR interaction scenario, and compare the performance and user preference of three types of interfaces: a head gesture interface, a hand gesture interface and a conventional gamepad interface. We designed a memorization task in which participants were asked to memorize several everyday objects presented in a virtual room and later, when these objects were absent, to respond through the given interfaces to questions about whether they had seen a specific object. The performance of the interfaces was evaluated in terms of real-time accuracy and response time. A user interface questionnaire was also used to reveal user preference for the interfaces. The results show that head gesture is a very promising interface, which can easily be added to existing VR systems for answering Yes/No questions and giving other binary responses in virtual environments.

Keywords Head gesture · Hand gesture · Virtual reality · Usability
* Jingbo Zhao
  [email protected]

  Robert S. Allison
  [email protected]

1 College of Information and Electrical Engineering, China Agricultural University, No. 17 Tsinghua East Road, Beijing 100083, China

2 Department of Electrical Engineering and Computer Science, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada

1 Introduction

Recent improvements in sensor technologies have enabled human body movements to be tracked accurately in real time. With novel depth sensors such as the Kinect and the Leap Motion, a large number of algorithms have been proposed and developed for body gesture (Lun and Zhao 2015) and hand gesture recognition (Cheng et al. 2016). The accurate and fast head tracking sensors in head-mounted displays (HMDs), such as the Oculus Rift DK2, also make real-time head gesture recognition possible (Zhao and Allison 2017), in addition to systems that use cameras to track head movements (Morimoto et al. 1996; Terven et al. 2014). One possible application of gesture recognition is to integrate such algorithms into VR systems to interact with virtual worlds. A typical interaction scenario in VR systems is answering Yes/No questions asked by virtual avatars or raised by the VR system. For instance, Abate et al. (2011) presented an augmented reality (AR)-based tour system that may require an interface for answering questions asked by virtual tour guides. Answering Yes/No questions in VR systems is usually done by butto