


ORIGINAL PAPER

Sharing gaze rays for visual target identification tasks in collaborative augmented reality

Austin Erickson1 · Nahal Norouzi1 · Kangsoo Kim1 · Ryan Schubert1 · Jonathan Jules1 · Joseph J. LaViola Jr.1 · Gerd Bruder1 · Gregory F. Welch1

Received: 8 January 2020 / Accepted: 6 June 2020
© Springer Nature Switzerland AG 2020

Abstract

Augmented reality (AR) technologies provide a shared platform for users to collaborate in a physical context involving both real and virtual content. To enhance the quality of interaction between AR users, researchers have proposed augmenting users' interpersonal space with embodied cues such as their gaze direction. While beneficial in achieving improved interpersonal spatial communication, such shared gaze environments suffer from multiple types of errors related to eye tracking and networking, which can reduce objective performance and subjective experience. In this paper, we present a human-subjects study to understand the impact of accuracy, precision, latency, and dropout errors on users' performance when using shared gaze cues to identify a target among a crowd of people. We simulated varying levels of these errors at different target distances and measured participants' objective performance through their response time and error rate, and their subjective experience and cognitive load through questionnaires. We found significant differences suggesting that the simulated error levels had stronger effects on participants' performance than target distance, with accuracy and latency having a high impact on participants' error rate. We also observed that participants assessed their own performance as lower than it objectively was. We discuss implications for practical shared gaze applications and present a multi-user prototype system.

Keywords Shared gaze · Eye tracking · Eye tracking errors · Collaborative augmented reality · Target identification
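To make the four error types concrete, the sketch below illustrates one way such errors could be injected into a gaze direction before it is shared with a remote collaborator. This is not the authors' implementation; it is a minimal illustration under stated assumptions: accuracy is modeled as a fixed angular offset, precision as per-frame zero-mean Gaussian jitter, latency as a frame-delay buffer, and dropout as randomly withheld samples. All names and parameter values (GazeErrorSimulator, accuracy_deg, and so on) are illustrative, not from the paper.

```python
import math
import random
from collections import deque

def rotate_direction(d, yaw_deg, pitch_deg):
    """Rotate a unit direction vector (x, y, z) by small yaw/pitch angles
    in degrees. Adequate for the small angular errors modeled here."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x, y, z = d
    # Yaw: rotate about the vertical (y) axis.
    x, z = (x * math.cos(yaw) + z * math.sin(yaw),
            -x * math.sin(yaw) + z * math.cos(yaw))
    # Pitch: rotate about the horizontal (x) axis.
    y, z = (y * math.cos(pitch) - z * math.sin(pitch),
            y * math.sin(pitch) + z * math.cos(pitch))
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

class GazeErrorSimulator:
    """Illustrative injection of accuracy, precision, latency, and
    dropout errors into a per-frame stream of gaze directions."""

    def __init__(self, accuracy_deg=1.0, precision_deg=0.5,
                 latency_frames=6, dropout_prob=0.1):
        self.precision_deg = precision_deg    # std. dev. of angular jitter
        self.dropout_prob = dropout_prob      # chance a sample is withheld
        # Latency: a FIFO delay line; delay[0] is latency_frames old once full.
        self.delay = deque(maxlen=latency_frames + 1)
        # Accuracy: a systematic offset of about accuracy_deg in a random
        # direction, drawn once and applied identically every frame.
        theta = random.uniform(0.0, 2.0 * math.pi)
        self.offset = (accuracy_deg * math.cos(theta),
                       accuracy_deg * math.sin(theta))

    def process(self, direction):
        """Take the true gaze direction for this frame; return the degraded
        direction to share, or None if the sample is unavailable."""
        # Accuracy: constant offset, same every frame.
        d = rotate_direction(direction, self.offset[0], self.offset[1])
        # Precision: zero-mean Gaussian jitter, redrawn every frame.
        d = rotate_direction(d,
                             random.gauss(0.0, self.precision_deg),
                             random.gauss(0.0, self.precision_deg))
        # Latency: emit the sample from latency_frames ago.
        self.delay.append(d)
        if len(self.delay) < self.delay.maxlen:
            return None  # delay line still filling
        # Dropout: randomly withhold the sample entirely.
        if random.random() < self.dropout_prob:
            return None
        return self.delay[0]
```

In a study like the one described here, the error magnitudes would be manipulated as experimental conditions; a simulator of this kind would run once per frame on the sender's side before the degraded gaze ray is rendered for the other user.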

Austin Erickson and Nahal Norouzi have contributed equally to this research.

✉ Austin Erickson · [email protected]
Nahal Norouzi · [email protected]
Kangsoo Kim · [email protected]
Ryan Schubert · [email protected]
Jonathan Jules · [email protected]
Joseph J. LaViola Jr. · [email protected]
Gerd Bruder · [email protected]
Gregory F. Welch · [email protected]

1 The University of Central Florida, 3100 Technology Parkway, Orlando, FL 32826-3281, USA

1 Introduction

Over the last several years, great strides have been made to improve sensor and display technologies in the fields of augmented reality (AR) and virtual reality (VR) [19]. These advances, particularly in head-mounted displays (HMDs) and eye trackers, have provided new opportunities for applications in fields such as training, simulation, therapy, and medicine. For many of these, collaboration between multiple users is an important aspect of the experience. In real life, people use both verbal and nonverbal cues to communicate information to the person they are interacting with. In order to understand and improve collaborative experiences using AR/VR