Locomotion Interfaces

Hiroo Iwata
Abstract A locomotion interface is a device that creates an artificial sensation of physical walking. It should ideally provide three functions: (1) creating a sense of walking while the user's true position is preserved, (2) allowing the walker to change bearing direction, and (3) simulating uneven walking surfaces. This chapter categorizes and describes four different methods for the design and implementation of such interfaces: sliding shoes, treadmills, foot-pads, and robotic tiles. It discusses related technical issues and potential applications.
H. Iwata (B), Virtual Reality Laboratory, Department of Intelligent Interaction Technologies, 1-1-1 Tennodai, Tsukuba 305-8573, Japan. e-mail: [email protected]
F. Steinicke et al. (eds.), Human Walking in Virtual Environments, DOI: 10.1007/978-1-4419-8432-6_9, © Springer Science+Business Media New York 2013

9.1 Introduction

A locomotion interface is a device that creates a sense of walking in a virtual environment (VE). It provides the experience of physical walking while the walker's body remains localized in the real world. In many applications of VEs, such as immersive training or visual simulation, users benefit from a good sensation of locomotion. It has often been suggested that the best locomotion mechanism for VEs is walking [5]. It is well known that the sense of distance and orientation while walking is much better than while riding in a vehicle. Proprioceptive and vestibular feedback during walking is particularly important for navigation [29–31]. The effects of proprioceptive and vestibular feedback have been tested in settings involving walking on real ground while immersed in a VE. Loomis et al. [17] used an HMD in triangle-completion walking tasks. Five conditions were employed to evaluate optic flow, vestibular, and proprioceptive stimulation as inputs to the path-integration process. Two conditions involved walking (with and without vision), two
involved wheelchair transport (with and without vision), and the fifth was a stationary (non-moving) condition with vision. The results indicated that the directional return toward the origin was much poorer when optic flow alone specified the outbound path. Chance et al. [3] set up a virtual maze in which subjects encountered target objects along the way. Their task was to indicate the direction to these target objects from a terminal location in the maze. The scene of the virtual maze was presented through an HMD. Subjects controlled their motion through the mazes using one of three locomotion modes: Walk mode, Real Turn mode, and Visual Turn mode. The results showed that performance in the Walk mode was significantly better than in the Visual Turn mode. In another experiment, Bakker et al. [1] studied orientation performance in a VE. They tested five stimulus conditions for turning, three with and two without visual stimuli, using one of three different navigation metaphors to steer rotation. Their results showed that most
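To make the triangle-completion task concrete, the following is a minimal sketch (not code from the chapter) of the ideal path-integration computation a subject would perform: after walking one leg, turning, and walking a second leg, compute the distance and relative bearing back to the origin. The function name and conventions (initial heading along +x, positive angles counter-clockwise) are assumptions for illustration.

```python
import math

def homeward_vector(leg1, turn_deg, leg2):
    """Ideal path integration for a triangle-completion task.

    Walk leg1 metres, turn by turn_deg degrees, walk leg2 metres;
    return (distance to origin, bearing to origin relative to the
    current heading, in degrees, normalised to [-180, 180)).
    """
    # Heading after the turn (initial heading = 0 rad, along +x).
    heading = math.radians(turn_deg)
    # Final position after the two outbound legs.
    x = leg1 + leg2 * math.cos(heading)
    y = leg2 * math.sin(heading)
    distance_home = math.hypot(x, y)
    # Bearing of the origin, expressed relative to the current heading.
    bearing = math.degrees(math.atan2(-y, -x) - heading)
    bearing = (bearing + 180.0) % 360.0 - 180.0
    return distance_home, bearing
```

For example, walking 3 m, turning 90° left, then walking 4 m leaves the walker 5 m from the origin, which lies about 143° to the left of the current heading. Experiments such as Loomis et al. [17] compare subjects' actual return responses against this ideal solution under different sensory conditions.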