Affordance Perception and the Visual Control of Locomotion

Brett R. Fajen

Abstract

When people navigate through complex, dynamic environments, they select actions and guide locomotion in ways that take into account not only the environment but also their body dimensions and locomotor capabilities. For example, when stepping off a curb, a pedestrian may need to decide whether to go now ahead of an approaching vehicle or wait until it passes. Similarly, a child playing a game of tag may need to decide whether to go to the left or right around a stationary obstacle to intercept another player. In such situations, the possible actions (i.e., affordances) are partly determined by the person’s body dimensions and locomotor capabilities. From an ecological perspective, the ability to take these factors into account begins with the perception of affordances. The aim of this chapter is to review recent theoretical developments and empirical research on affordance perception and its role in the visual control of locomotion, including basic locomotor tasks such as avoiding stationary and moving obstacles, walking to targets, and selecting routes through complex scenes. The focus will be on studies conducted in virtual environments, which have created new and exciting opportunities to investigate how people perceive affordances, guide locomotion, and adapt to changes in body dimensions and locomotor capabilities.

B. R. Fajen, Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA. e-mail: [email protected]

In F. Steinicke et al. (eds.), Human Walking in Virtual Environments, DOI: 10.1007/978-1-4419-8432-6_4, © Springer Science+Business Media New York 2013

4.1 Introduction

To successfully interact with one’s environment (real or virtual), it is necessary to select actions and guide movements in ways that take one’s body dimensions and movement capabilities into account. To illustrate this point, consider what would happen if the designer of a virtual environment (VE) were to dramatically alter the dimensions or dynamics of the user’s virtual body. In a VE, one’s virtual body could be made taller, shorter, leaner, or stockier; one’s arms or legs could be stretched or compressed; one could be made faster, slower, stronger, or weaker. Before the user has a chance to adapt, he or she may attempt actions that have no chance of success, pass up opportunities to perform actions that would lead to beneficial outcomes, follow suboptimal routes, or inadvertently collide with objects in the virtual environment. Such behavior can be observed in real environments as well. Infants, upon first learning to walk, have difficulty gauging their actions to their new movement capabilities [1]. As such, they often attempt to descend sloped surfaces that are impossibly steep or cross gaps that are impossibly wide. Similarly, when older children ride bicycles, they have difficulty taking into account how long it takes to initiate movement [27]. This puts them at greater risk when crossing busy streets bec