Seeing Brains Reading Speech: A Review and Speculations


Abstract. There are a number of ways to gain insights into how different parts of the brain extract speech from seen faces. Experimental data from normal subjects, from neuropsychological patients, from developmental disorders affecting cognition, and from direct imaging of brain processes all illuminate some parts of the story (if sometimes in a contradictory way). This chapter reviews findings concerned with the localization of face and (written and auditory) speech processing in order to gain some ideas about the possible neural systems that are required to obtain speech from seen faces, and to suggest the extent to which these may rest on broader functional bases.

Keywords. face-processing, cortical imaging, laterality, visual movement perception, visual form perception, m- and p-visual systems, dorsal (where?) system, ventral (what?) system, dorsal (how?) system, right hemisphere, left hemisphere, occipital lobes, temporal lobes, parietal lobes, frontal lobes, Wernicke's area, Broca's area

1 Introduction

Speechreading is an odd ability: a range of natural and experimental observations suggests that most of us take account of the seen mouth movements of speakers, yet it is not critical to speech perception, except in some special circumstances (viewing speech in noise) or in special people (deaf or deafened people). It is not, therefore, surprising that relatively little attention has been given to uncovering the cortical substrates of this ability. Yet the careful delineation of the brain functions that subserve speechreading, viewed in relation to other functional abilities, may clarify a number of issues: it could indicate more precisely where the functional and neuroanatomical lines may be drawn between faces and words; between unimodal and bimodal processes; between the apprehension of form-from-movement and other aspects of movement perception; and between speech and nonspeech representations.


1.1 Left and Right

My fascination with understanding how speechreading is performed by the brain stems from a simple observation: while right-hemisphere processes kick in early and reliably in processing faces for identity and for expression (Young & Bion, 1980), left-hemisphere processes alone are critical for reading and for speaking (see Damasio & Damasio, 1993). So the first question to be asked is: 'What is the pattern of localization for seen lipspeech forms? Do pictures of speaking faces localize to the right hemisphere, like faces to be matched for expression or identity, or to the left hemisphere, like written and spoken words?' The answer depends to a large extent on the specifics of stimulus presentation. Bimodal (vision and audition) inputs may show a different pattern of localization than unimodal (vision) inputs alone; at this stage only unimodal presentations will be considered, and bimodal inputs are deferred to a later stage in this chapter. Secondly, visual image quality