Cross-modal transfer of talker-identity learning
Dominique Simmons 1, Josh Dorsi 1, James W. Dias 1, & Lawrence D. Rosenblum 1

Accepted: 4 September 2020
© The Psychonomic Society, Inc. 2020
Abstract

A speech signal carries information about meaning and about the talker conveying that meaning. It is now known that these two dimensions are related. There is evidence that gaining experience with a particular talker in one modality not only facilitates better phonetic perception in that modality, but also transfers across modalities to allow better phonetic perception in the other. This finding suggests that experience with a talker provides familiarity with some amodal properties of their articulation such that the experience can be shared across modalities. The present study investigates whether experience with talker-specific articulatory information can also support cross-modal talker learning. In Experiment 1 we show that participants can learn to identify ten novel talkers from point-light and sinewave speech, expanding on prior work. Point-light and sinewave speech also supported similar talker identification accuracies, and similar patterns of talker confusions were found across stimulus types. Experiment 2 showed these stimuli could also support cross-modal talker matching, further expanding on prior work. Finally, in Experiment 3 we show that learning to identify talkers in one modality (visual-only point-light speech) facilitates learning of those same talkers in another modality (auditory-only sinewave speech). These results suggest that some of the information for talker identity takes a modality-independent form.

Keywords: Multisensory processing · Speech perception · Face perception
* Correspondence: Lawrence D. Rosenblum, [email protected]
1 Department of Psychology, University of California, Riverside, Riverside, CA 92521, USA

Introduction

The last 20 years have shown tremendous growth in research concerning the cross-modal transfer of sensory experience. For example, it has been shown that motion aftereffects can transfer across the visual and tactile modalities (Konkle, Wang, Hayward, & Moore, 2009). Relatedly, there is evidence that stimulus-timing information can transfer between the auditory and visual modalities (Levitan, Ban, Stiles, & Shimojo, 2015). These low-level perceptual aftereffects are consistent with what has been reported for more complex stimuli. There is evidence, for example, of haptic-visual cross-modal transfer of facial expression (Matsumiya, 2013). There is also evidence that substantial cross-modal learning can occur implicitly and with unattended aspects of stimulation (e.g., Seitz & Watanabe, 2005).

Within the realm of speech, there is evidence that bimodal audiovisual experience results in improved auditory-only talker identification (the bimodal training effect; e.g., von Kriegstein & Giraud, 2006). While these effects refer specifically to audiovisual talker learning, an important finding associated with them is the functional coupling between brain areas a