Top-down grouping affects adjacent dependency learning



BRIEF REPORT

Top-down grouping affects adjacent dependency learning

Felix Hao Wang 1 & Jason D. Zevin 2,3 & John C. Trueswell 4 & Toben H. Mintz 2,3

© The Psychonomic Society, Inc. 2020

Abstract

A large body of research has demonstrated that humans attend to adjacent co-occurrence statistics when processing sequential information, and that bottom-up prosodic information can influence learning. In this study, we investigated how top-down grouping cues can influence statistical learning. Specifically, we presented English sentences that were structurally equivalent to each other, which induced top-down expectations of grouping in the artificial language sequences that immediately followed. We show that adjacent dependencies in the artificial language are learnable when these entrained boundaries bracket the adjacent dependencies into the same sub-sequence, but are not learnable when the elements cross an induced boundary, even though that boundary is not present in the bottom-up sensory input. We argue that when there is top-down bracketing information in the learning sequence, statistical learning takes place over elements bracketed within sub-sequences rather than over all the elements in the continuous sequence. This limits the amount of linguistic computation that needs to be performed, providing a domain over which statistical learning can operate.

Keywords: Dependency learning · Rhythm · Statistical learning

How do learners perform word segmentation to break into language when the linguistic input they have access to is a continuous language stream? One prominent line of research within cognitive science has aimed to understand the role that statistical learning, and, in particular, statistical distributional analysis, plays in discovering underlying structure from serial input, and the role such processes may play in language acquisition. It has been shown, for example, that sequential co-occurrence statistics in speech and other auditory streams are computed by human infants (Aslin, Saffran, & Newport, 1998; Saffran, Aslin, & Newport, 1996; Johnson & Jusczyk, 2001) and adults (Mirman, Magnuson, Graf Estes, & Dixon, 2008; Romberg & Saffran, 2013). These findings

* Felix Hao Wang
  [email protected]

1 Department of Psychology, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
2 Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA
3 Department of Linguistics, University of Southern California, Los Angeles, CA 90089, USA
4 Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA

led to proposals that humans are capable of using sequential co-occurrence statistics to segment continuous speech into words, using only bottom-up statistical information (Aslin et al., 1998; Estes, Evans, Alibali, & Saffran, 2007; Saffran et al., 1996). Other work has demonstrated how bottom-up cues such as prosody interact with statistical learning (Johnson & Jusczyk, 2001; Johnson & Seidl, 2009; Morgan, Meier, & Newport, 1987, 1989; Shukla, Nespor, & Mehler, 2007). Man
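The adjacent co-occurrence statistic at the center of this literature is usually the forward transitional probability, TP(Y|X) = freq(XY) / freq(X), which is high within words and dips at word boundaries in Saffran-style streams. A minimal sketch of that computation, using an invented three-word syllable stream rather than the actual stimuli from the studies cited:

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probabilities TP(y|x) = freq(xy) / freq(x)
    over a flat sequence of syllables."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    # Count each syllable's occurrences as the first element of a pair.
    first_counts = Counter(syllables[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

# Invented lexicon: three trisyllabic "words" concatenated in random order,
# so within-word TPs are 1.0 and across-boundary TPs hover near 1/3.
random.seed(0)
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]
stream = [syll for _ in range(100) for syll in random.choice(words)]

tp = transitional_probabilities(stream)
print(tp[("tu", "pi")])   # within-word transition: 1.0
print(tp[("ro", "go")])   # across a word boundary: roughly 1/3
```

The dip in TP at word boundaries is the bottom-up segmentation cue the proposals above appeal to; the present study asks what happens when a top-down bracketing expectation, rather than the statistics themselves, defines the domain over which such counts are taken.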