Implicit learning of two artificial grammars
SHORT COMMUNICATION
C. Guillemin 1,2,3 · B. Tillmann 1,2,3

Received: 20 January 2020 / Accepted: 26 September 2020
© Marta Olivetti Belardinelli and Springer-Verlag GmbH Germany, part of Springer Nature 2020
Abstract
This study investigated the implicit learning of two artificial systems. Two finite-state grammars were implemented with the same tone set (leading to short melodies) and played with the same timbre in the exposure and test phases. The grammars were presented in separate exposure phases, and the potentially acquired knowledge was tested with two experimental tasks: a grammar categorization task (Experiment 1) and a grammatical error detection task (Experiment 2). Results showed that participants were able to categorize new items as belonging to one or the other grammar (Experiment 1) and to detect grammatical errors in new sequences of each grammar (Experiment 2). Our findings suggest a capacity for intra-modal learning of regularities in the auditory modality, based on stimuli that share the same perceptual properties.

Keywords Incidental learning · Implicit cognition · Artificial grammar · Dual-system learning

Handling editor: Lola L. Cuddy (Queen's University); Reviewers: Anja Cui (University of British Columbia), Dominique Vuvan (Skidmore College), and a third researcher who prefers to remain anonymous.

* B. Tillmann
[email protected]

1 Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, Bron, France
2 CNRS, UMR5292, INSERM, U1028, Bron, France
3 University Lyon 1, Villeurbanne 69000, France

Implicit or incidental learning is the ability to learn a structured system by mere exposure and without intention (e.g., Perruchet 2008). This cognitive capacity has been shown for verbal, musical, and visuo-spatial materials (e.g., Saffran 2001; Rohrmeier et al. 2011; Fiser and Aslin 2002), and it is relevant for language acquisition, motor learning, and tonal enculturation (e.g., Perruchet and Pacton 2006; Rebuschat and Rohrmeier 2012). Our study investigated whether perceivers can learn two artificial systems at the same time, even when they are implemented with the same events in the same modality. Previous findings are controversial, either confirming or disconfirming dual-system learning. These findings were obtained with the two main implicit learning paradigms, namely artificial grammars (AG) and artificial languages (AL). In the AG paradigm, participants are exposed to sequences built with a finite-state grammar defining combinations between events (Reber 1967; see Fig. 1). The classical implementation uses letters, but AG learning has also been shown with colors (Conway and Christiansen 2006), shapes (Emberson et al. 2011), durations (Prince et al. 2018), dance movements (Opacic et al. 2009), and tones (e.g., Altmann et al. 1995; Tillmann and Poulin-Charronnat 2010). During exposure, participants are not told about the regularities and sometimes perform an additional task requiring memory (i.e., was this sequence or item
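A finite-state grammar of the kind used in the AG paradigm can be sketched computationally as a state machine whose transitions emit symbols. The following minimal Python sketch illustrates the idea only: the state names, tone labels, and transitions are hypothetical, not the two grammars used in this study (their actual topology is shown in the paper's Fig. 1).

```python
import random

# Hypothetical toy grammar in the style of Reber (1967) -- NOT the grammars
# used in the study. Each state maps to (emitted_tone, next_state) options;
# a next_state of None is a legal exit from the grammar.
GRAMMAR = {
    "S0": [("C", "S1"), ("E", "S2")],
    "S1": [("G", "S1"), ("D", "S2")],
    "S2": [("A", "S3"), ("C", "S0")],
    "S3": [("E", None), ("G", "S1")],
}

def generate(grammar, start="S0", rng=random):
    """Random walk from the start state to an exit: one grammatical sequence."""
    state, seq = start, []
    while state is not None:
        tone, state = rng.choice(grammar[state])
        seq.append(tone)
    return seq

def is_grammatical(grammar, seq, start="S0"):
    """True if some path through the grammar emits seq and ends at an exit."""
    states = {start}
    for tone in seq:
        states = {nxt
                  for s in states if s is not None
                  for t, nxt in grammar[s] if t == tone}
        if not states:  # no state can emit this tone: ungrammatical
            return False
    return None in states  # must end exactly at an exit
```

In the experimental paradigm, `generate` corresponds to constructing exposure and test items, while `is_grammatical` corresponds to the ground truth against which participants' categorization or error-detection responses are scored.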