Implicit consequentiality bias in English: A corpus of 300+ verbs


Alan Garnham 1 & Svenja Vorthmann 1 & Karolina Kaplanova 1

Accepted: 29 October 2020
© The Author(s) 2020

Abstract  This study provides implicit verb consequentiality norms for a corpus of 305 English verbs, for which Ferstl et al. (Behavior Research Methods, 43, 124-135, 2011) previously provided implicit causality norms. An online sentence completion study was conducted, with data analyzed from 124 respondents who completed fragments such as “John liked Mary and so…”. The resulting bias scores are presented in an Appendix, with more detail in supplementary material in the University of Sussex Research Data Repository (via https://doi.org/10.25377/sussex.c.5082122), where we also present lexical and semantic verb features: frequency, semantic class, and emotional valence of the verbs. We compare our results with those of our study of implicit causality and with the few published studies of implicit consequentiality. As in our previous study, we also considered effects of gender and verb valence, analyses that require stable norms for a large number of verbs. The corpus will facilitate future studies in a range of areas, including psycholinguistics and social psychology, particularly those requiring parallel sentence completion norms for both causality and consequentiality.

Keywords  Psycholinguistics · Verbs · Thematic roles · Consequentiality · Causality · Corpus studies

* Alan Garnham
  [email protected]

1  School of Psychology, University of Sussex, Pevensey 1 Building, Falmer, Brighton BN1 9QH, UK

Language researchers have long used normative data both to investigate effects such as that of frequency on word identification and to control for such effects when other, more subtle, influences on those processes are under investigation. When large-scale norms were time-consuming to collect and score, only commonly used measures received systematic treatment, with word frequency being the paradigm example. For less commonly investigated features, for example the implicit causality of verbs, small-scale norms were often collected for individual studies. More recently, norms have become easier to collect and score, and a number of factors have driven the need for norms on larger sets of items: in particular, the use of techniques such as EEG and functional magnetic resonance imaging (fMRI), which require large item sets if effects are to stand out from a background of noise, and the replication crisis, which encourages the use of larger sets of items (and participants) in all studies. For example, an event-related potential (ERP) study by Misersky, Majid, and Snijders (2019) used the large set of 400+ gender stereotype norms collected by Misersky et al. (2014), which have also been used in a range of other studies (e.g., Lewis & Lupyan, 2020; Richy & Burnett, 2020; Mueller-Feldmeth, Ahnefeld, & Hanulikova, 2019; Gygax et al., 2019). Studies of the effect of emotional valence on word recognition times (Citron, Weekes, & Ferstl, 2012) and on ERP components duri