Please do not answer if you are reading this: respondent attention in online panels



Leonard J. Paas 1 & Meike Morren 2

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Abstract This paper reports on the relevance of attention checks for online panels, e.g., M-Turk, SurveyMonkey, SmartSurvey, and QualTrics. In two SmartSurvey studies, approximately one third of the respondents failed a check that instructed them to skip the question. Attention-enhancing tools reduce this to approximately one fifth. The failure rate is not affected by replacing multiple-item scales with single-item measures. We find that failing the attention check relates to other attention indicators and that decreased attention levels often apply across the length of the survey. In support of relevance, our empirical findings show that respondent inattentiveness systematically biases survey responses.

Keywords Respondent attention · Response bias · Instructional manipulation checks · Online panels · Single-item measurement

1 Introduction

Online panels such as M-Turk, SurveyMonkey, SmartSurvey, and QualTrics are often used for collecting survey data and for conducting experiments. Advantages include low prices and speedy data collection. However, some respondents are simultaneously subscribed to multiple panels (Comley 2005). Such "professional respondents" may dedicate the minimal cognitive effort required for providing plausible responses. This behavior could be more common in online surveys due to decreased personal contact (Johnson 2005) and anonymity (Meade and Craig 2012), which can result in low-quality data and faulty conclusions (Kaminska et al. 2010).

* Leonard J. Paas [email protected]

1 School of Communication, Journalism and Marketing, Massey University Albany, Private Bag 102904, North Shore Mail Centre, Auckland 0745, New Zealand

2 Department of Marketing, VU University Amsterdam, Amsterdam, The Netherlands


Instructional manipulation checks (IMCs) are employed to assess respondent attention. In 2015, such checks were commonly applied in the four prominent marketing research and consumer behavior journals that we assessed. Three of the 51 Marketing Letters papers in 2015 reported IMCs. Of these 51 papers, 31 reported lab experiments and/or primary survey data collection. Thus, IMCs were employed in 9.7% (3/31 × 100%) of the papers in which such checks were applicable. This percentage is 8.1% for the 2015 issues of the Journal of Marketing Research, 11.4% for the Journal of Consumer Psychology, and 19.6% for the Journal of Consumer Research.

Respondent inattention can be substantial. Oppenheimer et al. (2009) found that up to 46% of lab experiment participants failed an IMC. Failure rates on IMCs vary across consecutive studies, e.g., 16.3% and 18.0% in the two survey studies reported by Emrich and Verhoef (2015) and 5.5% in Berman et al. (2015). Such differences may relate to survey length, respondent characteristics, or the IMC employed (Meade and Craig 2012; Oppenheimer et al. 2009). Concerning the latter, some res
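To make the failure-rate figures above concrete, the following minimal sketch shows how an IMC failure rate could be computed from raw panel responses. The data frame, column names, and pass criterion are illustrative assumptions, not taken from the paper; the only grounded element is the general logic of an IMC that instructs respondents to skip a question.

```python
import pandas as pd

# Hypothetical export of panel responses; "imc_item" holds the answer to a
# question whose instructions told respondents to leave it blank.
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5],
    "imc_item": [None, "Agree", None, "Disagree", None],
})

# A respondent fails the check by answering the item they were told to skip.
responses["failed_imc"] = responses["imc_item"].notna()

# Share of respondents who failed the IMC.
failure_rate = responses["failed_imc"].mean()
print(f"IMC failure rate: {failure_rate:.1%}")  # 2/5 -> 40.0%
```

A binary flag of this kind can then be cross-tabulated against other attention indicators or used to compare response distributions of attentive and inattentive respondents, in the spirit of the analyses this paper reports.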