Combatting carelessness: Can placement of quality check items help reduce careless responses?

Denise L. Reyes 1

Accepted: 11 November 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract
There is extensive literature on identifying careless responses in survey data and acknowledging their negative impact on accurate data analysis (Goodman, Cryder, & Cheema, 2013; Huang et al., 2015a, 2015b; Meade & Craig, 2012). However, there are very limited findings on how researchers can help prevent participants from responding carelessly in the first place. The current study manipulated the placement of a quality check item (i.e., early placement versus late placement) and showed that participants were less likely to respond carelessly when the quality check items were placed later in the survey. Quality check items also identified careless responses better than other approaches (i.e., the LongString index, a completion-time index, and self-reported indicators). Quality check items therefore appear to be most effective when placed toward the end of the survey. Future studies have ample opportunity to build on this research and uncover additional ways to deter careless responding.

Keywords: Careless responding · Survey development · Quality check items
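For readers unfamiliar with the screening indices named above, the following is a minimal sketch of how a LongString index (the longest run of identical consecutive answers) and a completion-time flag are often computed in practice. It is illustrative only and not drawn from the article: the column names, the seconds-per-item cutoff, and the LongString cutoff are assumptions.

```python
import pandas as pd

def longstring(responses):
    """Length of the longest run of identical consecutive answers."""
    responses = list(responses)
    if not responses:
        return 0
    longest = current = 1
    for prev, curr in zip(responses, responses[1:]):
        current = current + 1 if curr == prev else 1
        longest = max(longest, current)
    return longest

def flag_careless(df, item_cols, time_col,
                  seconds_per_item=2.0, longstring_cutoff=10):
    """Add hypothetical careless-responding flags to a survey DataFrame.

    item_cols, time_col, and both cutoffs are illustrative assumptions,
    not parameters reported in the article.
    """
    out = df.copy()
    out["longstring"] = out[item_cols].apply(longstring, axis=1)
    out["flag_longstring"] = out["longstring"] >= longstring_cutoff
    out["flag_too_fast"] = out[time_col] < seconds_per_item * len(item_cols)
    return out
```

Under these illustrative cutoffs, a respondent would be flagged if they selected the same response option ten or more times in a row or finished faster than two seconds per item; both thresholds are arbitrary examples rather than values used in the study.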

Social psychologists commonly use lengthy questionnaires made up of long scales (e.g., the International Personality Item Pool contains 300 items; Goldberg, 1999) or of multiple shorter scales. Self-report data sometimes produce inaccurate results because of insufficient effort or careless responding, which raises concerns among researchers about the quality of their data. Berry and colleagues (1992) found that between 50 and 60% of college students self-reported that they answered randomly on at least one item of the MMPI-2. This pattern persisted among job applicants required to complete the MMPI-2 (Berry et al., 1992). Even more strikingly, Baer, Ballenger, Berry, and Wetter (1997) reported that 73% of respondents completing the MMPI-2 disclosed that they answered randomly on at least one item. There is extensive literature on identifying careless responses in survey data and acknowledging their negative impact on accurate data analysis (Goodman et al., 2013; Huang, Bowling, et al., 2015a; Huang, Liu, & Bowling, 2015b; Meade & Craig, 2012). However, there are very limited findings on how researchers can help prevent participants from responding carelessly in the first place.

* Denise L. Reyes
  [email protected]

1 Department of Psychology, Rice University, 6100 Main Street, Houston, TX 77005-1827, USA

In other words, although there is ample evidence of careless responses occurring in survey data, we know far less about how to avoid this shortcoming when conducting social science research. In the current study, I seek to extend the literature on data quality by testing whether the placement of quality check items can help prevent respondents from answering inaccurately.