

A systematic review of applied single-case research published between 2016 and 2018: Study designs, randomization, data aspects, and data analysis

René Tanious 1 & Patrick Onghena 1

Accepted: 9 October 2020
© The Psychonomic Society, Inc. 2020

Electronic supplementary material The online version of this article (https://doi.org/10.3758/s13428-020-01502-4) contains supplementary material, which is available to authorized users.

* René Tanious [email protected]
  Patrick Onghena [email protected]

1 Faculty of Psychology and Educational Sciences, Methodology of Educational Sciences Research Group, KU Leuven, Tiensestraat 102, Box 3762, B-3000 Leuven, Belgium

Abstract

Single-case experimental designs (SCEDs) have become a popular research methodology in educational science, psychology, and beyond. The growing popularity has been accompanied by the development of specific guidelines for the conduct and analysis of SCEDs. In this paper, we examine recent practices in the conduct and analysis of SCEDs by systematically reviewing applied SCEDs published over a period of three years (2016–2018). Specifically, we were interested in which designs are most frequently used and how common randomization in the study design is, which data aspects applied single-case researchers analyze, and which analytical methods are used. The systematic review of 423 studies suggests that the multiple baseline design continues to be the most widely used design and that the difference in central tendency level is by far most popular in SCED effect evaluation. Visual analysis paired with descriptive statistics is the most frequently used method of data analysis. However, inferential statistical methods and the inclusion of randomization in the study design are not uncommon. We discuss these results in light of the findings of earlier systematic reviews and suggest future directions for the development of SCED methodology.

Keywords: Single-case experimental designs · Visual analysis · Statistical analysis · Data aspects · Randomization · Systematic review
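To make the most commonly reported analysis concrete, the following minimal sketch (our illustration with hypothetical data, not taken from the reviewed studies) computes the difference in central tendency level between the baseline and intervention phases of a fictitious AB dataset, the data aspect the review identifies as the most frequently analyzed.

```python
# Minimal, hypothetical illustration (not from the reviewed studies): the difference
# in central tendency level between the baseline (A) and intervention (B) phases
# of a fictitious AB single-case dataset.
import statistics

baseline = [7, 8, 6, 9, 7, 8]          # A phase: repeated measurements of one participant
intervention = [5, 4, 4, 3, 2, 3, 2]   # B phase: measurements under the manipulation

mean_a, mean_b = statistics.mean(baseline), statistics.mean(intervention)
median_a, median_b = statistics.median(baseline), statistics.median(intervention)

print(f"Mean level:   A = {mean_a:.2f}, B = {mean_b:.2f}, difference = {mean_a - mean_b:.2f}")
print(f"Median level: A = {median_a:.2f}, B = {median_b:.2f}, difference = {median_a - median_b:.2f}")
```

In practice, such descriptive summaries typically accompany visual analysis of the plotted time series rather than replace it.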

Introduction

In single-case experimental designs (SCEDs), a single entity (e.g., a classroom) is measured repeatedly over time under different manipulations of at least one independent variable (Barlow et al., 2009; Kazdin, 2011; Ledford & Gast, 2018). Experimental control in SCEDs is demonstrated by observing changes in the dependent variable(s) over time under the different manipulations of the independent variable(s). Over the past few decades, the popularity of SCEDs has risen continuously, as reflected in the number of published SCED studies (Shadish & Sullivan, 2011; Smith, 2012; Tanious et al., 2020), the development of domain-specific reporting guidelines (e.g., Tate et al., 2016a, 2016b; Vohra et al., 2016), and guidelines on the quality of conduct and analysis of SCEDs (Horner et al., 2005; Kratochwill et al., 2010, 2013).
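Because the review also finds that randomization in the study design and inferential statistical methods are not uncommon, the following hedged sketch (again hypothetical data, not the authors' code) shows one standard way the two connect: a randomization test for an AB design in which the intervention start point is assumed to have been randomly drawn from a set of admissible start points fixed before the study.

```python
# Hedged, hypothetical sketch: a randomization test for an AB single-case design
# whose intervention start point was randomly selected from a set of admissible
# start points fixed in advance (all names and data here are assumptions).
from statistics import mean

scores = [7, 8, 6, 9, 7, 8, 5, 4, 4, 3, 2, 3, 2]  # hypothetical session scores
actual_start = 6                 # session index at which the intervention actually started
possible_starts = range(4, 10)   # admissible start points fixed before the study (assumption)

def mean_difference(data, start):
    """Difference in mean level between baseline (before start) and intervention (from start)."""
    return mean(data[:start]) - mean(data[start:])

observed = mean_difference(scores, actual_start)

# Reference distribution: the test statistic under every admissible start point.
reference = [mean_difference(scores, s) for s in possible_starts]

# One-sided p-value: proportion of admissible randomizations with a statistic
# at least as extreme as the observed one.
p_value = sum(stat >= observed for stat in reference) / len(reference)

print(f"Observed mean difference: {observed:.2f}")
print(f"Randomization test p-value: {p_value:.3f}")
```

The p-value is the proportion of admissible start points whose mean-level difference is at least as extreme as the observed one, so its resolution depends on how many start points were admissible in the design.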

The What Works Clearinghouse guidelines

In educational science in particular, the US Department of Education has released a highly influential polic