ORIGINAL RESEARCH ARTICLE
Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools: Findings from the NCAA-DoD CARE Consortium

Steven P. Broglio1 · Barry P. Katz2 · Shi Zhao2 · Michael McCrea3 · Thomas McAllister4 · CARE Consortium Investigators

© Springer International Publishing AG, part of Springer Nature 2017
Abstract

Background: Concussion diagnosis is typically made through clinical examination and supported by performance on clinical assessment tools. Performance on commonly implemented and emerging assessment tools is known to vary between administrations, even in the absence of concussion.

Objective: To evaluate the test-retest reliability of commonly implemented and emerging concussion assessment tools across a large, nationally representative sample of student-athletes.

Methods: Participants (n = 4874) from the Concussion Assessment, Research, and Education (CARE) Consortium completed annual baseline assessments on two or three occasions. Each assessment included measures of self-reported concussion symptoms, motor control, brief and extended neurocognitive function, reaction time, oculomotor/oculovestibular function, and quality of life. Consistency between years 1 and 2 and between years 1 and 3 was estimated using intraclass correlation coefficients or Kappa, along with effect sizes (Cohen's d). Clinical interpretation guidelines were also generated using confidence intervals to account for non-normally distributed data.

Results: Reliability for the self-reported concussion symptom, motor control, and brief and extended neurocognitive assessments from year 1 to year 2 ranged from 0.30 to 0.72, while effect sizes ranged from 0.01 to 0.28 (i.e., small). Reliability for these same measures across the year 1–3 interval ranged from 0.34 to 0.66, with effect sizes ranging from 0.05 to 0.42 (i.e., small to less than medium). The year 1–2 reliability for the reaction time, oculomotor/oculovestibular function, and quality-of-life measures ranged from 0.28 to 0.74, with effect sizes from 0.01 to 0.38 (i.e., small to less than medium).

Conclusions: This investigation noted less than optimal reliability for most common and emerging concussion assessment tools. Despite this finding, their use is still necessitated by the

Individual authors are identified in the Acknowledgements. This article is part of the Topical Collection on The NCAA-DoD Concussion Assessment, Research and Education (CARE) Consortium.

Electronic supplementary material: The online version of this article (https://doi.org/10.1007/s40279-017-0813-0) contains supplementary material, which is available to authorized users.

Corresponding author: Steven P. Broglio, [email protected]

1 NeuroTrauma Research Laboratory, University of Michigan Injury Center, University of Michigan, 401 Washtenaw Ave, Ann Arbor, MI 48109, USA
2 Department of Biostatistics, Indiana University, Indianapolis, IN, USA
3 Departments of Neurosurgery and Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
4 Department of Psychiatry, Indiana University School of Medicine, Indianapolis, IN, USA
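The Methods describe test-retest consistency estimated with intraclass correlation coefficients and effect sizes (Cohen's d). The sketch below is a minimal, hypothetical illustration of those two statistics for a year 1 versus year 2 baseline; it is not the CARE Consortium's analysis code, the function names are the author's own, and all scores are simulated for illustration.

```python
# Minimal sketch (hypothetical, not the CARE Consortium's analysis code) of two
# consistency statistics named in the Methods: a two-way random-effects intraclass
# correlation coefficient, ICC(2,1), and Cohen's d for year 1 vs. year 2 baselines.
import numpy as np


def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    `scores` is an (n_subjects, k_sessions) array of baseline test scores.
    """
    n, k = scores.shape
    grand_mean = scores.mean()

    # Two-way ANOVA decomposition: subjects (rows) x sessions (columns)
    ss_rows = k * np.sum((scores.mean(axis=1) - grand_mean) ** 2)
    ss_cols = n * np.sum((scores.mean(axis=0) - grand_mean) ** 2)
    ss_total = np.sum((scores - grand_mean) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)                 # between-subjects mean square
    ms_cols = ss_cols / (k - 1)                 # between-sessions mean square
    ms_error = ss_error / ((n - 1) * (k - 1))   # residual mean square

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )


def cohens_d(year1: np.ndarray, year2: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation of the two sessions."""
    pooled_sd = np.sqrt((year1.var(ddof=1) + year2.var(ddof=1)) / 2)
    return (year2.mean() - year1.mean()) / pooled_sd


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    year1 = rng.normal(25.0, 4.0, size=500)                 # simulated year 1 scores
    year2 = 0.6 * year1 + rng.normal(10.0, 3.0, size=500)   # imperfectly stable retest
    print(f"ICC(2,1) = {icc_2_1(np.column_stack([year1, year2])):.2f}")
    print(f"Cohen's d = {abs(cohens_d(year1, year2)):.2f}")
```

Under common rules of thumb for interpreting ICCs, values in the 0.28-0.74 range reported in the abstract would be characterized as poor to moderate test-retest reliability, consistent with the authors' "less than optimal" conclusion.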