
ORIGINAL RESEARCH

Daily Evaluation Cards Are Superior for Student Assessment Compared to Single Rater In-Training Evaluations

James Johnston 1 & Maury Pinsk 1

1 Department of Pediatrics & Child Health, Max Rady College of Medicine, University of Manitoba, FE009-840 Sherbrook St, Winnipeg, MB R3A 1S1, Canada

* Correspondence: Maury Pinsk, [email protected]

© International Association of Medical Science Educators 2019

Abstract

Introduction: The University of Manitoba's ambulatory pediatric clerkship transitioned from single in-training evaluation reports (ITERs) to daily encounter cards (DECs). The impact of this change on the quality of student assessment was unknown. Using the validated Completed Clinical Evaluation Report Rating (CCERR) scale, we compared the assessment quality of the single-ITER system to the DEC-based system.

Methods: Block randomization was used to select from a cohort of ITER- and DEC-based assessments at equivalent points in clerkship training. Data were transcribed, anonymized, and scored by two blinded raters using the CCERR.

Results: Inter-rater reliability for total CCERR scores was substantial (> 0.6). The mean total CCERR score for the DEC cohort was significantly higher than for the ITER cohort (25.2 vs. 16.8, p < 0.001), as was the mean score per item (2.81 vs. 1.86, p < 0.05). Multivariate logistic regression supported the significant influence of assessment method on assessment quality.

Conclusions: The transition from an ITER-based system to a DEC-based system was associated with an improvement in the average quality of student assessments. However, the fact that the DEC cohort achieved only average CCERR scores suggests an unmet need for faculty development.

Keywords: Daily evaluation cards · Feedback · Medical students · Student assessment · Quality improvement
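To make the reported analysis concrete, the following is a minimal sketch in Python of the kinds of comparisons described in the Results: inter-rater agreement between the two blinded raters, a between-cohort comparison of mean total CCERR scores, and a logistic regression on assessment method. The data, the quadratic-weighted kappa, the Welch's t-test, and the quality threshold are all illustrative assumptions; the excerpt above does not specify the authors' statistical software or exact model.

```python
# Illustrative sketch only: all data here are hypothetical, and the
# kappa variant, test choice, and model specification are assumptions.
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score
import statsmodels.api as sm

rng = np.random.default_rng(0)

# --- Inter-rater reliability (two blinded raters, total CCERR scores) ---
# Hypothetical integer CCERR totals from rater A and rater B.
rater_a = rng.integers(9, 46, size=40)
rater_b = np.clip(rater_a + rng.integers(-3, 4, size=40), 9, 45)
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")  # agreement > 0.6 reads as substantial

# --- Mean total CCERR score: DEC cohort vs. ITER cohort ---
dec = rng.normal(25.2, 5.0, size=40)    # hypothetical DEC-cohort totals
iter_ = rng.normal(16.8, 5.0, size=40)  # hypothetical ITER-cohort totals
t, p = stats.ttest_ind(dec, iter_, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")

# --- Logistic regression: does assessment method predict quality? ---
# Outcome: assessment scored above an assumed scale midpoint; predictor:
# method (1 = DEC, 0 = ITER). A fuller model would add covariates such
# as rater or rotation block.
scores = np.concatenate([dec, iter_])
method = np.concatenate([np.ones(40), np.zeros(40)])
high_quality = (scores > 22.5).astype(int)
model = sm.Logit(high_quality, sm.add_constant(method)).fit(disp=0)
print(model.summary())
```

In a sketch like this, a significant positive coefficient on the method term would correspond to the paper's finding that assessment method influences assessment quality.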

Introduction

In most clerkship and residency programs, student assessment has traditionally been given in the form of a single, summative in-training evaluation report (ITER). ITERs have a dual role: they are used by program and clerkship directors to ensure clinical competence and are a major consideration in determining whether a student will pass or fail a given rotation. They also serve as a structured means of providing feedback to medical students, one of only a few program-mandated pieces of feedback that the learner might receive [1, 2].

However, the comprehensive ITER has several limitations. As a summative assessment, the ITER relies heavily on preceptor recall and is influenced by the amount of contact between the preceptor and the student [2–4]. Furthermore, a lack of faculty training and guidance as to what constitutes good feedback for an ITER has also been frequently identified as a limitation [4, 5]. Finally, as a feedback tool, the ITER is significantly limited by a lack of timeliness and of consistently detailed comments [2, 3, 6], which risks omitting context and specific recommendations regarding areas of weakness or strength, impairing the ability of trainees to demonstrate improvement [7]. The pediatric und