The Sensitivity of a Scenario-Based Assessment of Written Argumentation to School Differences in Curriculum and Instruction



Paul Deane 1 & Joshua Wilson 2 & Mo Zhang 1 & Chen Li 1 & Peter van Rijn 3 & Hongwen Guo 1 & Amanda Roth 1 & Eowyn Winchester 1 & Theresa Richter 1

Accepted: 8 October 2020
© International Artificial Intelligence in Education Society 2020

Abstract

Educators need actionable information about student progress during the school year. This paper explores an approach to this problem in the writing domain that combines three measurement approaches intended for use in interim-assessment fashion: scenario-based assessments (SBAs), which simulate authentic classroom tasks; automated writing evaluation (AWE) features, which track changes in performance; and writing-process traits derived from a keystroke log. Our primary goal is to determine whether SBAs designed to measure English Language Arts skills, supplemented by richer measurement of the writing task, function well as interim assessments that are sensitive to differences in performance related to differences in quality of instruction. We calibrated these measures psychometrically using data from a prior study and then applied them to evaluate changes in performance in one suburban and two urban middle schools that taught argument writing. Of the three schools, only School A (the suburban school, with the strongest overall performance) showed significant score increases on an essay task, accompanied by distinctive patterns of improvement. A general, unconditioned growth pattern was also evident. These results demonstrate an approach that can provide richer, more actionable information about student status and changes in student performance over the course of the school year.

Keywords Scenario-based assessment · SBA · Writing · Assessment · Automated writing evaluation · AWE · Natural language processing · NLP · Automated essay scoring · AES · Writing process · Keystroke log · Argumentation · Interim assessment · Formative assessment

* Paul Deane
  [email protected]

Extended author information available on the last page of the article

International Journal of Artificial Intelligence in Education

Introduction

Effective instruction incorporates assessment. Teachers need to know what their students know, where they stand with respect to current teaching goals, and what they still need to learn. Students use feedback from assessment to consolidate and clarify what they have learned and to set learning goals. Immediate feedback closely integrated with classroom practice is more likely to have beneficial effects on learning (Black and Wiliam 1998). Conversely, when high-stakes assessments are disconnected from classroom practice, there is a risk of negative washback (Au 2007): a tendency for teachers to “teach to the test” rather than focusing on providing the best possible subject-matter instruction. An alternative to traditional summative assessment is provided by scenario-based assessments (SBAs)—standardized assessments designed to simulate classroom tasks, embedded