Wise Crowd Content Assessment and Educational Rubrics

Rebecca J. Passonneau1 · Ananya Poddar1 · Gaurav Gite1 · Alisa Krivokapic1 · Qian Yang2 · Dolores Perin3

© International Artificial Intelligence in Education Society 2016

Abstract

Development of reliable rubrics for educational intervention studies that address reading and writing skills is labor-intensive and could benefit from an automated approach. We compare a main ideas rubric used in a successful writing intervention study to a highly reliable wise-crowd content assessment method developed to evaluate machine-generated summaries. The ideas in the educational rubric were extracted from a source text that students were asked to summarize. The wise-crowd content assessment model is derived from summaries written by an independent group of proficient students who read the same source text and followed the same instructions to write their summaries. The resulting content model includes a ranking over the derived content units. All main ideas in the rubric appear prominently in the wise-crowd content model. We present two methods that automate the content assessment. Scores based on the wise-crowd content assessment, both manual and automated, correlate highly with the main ideas rubric. The automated content assessment methods have several advantages over related methods, including high correlations with the corresponding manual scores, a need for only half a dozen models instead of hundreds, and interpretable scores that independently assess content quality and coverage.

Keywords Automated content analysis · Writing intervention · Wise-crowd content assessment · Writing rubrics

Rebecca J. Passonneau
[email protected]

Ananya Poddar
[email protected]

Gaurav Gite
[email protected]

Alisa Krivokapic
[email protected]

Qian Yang
[email protected]

Dolores Perin
[email protected]

1 Columbia University, New York, NY, USA
2 Tsinghua University, Beijing, China
3 Teachers College, Columbia University, New York, NY, USA

Introduction

Automated tools that identify strengths and weaknesses in students' reading and writing skills could make it easier for teachers across disciplines to promote reading skills and to incorporate more writing into their curricula. Of the many aspects of verbal skill that students need to learn, this paper focuses on the assessment of their mastery of content. We present a method to assess the content of students' written summaries that derives a model of the important content for a particular summarization task from a small set of examples. For reasons explained below, we refer to the authors of the example set as a wise crowd. We demonstrate the application of the wise-crowd method on summaries written by community college students who participated in a successful intervention study to improve reading and writing skills. We present a main ideas rubric used in the community college intervention study, followed by a description of wise-crowd content assessment and two automated implementations of the content assessment.
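To make the idea concrete, here is a minimal sketch in Python of how a wise-crowd content model might be derived and applied: each content unit is weighted by the number of wise-crowd summaries that express it, and a student summary receives separate quality and coverage scores. The function names, the string representation of content units, and the exact normalizations are illustrative assumptions for this sketch, not the paper's implementation.

```python
from collections import Counter
from typing import Dict, List, Set


def build_content_model(model_summaries: List[Set[str]]) -> Dict[str, int]:
    """Weight each content unit by the number of wise-crowd summaries
    that express it, yielding the ranking over content units."""
    counts: Counter = Counter()
    for summary in model_summaries:
        counts.update(summary)
    return dict(counts)


def best_weight(model: Dict[str, int], n_units: int) -> int:
    """Highest total weight any summary with n_units content units can earn."""
    return sum(sorted(model.values(), reverse=True)[:n_units])


def score_summary(target: Set[str], model: Dict[str, int],
                  avg_model_size: int) -> Dict[str, float]:
    """Return two interpretable scores for a student summary:
    quality  -- matched weight, normalized by the best score achievable
                with the number of units the student actually expressed;
    coverage -- matched weight, normalized by the best score achievable
                with a summary of average wise-crowd length."""
    raw = sum(model.get(unit, 0) for unit in target)
    return {
        "quality": raw / best_weight(model, len(target)) if target else 0.0,
        "coverage": raw / best_weight(model, avg_model_size),
    }


# Toy example: five wise-crowd summaries over hand-labeled content units.
wise_crowd = [
    {"main_idea", "cause", "effect"},
    {"main_idea", "cause"},
    {"main_idea", "effect", "detail"},
    {"main_idea", "cause", "effect"},
    {"main_idea", "detail"},
]
model = build_content_model(wise_crowd)
avg_size = round(sum(len(s) for s in wise_crowd) / len(wise_crowd))
# A short but well-targeted summary: perfect quality, partial coverage.
print(score_summary({"main_idea", "cause"}, model, avg_size))
```

On the toy data, the student summary matches the two top-weighted units, so its quality is 1.0, while its coverage falls below 1.0 because an average-length wise-crowd summary would express a third unit.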
