Assessing Text-Based Writing of Low-Skilled College Students
Dolores Perin 1 · Mark Lauterbach 2

© International Artificial Intelligence in Education Society 2016
Abstract

The problem of poor writing skills at the postsecondary level is a large and troubling one. This study investigated the writing skills of low-skilled adults attending college developmental education courses by determining whether variables from an automated scoring system were predictive of human scores on writing quality rubrics. The human-scored measures were a holistic quality rating for a persuasive essay and an analytic quality score for a written summary. Both writing samples were based on texts on psychology and sociology topics related to content taught at the introductory undergraduate level. The study is a modified replication of McNamara et al. (Written Communication, 27(1), 57–86, 2010), who identified several Coh-Metrix variables from five linguistic classes that reliably predicted group membership (high versus low proficiency) using human quality scores on persuasive essays written by average-achieving college students. When discriminant analyses and ANOVAs failed to replicate the McNamara et al. (2010) findings, the current study proceeded to analyze all of the variables in the five Coh-Metrix classes. This larger analysis identified 10 variables that predicted human-scored writing proficiency. Essay and summary scores were predicted by different automated variables. Implications for instruction and for the future use of automated scoring to understand the writing of low-skilled adults are discussed.

Keywords: Writing skills · Automated scoring · Adult students · Persuasive essay · Written summary
* Dolores Perin [email protected]
Mark Lauterbach [email protected]

1 Teachers College, Columbia University, 525 W. 120th Street, Box 70, New York, NY 10027, USA
2 Brooklyn College, City University of New York, 2401 James Hall, 2900 Bedford Avenue, Brooklyn, NY 11210, USA
Int J Artif Intell Educ
Introduction

In recent years there has been a proliferation of automated scoring systems to analyze written text, including student-generated responses (Magliano and Graesser 2012; Shermis et al. 2016). In particular, automated scoring has been used to assess the writing ability of students who vary in age and language proficiency (Crossley et al. 2016; Weigle 2013; Wilson et al. 2016). Because of its precision and consistency in analyzing linguistic and structural aspects of writing (Deane and Quinlan 2010), automated scoring has the potential to contribute to an understanding of the instructional needs of poor writers. One segment of this population is adults who have completed secondary education with low skills but have nevertheless been admitted to postsecondary institutions and aspire to earn a college degree (MacArthur et al. 2016). This is a large population with low literacy skills who often attend college developmental (also known as remedial) courses (Bailey et al. 2010). Automated essay scoring appears to per