A Tool for Comparing Mathematics Tasks from Paper-Based and Digital Environments



Alice Lemmo 1

Received: 13 September 2019 / Accepted: 8 August 2020 / © The Author(s) 2020

Abstract

Comparative studies on paper-and-pencil and computer-based tests principally focus on statistical analysis of students' performances. In educational assessment, however, comparing students' performance (in terms of right or wrong results) does not imply a comparison of the problem-solving processes followed by students. In this paper, we present a theoretical tool for task analysis that allows us to highlight how students' problem-solving processes could change in switching from paper to computer format, and how these changes could be affected by the use of one environment rather than another. In particular, the aim of our study is to identify a set of indexes that highlight the possible consequences that specific changes in task formulation have in terms of task comparability. We then propose an example of the use of the tool for comparing paper-based and computer-based tasks.

Keywords: Comparative study · Computer-based assessment · Mathematics education · Task analysis · Task design

Introduction

This article stems from a wider Ph.D. research project focused on comparing students' problem-solving processes when tackling mathematics tasks in paper-based and computer-based environments (Lemmo, 2017). The increasing use of tests administered in digital environments allows research in mathematics education to develop new fields of study. On the one hand, research on computer-based tests is concerned with the validity of these tests; on the other hand, it focuses on their comparability with existing paper tests. Large-scale surveys have been conducted to study these two issues; they involved students from different

* Alice Lemmo
  [email protected]

1 Dipartimento di Scienze Umane, Università degli studi dell'Aquila, Viale Nizza, 14, 67100 L'Aquila, AQ, Italy


educational levels, from primary to secondary school instruction (Drasgow, 2015; Way, Davis, & Fitzpatrick, 2006). One of the first studies conducted on the topic involved the National Assessment of Educational Progress (NAEP). Russell and Haney (1997) compared the effects of administering a test in two environments (paper and pencil vs. computer) on the performance (in terms of scores) of secondary school students. The findings revealed differences related to the type of response: no substantial differences were identified for multiple-choice items, while some differences were found for open-ended items. Furthermore, research has shown that familiarity with the use of a keyboard allows students to obtain higher scores in digital mode than in paper-based tests (Russell, 1999; Russell & Plati, 2001). In general, the various studies carried out on the NAEP assessment system show that the performance of students completing computer-based tests is closely linked to their familiarity with the environment in which the test is administered. Similar research has been condu