Intelligent Feedback on Hypothesis Testing
Open Access
Sietske Tacoma 1 & Bastiaan Heeren 1,2 & Johan Jeuring 1,2 & Paul Drijvers 1

Accepted: 17 September 2020 / © The Author(s) 2020
Abstract

Hypothesis testing involves a complex stepwise procedure that is challenging for many students in introductory university statistics courses. In this paper we assess how feedback from an Intelligent Tutoring System can address the logic of hypothesis testing, and whether such feedback contributes to first-year social sciences students' proficiency in carrying out hypothesis tests. The feedback design combined elements of the model-tracing and constraint-based modeling paradigms, to address both the individual steps and the relations between steps. To evaluate the feedback, students in an experimental group (N = 163) received the designed intelligent feedback in six hypothesis-testing construction tasks, while students in a control group (N = 151) received only stepwise verification feedback in these tasks. Results showed that students receiving intelligent feedback spent more time on the tasks, solved more tasks and made fewer errors than students receiving only verification feedback. These positive results did not transfer to follow-up tasks, which might be a consequence of the isolated nature of those tasks. We conclude that the designed feedback may support students in learning to solve hypothesis-testing construction tasks independently and that it facilitates the creation of more hypothesis-testing construction tasks.

Keywords: Feedback · Hypothesis testing · Intelligent tutoring systems · Statistics education
* Sietske Tacoma
  [email protected]

1 Utrecht University, Utrecht, The Netherlands
2 Open University of the Netherlands, Heerlen, The Netherlands

International Journal of Artificial Intelligence in Education

Introduction

Hypothesis testing is widely used in scientific research and is therefore covered in most introductory statistics courses in higher education (Carver et al. 2016). The topic is challenging for many students, because it requires the ability to follow a complex line of reasoning involving uncertainty (Falk and Greenbaum 1995; Garfield et al. 2008). Additionally, this line of reasoning involves several complex concepts, such as significance level, test value and p value (Castro Sotos et al. 2007). Students struggle to understand the role and interdependence of these concepts in the hypothesis-testing procedure, or, in other words, the logic of hypothesis testing (Vallecillos 1999). Appropriate feedback could support students in comprehending this logic by focusing the student's attention on currently relevant aspects and thus reducing cognitive load (Shute 2008). To address students' reasoning regarding the logic of hypothesis testing, feedback should address all aspects of a (partial) solution: not only the content of a current step, but also its relations to earlier steps. Since groups in introductory statistics courses are often large, it is difficult for teachers to provide such s
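The interdependence of these concepts can be illustrated with a minimal worked example. The sketch below is not taken from the paper's tutoring system; it is a generic one-sample z-test (with a hypothetical data set and an assumed known population standard deviation) showing how the significance level, the test value and the p value each play a distinct role in the stepwise procedure:

```python
import math

def z_test(sample, mu0, sigma):
    """One-sample two-sided z-test.

    Returns the test value z and the p value, assuming the population
    standard deviation sigma is known (a simplifying assumption).
    """
    n = len(sample)
    mean = sum(sample) / n
    z = (mean - mu0) / (sigma / math.sqrt(n))   # step: compute the test value
    p = math.erfc(abs(z) / math.sqrt(2))        # step: two-sided p value from z
    return z, p

alpha = 0.05                                    # step: choose significance level in advance
scores = [72, 68, 75, 71, 69, 74, 73, 70, 76, 72]  # hypothetical sample
z, p = z_test(scores, mu0=70, sigma=3)          # H0: population mean equals 70
reject = p < alpha                              # step: decision compares p value to alpha
```

Each line corresponds to one step of the procedure, and later steps depend on earlier ones: the decision is meaningless without the p value, which in turn derives from the test value. Feedback that only verifies each line in isolation would miss exactly these relations.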