When Does Scaffolding Provide Too Much Assistance? A Code-Tracing Tutor Investigation

When Does Scaffolding Provide Too Much Assistance? A Code-Tracing Tutor Investigation

Jay Jennings 1 & Kasia Muldner 1

Accepted: 11 September 2020
© International Artificial Intelligence in Education Society 2020

Abstract

When students are first learning to program, they must learn not only how to write programs but also how to trace them. Code tracing involves stepping through a program step by step, which helps predict the program's output and identify bugs. Students routinely struggle with this activity, as evidenced by prior work and our own experiences in the classroom. To address this, we designed a Code Tracing (CT)-Tutor. We varied the level of assistance provided in the tutor based on (1) the interface scaffolding available during code tracing, and (2) instructional order, operationalized by whether examples were provided before or after the corresponding problem was solved. We collected data by having participants use the tutor to solve code-tracing problems (N = 97) and analyzed both learning outcomes and process data obtained by extracting features of interest from the log files. We used a multi-layered approach for the analysis, including standard inferential statistics and unsupervised learning to cluster students by their behaviors in the tutor. The results show that the optimal level of assistance for code tracing falls in the middle of the assistance spectrum included in the tutor, but also that subgroups of individuals differ in which level of assistance is optimal for them. Based on these results, we outline opportunities for future work on personalizing instruction for code tracing.

Keywords: Code tracing · Tutoring system · Assistance · Worked examples · Programming instruction

International Journal of Artificial Intelligence in Education

* Kasia Muldner
[email protected]

1 Institute of Cognitive Science, Carleton University, Ottawa, Canada

In memory of Jim Greer

Back in the 2000s, when I was in my early years of graduate school, I first met Jim at one of the AIED conferences. He was introduced to me by my close friend Andrea Bunt, a fellow graduate student who knew Jim from her time at the University of Saskatchewan. A conference can be intimidating for those new to the community: everyone seems to know each other, and while we are told to network and meet people, doing so can be challenging. I've always appreciated Jim's kindness and mentorship. Since that first introduction, he took time to talk to me at the various subsequent conferences, even though he knew everybody in the community and had many people to talk to. I got to know his work over the years, and his impact on AIED scholarship cannot be overstated. He and his students have consistently produced groundbreaking work in diverse areas, including Bayesian student modeling, programming education, peer help systems, and the evaluation of tutoring systems, to name a few. One of his areas of expertise was Bayesian student modeling, something I also worked on for my Ph.D. research. Jim was