Do student ratings provide reliable and valid information about teaching quality at the school level? Evaluating measures of science teaching in PISA 2015

Anindito Aditomo 1,2 & Carmen Köhler 2

Received: 2 October 2019 / Accepted: 13 July 2020
© Springer Nature B.V. 2020
Abstract
Large-scale educational surveys, including PISA, often collect student ratings to assess teaching quality. Because of the sampling design in PISA, student ratings must be aggregated at the school level instead of the classroom level. To what extent does school-level aggregation of student ratings yield reliable and valid measures of teaching quality? We investigate this question for six scales provided by PISA 2015, measuring classroom management, emotional support, inquiry-based instruction, teacher-directed instruction, adaptive instruction, and feedback. The sample consisted of 503,146 students from 17,678 schools in 69 countries/regions. Multilevel CFA and SEM were conducted for each scale in each country/region to evaluate school-level reliability (intraclass correlations 1 and 2), factorial validity, and predictive validity. In most countries/regions, school-level reliability was found to be adequate for the classroom management scale, but only low to moderate for the other scales. Examination of factorial and predictive validity indicated that the classroom management, emotional support, adaptive instruction, and teacher-directed instruction scales capture meaningful differences in teaching quality between schools. Meanwhile, the inquiry scale exhibited poor validity in almost all countries/regions. These findings suggest the possibility of using student ratings in PISA to investigate some aspects of school-level teaching quality in most countries/regions.

Keywords: Teaching effect · Instructional quality · School climate · Multilevel modelling · Confirmatory factor analysis
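The abstract refers to intraclass correlations 1 and 2 as indices of school-level reliability. For orientation only, the following are the standard definitions of these indices from the multilevel literature, not formulas quoted from the article itself; here \(\sigma^2_B\) denotes the between-school variance of the student ratings, \(\sigma^2_W\) the within-school (student-level) variance, and \(k\) the average number of sampled students per school:

\[
\mathrm{ICC}(1) = \frac{\sigma^2_B}{\sigma^2_B + \sigma^2_W}, \qquad
\mathrm{ICC}(2) = \frac{\sigma^2_B}{\sigma^2_B + \sigma^2_W / k}
               = \frac{k \cdot \mathrm{ICC}(1)}{1 + (k - 1)\,\mathrm{ICC}(1)}
\]

Read this way, ICC(1) captures how much of the rating variance lies between schools, while ICC(2) captures how reliably the school-mean rating separates schools given the number of students averaged per school.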
* Anindito Aditomo
  [email protected]; [email protected]
1 Faculty of Psychology, University of Surabaya, Surabaya, Indonesia
2 DIPF | Leibniz Institute for Research and Information in Education, Rostocker Str. 6, 60323 Frankfurt am Main, Germany
1 Introduction

Large-scale assessments of learning have shaped public discourse and influenced educational policy in many countries (Breakspear 2012; Grek 2009). While attention has mostly focused on students’ achievement, large-scale assessment studies also provide rich information regarding the school context and educational processes such as teaching practices. Given its importance for student learning, teaching quality needs to be evaluated using measures that are reliable and valid (Klieme 2013; Marsh and Roche 1997; Müller et al. 2016; Wallace et al. 2016). This study evaluates indicators of science teaching quality provided by the 2015 cycle of the Programme for International Student Assessment (PISA, OECD 2016). We focus on PISA for several reasons. First, while many studies have used PISA’s teaching scales to address substantive questions (A