Towards a More Meaningful Evaluation of University Lecturers
Thilo Hagen, Department of Biochemistry, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
Received: 12 October 2020 / Accepted: 22 October 2020
© New Zealand Association for Research in Education 2020
Abstract
Evaluating the teaching performance of lecturers in higher education is important for both universities and the faculty themselves. Information about teaching performance is essential to bring about change in student learning and assessment, to incentivize lecturers, to appraise them and to make important administrative decisions. The most common approach to evaluating lecturers is student evaluation of teaching (SET). However, SET is widely considered to be only a poor reflection of a lecturer's teaching performance. Here I propose a number of measures to improve SET. I recommend changing the current cardinal grading of lecturers to an ordinal system, in which students rank their best lecturers based on specific criteria. These criteria should be concrete, aligned with the desired attributes of a good lecturer and process-oriented rather than achievement-oriented. To increase student motivation to provide accurate feedback, SET should be directly linked to teaching awards and publicized transparently. Finally, to obtain meaningful formative feedback, lecturers should administer their own feedback surveys, tailored to the specific pedagogical approaches and learning outcomes of their modules. It is hoped that with these measures a more meaningful student evaluation of teaching can be achieved.

Keywords Higher education · Student evaluation of teaching · Formative and summative lecturer feedback · Cardinal and ordinal feedback
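To make the proposed shift from cardinal scores to ordinal rankings concrete, the following is a minimal sketch of how the two forms of feedback aggregate differently. It is not taken from the article: the toy ratings, the lecturer labels and the Borda-count rule used to combine the rankings are illustrative assumptions, not a prescription from the author.

```python
# Minimal sketch (illustrative only): contrasting cardinal SET averages
# with an ordinal, rank-based aggregation. All data and the Borda-count
# rule below are assumptions for illustration, not taken from the article.

from statistics import mean

# Hypothetical cardinal SET scores (percent) given by five students
# to three lecturers; note how scores cluster around a default of ~80%.
cardinal_scores = {
    "Lecturer A": [80, 82, 79, 81, 80],
    "Lecturer B": [80, 80, 78, 81, 79],
    "Lecturer C": [83, 82, 84, 82, 83],
}

# Hypothetical ordinal feedback: each student ranks the same three
# lecturers from best (first) to weakest (last).
student_rankings = [
    ["Lecturer C", "Lecturer A", "Lecturer B"],
    ["Lecturer C", "Lecturer A", "Lecturer B"],
    ["Lecturer A", "Lecturer C", "Lecturer B"],
    ["Lecturer C", "Lecturer B", "Lecturer A"],
    ["Lecturer C", "Lecturer A", "Lecturer B"],
]

# Cardinal aggregation: mean score per lecturer. The averages end up
# within a few percentage points of each other (80.4, 79.6, 82.8).
for lecturer, scores in cardinal_scores.items():
    print(f"{lecturer}: mean score {mean(scores):.1f}%")

# Ordinal aggregation (Borda count): among n lecturers, a first-place
# ranking is worth n-1 points, second place n-2 points, and so on.
borda = {name: 0 for name in cardinal_scores}
for ranking in student_rankings:
    n = len(ranking)
    for position, lecturer in enumerate(ranking):
        borda[lecturer] += (n - 1) - position

# The rank-based totals (9, 5 and 1 points) separate the lecturers
# far more clearly than the clustered cardinal means.
for lecturer, points in sorted(borda.items(), key=lambda kv: -kv[1]):
    print(f"{lecturer}: {points} Borda points")
```

Under cardinal averaging the three hypothetical lecturers are nearly indistinguishable, while the same students' rankings spread them across a 9-to-1 point range; this is the kind of separation the ordinal proposal is intended to achieve, whatever aggregation rule is ultimately chosen.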
Introduction
Student evaluation of teaching (SET) is the most common type of feedback, whereby students award scores in response to specific questions and often also provide verbal comments. Various problems related to biasing and confounding factors in SET have been highlighted, as summarized by Spooren et al. (2013) in their SET meta-analysis.
Based on my own experience, one main problem with SET is that many students award scores within a very narrow range. Students tend to give a default score of around 80% and only deviate from this in extreme cases, when a lecturer is clearly above or below average. As a result, lecturer scores are often nearly indistinguishable and do not adequately reflect differences in teaching quality. There are several possible reasons for this scoring behavior, including difficulty in giving absolute scores (not knowing the benchmarks or expectations), scoring apathy because there are too many lecturers to evaluate and questions to answer, not wanting to ‘hurt’ the lecturer, or completing the evaluation only because it is linked to some incentive for the students. However, the main reason for the narrow range of SET scores is likely