What Determines Student Evaluation Scores? A Random Effects Analysis of Undergraduate Economics Classes



Michael A. McPherson, R. Todd Jewell and Myungsup Kim
Department of Economics, University of North Texas, PO Box 311457, Denton, TX 76203-1457, USA

Student evaluation scores are a standard component of the way colleges and universities assess the quality of an instructor's teaching for purposes of promotion and tenure, as well as merit raise allocations. This paper applies a feasible generalized least squares model to a panel of data from undergraduate economics classes. We find that instructors can "buy" better evaluation scores by inflating students' grade expectations. Class size and instructor experience are important determinants of evaluation scores in principles classes, but not in upper-level courses. Male instructors get better scores than females, and younger instructors are more popular than older ones. Certain other factors are also important determinants of evaluation scores. Our results suggest that an adjustment to the usual departmental rankings may be useful.

Eastern Economic Journal (2009) 35, 37–51. doi:10.1057/palgrave.eej.9050042
Keywords: student evaluation; undergraduate economics
JEL: A0; A2

INTRODUCTION

Student evaluation of teaching (SET) at the college and university level, and its determinants, have been an active area of research for more than a half-century.1 The large and growing literature in this area points to the importance of the role that SET scores have come to play in academic departments. For example, colleges and universities routinely use SET scores to assess the quality of an instructor's teaching for purposes of promotion and tenure. Furthermore, SET scores are often an important component in deliberations for merit or excellence raise allocations. While some strands of the literature debate whether SETs should be of such central importance, the fact remains that these scores have been and continue to be used extensively. Understanding the determinants of SET scores may therefore be of considerable interest and utility to instructors and administrators alike.

Despite the breadth of the literature, much of the research has been unconvincing due to either data difficulties or statistical shortcomings. This paper takes advantage of an unusually large panel of data from 24 consecutive semesters of economics courses taught at a large public university. While McPherson [2006] analyzes a smaller portion of these data, his use of a fixed effects methodology precludes an examination of instructor characteristics that are time-invariant. Instead, we use a random effects model estimated with feasible generalized least squares (FGLS). This enables an examination of instructor-specific, time-invariant characteristics such as gender and race. In addition, our method permits a proper accounting of unobservable effects specific to individual instructors. In the earlier literature there are only a small number of examples of efforts to tackle this
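To make the estimation strategy concrete, the random effects FGLS estimator can be sketched via the standard Swamy–Arora quasi-demeaning procedure. The sketch below uses simulated data, not the paper's dataset: the panel dimensions, variable names, and true coefficients are all illustrative assumptions, and the paper's actual specification includes many more regressors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical balanced panel: N instructors observed over T semesters.
# (The paper's real panel spans 24 semesters and is unbalanced.)
N, T = 200, 8
beta_true = np.array([2.0, 0.5, -1.0])      # intercept and two slopes

ids = np.repeat(np.arange(N), T)            # instructor id per observation
X = np.column_stack([np.ones(N * T),
                     rng.normal(size=N * T),
                     rng.normal(size=N * T)])
u = rng.normal(size=N)[ids]                 # instructor-specific random effect
e = rng.normal(size=N * T)                  # idiosyncratic error
y = X @ beta_true + u + e

def group_means(v, ids, N):
    """Average v within each instructor, broadcast back to observations."""
    return (np.bincount(ids, weights=v, minlength=N) /
            np.bincount(ids, minlength=N))[ids]

# 1) Within (fixed effects) regression -> idiosyncratic variance estimate.
Xw = np.column_stack([X[:, j] - group_means(X[:, j], ids, N)
                      for j in range(1, X.shape[1])])
yw = y - group_means(y, ids, N)
bw, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
sig_e2 = np.sum((yw - Xw @ bw) ** 2) / (N * (T - 1) - Xw.shape[1])

# 2) Between regression on instructor means -> random-effect variance.
Xb = np.column_stack([np.bincount(ids, weights=X[:, j], minlength=N) / T
                      for j in range(X.shape[1])])
yb = np.bincount(ids, weights=y, minlength=N) / T
bb, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
sig_b2 = np.sum((yb - Xb @ bb) ** 2) / (N - Xb.shape[1])
sig_u2 = max(sig_b2 - sig_e2 / T, 0.0)

# 3) FGLS: quasi-demean by theta, then run OLS on the transformed data.
theta = 1.0 - np.sqrt(sig_e2 / (sig_e2 + T * sig_u2))
Xs = np.column_stack([X[:, j] - theta * group_means(X[:, j], ids, N)
                      for j in range(X.shape[1])])
ys = y - theta * group_means(y, ids, N)
beta_re, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(np.round(beta_re, 2))
```

When the estimated random-effect variance is zero, theta collapses to 0 and the estimator reduces to pooled OLS; as T grows with a nonzero variance, theta approaches 1 and the estimator approaches the fixed effects (within) estimator, which discards the time-invariant instructor characteristics the paper is interested in.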
