UTILIZING QUALITATIVE METHODS FOR ASSESSMENT

INTRODUCTION

In a state-of-the-art paper published in Language Testing, Bachman (2000) argues that the field of language testing has shown ample evidence of maturity over the last quarter century—in practical advances such as computer-based assessment, in our understanding of the many factors involved in performance testing, and in a continuing concern over ethical issues in language assessment. However, an equally important methodological development over just the last fifteen years has been the introduction of qualitative research methodologies to design, describe, and validate language tests. That is, many language testers have come to recognize the limitations of traditional statistical methods for language assessment research, and have come to value these innovative methodologies as a means by which both the assessment process and the product may be understood. In what follows, I discuss a number of notable studies that use qualitative methods for assessment, with particular focus on oral language testing; consider some of the problems and difficulties that face the qualitative researcher in language assessment; and conclude with thoughts on how the adoption of these methods reflects the central concern of language assessment—test validity.

EARLY DEVELOPMENTS

An examination of the body of research on language testing suggests that it can be grouped, methodologically, into two main periods: pre-1990 and post-1990. The earlier period was defined by research that was almost entirely quantitative and outcome-based and, with respect to speaking assessment, based especially on the Foreign Service Institute Oral Proficiency Interview (the OPI). Construct validation studies, comparisons of face-to-face versus tape-mediated assessments, and analyses of rater behavior were undertaken not only on the OPI, but also on the International English Language Testing System (IELTS), the Occupational English Test for Health Professionals, and the Australian Assessment of English Communication Skills (access; see Lazaraton, 2002, for a review of this literature). Generally speaking, much of this research (particularly on the OPI) examined issues of test reliability, that is, consistency in performance elicitation and ratings. However, because reliability is a necessary but not sufficient condition for establishing test validity, this early research was criticized for considering test validity insufficiently, if at all. Leo van Lier, in his seminal 1989 paper on the assumed but untested relationship between oral interviews and natural conversation, took this early research to task and redirected the attention of a number of language-testing researchers by stimulating an interest in analyzing empirically the nature of discourse and interaction that takes place in oral interviews.