Ranking journals in business and management: a statistical analysis of the Harzing data set



OPINION PIECE

John Mingers¹ and Anne-Wil Harzing²
¹ Kent Business School, University of Kent, Kent, U.K.
² Department of Management, University of Melbourne, Victoria, Australia

Correspondence: John Mingers, Kent Business School, University of Kent, Canterbury, Kent CT9 7PE, U.K. Tel: +44 1227 824008; Fax: +44 1227 761187; E-mail: [email protected]

Abstract
Creating rankings of academic journals is an important but contentious issue. It is of special interest in the U.K. at this time (2007), as we are only one year away from the results of the next Research Assessment Exercise (RAE), the importance of which, for U.K. universities, can hardly be overstated. The purpose of this paper is to present a journal ranking for business and management based on a statistical analysis of the Harzing data set, which contains 13 rankings. The aim of the analysis is twofold: to investigate relationships between the different rankings, including that between peer rankings and citation behaviour; and to develop a ranking based on four groups that could be useful for the RAE. Looking at the different rankings, the main conclusion is that there is in general a high degree of conformity between them, as shown by a principal components analysis. Cluster analysis is used to create four groups of journals relevant to the RAE. The higher groups correspond well with previous studies of top management journals and, unlike them, give equal coverage to all the management disciplines. The RAE Business and Management panel has the huge and unenviable task of judging the quality of over 10,000 publications and will inevitably have to resort to some standard mechanistic procedures to do so. This work will hopefully contribute by producing a ranking based on a statistical analysis of a variety of measures.
European Journal of Information Systems (2007) 16, 303–316. doi:10.1057/palgrave.ejis.3000696
Keywords: citation indices; cluster analysis; journal rankings; research assessment exercise (RAE)
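The two analytical steps the abstract describes can be sketched in miniature: principal components analysis to check how far the different rankings agree (a single dominant component indicates high conformity), followed by clustering into four quality groups. This is an illustrative sketch only, not the authors' code or data: the synthetic ranking matrix, the plain-NumPy PCA, and the simple one-dimensional k-means are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the Harzing data set: rows = journals,
# columns = rankings (the real data set has 13 rankings). Rankings
# driven by a shared latent "quality" agree strongly, so the first
# principal component should dominate.
n_journals, n_rankings = 30, 13
quality = rng.normal(size=n_journals)                         # latent journal quality
scores = quality[:, None] + 0.3 * rng.normal(size=(n_journals, n_rankings))

# PCA via the eigenvalues of the correlation matrix of the rankings.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]                      # descending order
explained = eigvals[0] / eigvals.sum()
print(f"first component explains {explained:.0%} of variance")

# Simple 1-D k-means (k = 4) on each journal's mean score, mimicking
# the four RAE-oriented groups formed by cluster analysis.
def kmeans_1d(x, k=4, iters=50):
    centres = np.sort(rng.choice(x, k, replace=False))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = x[labels == j].mean()
    return labels

groups = kmeans_1d(scores.mean(axis=1))
print("journals per group:", np.bincount(groups, minlength=4))
```

With correlated synthetic rankings the first component captures most of the variance, which is the sense in which the paper reports "a high degree of conformity" between the 13 rankings.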

Introduction

Received: 2 July 2007
Accepted: 1 August 2007

Creating rankings of academic journals is an important but contentious issue. There are many arguments against doing it: how should it be done – using peer review, some form of metrics (for example, citations), or a combination of the two? How can one compare generalist journals, for example Management Science, with specialist ones, for example Mathematical Programming? How can one compare journals across disciplinary areas, for example HRM with Marketing? And, in any case, what is the point, since academics generally know the pecking order in their particular subject? While these points have validity, the reality throughout academia is an increasing insistence on the use of formal and explicit measures in decisions concerning funding, appointments, tenure, promotions and, above all, assessments of the quality of research departments. This is especially the ca