What is meaningful research and how should we measure it?

Sven Helmer¹ · David B. Blumenthal² · Kathrin Paschen³

Received: 14 November 2019
© The Author(s) 2020

Abstract

We discuss the trend towards using quantitative metrics for evaluating research. We claim that, rather than promoting meaningful research, purely metric-based research evaluation schemes potentially lead to a dystopian academic reality, leaving no space for creativity and intellectual initiative. After sketching what the future could look like if quantitative metrics are allowed to proliferate, we provide a more detailed discussion on why research is so difficult to evaluate and outline approaches for avoiding such a situation. In particular, we characterize meaningful research as an essentially contested concept and argue that quantitative metrics should always be accompanied by operationalized instructions for their proper use and continuously evaluated via feedback loops. Additionally, we analyze a dataset containing information about computer science publications and their citation history and indicate how quantitative metrics could potentially be calibrated via alternative evaluation methods such as test of time awards. Finally, we argue that, instead of over-relying on indicators, research environments should primarily be based on trust and personal responsibility.

Keywords: Research evaluation · Quantitative metrics · Essentially contested concepts

* Sven Helmer
  [email protected]
  David B. Blumenthal
  [email protected]
  Kathrin Paschen
  [email protected]

¹ Department of Informatics, University of Zurich, Zurich, Switzerland
² Chair of Experimental Bioinformatics, Technical University of Munich, Freising, Germany
³ Nephometrics GmbH, Zurich, Switzerland

Scientometrics

Introduction

Quantitative metrics are used for managing education, evaluating health services, and measuring employee performance in corporations; see, e.g., Austin (1996). This trend does not spare academia: there is a perception among researchers that appraisal of their work is focusing more and more on quantitative metrics. Participants in a series of workshops organized by the Royal Society reported that "current measures of recognition and esteem in the academic environment were disproportionately based on quantitative metrics" such as publication and citation count, h-index, i10-index, number of Ph.D. students graduated, and grant income (Royal Society 2017). These numbers are used to rank individuals in job applications or promotion procedures, departments in national research quality assessments (e.g. the Research Excellence Framework (REF) in the United Kingdom, the Valutazione della Qualità della Ricerca (VQR), or Research Quality Assessment, in Italy, and the Excellence in Research for Australia (ERA) in Australia), or even universities in international ranking lists (e.g. the Times Higher Education (THE) or Quacquarelli Symonds (QS) World University Rankings). This is because research funding needs to be managed, of