Evaluating Rank-Coherence of Crowd Rating in Customer Satisfaction

Venera Tomaselli¹ · Giulio Giacomo Cantone²

Accepted: 27 November 2020 © The Author(s) 2020

¹ Department of Political and Social Sciences, University of Catania, 8, Vittorio Emanuele II, 95131 Catania, Italy
² Department of Physics and Astronomy, University of Catania, 64, S. Sofia, 95123 Catania, Italy

Abstract

Crowd rating is a continuous, public process of data gathering that elicits general quantitative opinions on a topic from anonymous online networks, treated as crowds. Online platforms have leveraged these technologies to improve predictive tasks in marketing. We argue, however, for a different use of crowd rating: as a tool of public utility to support social contexts that suffer from adverse selection, such as tourism. This aim requires dealing with issues in both the method of measurement and the analysis of data, as well as with common biases associated with the public disclosure of rating information. We propose an evaluative method to investigate the fairness of common measures of rating procedures, with the particular perspective of assessing the linearity of the ranked outcomes. The method is tested on a longitudinal observational case of 7 years of customer satisfaction ratings, for a total of 26,888 reviews. According to the results obtained from the sampled dataset, analysed with the proposed evaluative method, there is a trade-off between the loss of (potentially) biased information on ratings and the fairness of the resulting rankings. However, when an ad hoc unbiased ranking is computed, the ranking obtained through the time-weighted measure does not differ significantly from the ad hoc unbiased case.

Keywords Crowd rating · Ranking · Rank-coherence · Customer satisfaction · Tourism
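The abstract refers to a time-weighted rating measure and to comparing the ranking it induces with a reference ranking. Since this section does not specify those computations, the following Python sketch is purely illustrative: it assumes an exponential decay of review weight governed by a hypothetical half_life parameter, and it uses Kendall's tau as one possible rank-coherence statistic; the measure and the comparison actually used by the authors may differ.

```python
import numpy as np
from scipy.stats import kendalltau

def time_weighted_mean(scores, ages_in_days, half_life=365.0):
    """Average review scores, discounting older reviews.

    The exponential decay and the `half_life` parameter are illustrative
    assumptions, not the paper's definition of the time-weighted measure.
    """
    scores = np.asarray(scores, dtype=float)
    ages = np.asarray(ages_in_days, dtype=float)
    weights = 0.5 ** (ages / half_life)  # a review loses half its weight per half-life
    return np.average(scores, weights=weights)

# Hypothetical items (e.g. hotels), each with review scores and review ages in days.
items = {
    "A": ([5, 5, 4, 2], [10, 40, 300, 900]),
    "B": ([4, 4, 4, 4], [20, 60, 200, 400]),
    "C": ([5, 3, 3, 2], [700, 800, 900, 1000]),
    "D": ([3, 5, 5, 4], [15, 25, 35, 45]),
}

plain = {k: float(np.mean(s)) for k, (s, _) in items.items()}
weighted = {k: time_weighted_mean(s, a) for k, (s, a) in items.items()}

# Rank-coherence check: how similar are the rankings induced by the two measures?
names = sorted(items)
tau, p_value = kendalltau([plain[k] for k in names], [weighted[k] for k in names])
print(f"Kendall's tau between plain and time-weighted rankings: {tau:.2f} (p = {p_value:.2f})")
```

Kendall's tau is chosen here only as a familiar rank-correlation statistic: values close to 1 indicate that down-weighting older reviews barely alters the ordering of items, which is the kind of (in)coherence between rankings the abstract alludes to.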

1 Introduction: Rating from a Crowd

Crowdsourcing is a generic term covering a variety of practices in technological design and management. According to Estellés-Arolas and González-Ladrón-de-Guevara (2012), different definitions of crowdsourcing have co-existed: some authors presented certain specific cases as paradigmatic, but no consensus was reached.

or a common goal, (3) these individuals are connected through a web technology (‘platform’) and generally they can mutually monitor each other (at least partially). We propose to take into account the paradigm of Geiger et al. (2012). Authors proposed four “archetypes of crowdsourcing information systems”: crowd rating, crowd creation, crowd processing and crowd solving (ivi, pp. 4–6). In crowd rating the task is to bring “votes on given topics, […] such as a spectrum of opinions or collective assessments and predictions that reflect the ‘wisdom of crowds’.” (ivi, p. 5). Therefore, the crowd estimates a numerical value. Crowd rating’s tasks are twice us