Comparing alternative estimation methods of a public goods game



Danielle Kent¹

Received: 3 March 2019 / Revised: 12 July 2020 / Accepted: 17 August 2020
© Economic Science Association 2020

Abstract

This paper empirically compares straightforward versus more complex methods for estimating public goods game data. Five estimation methods were compared, holding the dependent and explanatory variables constant. The models were evaluated using a large out-of-sample cross-country public goods game data set. The ordered probit and tobit random-effects models yielded lower p values than the more straightforward models: ordinary least squares, fixed effects and random effects. However, the more complex models also had greater predictive bias. The straightforward models performed better than expected: despite their limitations, they produced unbiased predictions for both the in-sample and out-of-sample data.

Keywords: Public goods games · Economic experiments · Fixed effects · Tobit random effects · Ordered probit

JEL Classification: C13 · H41 · C92 · C81

1 Introduction

The public goods game (PGG) is extensively used by experimental economists as a tool to study social dilemmas and cooperation.¹ However, even though it has been over 30 years since the first laboratory public goods game experiments were published (Isaac et al. 1984; Kim and Walker 1984; Isaac et al. 1985), the empirical analysis of the game's choice data has still not moved beyond descriptive statistics in most papers. The likely reason for this is that the distribution of the choice data for

¹ For example, a review paper by Chaudhuri (2011) cites 146 public goods experiment publications.

* Danielle Kent, [email protected]

¹ Department of Economics, Macquarie University, Sydney, NSW 2109, Australia


this game is highly non-standard, complicated by its discrete, censored and panel nature. There have been a few exceptions, though. Carpenter (2004), for example, used Tobit random-effects estimation, and Ashley et al. (2010) used (inconsistent) Tobit fixed-effects estimation, to account for data censoring when modelling contribution choices in a 10-period public goods game. Bardsley and Moffatt (2007) made a clear attempt at advancing the analytical toolbox for public goods experiments by proposing that public goods data be modelled using a finite mixture model, with Tobit components to address censoring and a tremble term to model decision error, to incorporate heterogeneity of types within a population. Despite the sophistication of the model and the compelling rationale behind it, the approach was never taken up in the public goods experimental literature, probably due to its complexity. Random-effects estimation has been used by Tan and Bolle (2007) and Nikiforakis (2010). Breitmoser (2013) finds that the experimenter's choice of structural model can also affect estimation performance, as measured by the Bayes information criterion (BIC). In this study, the structural model estimated is held constant in o
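To make the contrast between the simpler estimators concrete, the sketch below fits pooled OLS and a within (fixed-effects) estimator to simulated contribution data. The data-generating process is entirely hypothetical and not taken from the paper: it assumes subject-specific effects, a linear decline in contributions over periods, and censoring at zero and at the endowment, which mimics the discrete, censored, panel character of PGG data described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated panel: n subjects play T periods with a 20-token endowment.
n, T, endow = 50, 10, 20
subj = np.repeat(np.arange(n), T)          # subject index for each observation
period = np.tile(np.arange(1, T + 1), n)   # period index for each observation

alpha = rng.normal(0, 3, n)                # subject-specific effects
latent = 12 + alpha[subj] - 0.8 * period + rng.normal(0, 4, n * T)
y = np.clip(latent, 0, endow)              # contributions censored at 0 and endowment

# Pooled OLS of contribution on period (ignores panel structure and censoring).
X = np.column_stack([np.ones(n * T), period])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Within (fixed-effects) estimator: demean outcome and regressor by subject,
# which sweeps out the subject-specific effects alpha.
def demean(v):
    means = np.bincount(subj, weights=v) / T
    return v - means[subj]

yd = demean(y)
pd_ = demean(period.astype(float))
beta_fe = (pd_ @ yd) / (pd_ @ pd_)

print("pooled OLS slope:", beta_ols[1])
print("fixed-effects slope:", beta_fe)
```

Neither estimator accounts for the censoring at the boundaries, which is precisely the feature the Tobit-based approaches cited above are designed to address; the sketch only illustrates how the within transformation removes subject heterogeneity that pooled OLS leaves in the error term.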