Optimal design of experiments to identify latent behavioral types



Stefano Balietti · Brennan Klein · Christoph Riedl

Received: 29 August 2019 / Revised: 23 July 2020 / Accepted: 8 September 2020
© Economic Science Association 2020

Abstract

Bayesian optimal experiments that maximize the information gained from collected data are critical to efficiently identify behavioral models. We extend a seminal method for designing Bayesian optimal experiments by introducing two computational improvements that make the procedure tractable: (1) a search algorithm from artificial intelligence that efficiently explores the space of possible design parameters, and (2) a sampling procedure that evaluates each design parameter combination more efficiently. We apply our procedure to a game of imperfect information to evaluate and quantify the computational improvements. We then collect data across five different experimental designs to compare the ability of the optimal experimental design to discriminate among competing behavioral models against that of the designs chosen by a "wisdom of experts" prediction experiment. We find that the experiment suggested by the optimal design approach requires significantly less data to distinguish behavioral models (i.e., test hypotheses) than the experiment suggested by experts. Substantively, we find that reinforcement learning best explains human decision-making in the imperfect information game and that behavior is not adequately described by the Bayesian Nash equilibrium. Our procedure is general and computationally efficient and can be applied to dynamically optimize online experiments.

Keywords: Optimal experimental design · Behavioral types · Expert prediction · Active learning

JEL Classification: C90 · C80 · C72
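The core idea of the abstract, choosing the experimental design whose data are expected to be most informative about which behavioral model is true, can be illustrated with a minimal sketch. All design names and choice probabilities below are invented for illustration and do not come from the paper; the information criterion is the mutual information between the model indicator and a single binary observation.

```python
import numpy as np

# Two candidate behavioral models, each predicting P(action = 1) under
# three hypothetical designs. Values are illustrative only.
designs = {"A": (0.5, 0.9), "B": (0.5, 0.55), "C": (0.2, 0.8)}
prior = np.array([0.5, 0.5])  # uniform prior over the two models

def expected_info_gain(p1, p2, prior):
    """Mutual information (in nats) between the model indicator and one
    binary observation: the expected reduction in entropy over models."""
    probs = np.array([[1 - p1, p1], [1 - p2, p2]])  # P(y | model)
    marg = prior @ probs                            # P(y), marginal over models
    # I(M; Y) = sum_m sum_y prior[m] * P(y|m) * log(P(y|m) / P(y))
    terms = prior[:, None] * probs * np.log(probs / marg)
    return float(np.sum(terms))

gains = {d: expected_info_gain(p1, p2, prior) for d, (p1, p2) in designs.items()}
best = max(gains, key=gains.get)  # design with maximal expected information gain
```

Design "C", where the two models disagree most sharply, wins here; design "B", where their predictions nearly coincide, yields almost no information per observation, which is why a poorly chosen design needs far more data to discriminate between models.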

Electronic supplementary material: The online version of this article (https://doi.org/10.1007/s10683-020-09680-w) contains supplementary material, which is available to authorized users.

* Christoph Riedl [email protected]

Extended author information available on the last page of the article.





1 Introduction

Experimentation in the social sciences is a fundamental tool for understanding the mechanisms and heuristics that underlie human behavior. At the same time, running experiments is a costly process and requires careful design in order to test hypotheses while maximizing statistical power. This experimental design process is often guided by the intuition of the scientists conducting the research. While there are many benefits to relying on the intuition of experienced researchers, there is often a lack of principled guides when choosing which experiment to run (Fisher 1936; Hill 1995). As a result, experiments may have low power to distinguish between different models of behavior (Salmon 2001) and lead to increased costs for data collection. At worst, they lead to reduced effect sizes and incorrect rejection or acceptance of a null hypothesis (Berman et al. 2018). A growing body of resea