LIONESS Lab: a free web-based platform for conducting interactive experiments online



Marcus Giamattei 1,2,3 · Kyanoush Seyed Yahosseini 4 · Simon Gächter 2,5,6 · Lucas Molleman 2,4,7

Received: 25 March 2019 / Revised: 28 May 2020 / Accepted: 6 June 2020
© The Author(s) 2020

Abstract

LIONESS Lab is a free web-based platform for interactive online experiments. An intuitive, user-friendly graphical interface enables researchers to develop, test, and share experiments online, with minimal need for programming experience. LIONESS Lab provides solutions for the methodological challenges of interactive online experimentation, including ways to reduce waiting time, form groups on-the-fly, and deal with participant dropout. We highlight key features of the software and show how it meets the challenges of conducting interactive experiments online.

Keywords: Experimental software · Interactive online experiments · Experimental standards

JEL Classification: C90

* Marcus Giamattei [email protected]

1 Chair in Economic Theory, University of Passau, Innstraße 27, 94032 Passau, Germany
2 Center for Decision Research and Experimental Economics, University of Nottingham, University Park, Nottingham NG7 2RD, UK
3 Bard College Berlin, Platanenstr. 24, 13156 Berlin, Germany
4 Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
5 Institute of Labour Economics, Schaumburg-Lippe-Straße 5-9, 53113 Bonn, Germany
6 Center for Economic Studies, Poschingerstraße 5, 81679 Munich, Germany
7 Amsterdam Brain and Cognition, University of Amsterdam, Nieuwe Achtergracht 129b, 1018 WT Amsterdam, The Netherlands


1 Introduction

A rapidly growing number of behavioural researchers use online experiments to study human decision-making. Online labour markets, such as Amazon Mechanical Turk (MTurk; www.mturk.com) and Prolific (www.prolific.co), allow researchers to conveniently recruit participants for experiments and compensate them for their efforts. The quality of data from online experiments is generally deemed comparable to data obtained in the laboratory (Berinsky et al. 2012; Buhrmester et al. 2011; Hauser and Schwarz 2016; Mason and Suri 2012; Paolacci and Chandler 2014; Paolacci et al. 2010; Snowberg and Yariv 2018; Thomas and Clifford 2017; but see Hergueux and Jacquemet 2015), making online experimentation a promising complement to laboratory research.

However, online experiments have typically used non-interactive tasks that participants complete on their own, either using survey software (e.g., SurveyMonkey, Qualtrics) to document decisions or emulating social interactions by using the strategy method and matching participants post hoc. Online studies using designs with live interactions between participants have typically employed tailor-made software (Egas and Riedl 2008; Gallo and Yan 2015; Nishi et al. 2015; Schmelz and Ziegelmeyer 2015; Suri and Watts 2011; Wang et al. 2012). A number of software platf