ADOpy: a python package for adaptive design optimization
Jaeyeong Yang1 · Mark A. Pitt2 · Woo-Young Ahn1 · Jay I. Myung2

© The Psychonomic Society, Inc. 2020

Woo-Young Ahn
[email protected]

Jay I. Myung
[email protected]

1 Department of Psychology, Seoul National University, Seoul, Korea
2 Department of Psychology, Ohio State University, Columbus, OH, USA
Abstract

Experimental design is fundamental to research, but formal methods to identify good designs are lacking. Advances in Bayesian statistics and machine learning offer algorithm-based ways to identify good experimental designs. Adaptive design optimization (ADO; Cavagnaro, Myung, Pitt, & Kujala, 2010; Myung, Cavagnaro, & Pitt, 2013) is one such method. It works by maximizing the informativeness and efficiency of data collection, thereby improving inference. ADO is a general-purpose method for conducting adaptive experiments on the fly and can lead to rapid accumulation of information about the phenomenon of interest with the fewest possible trials. The nontrivial technical skills required to use ADO have been a barrier to its wider adoption. To increase its accessibility to experimentalists at large, we introduce an open-source Python package, ADOpy, that implements ADO for optimizing experimental design. The package, available on GitHub, is written using high-level, modular commands such that users do not have to understand the computational details of the ADO algorithm. In this paper, we first provide a tutorial introduction to ADOpy and to ADO itself, and then illustrate its use in three walk-through examples: psychometric function estimation, delay discounting, and risky choice. Simulation data are also provided to demonstrate how ADO designs compare with other designs (random, staircase).

Keywords Cognitive modeling · Bayesian adaptive experimentation · Optimal experimental design · Psychometric function estimation · Delay discounting · Risky choice
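As a preview of the high-level, modular interface described above, the following is a minimal sketch of what an ADO-driven trial loop can look like for psychometric function estimation. The Task, Model, and Engine classes and the get_design()/update() calls follow ADOpy's documented interface; the particular grids, the logistic model, the true parameter values, and the simulate_response() helper are illustrative assumptions introduced here, not code taken from the paper.

```python
# Minimal sketch of an ADO-driven experiment loop with ADOpy.
# Assumptions (not from this paper's text): grid values, the logistic model,
# and simulate_response() are illustrative stand-ins for a real experiment.
import numpy as np
from adopy import Task, Model, Engine

# Task: one design variable (stimulus intensity) and binary responses.
task = Task(name='Psychometric estimation',
            designs=['stimulus'],   # design variable chosen on each trial
            responses=[0, 1])       # possible responses (no / yes)

# Model: a logistic psychometric function with threshold and slope parameters.
def prob_yes(stimulus, threshold, slope):
    return 1.0 / (1.0 + np.exp(-slope * (stimulus - threshold)))

model = Model(name='Logistic',
              params=['threshold', 'slope'],
              func=prob_yes)

# Grids over the design space and the parameter space.
grid_design = {'stimulus': np.linspace(0, 100, 101)}
grid_param = {'threshold': np.linspace(0, 100, 101),
              'slope': np.linspace(0.1, 5.0, 50)}

engine = Engine(task=task, model=model,
                grid_design=grid_design, grid_param=grid_param)

def simulate_response(design, true_threshold=40.0, true_slope=1.5):
    """Hypothetical simulated participant used in place of real data collection."""
    p = prob_yes(design['stimulus'], true_threshold, true_slope)
    return np.random.binomial(1, p)

for trial in range(60):
    design = engine.get_design()          # ADO-selected design for this trial
    response = simulate_response(design)  # collect (here: simulate) a response
    engine.update(design, response)       # update the posterior over parameters
```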
Introduction

A main goal of psychological research is to gain knowledge about brain and behavior. Scientific discovery is guided in part by statistical inference, and the strength of any inference depends on the quality of the data collected. Because human data always contain various types of noise, researchers need to design experiments so that the signal of interest (experimental manipulations) is amplified while unintended influences from uncontrolled variables (noise) are minimized. The design space, the stimulus set that arises from decisions about the independent variable (number of variables, number of levels of each variable), is critically important for creating a high-signal experiment. A similarly important consideration is the stimulus presentation schedule during the experiment. This issue is often guided by two competing goals: efficiency and precision. How much data must be collected to be confident that differences between conditions could be found? This question is similar to that asked when performing a power analysis, but is focused on the performance of the participant during