THEORETICAL REVIEW
Sequential sampling models without random between-trial variability: the racing diffusion model of speeded decision making

Gabriel Tillman1,2 · Trish Van Zandt3 · Gordon D. Logan2
© The Psychonomic Society, Inc. 2020
Abstract

Most current sequential sampling models include random between-trial variability in their parameters. These sources of variability make the models more complex in order to fit response time data, provide no further explanation of how the data were generated, and have recently been criticised for allowing the models infinite flexibility. To explore and test the need for between-trial variability parameters, we develop a simple sequential sampling model of N-choice speeded decision making: the racing diffusion model. The model makes speeded decisions from a race of evidence accumulators that integrate information in a noisy fashion within a trial. The racing diffusion does not assume that any evidence accumulation process varies between trials, and so the model provides alternative explanations of key response time phenomena, such as fast and slow error response times relative to correct response times. Overall, our paper gives good reason to rethink including between-trial variability parameters in sequential sampling models.

Keywords: Response time · Sequential sampling models · Decision making

Evidence accumulation is arguably the most dominant theory of how people make speeded decisions (see Donkin & Brown, 2018, for a review), and it is typically instantiated in sequential sampling models (e.g., Ratcliff, 1978; Usher & McClelland, 2001; Brown & Heathcote, 2008). These models provide an accurate account of correct and error response time (RT) distributions, as well as the corresponding accuracy rates, in speeded decision making tasks. The models also allow researchers to translate the data into the meaningful psychological parameters that generate the data. Sequential sampling models assume a simple cognitive architecture consisting of stimulus encoding, response selection, and overt response execution. To make a
Correspondence: Gabriel Tillman, [email protected]

1 School of Health and Life Sciences, Federation University, Ballarat, Australia
2 Department of Psychology, Vanderbilt University, Nashville, TN, USA
3 Department of Psychology, The Ohio State University, Columbus, OH, USA
decision, people begin with an initial amount of evidence for all response options, the starting point of evidence accumulation (Fig. 1). From the starting point, more evidence is continually sampled from the stimulus and accumulates at a rate, the drift rate, towards the corresponding response threshold. When the accumulated evidence crosses a response threshold, it triggers the corresponding overt response. The quality of evidence sampled from the stimulus governs the drift rate, which can be interpreted as the speed of information processing. Higher response thresholds mean that a person needs more evidence to trigger a response, and so threshold settings represent how cautious a person is.
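The race dynamics described above can be sketched as a simulation. The snippet below is a minimal illustration, not the authors' implementation: each accumulator integrates noisy evidence from a common starting point via Euler-Maruyama updates, and the first to cross a shared threshold determines the response and the decision time. All parameter names and values are illustrative assumptions.

```python
import numpy as np

def racing_diffusion_trial(drift_rates, threshold=1.0, start=0.0,
                           noise_sd=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one trial of a racing diffusion process (illustrative sketch).

    Each accumulator starts at `start` and integrates evidence at its drift
    rate plus within-trial Gaussian noise; there is no between-trial
    variability in any parameter. Returns (winning accumulator index,
    decision time), or (None, max_t) if no accumulator crosses in time.
    """
    rng = np.random.default_rng() if rng is None else rng
    drifts = np.asarray(drift_rates, dtype=float)
    x = np.full(drifts.size, start, dtype=float)  # evidence totals
    sqrt_dt = np.sqrt(dt)
    t = 0.0
    while t < max_t:
        # Euler-Maruyama step: deterministic drift plus diffusion noise
        x += drifts * dt + noise_sd * sqrt_dt * rng.standard_normal(drifts.size)
        t += dt
        crossed = np.flatnonzero(x >= threshold)
        if crossed.size:
            # if several cross in the same step, take the largest total
            return int(crossed[np.argmax(x[crossed])]), t
    return None, max_t

# Example: a two-choice decision where accumulator 0 has the higher drift rate
choice, rt = racing_diffusion_trial([2.5, 1.0], rng=np.random.default_rng(1))
```

Because the only stochastic element is the within-trial noise, any RT variability this sketch produces across repeated calls arises entirely inside the trial, which is the point of contrast with models that add between-trial parameter variability.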