Robustly estimating the marginal likelihood for cognitive models via importance sampling
M.-N. Tran1 · M. Scharth1 · D. Gunawan2 · R. Kohn3 · S. D. Brown4 · G. E. Hawkins4
© The Psychonomic Society, Inc. 2020
Abstract

Recent advances in Markov chain Monte Carlo (MCMC) extend the scope of Bayesian inference to models for which the likelihood function is intractable. Although these developments allow us to estimate model parameters, other basic problems such as estimating the marginal likelihood, a fundamental tool in Bayesian model selection, remain challenging. This is an important scientific limitation because testing psychological hypotheses with hierarchical models has proven difficult with current model selection methods. We propose an efficient method for estimating the marginal likelihood for models where the likelihood is intractable, but can be estimated unbiasedly. It is based on first running a sampling method such as MCMC to obtain samples for the model parameters, and then using these samples to construct the proposal density in an importance sampling (IS) framework with an unbiased estimate of the likelihood. Our method has several attractive properties: it generates an unbiased estimate of the marginal likelihood, it is robust to the quality and target of the sampling method used to form the IS proposals, and it is computationally cheap to estimate the variance of the marginal likelihood estimator. We also obtain the convergence properties of the method and provide guidelines on maximizing computational efficiency. The method is illustrated in two challenging cases involving hierarchical models: identifying the form of individual differences in an applied choice scenario, and evaluating the best parameterization of a cognitive model in a speeded decision-making context. Freely available code to implement the methods is provided. Extensions to posterior moment estimation and parallelization are also discussed.

Keywords Bayesian inference · Hierarchical LBA model · Model selection · Parallel computation · Standard error · Unbiased likelihood estimate
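The core recipe in the abstract — draw posterior samples, fit an importance-sampling proposal to them, then reweight with the (possibly estimated) likelihood to obtain the marginal likelihood — can be sketched in a few lines. The toy model, variable names, and the Gaussian proposal below are illustrative assumptions, not the authors' implementation: a conjugate normal model is used so the exact marginal likelihood is available for comparison, and the exact likelihood stands in for the unbiased likelihood estimate that the paper's method requires.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy conjugate model so the marginal likelihood has a closed form:
# y_i ~ N(theta, 1), theta ~ N(0, 1). Purely illustrative -- the paper
# targets models where p(y | theta) must itself be estimated unbiasedly.
y = rng.normal(0.5, 1.0, size=20)
n = len(y)

def log_lik(theta):
    # Stand-in for the unbiased likelihood estimate (here it is exact).
    return stats.norm.logpdf(y, loc=theta, scale=1.0).sum()

def log_prior(theta):
    return stats.norm.logpdf(theta, 0.0, 1.0)

# Step 1: obtain posterior draws. Here exact conjugate sampling is used;
# in practice these would come from MCMC or particle MCMC.
post_var = 1.0 / (1.0 + n)
post_mean = post_var * y.sum()
draws = rng.normal(post_mean, np.sqrt(post_var), size=5000)

# Step 2: fit the IS proposal to the draws. A single Gaussian with
# slightly fattened tails is assumed; richer mixture proposals are
# typically used in practice.
q_mean, q_sd = draws.mean(), draws.std(ddof=1) * 1.2

# Step 3: importance-sampling estimate of the marginal likelihood,
# computed stably on the log scale (log-sum-exp).
theta_is = rng.normal(q_mean, q_sd, size=5000)
log_w = np.array([log_lik(t) + log_prior(t)
                  - stats.norm.logpdf(t, q_mean, q_sd) for t in theta_is])
m = log_w.max()
log_ml_hat = np.log(np.mean(np.exp(log_w - m))) + m

# Closed-form log marginal likelihood for this toy model: y ~ N(0, I + 11').
log_ml_true = stats.multivariate_normal.logpdf(
    y, mean=np.zeros(n), cov=np.eye(n) + np.ones((n, n)))

print(log_ml_hat, log_ml_true)
```

Because the importance weights are retained, the variance (and hence a standard error) of the estimator can be computed from the same weights at negligible extra cost, which is one of the properties the abstract highlights.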
Correspondence: G. E. Hawkins, [email protected]

1 The University of Sydney Business School, Sydney, Australia
2 School of Mathematics and Statistics, University of Wollongong, Wollongong, Australia
3 UNSW Business School, University of New South Wales, Kensington, Australia
4 School of Psychology, University of Newcastle, Callaghan, Australia

Introduction

Many psychologically interesting research questions involve comparing competing theories: Does sleep deprivation cause attentional lapses? Does alcohol impair the speed of information processing or reduce cautiousness, or both? Does the forgetting curve follow a power or exponential function? In many cases, the competing theories can be represented as a set of quantitative models that are applied ("fitted") to the observed data. We can then estimate a metric that quantifies the degree to which each model accounts for the patterns observed in the data, balanced against its flexibility. Model flexibility i