On the Convergence Complexity of Gibbs Samplers for a Family of Simple Bayesian Random Effects Models



Bryant Davis¹ · James P. Hobert¹

Received: 28 April 2020 / Revised: 29 June 2020 / Accepted: 3 July 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract

The emergence of big data has led to so-called convergence complexity analysis, which is the study of how Markov chain Monte Carlo (MCMC) algorithms behave as the sample size, n, and/or the number of parameters, p, in the underlying data set increase. This type of analysis is often quite challenging, in part because existing results for fixed n and p are simply not sharp enough to yield good asymptotic results. One of the first convergence complexity results for an MCMC algorithm on a continuous state space is due to Yang and Rosenthal (2019), who established a mixing time result for a Gibbs sampler (for a simple Bayesian random effects model) that was introduced and studied by Rosenthal (Stat Comput 6:269–275, 1996). The asymptotic behavior of the spectral gap of this Gibbs sampler is, however, still unknown. We use a recently developed simulation technique (Qin et al. Electron J Stat 13:1790–1812, 2019) to provide substantial numerical evidence that the gap is bounded away from 0 as n → ∞. We also establish a pair of rigorous convergence complexity results for two different Gibbs samplers associated with a generalization of the random effects model considered by Rosenthal (Stat Comput 6:269–275, 1996). Our results show that, under a strong growth condition, the spectral gaps of these Gibbs samplers converge to 1 as the sample size increases.

Keywords Convergence rate · Geometric ergodicity · High-dimensional inference · Monte Carlo · Quantitative bound · Spectral gap · Total variation distance · Trace-class operator · Wasserstein distance
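To make the objects in the abstract concrete, the following sketch (our illustration, not the authors' code and not the Qin et al. (2019) estimator; the exact model specification, priors, and variance values are assumptions) implements a two-block Gibbs sampler for a simple random effects model in the spirit of Rosenthal (1996): Y_i | θ_i ~ N(θ_i, V), θ_i | μ ~ N(μ, A), with a flat prior on μ and the variance components V and A treated as known. It then tracks the lag-1 autocorrelation of the μ-chain as n grows, a crude proxy for 1 minus the spectral gap.

```python
import numpy as np

def gibbs_mu_chain(y, V, A, n_iter, seed=0):
    """Two-block Gibbs sampler for the toy model; returns the sampled mu values.

    Assumed, simplified model: y_i | theta_i ~ N(theta_i, V),
    theta_i | mu ~ N(mu, A), flat prior on mu, with V and A known.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    mu = y.mean()                                   # start at the sample mean
    out = np.empty(n_iter)
    for t in range(n_iter):
        # theta_i | mu, y ~ N((A*y_i + V*mu)/(A+V), A*V/(A+V))  (conjugacy)
        theta = rng.normal((A * y + V * mu) / (A + V),
                           np.sqrt(A * V / (A + V)))
        # mu | theta ~ N(mean(theta), A/n)  (flat prior on mu)
        mu = rng.normal(theta.mean(), np.sqrt(A / n))
        out[t] = mu
    return out

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation: a rough proxy for 1 - (spectral gap)."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

if __name__ == "__main__":
    # If the spectral gap stays bounded away from 0 as n grows, the lag-1
    # autocorrelation of the mu-chain should stay bounded away from 1.
    for n in (10, 100, 1000):
        y = np.random.default_rng(n).normal(0.0, np.sqrt(2.0), size=n)
        chain = gibbs_mu_chain(y, V=1.0, A=1.0, n_iter=20000, seed=1)[2000:]
        print(f"n = {n:4d}   lag-1 autocorrelation ~ {lag1_autocorr(chain):.2f}")
```

For this toy specification the conditional mean of the μ-update is (A ȳ + V μ)/(A + V), so the μ-chain is exactly AR(1) with autocorrelation V/(A + V) regardless of n; with V = A = 1 the printed estimates should hover near 0.5 for every n, a behavior consistent with (though of course no substitute for) a spectral gap bounded away from 0.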

Bryant Davis
[email protected]

James P. Hobert
[email protected]

1 Department of Statistics, University of Florida, Gainesville, FL, USA

Methodology and Computing in Applied Probability

1 Introduction

Markov chain Monte Carlo (MCMC) is one of the most commonly used tools in modern Bayesian statistics. It is well known that the practical performance of an MCMC algorithm is directly related to the speed at which the underlying Markov chain converges to its stationary distribution. Over the last three decades, a great deal of work has been done to establish the convergence properties of many different practical Monte Carlo Markov chains. Recently, with the emergence of big data, interest has shifted away from the analysis of individual Markov chains (for fixed data sets), and towards the study of how algorithms behave as the sample size, n, and/or the number of parameters, p, in the data set increase. This type of study, which is called convergence complexity analysis, is often quite challenging, in part because the dimension of the Markov chain typically increases as n and/or p grow, and existing results for fixed n and p are simply not sharp enough to yield good asymptotic results (see,