Pseudo-Likelihood or Quadrature? What We Thought We Knew, What We Think We Know, and What We Are Still Trying to Figure Out

Walt Stroup and Elizabeth Claassen

Two predominant computing methods for generalized linear mixed models (GLMMs) are linearization, e.g., pseudo-likelihood (PL), and integral approximation, e.g., Gauss–Hermite quadrature. The primary GLMM package in R, LME4, only uses integral approximation. The primary GLMM procedure in SAS®, PROC GLIMMIX, was originally developed using linearization, but integral approximation methods were added in the 2008 release. This presents a dilemma for GLMM users: Which method should one use, and why? Linearization methods are more versatile and able to handle both conditional and marginal GLMMs. Linearization can be implemented with REML-like variance component estimation, whereas quadrature is strictly maximum likelihood. However, GLMM software documentation and the literature on which it is based tend to focus on linearization's limitations. Stroup (Generalized linear mixed models: modern concepts, methods and applications, CRC Press, Boca Raton, 2013) reiterates this theme in his GLMM textbook. As a result, "conventional wisdom" has arisen that integral approximation—quadrature when possible—is always best. Meanwhile, ongoing experience with GLMMs and research about their small sample behavior suggest that "conventional wisdom" circa 2013 is often not true. Above all, it is clear there is no one-size-fits-all best method. The purpose of this paper is to provide an updated look at what we now know about quadrature and PL and to offer some general operating principles for making an informed choice between the two. A series of simulation studies investigating distributions and designs representative of research in agricultural and related disciplines provides an overview of each method with respect to estimation accuracy, type I error control, and robustness (or lack thereof) to model misspecification. Supplementary materials accompanying this paper appear online.
Key Words: Generalized linear mixed model; Linearization; Integral approximation; Type I error control; Model misspecification; Residual maximum likelihood.
W. Stroup (corresponding author), Department of Statistics, University of Nebraska-Lincoln, Lincoln, USA (E-mail: [email protected]).
E. Claassen, JMP Division, SAS Institute, Cary, USA.
© 2020 International Biometric Society
Journal of Agricultural, Biological, and Environmental Statistics
https://doi.org/10.1007/s13253-020-00402-6
1. INTRODUCTION

Over the past two decades, generalized linear mixed model (GLMM) software has been developed—e.g., PROC GLIMMIX in SAS/STAT® and the R packages glmmPQL and LME4—and the GLMM has gained acceptance as a mainstream method of analyzing non-Gaussian data from studies that call for a mixed model approach. Unlike fixed-effects-only generalized linear models (GLMs) and linear mixed models (LMMs) for Gaussian data, both of which have likelihood functions from which estimating equations can be derived, GLMM likelihoods involve integrals over the random effects that cannot, in general, be evaluated in closed form.
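To make the two computing routes concrete, the following is a minimal R sketch, not taken from the paper's simulation studies, that fits the same conditional binomial GLMM by pseudo-likelihood (glmmPQL in MASS/nlme) and by adaptive Gauss–Hermite quadrature (glmer in lme4 with nAGQ nodes). The simulated randomized-complete-block data set, the block and treatment effect sizes, and the choice of 7 quadrature nodes are all illustrative assumptions, not values used by the authors.

```r
## Assumed example: binomial GLMM for a randomized complete block design,
## fit two ways. Pseudo-likelihood (linearization) vs. adaptive quadrature.
library(MASS)    # glmmPQL: pseudo-likelihood (PQL) via nlme::lme
library(lme4)    # glmer: Laplace / adaptive Gauss-Hermite quadrature

set.seed(2020)
n_blk <- 8                                  # assumed: 8 blocks, 2 treatments
dat   <- expand.grid(block = factor(1:n_blk), trt = factor(c("A", "B")))
dat$n <- 20                                 # binomial sample size per unit
b     <- rnorm(n_blk, sd = 0.5)             # block random effects (assumed SD)
eta   <- -0.5 + 0.8 * (dat$trt == "B") + b[dat$block]
dat$y <- rbinom(nrow(dat), size = dat$n, prob = plogis(eta))

## Linearization: pseudo-likelihood with a random block intercept
fit_pl <- glmmPQL(cbind(y, n - y) ~ trt, random = ~ 1 | block,
                  family = binomial, data = dat)

## Integral approximation: adaptive Gauss-Hermite quadrature, 7 nodes
fit_gq <- glmer(cbind(y, n - y) ~ trt + (1 | block),
                family = binomial, data = dat, nAGQ = 7)

summary(fit_pl)$tTable   # PL fixed-effect estimates and tests
fixef(fit_gq)            # quadrature fixed-effect estimates
```

Note that quadrature with nAGQ > 1 in lme4 is only available for a single scalar random-effects term, which is one reason linearization methods are described above as more versatile.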