Covariance pattern mixture models: Eliminating random effects to improve convergence and performance

Daniel McNeish 1 & Jeffrey Harring 2

1 Arizona State University, Tempe, AZ, USA
2 University of Maryland, College Park, MD, USA

* Correspondence: Daniel McNeish, [email protected]

© The Psychonomic Society, Inc. 2019

Abstract

Growth mixture models (GMMs) are prevalent for modeling unknown population heterogeneity via distinct latent classes. However, GMMs are riddled with convergence issues, often requiring researchers to atheoretically alter the model with cross-class constraints simply to obtain convergence. We discuss how within-class random effects in GMMs exacerbate convergence issues, even though these random effects rarely help answer typical research questions. That is, latent classes provide a discretization of continuous random effects, so including additional random effects within latent classes can unnecessarily complicate the model. These random effects are commonly included in order to properly specify the marginal covariance; however, random effects are an inefficient way to pattern a covariance matrix, resulting in estimation issues. The same goal can be achieved more simply through covariance pattern models, which we extend to the mixture model context in this article (covariance pattern mixture models, or CPMMs). We provide evidence from theory, simulation, and an empirical example showing that employing CPMMs (even if they are misspecified) instead of GMMs can circumvent the computational difficulties that can plague GMMs, without sacrificing the ability to answer the types of questions commonly asked in empirical studies. Our results show the advantages of CPMMs with respect to improved class enumeration and less biased class-specific growth trajectories, in addition to their vastly improved convergence rates. The results also show that constraining the covariance parameters across classes in order to bypass convergence issues with GMMs leads to poor results. An extensive software appendix is included to assist researchers in running CPMMs in Mplus.

Keywords: Finite mixture modeling · Convergence · Growth mixture modeling · Constraints · Latent class analysis
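To make the covariance argument concrete, consider a single latent class c with T repeated measures y_i under a linear growth model. The following sketch uses our own notation (Z, Psi_c, Theta_c, and Sigma_c are our labels, not the article's):

  % GMM: the within-class covariance is induced indirectly by random
  % effects, with Z the T x 2 matrix of growth loadings, Psi_c the
  % random-effect covariance matrix, and Theta_c the residual covariance.
  \operatorname{Cov}(\mathbf{y}_i \mid c)
    = \mathbf{Z}\boldsymbol{\Psi}_c\mathbf{Z}^{\top} + \boldsymbol{\Theta}_c

  % CPMM: the random effects are removed (Psi_c = 0) and the T x T
  % within-class covariance is patterned directly, e.g., as AR(1):
  \operatorname{Cov}(\mathbf{y}_i \mid c) = \boldsymbol{\Sigma}_c,
  \qquad
  [\boldsymbol{\Sigma}_c]_{jk} = \sigma_c^{2}\,\rho_c^{\lvert j-k\rvert}

The AR(1) pattern is one illustrative assumption; other patterns (e.g., Toeplitz, compound symmetry, unstructured) serve the same purpose of specifying the marginal covariance directly, with two parameters per class in the AR(1) case rather than a full set of random-effect variances and covariances.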
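Anticipating the software appendix, a minimal Mplus sketch of a two-class CPMM might look like the following. This is our illustration rather than the article's appendix code: the data file growth.dat, the four repeated measures y1-y4, and the AR(1) residual pattern (imposed through parameter labels and MODEL CONSTRAINT) are all assumptions.

  TITLE:    Two-class CPMM sketch with AR(1) residuals (illustrative);
  DATA:     FILE = growth.dat;        ! hypothetical data file
  VARIABLE: NAMES = y1-y4;
            CLASSES = c(2);
  ANALYSIS: TYPE = MIXTURE;
  MODEL:
    %OVERALL%
    i s | y1@0 y2@1 y3@2 y4@3;        ! fixed-effect growth trajectory
    i@0; s@0; i WITH s@0;             ! no within-class random effects
    %c#1%
    [i s];                            ! class-1 growth factor means
    y1-y4 (v1);                       ! common residual variance, class 1
    y1 WITH y2 (a1); y2 WITH y3 (a1); y3 WITH y4 (a1);   ! lag-1 covariances
    y1 WITH y3 (b1); y2 WITH y4 (b1);                    ! lag-2 covariances
    y1 WITH y4 (d1);                                     ! lag-3 covariance
    %c#2%
    [i s];                            ! class-2 growth factor means
    y1-y4 (v2);
    y1 WITH y2 (a2); y2 WITH y3 (a2); y3 WITH y4 (a2);
    y1 WITH y3 (b2); y2 WITH y4 (b2);
    y1 WITH y4 (d2);
  MODEL CONSTRAINT:
    NEW(rho1 rho2);                   ! class-specific autocorrelations
    a1 = v1*rho1;  b1 = v1*rho1*rho1;  d1 = v1*rho1*rho1*rho1;
    a2 = v2*rho2;  b2 = v2*rho2*rho2;  d2 = v2*rho2*rho2*rho2;

Fixing i@0 and s@0 in %OVERALL% is what removes the within-class random effects; all individual variability around each class trajectory is then absorbed by the patterned residual covariance.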

In longitudinal data analysis, mixture models are commonplace in the empirical literature in which the primary goal is to identify unobserved, latent classes of growth trajectories (Jung & Wickrama, 2007). As a hypothetical example, researchers may follow students’ test scores over time and wish to identify which students in the sample are “on-pace” learners, “accelerated” learners, or “slow” learners (e.g., Musu-Gillette, Wigfield, Harring, & Eccles, 2015). These subgroups are latent and are not identified a priori as observed variables, unlike other independent variables that may be of interest (e.g., gender, socioeconomic status [SES], treatment condition). Instead, their existence must be inferred from characteristics of the growth patterns themselves.
Common goals of a mixture analysis in this longitudinal context are to identify how many classes exist, to