A Multiple Imputation Framework for Massive Multivariate Data of Different Variable Types: A Monte-Carlo Technique

Abstract

The purpose of this chapter is to build theoretical, algorithmic, and implementation-based components of a unified, general-purpose multiple imputation framework for intensive multivariate data sets that are collected via increasingly popular real-time data capture methods. Such data typically include all major types of variables that are incomplete due to planned missingness designs, which have been developed to reduce respondent burden and lower the cost associated with data collection. The imputation approach presented herein complements the methods available for incomplete-data analysis via richer and more flexible modeling procedures, and can easily generalize to a variety of research areas that involve internet studies and processes designed to collect continuous streams of real-time data. Planned missingness designs are highly useful and will likely increase in popularity in the future. For this reason, the proposed multiple imputation framework represents an important and refined addition to the existing methods, and has the potential to advance scientific knowledge and research in a meaningful way. The capability of accommodating many incomplete variables of different distributional natures, types, and dependence structures could be a contributing factor for better comprehending the operational characteristics of today’s massive data trends. The framework also offers promising potential for building an enhanced statistical computing infrastructure for education and research, in the sense of providing a principled, useful, general, and flexible set of computational tools for handling incomplete data.

1 Introduction

Missing data are a commonly occurring phenomenon in many contexts. Determining a suitable analytical approach in the presence of incomplete observations is a major focus of scientific inquiry because of the additional complexity that missing data introduce. Incompleteness generally complicates the statistical analysis in terms of biased parameter estimates, reduced statistical power, and degraded confidence intervals, and may thereby lead to false inferences (Little and Rubin 2002). Advances in computational statistics have produced flexible missing-data procedures with a sound statistical basis. One of these procedures is multiple imputation (MI), a stochastic simulation technique in which the missing values are replaced by m > 1 simulated versions (Rubin 2004). Subsequently, each of the simulated complete data sets is analyzed by standard methods, and the results are combined into a single inferential statement that formally incorporates missing-data uncertainty into the modeling process. MI has gained widespread acceptance.
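
For reference, the combining step described above is conventionally carried out with Rubin's pooling rules. The display below is a generic sketch for a scalar estimand Q, where \hat{Q}_j and U_j denote the point estimate and its estimated variance obtained from the j-th of the m completed data sets; this notation is illustrative and not taken from this chapter.

\[
\bar{Q} = \frac{1}{m}\sum_{j=1}^{m}\hat{Q}_j, \qquad
\bar{U} = \frac{1}{m}\sum_{j=1}^{m} U_j, \qquad
B = \frac{1}{m-1}\sum_{j=1}^{m}\bigl(\hat{Q}_j - \bar{Q}\bigr)^2, \qquad
T = \bar{U} + \Bigl(1 + \frac{1}{m}\Bigr)B,
\]

where \bar{U} and B are the within- and between-imputation variances, and T is the total variance attached to the pooled estimate \bar{Q}. The additional term (1 + 1/m)B is what formally injects missing-data uncertainty into the final inference.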