Model Comparison, Model Checking, and Hypothesis Testing
“All models are wrong. Some models are useful.” This well-known quotation by the statistician George E. P. Box captures one challenge facing the applied statistician: how to determine whether the statistical model or models entertained to address a particular research question are wrong in ways that lead to useless or incorrect conclusions regarding important aspects of the question at hand. In most real data analysis situations, researchers consider several statistical models that might be appropriate for the application. They establish criteria for determining which of the candidate models is best, and whether even that model is good enough to use as the basis for inference. This chapter explores Bayesian methods of comparing models, testing hypotheses, and assessing model adequacy. Specifically, we look at two Bayesian tools for model comparison: Bayes factors (Kass and Raftery 1995) and the more recently proposed Deviance Information Criterion (Spiegelhalter et al. 2002). We then see how to apply posterior predictive model checking (Gelman et al. 1996) to determine whether a chosen model is adequate for the research purpose.
11.1 Bayes Factors for Model Comparison and Hypothesis Testing

We will first investigate Bayes factors, which have a long history in Bayesian model comparison and hypothesis testing.
11.1.1 Bayes Factors in the Simple/Simple Case

Let's first consider the most straightforward application of Bayes factors: that of making a decision between two models for the state of the world or, equivalently, between two simple hypotheses about an unknown parameter.
Table 11.1 Prior, likelihood, and posterior probabilities

Model                  Prior probabilities   Likelihood for M+   Prior × likelihood   Posterior probabilities
M0 No breast cancer    0.9955                0.0274              0.0273               0.893
M1 Breast cancer       0.0045                0.724               0.0033               0.107
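To make the arithmetic behind Table 11.1 explicit, here is a minimal R sketch (object names such as prior, like, and BF10 are my own, not from the text) that applies Bayes' theorem to the two candidate models and also computes the Bayes factor, which in this simple-versus-simple setting is just the ratio of the two likelihoods:

## Bayes' theorem for the two candidate models (numbers taken from the text);
## object names are my own, chosen for this illustration.
prior <- c(M0 = 0.9955, M1 = 0.0045)   # prior probabilities: no cancer, cancer
like  <- c(M0 = 0.0274, M1 = 0.724)    # P(positive mammogram | model)

joint     <- prior * like              # prior x likelihood
posterior <- joint / sum(joint)        # posterior model probabilities

round(joint, 4)       # 0.0273 0.0033  (the Prior x likelihood column of Table 11.1)
round(posterior, 3)   # 0.893  0.107   (the Posterior probabilities column)

## In the simple-vs-simple case, the Bayes factor for M1 against M0
## is the ratio of the two likelihoods.
BF10 <- like["M1"] / like["M0"]        # about 26.4

## Check the identity: posterior odds = Bayes factor x prior odds
post_odds  <- posterior["M1"] / posterior["M0"]
prior_odds <- prior["M1"] / prior["M0"]
all.equal(unname(post_odds), unname(BF10 * prior_odds))   # TRUE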
Recall that a simple hypothesis is a statement that a parameter takes on a specific value. As an example, let's revisit the breast cancer problem from Chap. 1. Recall that the two possible states of the world (or models) were that my friend had breast cancer and that she did not have breast cancer (with prior probabilities 0.0045 and 0.9955, respectively). Furthermore, the probability of a positive screening mammogram was 0.724 for a woman who has breast cancer and 0.0274 for a woman who does not. Table 11.1 is a rearrangement of Table 1.3, in which we used Bayes' theorem to move from the prior probabilities of the two models, through the likelihood, to the posterior probabilities. Since the model (breast cancer or not) determines the probability of a positive mammogram, this problem can be cast equivalently as a hypothesis test about the parameter representing this probability. If we call the parameter