Fitting More Complex Bayesian Models: Markov Chain Monte Carlo

So far we have been dealing primarily with simple, conjugate Bayesian models for which it was possible to perform exact posterior inference analytically. In more realistic and complex Bayesian models, such analytical calculations generally are not feasible. This chapter introduces the sampling-based methods of fitting Bayesian models that have transformed Bayesian statistics over the last 20 years.

8.1 Why Sampling-Based Methods Are Needed

As we already have discussed, the goals of Bayesian analysis are to make inference about unknown model parameters and to make predictions about unobserved data values. The computationally challenging aspects of these tasks involve integration. In this section, we will see why integration is an essential element in Bayesian analysis and will investigate the limitations of some popular methods of integration.

8.1.1 Single-Parameter Model Example

The challenge of integration arises even in single-parameter models with nonconjugate priors; for example, the model with a histogram prior on a binomial success parameter π discussed in Sect. 5.2.2. Let’s see what is required to carry out the following standard inferential procedures: plot the posterior density p(π | y) as in Fig. 5.3, calculate the posterior mean E(π | y), and obtain posterior predictive probabilities for the number of successes in a future sample of size 20.
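The last two of these tasks are, written out explicitly, integrals with respect to the posterior density (here we write ỹ for the number of successes in the future sample of size 20):

$$E(\pi \mid y) = \int_0^1 \pi \, p(\pi \mid y) \, d\pi$$

$$p(\tilde{y} \mid y) = \int_0^1 \binom{20}{\tilde{y}} \pi^{\tilde{y}} (1 - \pi)^{20 - \tilde{y}} \, p(\pi \mid y) \, d\pi, \qquad \tilde{y} = 0, 1, \ldots, 20$$

As we will see next, even plotting p(π | y) itself requires an integral, because the normalizing constant of the posterior density is not known in advance.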


8.1.1.1 Plotting the Posterior Density

As long as we can write down the likelihood and the prior(s) in mathematical form, we always can obtain an expression proportional to the resulting posterior distribution. (This is just Bayes’ theorem, and it is true regardless of how many parameters there are in the model.) However, in order to plot a density, we need the normalizing constant, so that the area under our density plot will be 1. When the prior is nonconjugate and the posterior density is not of a recognizable family, the normalizing constant must be obtained by integration: we must find out to what numeric value the unnormalized density integrates, and then the normalizing constant is just its inverse. In the example from Sect. 5.2.2, the required integral is

$$\int_0^1 \pi^7 (1 - \pi)^{43} \, p(\pi) \, d\pi \tag{8.1}$$

where the histogram prior p(π) is as given below. The third column shows the normalized prior densities, such that the areas of the histogram bars sum to 1.

Interval      Prior probability   Prior density
(0, 0.1]      0.25                2.5
(0.1, 0.2]    0.50                5.0
(0.2, 0.3]    0.20                2.0
(0.3, 0.4]    0.05                0.5
> 0.4         0.00                0.0

In this case, doing the integral analytically is possible (but tedious, since the integrand is a 44-term polynomial). Since the prior density is 0 for π > 0.4, the posterior will also be 0 there, and hence, the integral needs to be evaluated only over (0, 0.4].
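For a single-parameter model like this one, the integral also can be approximated by standard numerical quadrature. The following R code is a minimal sketch of that approach (the function names prior.dens and unnorm.post are our own labels, not notation from Sect. 5.2.2); it uses R’s integrate function to obtain the normalizing constant, plot the normalized posterior, and compute the posterior mean and the posterior predictive probabilities:

# Histogram prior density from the table above
prior.dens <- function(p) {
  ifelse(p <= 0 | p > 0.4, 0,
    ifelse(p <= 0.1, 2.5,
      ifelse(p <= 0.2, 5.0,
        ifelse(p <= 0.3, 2.0, 0.5))))
}

# Unnormalized posterior: binomial likelihood kernel
# (7 successes, 43 failures) times the histogram prior
unnorm.post <- function(p) p^7 * (1 - p)^43 * prior.dens(p)

# Normalizing constant: the integral in (8.1), evaluated over
# (0, 0.4] because the prior (hence the posterior) is 0 beyond 0.4
nc <- integrate(unnorm.post, lower = 0, upper = 0.4)$value

# Normalized posterior density over a grid, as in Fig. 5.3
p.grid <- seq(0.001, 0.4, length.out = 400)
plot(p.grid, unnorm.post(p.grid) / nc, type = "l",
     xlab = expression(pi), ylab = "Posterior density")

# Posterior mean E(pi | y): another integral
post.mean <- integrate(function(p) p * unnorm.post(p) / nc,
                       lower = 0, upper = 0.4)$value

# Posterior predictive probabilities for k successes in a
# future sample of size 20: one integral per value of k
pred.prob <- sapply(0:20, function(k)
  integrate(function(p) dbinom(k, 20, p) * unnorm.post(p) / nc,
            lower = 0, upper = 0.4)$value)

One-dimensional quadrature of this kind is reliable here precisely because there is only a single parameter; the difficulty that motivates the sampling-based methods of this chapter is that such direct integration becomes impractical as the number of parameters grows.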