The Central Limit Theorem
The law of large numbers states that the arithmetic mean of independent, identically distributed random variables converges to the expected value. One interpretation of the central limit theorem is as a (distributional) rate result. Technically, let \(X, X_1, X_2, \ldots\) be independent, identically distributed random variables with mean \(\mu\). The weak and strong laws of large numbers state that \(\frac{1}{n}\sum_{k=1}^{n} X_k \to \mu\) in probability and almost surely, respectively, as \(n \to \infty\). A distributional rate result deals with the question of how one should properly "blow up" the difference \(\frac{1}{n}\sum_{k=1}^{n} X_k - \mu\) in order for the limit to be non-trivial as \(n\) tends to infinity. The corresponding theorem was first stated by Laplace. The first general version with a rigorous proof is due to Lyapounov [186, 187]. It turns out that if, in addition, the variance exists, then a multiplication by \(\sqrt{n}\) yields a normal distribution in the limit. Our first result is a proof of this fact.

We also prove the Lindeberg–Lévy–Feller theorem, which deals with the same problem under the assumption that the summands are independent, but not identically distributed, and Lyapounov's version. Another variant is Anscombe's theorem, a special case of which is the central limit theorem for randomly indexed sums of random variables. After this we turn our attention to uniform integrability and convergence of moments, followed by the celebrated Berry–Esseen theorem, which is a convergence rate result for the central limit theorem, in that it provides an upper bound for the difference between the distribution functions of the standardized arithmetic mean and the normal distribution, under the additional assumption of a finite third moment. The remaining part of the chapter (except for the problems) might be considered as somewhat more peripheral for the non-specialist.
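The two scales described above can be illustrated numerically. The following sketch (an editorial addition, not from the text) uses NumPy with Uniform(0, 1) summands as an arbitrary concrete choice: the raw difference \(\frac{1}{n}\sum_{k=1}^{n} X_k - \mu\) collapses to zero as \(n\) grows, while the blown-up quantity \(\sqrt{n}\,(\frac{1}{n}\sum_{k=1}^{n} X_k - \mu)/\sigma\) keeps a standard deviation near 1, consistent with a non-trivial (normal) limit:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, np.sqrt(1 / 12)  # mean and s.d. of Uniform(0, 1)
reps = 2_000                      # independent replications of the mean

for n in [100, 1_000, 10_000]:
    x = rng.uniform(size=(reps, n))
    diff = x.mean(axis=1) - mu               # LLN: shrinks to 0
    blown_up = np.sqrt(n) * diff / sigma     # CLT scaling: s.d. stays near 1
    print(f"n={n:6d}  sd(diff)={diff.std():.5f}  sd(blown_up)={blown_up.std():.3f}")
```

The printed standard deviation of `diff` decays like \(\sigma/\sqrt{n}\), whereas that of `blown_up` hovers around 1 for every \(n\).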
It contains various rate results for tail probabilities, applications to our companions renewal theory and records, some remarks on so-called local limit theorems for discrete random variables, and a mention of the concept of large deviations.
A. Gut, Probability: A Graduate Course, Springer Texts in Statistics, DOI: 10.1007/978-1-4614-4708-5_7, © Springer Science+Business Media New York 2013
There also exist limit theorems when the variance does not exist and/or when the summands are not independent. An introduction to these topics will be given in Chap. 9.
1 The i.i.d. Case

In order to illustrate the procedure, we begin with the following warm-up: the i.i.d. case.

Theorem 1.1 Let \(X, X_1, X_2, \ldots\) be independent, identically distributed random variables with finite expectation \(\mu\) and positive, finite variance \(\sigma^2\), and set \(S_n = X_1 + X_2 + \cdots + X_n\), \(n \geq 1\). Then
\[
\frac{S_n - n\mu}{\sigma\sqrt{n}} \xrightarrow{d} N(0, 1) \quad \text{as } n \to \infty.
\]

Proof In view of the continuity theorem for characteristic functions (Theorem 5.9.1), it suffices to prove that
\[
\varphi_{\frac{S_n - n\mu}{\sigma\sqrt{n}}}(t) \to e^{-t^2/2} \quad \text{as } n \to \infty, \quad \text{for } -\infty < t < \infty.
\]
Since \((S_n - n\mu)/\sigma\sqrt{n} = \sum_{k=1}^{n} \bigl((X_k - \mu)/\sigma\bigr)/\sqrt{n}\), we may assume w.l.o.g. that \(\mu = 0\) and \(\sigma = 1\).
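The characteristic-function limit in the proof can be checked numerically for a concrete distribution. The sketch below (an editorial addition, not from the text) takes centered Uniform(\(-1/2, 1/2\)) summands, whose characteristic function is \(\sin(t/2)/(t/2)\) and whose variance is \(1/12\), and confirms that \(\varphi\bigl(t/(\sigma\sqrt{n})\bigr)^n\) approaches \(e^{-t^2/2}\) as \(n\) grows:

```python
import numpy as np

sigma = np.sqrt(1 / 12)  # standard deviation of Uniform(-1/2, 1/2)

def phi(t):
    # Characteristic function of Uniform(-1/2, 1/2): sin(t/2) / (t/2).
    # np.sinc(x) computes sin(pi*x)/(pi*x), so substitute x = t/(2*pi).
    return np.sinc(t / (2 * np.pi))

t = 1.5
target = np.exp(-t**2 / 2)
for n in [10, 100, 10_000]:
    # cf of the standardized sum S_n / (sigma * sqrt(n)), since mu = 0
    approx = phi(t / (sigma * np.sqrt(n))) ** n
    print(f"n={n:6d}  phi^n={approx:.6f}  e^(-t^2/2)={target:.6f}")
```

The gap closes at rate \(O(1/n)\) here, reflecting the \((1 - t^2/2n + o(1/n))^n\) expansion used in the proof.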