The Laws of Large Numbers

One of the fundamental results of Probability Theory is the Strong Law of Large Numbers. It helps to justify our intuitive notions of what probability actually is (Example 1), and it has many direct applications, such as (for example) Monte Carlo estimation theory (see Example 2).

Let $(X_n)_{n\ge 1}$ be a sequence of random variables defined on the same probability space and let $S_n = \sum_{j=1}^{n} X_j$. A theorem that states that $\frac{1}{n} S_n$ converges in some sense is a law of large numbers. There are many such results; for example, $L^2$ ergodic theorems, or the Birkhoff ergodic theorem considered when the measure space is actually a probability space, are examples of laws of large numbers (see Theorem 20.3, for example). The convergence can be in probability, in $L^p$, or almost sure. When the convergence is almost sure, we call it a strong law of large numbers.

Theorem 20.1 (Strong Law of Large Numbers). Let $(X_n)_{n\ge 1}$ be independent and identically distributed (i.i.d.) and defined on the same space. Let

$$\mu = E\{X_j\} \quad\text{and}\quad \sigma^2 = \sigma^2_{X_j} < \infty.$$

Let $S_n = \sum_{j=1}^{n} X_j$. Then

$$\lim_{n\to\infty} \frac{S_n}{n} = \lim_{n\to\infty} \frac{1}{n}\sum_{j=1}^{n} X_j = \mu \quad \text{a.s. and in } L^2.$$
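The convergence asserted by the theorem is easy to observe numerically. Below is a minimal sketch (not part of the text): the running mean $S_n/n$ of i.i.d. Uniform(0, 1) draws should settle near $\mu = 1/2$. The distribution, sample size, and seed are illustrative choices.

```python
import random

# Illustration of Theorem 20.1: for i.i.d. X_j ~ Uniform(0, 1),
# the sample mean S_n / n should approach mu = 1/2 as n grows.
random.seed(0)

n = 100_000
s = 0.0
for _ in range(n):
    s += random.random()   # one draw X_j; here mu = 0.5
mean = s / n               # S_n / n

print(mean)                # close to 0.5 for large n
```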

Remark 20.1. We write $\mu, \sigma^2$ instead of $\mu_j, \sigma^2_{X_j}$, since all the $(X_j)_{j\ge 1}$ have the same distribution and therefore the same mean and variance. Note also that $\lim_{n\to\infty} \frac{S_n}{n} = \mu$ in probability, since $L^2$ and a.s. convergence both imply convergence in probability. It is easy to prove $\lim_{n\to\infty} \frac{S_n}{n} = \mu$ in probability using Chebyshev's inequality, and this is often called the Weak Law of Large Numbers. Since it is a corollary of the Strong Law given here, we do not include its proof. The proof of Theorem 20.1 is also simpler if we assume only $X_j \in L^3$ (all $j$), and it is often presented this way in textbooks. A stronger result, where the $X_n$'s are integrable but not necessarily square-integrable, is stated in Theorem 20.2 and proved in Chapter 27.
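The Chebyshev argument alluded to in the remark fits in one display: since $\mathrm{Var}(S_n/n) = \sigma^2/n$, Chebyshev's inequality gives, for any $\varepsilon > 0$,

```latex
P\left( \left| \frac{S_n}{n} - \mu \right| \ge \varepsilon \right)
  \le \frac{\mathrm{Var}(S_n/n)}{\varepsilon^2}
  = \frac{\sigma^2}{n\varepsilon^2} \longrightarrow 0
  \qquad (n \to \infty),
```

which is exactly convergence of $S_n/n$ to $\mu$ in probability.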

J. Jacod et al., Probability Essentials, © Springer-Verlag Berlin Heidelberg 2004

Proof of Theorem 20.1: First let us note that without loss of generality we can assume $\mu = E\{X_j\} = 0$. Indeed, if $\mu \neq 0$, then we can replace $X_j$ with $Z_j = X_j - \mu$. We obtain

$$\lim_{n\to\infty} \frac{1}{n}\sum_{j=1}^{n} (X_j - \mu) = \lim_{n\to\infty} \frac{1}{n}\sum_{j=1}^{n} Z_j = 0$$

and therefore

$$\lim_{n\to\infty} \left( \frac{1}{n}\sum_{j=1}^{n} X_j \right) - \mu = 0,$$

from which we deduce the result.

We henceforth assume $\mu = 0$. Recall $S_n = \sum_{j=1}^{n} X_j$ and let $Y_n = \frac{S_n}{n}$. Then $E\{Y_n\} = \frac{1}{n}\sum_{j=1}^{n} E\{X_j\} = 0$. Moreover $E\{Y_n^2\} = \frac{1}{n^2}\sum_{1\le j,k\le n} E\{X_j X_k\}$. However, if $j \neq k$ then $E\{X_j X_k\} = E\{X_j\}E\{X_k\} = 0$, since $X_j$ and $X_k$ are assumed to be independent. Therefore

$$E\{Y_n^2\} = \frac{1}{n^2}\sum_{j=1}^{n} E\{X_j^2\} = \frac{1}{n^2}\sum_{j=1}^{n} \sigma^2 = \frac{1}{n^2}(n\sigma^2) = \frac{\sigma^2}{n} \tag{20.1}$$
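Equation (20.1) can also be checked by simulation. The sketch below (not from the text; the distribution and replication count are illustrative) estimates $E\{Y_n^2\}$ by averaging over many independent copies of $Y_n$, using centered $\pm 1$ variables with $\sigma^2 = 1$, so the target value is $1/n$.

```python
import random

# Monte Carlo check of (20.1): E{Y_n^2} = sigma^2 / n for centered
# i.i.d. X_j. Here X_j = +/-1 with probability 1/2 each, so mu = 0
# and sigma^2 = 1, and the target is 1/n.
random.seed(1)

n = 50          # length of each sum S_n
reps = 20_000   # independent replications used to estimate E{Y_n^2}

total = 0.0
for _ in range(reps):
    s = sum(random.choice((-1.0, 1.0)) for _ in range(n))
    total += (s / n) ** 2      # one sample of Y_n^2

estimate = total / reps        # should be near sigma^2 / n = 1/50
print(estimate)
```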

and hence $\lim_{n\to\infty} E\{Y_n^2\} = 0$. Since $Y_n$ converges to 0 in $L^2$, we know there is a subsequence converging to 0 a.s. However, we want to conclude that the original sequence converges a.s. To do this we find a subsequence converging a.s., and then treat the terms in between successive terms of the subsequence.

Since $E\{Y_n^2\} = \frac{\sigma^2}{n}$, let us choose the subsequence $n^2$; then

$$\sum_{n=1}^{\infty} E\{Y_{n^2}^2\} = \sum_{n=1}^{\infty} \frac{\sigma^2}{n^2} < \infty;$$

therefore by Theorem 9.2 we know