Sums of Independent Random Variables
Many of the important uses of Probability Theory flow from the study of sums of independent random variables. A simple example is from Statistics: if we perform an experiment repeatedly and independently, then the "average value" is given by \( \bar x = \frac{1}{n}\sum_{j=1}^{n} X_j \), where \(X_j\) represents the outcome of the \(j\)th experiment. The r.v. \(\bar x\) is then called an estimator for the mean \(\mu\) of each of the \(X_j\). Statistical theory studies when (and how) \(\bar x\) converges to \(\mu\) as \(n\) tends to \(\infty\). Even once we show that \(\bar x\) tends to \(\mu\) as \(n\) tends to \(\infty\), we also need to know how large \(n\) should be in order to be reasonably sure that \(\bar x\) is close to the true value \(\mu\) (which is, in general, unknown). There are other, more sophisticated questions that arise as well: what is the probability distribution of \(\bar x\)? If we cannot infer the exact distribution of \(\bar x\), can we approximate it? How large need \(n\) be so that our approximation is sufficiently accurate? If we have prior information about \(\mu\), how do we use that to improve upon our estimator \(\bar x\)? Even to begin to answer some of these fundamentally important questions we need to study sums of independent random variables.

Theorem 15.1. Let \(X, Y\) be two \(\mathbf{R}\)-valued independent random variables. The distribution measure \(\mu_Z\) of \(Z = X + Y\) is the convolution product of the probability measures \(\mu_X\) and \(\mu_Y\), defined by

\[ \mu_X * \mu_Y (A) = \iint 1_A(x + y)\,\mu_X(dx)\,\mu_Y(dy). \tag{15.1} \]

Proof. Since \(X\) and \(Y\) are independent, we know that the joint distribution of \((X, Y)\) is \(\mu_X \otimes \mu_Y\). Therefore

\[ E\{g(X, Y)\} = \iint g(x, y)\,\mu_X(dx)\,\mu_Y(dy), \]

and in particular, using \(g(x, y) = f(x + y)\):

\[ E\{f(X + Y)\} = \iint f(x + y)\,\mu_X(dx)\,\mu_Y(dy), \tag{15.2} \]

for any Borel function \(f\) on \(\mathbf{R}\) for which the integrals exist. It suffices to take \(f(x) = 1_A(x)\).
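For instance, formula (15.1) can be checked directly in a simple discrete case. Suppose \(X\) and \(Y\) are independent with \(\mu_X = \mu_Y = \frac{1}{2}(\delta_0 + \delta_1)\), i.e., two independent fair coin flips taking the values 0 and 1. Then (15.1) gives

\[ \mu_X * \mu_Y = \tfrac{1}{4}\,\delta_0 + \tfrac{1}{2}\,\delta_1 + \tfrac{1}{4}\,\delta_2, \]

which is indeed the distribution of \(X + Y\) (the Binomial\((2, \tfrac{1}{2})\) law): the two pairs \((x, y) = (0, 1)\) and \((1, 0)\) each contribute probability \(\tfrac{1}{4}\) to the value 1.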
Remark 15.1. Formula (15.2) above shows that for \(f : \mathbf{R} \to \mathbf{R}\) Borel measurable and \(Z = X + Y\) with \(X\) and \(Y\) independent:

\[ E\{f(Z)\} = \int f(z)\,(\mu_X * \mu_Y)(dz) = \iint f(x + y)\,\mu_X(dx)\,\mu_Y(dy). \]

Theorem 15.2. Let \(X, Y\) be independent real valued random variables, with \(Z = X + Y\). Then the characteristic function \(\varphi_Z\) is the product of \(\varphi_X\) and \(\varphi_Y\); that is:

\[ \varphi_Z(u) = \varphi_X(u)\,\varphi_Y(u). \]

Proof. Let \(f(z) = e^{iuz}\) and use formula (15.2).
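For instance, if \(X \sim N(0, \sigma_1^2)\) and \(Y \sim N(0, \sigma_2^2)\) are independent, then \(\varphi_X(u) = e^{-\sigma_1^2 u^2/2}\) and \(\varphi_Y(u) = e^{-\sigma_2^2 u^2/2}\), so Theorem 15.2 gives

\[ \varphi_{X+Y}(u) = e^{-(\sigma_1^2 + \sigma_2^2)\,u^2/2}, \]

which is the characteristic function of the \(N(0, \sigma_1^2 + \sigma_2^2)\) distribution. Since the characteristic function determines the law, the sum of two independent centered Gaussians is again a centered Gaussian, with the variances adding.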
Caution: If \(Z = X + Y\), the property that \(\varphi_Z(u) = \varphi_X(u)\varphi_Y(u)\) for all \(u \in \mathbf{R}\) is not enough to ensure that \(X\) and \(Y\) are independent.

Theorem 15.3. Let \(X, Y\) be independent real valued random variables and let \(Z = X + Y\).

a) If \(X\) has a density \(f_X\), then \(Z\) has a density \(f_Z\), and moreover

\[ f_Z(z) = \int f_X(z - y)\,\mu_Y(dy). \]

b) If in addition \(Y\) has a density \(f_Y\), then

\[ f_Z(z) = \int f_X(z - y) f_Y(y)\,dy = \int f_X(x) f_Y(z - x)\,dx. \]

Proof. (b): Suppose (a) is true. Then \(f_Z(z) = \int f_X(z - y)\,\mu_Y(dy)\). However \(\mu_Y(dy) = f_Y(y)\,dy\), and we have the first equality. Interchanging the roles of \(X\) and \(Y\) gives the second equality.

(a): By Theorem 15.1 we have

\[ \mu_Z(A) = \iint 1_A(x + y)\,\mu_X(dx)\,\mu_Y(dy) = \int \left( \int 1_A(x + y) f_X(x)\,dx \right) \mu_Y(dy). \]

Next let \(z = x + y\); \(dz = dx\):

\[ = \int \left( \int 1_A(z) f_X(z - y)\,dz \right) \mu_Y(dy), \]

and applying the Tonelli-Fubini theorem:

\[ = \int 1_A(z) \left( \int f_X(z - y)\,\mu_Y(dy) \right) dz, \]

which shows that \(Z\) has the density \(f_Z(z) = \int f_X(z - y)\,\mu_Y(dy)\), as claimed.
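For instance, if \(X\) and \(Y\) are independent, each with the exponential density \(f(x) = \lambda e^{-\lambda x} 1_{(0,\infty)}(x)\) for some \(\lambda > 0\), then part (b) of Theorem 15.3 gives, for \(z > 0\),

\[ f_Z(z) = \int_0^z \lambda e^{-\lambda x}\,\lambda e^{-\lambda(z - x)}\,dx = \lambda^2 z\, e^{-\lambda z}, \]

and \(f_Z(z) = 0\) for \(z \le 0\); that is, \(X + Y\) has a Gamma\((2, \lambda)\) density.

As for the Caution following Theorem 15.2, a standard counterexample is to take \(X\) with the standard Cauchy distribution, so that \(\varphi_X(u) = e^{-|u|}\), and to set \(Y = X\). Then \(Z = 2X\) satisfies \(\varphi_Z(u) = \varphi_X(2u) = e^{-2|u|} = \varphi_X(u)\varphi_Y(u)\) for every \(u\), even though \(X\) and \(Y\) are clearly not independent.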