Characteristic Functions

Adding independent random variables is a frequent occupation in probability theory. Mathematically this corresponds to convolving functions. Just as there are Fourier transforms and Laplace transforms which transform convolution into multiplication, there are transforms in probability theory that transform addition of independent random variables into multiplication of transforms.

Although we shall mainly use one of them, the characteristic function, we shall, in this chapter, briefly also present three others: the cumulant generating function, which is the logarithm of the characteristic function; the probability generating function; and the moment generating function. In Chap. 5 we shall prove so-called continuity theorems, which permit limits of distributions to be determined with the aid of limits of transforms.

Uniqueness is indispensable in order to make things work properly. This means that if we replace the adding of independent random variables with the multiplication of their transforms, we must be sure that the resulting transform corresponds uniquely to the distribution of the sum under investigation. The first thing we therefore have to do in order to see that we are on the right track is to prove that

• summation of independent random variables corresponds to multiplication of their transforms;
• the transformation is 1 to 1; there is a uniqueness theorem to the effect that if two random variables have the same transform, then they also have the same distribution.
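The multiplication property described above is easy to check numerically. The following sketch (my own illustration, not from the text) estimates empirical characteristic functions by Monte Carlo for two independent Exp(1) variables, a distribution chosen only for convenience, and compares the transform of the sum with the product of the transforms (the exact value is $(1-it)^{-2}$):

```python
import numpy as np

# Illustrative check: for independent X and Y, the characteristic
# function of X + Y equals the product phi_X(t) * phi_Y(t).
rng = np.random.default_rng(0)
n = 200_000
x = rng.exponential(1.0, n)  # X ~ Exp(1), an example choice
y = rng.exponential(1.0, n)  # Y ~ Exp(1), independent of X

def phi_hat(sample, t):
    """Empirical characteristic function: the average of exp(i*t*X_k)."""
    return np.mean(np.exp(1j * t * sample))

t = 0.7
lhs = phi_hat(x + y, t)              # transform of the sum
rhs = phi_hat(x, t) * phi_hat(y, t)  # product of the two transforms
exact = (1 / (1 - 1j * t)) ** 2      # Exp(1) has phi(t) = 1/(1 - it)

print(abs(lhs - rhs))    # small, up to Monte Carlo error
print(abs(lhs - exact))  # small as well
```

The two estimates agree up to sampling error of order $1/\sqrt{n}$, which is exactly what the multiplication theorem below guarantees in the limit.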

1 Definition and Basics

In this first section we define characteristic functions and prove some basic facts, including uniqueness, inversion, and the "multiplication property"; in other words, we verify that characteristic functions possess the desired features.

A. Gut, Probability: A Graduate Course, Springer Texts in Statistics, DOI: 10.1007/978-1-4614-4708-5_4, © Springer Science+Business Media New York 2013


Definition 1.1 The characteristic function of the random variable X is

$$\varphi_X(t) = E\,e^{itX} = \int_{-\infty}^{\infty} e^{itx}\, dF_X(x).$$
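For a discrete random variable the defining integral reduces to a sum over the support, $\varphi_X(t) = \sum_k p_k\, e^{itx_k}$. A minimal sketch (my example, not from the text) for a symmetric two-point law, whose characteristic function is $\cos t$:

```python
import numpy as np

# Example distribution: X takes the values -1 and +1 with probability 1/2 each.
support = np.array([-1.0, 1.0])
probs = np.array([0.5, 0.5])

def phi(t):
    """Characteristic function of a discrete X: sum of p_k * exp(i*t*x_k)."""
    return np.sum(probs * np.exp(1j * t * support))

# Closed form for this law: phi(t) = (e^{it} + e^{-it}) / 2 = cos t.
t = 1.3
print(phi(t), np.cos(t))  # the two values agree
```

Note that $\varphi(0) = 1$ always, since the weights sum to one; this is property (a) of Theorem 1.1 below.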



Remark 1.1 Apart from a minus sign in the exponent (and, possibly, a factor $1/\sqrt{2\pi}$), characteristic functions coincide with Fourier transforms in the absolutely continuous case and with Fourier series in the lattice case.

Before we prove uniqueness and the multiplication theorem we present some basic facts. Note that the first property tells us that characteristic functions exist for all random variables.

Theorem 1.1 Let X be a random variable. Then
(a) $|\varphi_X(t)| \le \varphi_X(0) = 1$;
(b) $\overline{\varphi_X(t)} = \varphi_X(-t) = \varphi_{-X}(t)$;
(c) $\varphi_X(t)$ is uniformly continuous.

Proof (a): We have

$$|E\,e^{itX}| \le E\,|e^{itX}| = E\,1 = 1 = E\,e^{i\cdot 0\cdot X} = \varphi_X(0).$$

To prove (b) we simply let the minus sign wander through the exponent:

$$\overline{e^{ixt}} = \cos xt - i \sin xt \quad (= e^{(-i)xt})$$
$$= \cos(x(-t)) + i \sin(x(-t)) \quad (= e^{ix(-t)})$$
$$= \cos((-x)t) + i \sin((-x)t) \quad (= e^{i(-x)t}).$$

As for (c), let t be arbitrary and h > 0 (a similar argument works for h < 0). Apart from the trivial estimate $|e^{ix} - 1| \le 2$, we k