Limit Theorems: Extensions and Generalizations
Let us recapitulate what we have learned so far. After an introductory chapter containing some set theory and measure theory, we met a chapter on random variables and expectations, the probabilistic equivalent of Lebesgue integration on finite measure spaces. This was then followed by a number of probabilistic tools and methods, and one chapter each on the three cornerstone results: the law of large numbers, the central limit theorem, and the law of the iterated logarithm (LLN, CLT, and LIL). So, what's up next?

The main emphasis in the last three chapters has been on sums of independent, identically distributed random variables. In the last two chapters finite variance was an additional assumption. Some natural questions that arise are the following:

• What happens if the variance does not exist?
• What happens if the summands are no longer independent?
• Are there interesting objects aside from sums?

These and other problems certainly deserve a chapter of their own in a book like this. However, one has to make choices. Such choices are necessarily made via a blend of "importance of the various topics" and "personal taste" (which are not completely disjoint). In this chapter we provide an introduction, sometimes a little more than that, to some more general limit theorems, with the hope that the reader will be tempted to look into the literature for more. There is one exception, however: the theory of martingales, which is given a chapter of its own, the next and final one.

The first three sections are devoted to an extension of the central limit problem. The problem was "what can be said if the variance does not exist?" As a point of departure, we recall from Sect. 4.2.4 that if X_1, X_2, . . . have a symmetric stable distribution with index α, where 0 < α ≤ 2, then so has Σ_{k=1}^n X_k / n^{1/α}. Is there a connection? Is it possible that if the variance does not exist, then the sum Σ_{k=1}^n X_k, suitably normalized (by n^{1/α} for some convenient α?), is asymptotically stable?
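The scaling property quoted above can be checked numerically. The sketch below (not from the text; the setup and seed are illustrative assumptions) uses the standard Cauchy distribution, which is symmetric stable with index α = 1, so Σ_{k=1}^n X_k / n should again be standard Cauchy. A Kolmogorov–Smirnov test against the Cauchy distribution function is consistent with this:

```python
# Illustrative sketch: for a symmetric stable law with index alpha,
# sum_{k=1}^n X_k / n^{1/alpha} has the same distribution as X_1.
# The standard Cauchy distribution is symmetric stable with alpha = 1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)   # fixed seed, chosen for reproducibility
n, reps = 50, 20000

# reps independent copies of (X_1, ..., X_n), each X_k standard Cauchy
samples = rng.standard_cauchy(size=(reps, n))

# Normalize the row sums by n^{1/alpha} = n (alpha = 1)
normalized_sums = samples.sum(axis=1) / n

# Kolmogorov-Smirnov test against the standard Cauchy distribution;
# a non-small p-value is consistent with the stability property.
ks = stats.kstest(normalized_sums, stats.cauchy.cdf)
print(ks.pvalue)
```

Note the contrast with the finite-variance case: for α = 2 (the normal law) the correct normalization n^{1/2} is the familiar CLT scaling, while for the Cauchy law the sum must be divided by n itself, so the sample mean of Cauchy variables never settles down.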
A. Gut, Probability: A Graduate Course, Springer Texts in Statistics, DOI: 10.1007/978-1-4614-4708-5_9, © Springer Science+Business Media New York 2013
The answer to this question is essentially "yes", and the limits are the stable distributions, but the details are technically more sophisticated than when the variance is finite. So, before turning to asymptotics we have to investigate the stable distributions themselves, and to prove the so-called convergence to types theorem, which states that if a sequence of random variables converges in distribution and a linearly transformed sequence does too, then the limits must be linear transformations of each other.

A further extension is to consider arrays of random variables and the corresponding classes of limit distributions. The class of limit distributions for arrays is the class of infinitely divisible distributions. This will be the topic of Sect. 4. Another generalization is to su