Introductory Measure Theory


1 Probability Theory: An Introduction

The object of probability theory is to describe and investigate mathematical models of random phenomena, primarily from a theoretical point of view. Closely related to probability theory is statistics, which is concerned with creating principles, methods, and criteria for treating data pertaining to such (random) phenomena, or data from experiments and other observations of the real world, using, for example, the theories and knowledge available from probability theory.

Probability models thus aim at describing random experiments, that is, experiments that can be repeated (indefinitely) and whose future outcomes cannot be exactly predicted, due to randomness, even if the experimental situation at hand can be fully controlled.

The basis of probability theory is the probability space. The key idea behind probability spaces is the stabilization of relative frequencies. Suppose that we perform "independent" repetitions of a random experiment and record each time whether some "event" A occurs or not (although we have not yet mathematically defined what we mean by independence or by an event). Let f_n(A) denote the number of occurrences of A in the first n trials, and r_n(A) the relative frequency, r_n(A) = f_n(A)/n. Since the dawn of history one has observed the stabilization of the relative frequencies, that is, one has observed that (it seems that) r_n(A) converges to some real number as n → ∞.
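The stabilization of relative frequencies is easy to observe in simulation. The sketch below is not from the text (the function name and parameters are my own, and p = 0.6 is an arbitrary choice); it simulates n independent trials in which an event A occurs with probability p, and records the running relative frequencies r_k(A) = f_k(A)/k:

```python
import random

def relative_frequencies(p, n, seed=0):
    """Simulate n independent trials in which an event A occurs with
    probability p, and return the running relative frequencies
    r_k(A) = f_k(A) / k for k = 1, ..., n."""
    rng = random.Random(seed)   # fixed seed, so the run is reproducible
    f = 0                       # f_k(A): occurrences of A among the first k trials
    freqs = []
    for k in range(1, n + 1):
        if rng.random() < p:    # trial k: A occurs with probability p
            f += 1
        freqs.append(f / k)
    return freqs

freqs = relative_frequencies(p=0.6, n=100_000)
print(freqs[9], freqs[-1])  # r_10(A) is still noisy; r_100000(A) is close to 0.6
```

For large n the final value of r_n(A) lies close to p, which illustrates (but of course does not prove) the convergence that the axiomatic theory will eventually justify.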

The intuitive interpretation of the probability concept is that if the probability of some event A is 0.6, one should expect that, on performing the random experiment "many times", the relative frequency of occurrences of A will be approximately 0.6.

The next step is to axiomatize the theory, to make it mathematically rigorous. Although games of chance have been played for thousands of years, a mathematically rigorous treatment of probability theory only came about in the 1930s, when the Soviet/Russian mathematician A.N. Kolmogorov (1903–1987) published his fundamental monograph Grundbegriffe der Wahrscheinlichkeitsrechnung [170] in 1933.

The first observation is that a number of rules that hold for relative frequencies should also hold for probabilities. This immediately raises the question "what is the minimal set of rules?" In order to answer this question one introduces the probability space, or probability triple, (Ω, F, P), where

  • Ω is the sample space;
  • F is the collection of events;
  • P is a probability measure.

The fact that P is a probability measure means that it satisfies the three Kolmogorov axioms (to be specified ahead). In a first course in probability theory one learns that "the collection of events = the subsets of Ω", maybe with an additional remark that this is not quite true, but true enough for the purposes

A. Gut, Probability: A Graduate Course, Springer Texts in Statistics, DOI: 10.1007/978-1-4614-4708-5_1, © Springer Science+Business Media New York 2013
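As an illustrative sketch of the triple (Ω, F, P), my own construction rather than anything from the text, consider a fair six-sided die: Ω = {1, …, 6}, F = the power set of Ω (taking all subsets as events is unproblematic for a finite sample space), and P the uniform measure. The three Kolmogorov axioms (non-negativity, P(Ω) = 1, and additivity) can then be checked directly; for a finite Ω, finite additivity stands in for countable additivity:

```python
from fractions import Fraction
from itertools import combinations

# A toy probability triple (Omega, F, P) for a fair six-sided die.
Omega = frozenset(range(1, 7))

def power_set(s):
    """All subsets of s: the collection of events F for a finite sample space."""
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

F = power_set(Omega)

def P(event):
    """Uniform probability measure: each of the six outcomes has mass 1/6."""
    return Fraction(len(event), len(Omega))

# The three Kolmogorov axioms, checked on this finite space:
assert all(P(A) >= 0 for A in F)              # 1. non-negativity
assert P(Omega) == 1                          # 2. normalization
A, B = frozenset({1, 2}), frozenset({5, 6})   # two disjoint events
assert P(A | B) == P(A) + P(B)                # 3. additivity (finite case)
print(len(F))  # 64 events: the power set of a 6-point sample space
```

Using exact fractions for P keeps the axiom checks free of floating-point rounding; for infinite sample spaces, of course, F must be a genuine σ-algebra rather than the full power set, which is exactly the point the chapter goes on to develop.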