Random Variables and Distribution Functions
Abstract Section 3.1 introduces the formal definitions of random variable and its distribution, illustrated by several examples. The main properties of distribution functions, including a characterisation theorem for them, are presented in Sect. 3.2. This is followed by listing and briefly discussing the key univariate distributions. The second half of the section is devoted to considering the three types of distributions on the real line and the distributions of functions of random variables. In Sect. 3.3 multivariate random variables (random vectors) and their distributions are introduced and discussed in detail, including the two key special cases: the multinomial and the normal (Gaussian) distributions. After that, the concepts of independence of random variables and that of classes of events are considered in Sect. 3.4, establishing criteria for independence of random variables of different types. The theorem on independence of sigma-algebras generated by independent algebras of events is proved with the help of the probability approximation theorem. Then the relationships between the introduced notions are extensively discussed. In Sect. 3.5, the problem of existence of infinite sequences of random variables is solved with the help of Kolmogorov’s theorem on families of consistent distributions, which is proved in Appendix 2. Section 3.6 is devoted to discussing the concept of integral in the context of Probability Theory (a formal introduction to Integration Theory is presented in Appendix 3). The integrals of functions of random vectors are discussed, including the derivation of the convolution formulae for sums of independent random variables.
3.1 Definitions and Examples

Let (Ω, F, P) be an arbitrary probability space.

Definition 3.1.1 A random variable ξ is a measurable function ξ = ξ(ω) mapping (Ω, F) into (R, B), where R is the set of real numbers and B is the σ-algebra of all Borel sets, i.e. a function for which the inverse image ξ⁻¹(B) = {ω : ξ(ω) ∈ B} of any Borel set B ∈ B is a set from the σ-algebra F.

A.A. Borovkov, Probability Theory, Universitext, DOI 10.1007/978-1-4471-5201-9_3, © Springer-Verlag London 2013
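The definition above can be made concrete on a finite sample space. The following sketch (not from the text; all names are illustrative) treats a random variable as a plain function ω → R and computes the inverse image ξ⁻¹(B) that Definition 3.1.1 requires to be an event:

```python
# Illustrative sketch: a random variable on a finite sample space.
# Omega is the set of faces of a die; xi maps each face to 1 (odd) or 0 (even).

omega_space = [1, 2, 3, 4, 5, 6]   # Omega: faces of a die

def xi(omega):
    """A random variable: xi(omega) = 1 if the face is odd, 0 if even."""
    return omega % 2

def inverse_image(borel_set):
    """Return the inverse image {omega : xi(omega) in B} as a subset of Omega."""
    return {omega for omega in omega_space if xi(omega) in borel_set}

# On a finite Omega with F = 2^Omega (all subsets are events), every
# function xi is automatically measurable: each inverse image is a subset
# of Omega and hence an event.
print(inverse_image({1}))   # the odd faces
```

On a finite Ω with F taken to be all subsets, measurability is automatic; the condition only becomes restrictive when F is a proper sub-σ-algebra of 2^Ω.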
For example, when tossing a coin once, Ω consists of two points: heads and tails. If we put 1 in correspondence with heads and 0 with tails, we clearly obtain a random variable. The number of points shown on a die is also a random variable. The distance from the origin to a point chosen at random in the square [0 ≤ x ≤ 1, 0 ≤ y ≤ 1] is also a random variable, since the set {(x, y) : x² + y² < t} is measurable. The reader might have already noticed that in these examples it is very difficult to come up with a non-measurable function of ω which would be related to any real problem. This is often the case, but not always. In Chap. 18, devoted to random processes, we will be interested in sets which, generally speaking, are not events and which require special modifications.
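The distance example above can be checked numerically. For 0 ≤ t ≤ 1, the event {ξ < t} is the quarter-disc of area πt²/4 inside the unit square, so P(ξ < t) = πt²/4. The following Monte Carlo sketch (illustrative code, not from the text; the function name and sample size are assumptions) estimates this probability:

```python
import random

def estimate_prob(t, n=200_000, seed=0):
    """Estimate P(xi < t) where xi(x, y) = sqrt(x^2 + y^2) for a point
    (x, y) chosen uniformly in the unit square.

    Counts the fraction of sampled points falling inside the quarter-disc
    {x^2 + y^2 < t^2}; by the law of large numbers this approaches
    pi * t^2 / 4 for 0 <= t <= 1.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y < t * t:
            hits += 1
    return hits / n

# For t = 1 the estimate should be close to pi/4 ≈ 0.7854.
print(estimate_prob(1.0))
```

That the frequency converges to the area of the quarter-disc is exactly what measurability of {(x, y) : x² + y² < t} guarantees: the set has a well-defined probability to converge to.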