Random Walks, a Good Place to Begin

The purpose of this chapter is to discuss some examples of Markov processes that can be understood even before the term “Markov process” is. Indeed, anyone who has been introduced to probability theory will recognize that these processes all derive from consideration of elementary “coin tossing.”

1.1 Nearest Neighbor Random Walks on Z

Let p be a fixed number from the open interval (0, 1), and suppose that¹ {Bn : n ∈ Z+} is a sequence of {−1, 1}-valued, identically distributed Bernoulli random variables² which are 1 with probability p. That is, for any n ∈ Z+ and any E ≡ (ε1, . . . , εn) ∈ {−1, 1}^n,

\[
P(B_1 = \varepsilon_1, \ldots, B_n = \varepsilon_n) = p^{N(E)}\, q^{\,n - N(E)}, \quad \text{where } q \equiv 1 - p \text{ and}
\]
\[
N(E) \equiv \#\{m : \varepsilon_m = 1\} = \frac{n + S_n(E)}{2} \quad \text{when } S_n(E) \equiv \sum_{m=1}^{n} \varepsilon_m. \tag{1.1.1}
\]

¹ Z is used to denote the set of all integers, of which N and Z+ are, respectively, the non-negative and positive members.
² For historical reasons, mutually independent random variables which take only two values are often said to be Bernoulli random variables. A mathematically rigorous proof that infinitely many exist can be found in Sect. 2.2 of [8].
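Equation (1.1.1) is just a product formula over the coordinates of E, so it can be checked mechanically. The following Python sketch, with illustrative names of my own choosing (n, p, N_E, S_n), verifies the identity N(E) = (n + Sn(E))/2 on every string in {−1, 1}^n and confirms that the string probabilities sum to 1:

```python
# Check (1.1.1) by brute force over all strings E in {-1, 1}^n.
from itertools import product

n, p = 4, 0.3          # arbitrary illustrative parameters
q = 1 - p

total = 0.0
for E in product((-1, 1), repeat=n):
    N_E = sum(1 for eps in E if eps == 1)   # N(E): number of +1 coordinates
    S_n = sum(E)                            # S_n(E): sum of the coordinates
    assert N_E == (n + S_n) // 2            # the identity in (1.1.1)
    total += p**N_E * q**(n - N_E)          # P(B_1 = eps_1, ..., B_n = eps_n)

print(abs(total - 1.0) < 1e-12)             # True: the probabilities sum to 1
```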

Next, set X0 = 0 and

\[
X_n = \sum_{m=1}^{n} B_m \quad \text{for } n \in \mathbb{Z}^+. \tag{1.1.2}
\]
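Definition (1.1.2) translates directly into a simulation: sample the steps B1, . . . , Bn and accumulate their partial sums. Here is a minimal Python sketch; the function name random_walk and the parameter choices are mine, not the text's:

```python
# Simulate a nearest neighbor random walk on Z via (1.1.2).
import random

def random_walk(n, p, rng=random):
    """Return the path (X_0, X_1, ..., X_n), where X_0 = 0 and X_m = X_{m-1} + B_m."""
    path = [0]                                # X_0 = 0
    for _ in range(n):
        step = 1 if rng.random() < p else -1  # B_m = 1 with probability p, else -1
        path.append(path[-1] + step)          # X_m = X_{m-1} + B_m
    return path

print(random_walk(10, 0.5))  # e.g. [0, 1, 0, -1, 0, 1, 2, 1, 2, 3, 2]
```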

The above family of random variables {Xn : n ∈ N} is often called a nearest neighbor random walk on Z. Nearest neighbor random walks are examples of Markov processes, but the description that I have just given is the one which would be given in elementary probability theory, as opposed to a course, like this one, devoted to stochastic processes. When studying stochastic processes the description should emphasize the dynamic nature of the family. Thus, a stochastic-process-oriented description might replace (1.1.2) by

\[
P(X_0 = 0) = 1 \quad \text{and} \quad
P(X_n - X_{n-1} = \varepsilon \mid X_0, \ldots, X_{n-1}) =
\begin{cases}
p & \text{if } \varepsilon = 1 \\
q & \text{if } \varepsilon = -1,
\end{cases} \tag{1.1.3}
\]

where P(Xn − Xn−1 = ε | X0, . . . , Xn−1) denotes the conditional probability (cf. Sect. 7.4.1) that Xn − Xn−1 = ε given σ({X0, . . . , Xn−1}). Certainly (1.1.3) is a more dynamic description than the one in (1.1.2). Specifically, it says that the process starts from 0 at time n = 0 and proceeds so that, at each time n ∈ Z+, it moves one step forward with probability p or one step backward with probability q, independent of where it has been before time n.
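The independence asserted by (1.1.3) can be illustrated empirically: conditioning on an event determined by the walk's past leaves the step distribution unchanged. The sketch below conditions on the particular history event X1 = 1; that event, and the parameters, are arbitrary choices made for illustration:

```python
# Estimate P(X_2 - X_1 = 1 | X_1 = 1); by (1.1.3) it should be p.
import random

p, trials = 0.3, 200_000
hits = ups = 0
for _ in range(trials):
    b1 = 1 if random.random() < p else -1   # first step, B_1
    b2 = 1 if random.random() < p else -1   # second step, B_2
    if b1 == 1:                             # condition on the history event X_1 = 1
        hits += 1
        ups += (b2 == 1)                    # event {X_2 - X_1 = 1}
print(ups / hits)                           # approximately p = 0.3
```

Repeating the experiment with any other past event (say X1 = −1) gives the same answer, which is exactly what "independent of where it has been before time n" means.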

1.1.1 Distribution at Time n

In this subsection I will present two approaches to computing P(Xn = m). The first computation is based on the description given in (1.1.2). From (1.1.2) it is clear that P(|Xn| ≤ n) = 1. In addition, it is obvious that

\[
n \text{ odd} \implies P(X_n \text{ is odd}) = 1
\quad \text{and} \quad
n \text{ even} \implies P(X_n \text{ is even}) = 1.
\]

Finally, given m ∈ {−n, . . . , n} with the same parity as n and a string E = (ε1, . . . , εn) ∈ {−1, 1}^n with (cf. (1.1.1)) Sn(E) = m, N(E) = (n + m)/2, and so

\[
P(B_1 = \varepsilon_1, \ldots, B_n = \varepsilon_n) = p^{\frac{n+m}{2}}\, q^{\frac{n-m}{2}}.
\]

Hence, because, when k ≡ (n + m)/2, there are

\[
\binom{n}{k} = \frac{n!}{k!\,(n-k)!}
\]

such strings E, we see that

\[
P(X_n = m) = \binom{n}{\frac{n+m}{2}}\, p^{\frac{n+m}{2}}\, q^{\frac{n-m}{2}}.
\]
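As a sanity check on this closed form, one can compare it against a Monte Carlo estimate. A short Python sketch, with arbitrary parameter choices (n = 8, p = 0.3, m = 2):

```python
# Compare the closed form for P(X_n = m) with a Monte Carlo estimate.
import random
from math import comb

n, p, m = 8, 0.3, 2          # m must have the same parity as n
q = 1 - p
k = (n + m) // 2             # k = N(E) whenever S_n(E) = m

exact = comb(n, k) * p**k * q**(n - k)

trials = 200_000
count = sum(
    1
    for _ in range(trials)
    if sum(1 if random.random() < p else -1 for _ in range(n)) == m
)
print(exact, count / trials)  # the two values should agree to ~2 decimal places
```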