Scaling exponents of step-reinforced random walks



Jean Bertoin
Received: 17 February 2020 / Revised: 16 September 2020 / Accepted: 25 September 2020
© The Author(s) 2020

Abstract
Let X_1, X_2, ... be i.i.d. copies of some real random variable X. For any deterministic ε_2, ε_3, ... in {0, 1}, a basic algorithm introduced by H. A. Simon yields a reinforced sequence X̂_1, X̂_2, ... as follows. If ε_n = 0, then X̂_n is a uniform random sample from X̂_1, ..., X̂_{n−1}; otherwise X̂_n is a new independent copy of X. The purpose of this work is to compare the scaling exponent of the usual random walk S(n) = X_1 + ··· + X_n with that of its step-reinforced version Ŝ(n) = X̂_1 + ··· + X̂_n. Depending on the tail of X and on the asymptotic behavior of the sequence (ε_n), we show that step reinforcement may speed up the walk, or on the contrary slow it down, or leave the scaling exponent unaffected. Our motivation partly stems from the study of random walks with memory, notably the so-called elephant random walk and its variations.

Keywords Reinforcement · Random walk · Scaling exponent · Heavy tail distribution

Mathematics Subject Classification 60G50 · 60G51 · 60K35
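The two-case update in the abstract can be made concrete with a short simulation. The sketch below is ours, not from the paper; the names `step_reinforced_walk` and `sample_X` are hypothetical, and the construction follows the rule that an innovation at step n takes the σ(n)-th of the i.i.d. copies while a repetition resamples uniformly from the past.

```python
import random

def step_reinforced_walk(eps, sample_X):
    """Simulate X̂_1, ..., X̂_n and return (S(n), Ŝ(n)).

    eps     -- list of 0/1 values ε_1, ..., ε_n with eps[0] == 1
               (the first step is always an innovation).
    sample_X -- callable drawing an independent copy of X.
    """
    n = len(eps)
    X = [sample_X() for _ in range(n)]   # X_1, ..., X_n, i.i.d. copies of X
    X_hat = []
    sigma = 0                            # σ(k): number of innovations so far
    for k in range(n):
        if eps[k] == 1:                  # innovation: X̂_{k+1} = X_{σ(k+1)}
            sigma += 1
            X_hat.append(X[sigma - 1])
        else:                            # repetition: uniform pick from the past
            X_hat.append(random.choice(X_hat))
    return sum(X), sum(X_hat)            # S(n) and Ŝ(n)
```

For instance, with two distinct step values and ε = (1, 0), the reinforced walk necessarily repeats its first step, so Ŝ(2) = 2·X_1 while S(2) = X_1 + X_2.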

1 Introduction

In 1955, Herbert A. Simon [24] introduced a simple reinforcement algorithm that runs as follows. Consider a deterministic sequence (ε_n) in {0, 1} with ε_1 = 1. The n-th step of the algorithm corresponds to an innovation if ε_n = 1, and to a repetition if ε_n = 0. Specifically, denote the number of innovations after n steps by

    σ(n) = ∑_{i=1}^{n} ε_i    for n ≥ 1,

and also let X_1, X_2, ... denote a sequence of different items (in [24], these items are words). One constructs recursively a random sequence of items X̂_1, X̂_2, ... by

Jean Bertoin, [email protected]
Institute of Mathematics, University of Zurich, Zurich, Switzerland



deciding that X̂_n = X_{σ(n)} if ε_n = 1, and that X̂_n = X̂_{U(n)} if ε_n = 0, where U(n) is uniformly distributed on [n − 1] = {1, ..., n − 1} and U(2), U(3), ... are independent. Simon was especially interested in regimes where either (ε_n) converges in the Cesàro sense to some limit q ∈ (0, 1), which we will refer to as steady innovation with rate q, or σ(n) grows like n^ρ for some ρ ∈ (0, 1) as n → ∞, which we will call slow innovation with exponent ρ. By analyzing the frequencies of words with some fixed number of occurrences, he pointed out that these regimes yield a remarkable one-parameter family of power-tail distributions, known nowadays as the Yule–Simon laws, which arise in a variety of empirical data. This is also closely related to preferential attachment dynamics; see e.g. [10] for an application to the World Wide Web. Clearly, repetitions in Simon's algorithm should be viewed as a linear reinforcement, the probability that a given item is repeated being proportional to the number of its previous occurrences. In the present work, the items X_1, X_2, ... are i.i.d. copies of some real random variable X, which we further
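The two innovation regimes admit simple explicit ε-sequences; the helper functions below are our own illustrative choices, not constructions from the paper. For slow innovation, setting ε_n = 1 exactly when ⌊n^ρ⌋ increases gives σ(n) = ⌊n^ρ⌋ ~ n^ρ; for steady innovation, independent Bernoulli(q) draws have Cesàro mean q almost surely (though the paper works with deterministic sequences).

```python
import math
import random

def eps_slow(n, rho):
    """ε-sequence with slow innovation: ε_k = 1 iff ⌊k^ρ⌋ > ⌊(k−1)^ρ⌋,
    so that σ(k) = ⌊k^ρ⌋ grows like k^ρ. Note ε_1 = 1 automatically."""
    return [1 if math.floor(k ** rho) > math.floor((k - 1) ** rho) else 0
            for k in range(1, n + 1)]

def eps_steady(n, q, rng=random):
    """Random ε-sequence whose Cesàro mean is q a.s. — one convenient
    (non-deterministic) stand-in for steady innovation with rate q."""
    eps = [1 if rng.random() < q else 0 for _ in range(n)]
    eps[0] = 1  # the first step is always an innovation
    return eps
```

With `eps_slow`, the innovation count `sum(eps_slow(n, rho))` equals ⌊n^ρ⌋ by telescoping, matching the slow-innovation regime with exponent ρ.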