On a Class of Random Walks with Reinforced Memory
Erich Baur
Bern University of Applied Sciences, Bern, Switzerland
[email protected]

Received: 11 September 2019 / Accepted: 25 June 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Communicated by Antti Knowles.
Abstract This paper deals with different models of random walks with a reinforced memory of preferential attachment type. We consider extensions of the Elephant Random Walk introduced by Schütz and Trimper (Phys Rev E 70:045101(R), 2004) with stronger reinforcement mechanisms, where, roughly speaking, a step from the past is remembered with probability proportional to some weight and then repeated with probability p. With probability 1 − p, the random walk performs a step independent of the past. The weight of the remembered step is increased by an additive factor b ≥ 0, making it more likely that the step is repeated again in the future. A combination of techniques from the theory of urns, branching processes and α-stable processes enables us to discuss the limit behavior of reinforced versions of both the Elephant Random Walk and its α-stable counterpart, the so-called Shark Random Swim introduced by Businger (J Stat Phys 172(3):701–717, 2018). We establish phase transitions, separating subcritical from supercritical regimes.

Keywords Reinforced random walks · Preferential attachment · Memory · Stable processes · Branching processes · Pólya urns

Mathematics Subject Classification 60G50 · 60G52 · 60K35 · 05C85
1 Introduction

Over the last decades, there has been sustained interest in (usually non-Markovian) random walks with reinforcement. Arguably the most important class is formed by edge (or vertex) reinforced random walks; we point to the survey of Pemantle [26] and the more recent works [15,22,28], with references therein, to mention just a few. Loosely speaking, an edge reinforced random walk crosses an edge with a probability proportional to a weight associated to that edge, which increases after each visit. Edge reinforced random walks have found several applications in statistical physics and Bayesian statistics, see, e.g., [16,27,28].
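To illustrate the mechanism just described, the following minimal Python sketch simulates a linearly edge-reinforced random walk on the integer line. The unit initial edge weights and the reinforcement increment delta are illustrative assumptions of ours, not specifications taken from the works cited above.

import random

def edge_reinforced_walk(n_steps, delta=1.0, seed=0):
    """Linearly edge-reinforced random walk on the integers (sketch).

    Every edge {k, k+1} starts with weight 1 (illustrative assumption);
    each crossing adds `delta` to the crossed edge. The walker moves
    along a neighbouring edge with probability proportional to that
    edge's current weight.
    """
    rng = random.Random(seed)
    weights = {}                 # edge {k, k+1} stored under key k
    pos, path = 0, [0]
    for _ in range(n_steps):
        w_left = weights.get(pos - 1, 1.0)    # weight of edge {pos-1, pos}
        w_right = weights.get(pos, 1.0)       # weight of edge {pos, pos+1}
        if rng.random() < w_right / (w_left + w_right):
            weights[pos] = w_right + delta    # reinforce the crossed edge
            pos += 1
        else:
            weights[pos - 1] = w_left + delta
            pos -= 1
        path.append(pos)
    return path

For instance, edge_reinforced_walk(10_000, delta=5.0) produces a trajectory that tends to revisit heavily used edges, the hallmark of reinforcement.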
In this paper, we shall be interested in another class of random walks with reinforcement, where at each time n and with a certain probability p, a step from the past is selected according to some weight (which may change over time) and then repeated, whereas with the complementary probability 1 − p, a new step independent of the past is performed. Part of the practical interest in such walks stems from the fact that they serve as toy models for anomalous diffusion, describing many phenomena in physics, chemistry and biology [23,24]. A prominent example in this class is the Elephant Random Walk (ERW for short) introduced by Schütz and Trimper [30] (equal weights, symmetric ±1 steps), which has drawn a lot of attention in recent years; see [1,4–7,13,14,17,25] for a non-exhaustive list. It is the purpose of this paper to extend both the ERW and its α-stable version, the Shark Random Swim.
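To make the dynamics concrete, here is a minimal Python sketch of the reinforced walk described above. The function name reinforced_erw and the convention that every performed step enters the memory with initial weight 1 are our own illustrative choices; only the p / (1 − p) step rule and the additive reinforcement b come from the text.

import random

def reinforced_erw(n_steps, p=0.7, b=1.0, seed=0):
    """Sketch of the reinforced Elephant Random Walk dynamics.

    With probability p, a past step is chosen with probability
    proportional to its weight and repeated, and the weight of the
    remembered step grows by the additive factor b >= 0; with
    probability 1 - p, an independent symmetric +/-1 step is taken.
    Each performed step enters the memory with weight 1 (assumption).
    For b = 0, all past steps are equally likely to be remembered,
    recovering an equal-weights ERW-type dynamic.
    """
    rng = random.Random(seed)
    steps = [rng.choice([-1, 1])]       # first step: independent
    weights = [1.0]
    positions = [steps[0]]
    for _ in range(n_steps - 1):
        if rng.random() < p:
            # remember a past step, proportionally to its weight ...
            (i,) = rng.choices(range(len(steps)), weights=weights)
            steps.append(steps[i])      # ... and repeat it
            weights[i] += b             # reinforce the remembered step
        else:
            steps.append(rng.choice([-1, 1]))   # fresh independent step
        weights.append(1.0)
        positions.append(positions[-1] + steps[-1])
    return positions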