Origins and Generation of Long Memory

In this chapter we discuss typical methods for constructing long-memory processes. Many models are motivated by probabilistic and statistical principles. On the other hand, sometimes one prefers to be led by subject-specific considerations. Typical for the first approach is the definition of linear processes with long memory, or fractional ARIMA models. Subject-specific models have been developed, for instance, in physics, finance and network engineering. Often the occurrence of long memory is detected by nonspecific, purely statistical methods, and subject-specific models are then developed to explain the phenomenon. For example, in economics aggregation is a possible reason for long-range dependence, while in computer networks long memory may be due to certain distributional properties of interarrival times. Often long memory is also linked to fractal structures.

2.1 General Probabilistic Models

2.1.1 Linear Processes with Finite Second Moments

2.1.1.1 General Definition of Linear Processes

The simplest time series models are linear processes. Given independent identically distributed variables ε_t (t ∈ Z), a causal linear process (or causal linear sequence, infinite moving average) is defined by

X_t = μ + \sum_{j=0}^{∞} a_j ε_{t−j} = μ + A(B) ε_t    (2.1)

    = μ + \Bigl( \sum_{j=0}^{∞} a_j B^j \Bigr) ε_t    (t ∈ Z)    (2.2)
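As a minimal numerical sketch of definition (2.1)–(2.2) (our own illustration, not from the text), the code below simulates a truncated causal linear process with hyperbolically decaying coefficients a_j = (j + 1)^{d−1}, a standard choice that yields long memory for 0 < d < 1/2. The truncation lag M, the memory parameter d and the sample size n are assumptions made for the example; the infinite sum is approximated by its first M + 1 terms.

```python
import numpy as np

# Illustrative sketch (not from the book): simulate a truncated causal
# linear process X_t = sum_{j=0}^{M} a_j * eps_{t-j} with mu = 0.
rng = np.random.default_rng(0)

d = 0.3                                      # memory parameter (assumed, 0 < d < 1/2)
M = 500                                      # truncation lag of the infinite moving average
n = 1000                                     # sample size
a = (np.arange(M + 1) + 1.0) ** (d - 1.0)    # coefficients a_j = (j+1)^(d-1), j = 0..M

eps = rng.standard_normal(n + M)             # i.i.d. innovations with E(eps_t) = 0
# X_t = sum_j a_j eps_{t-j} is a discrete convolution of the innovations
# with the coefficient sequence; "valid" keeps exactly the n fully
# overlapping terms.
X = np.convolve(eps, a, mode="valid")

print(X.shape)
```

In practice one would use a dedicated implementation (e.g. a fractional ARIMA simulator); the convolution above only makes the defining sum concrete.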

J. Beran et al., Long-Memory Processes, DOI 10.1007/978-3-642-35512-7_2, © Springer-Verlag Berlin Heidelberg 2013

with B denoting the backshift operator defined by Bε_t = ε_{t−1}. Here, “causal” refers to the fact that X_t does not depend on any future values of ε_t. For simplicity of notation and without loss of generality, we will assume in the following that μ = 0. In order that X_t is well defined, convergence of the infinite series has to be guaranteed in a suitable way. If X_t has to have finite second moments, then we need to impose that σ_ε² = var(ε_t) < ∞ and \sum_{j=0}^{∞} a_j² < ∞. Also, since the ε_{t−j} are supposed to model random mean-adjusted deviations (“innovations”) at time t, it is assumed that E(ε_t) = 0. Under these conditions, the series is convergent in the L²(Ω)-sense, i.e. for each t, there is a random variable X_t such that

\lim_{n→∞} \Bigl\| X_t − \sum_{j=0}^{n} a_j ε_{t−j} \Bigr\|²_{L²(Ω)} = \lim_{n→∞} E\Bigl[ \Bigl( X_t − \sum_{j=0}^{n} a_j ε_{t−j} \Bigr)² \Bigr] = 0.

We will also call Xt an L2 -linear process.
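The square-summability condition also pins down the variance of the limit: with σ_ε² = 1, var(X_t) = \sum_j a_j². The following sketch (our own choice of geometric, square-summable coefficients a_j = 0.8^j) checks this identity by Monte Carlo; the number of replications and the truncation length are assumptions made for the example.

```python
import numpy as np

# Sketch (own illustration): with sum a_j^2 < infinity the series
# converges in L^2 and var(X_t) = sigma_eps^2 * sum_j a_j^2.
rng = np.random.default_rng(1)

a = 0.8 ** np.arange(200)          # a_j = 0.8^j: geometric, square-summable
theory_var = (a ** 2).sum()        # with sigma_eps^2 = 1: var(X_t) = sum a_j^2

# Monte Carlo: many independent draws of the (truncated) sum X_t
reps = 20_000
eps = rng.standard_normal((reps, a.size))   # each row: one innovation sequence
X = eps @ a                                 # each entry: sum_j a_j eps_{t-j}
mc_var = X.var()

print(theory_var, mc_var)          # both close to 1/(1 - 0.64)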

2.1.1.2 Ergodicity

The first essential question one has to ask before thinking of statistical methods is whether the ergodic property with constant limit holds, i.e. for instance whether the sample mean x̄ = n^{−1} \sum_{t=1}^{n} X_t converges to μ = E(X_t) in a well-defined way. If almost sure convergence is required, then the fundamental result to answer this question is Birkhoff’s ergodic theorem (Birkhoff 1931; see also, e.g., Breiman 1992, Chap. 6). It states that x̄ converges almost surely to μ if X_t is strictly stationary, E(|X_t|) < ∞ and X_t is ergodic. The last property, ergodicity, means that for tail events (“asymptotic events”), measurable with respect to the σ-algebr
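A minimal sketch of the conclusion of Birkhoff’s theorem, for the simplest ergodic case (an i.i.d. sequence is strictly stationary and ergodic): the running sample mean approaches μ = E(X_t) as n grows. The distribution, the value of μ and the sample size are assumptions made for the illustration.

```python
import numpy as np

# Sketch (own illustration) of the sample mean x_bar = n^{-1} sum X_t
# converging to mu = E(X_t) for a strictly stationary, ergodic (here
# i.i.d. Gaussian) sequence, as guaranteed by Birkhoff's theorem.
rng = np.random.default_rng(2)

mu = 1.5
X = mu + rng.standard_normal(100_000)              # i.i.d., hence ergodic
partial_means = np.cumsum(X) / np.arange(1, X.size + 1)

print(abs(partial_means[99] - mu))                 # error at n = 100
print(abs(partial_means[-1] - mu))                 # error at n = 100000, far smaller
```

For genuinely long-memory processes the convergence still holds under the theorem’s conditions, but it is markedly slower than in the i.i.d. case shown here.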