Strong approximations for long memory sequences based partial sums, counting and their Vervaat processes



Endre Csáki (1) · Miklós Csörgő (2) · Rafał Kulik (3)

(1) A. Rényi Institute of Mathematics, Hungarian Academy of Sciences, P.O.B. 127, Budapest 1364, Hungary
(2) School of Mathematics and Statistics, Carleton University, Ottawa, ON K1S 5B6, Canada
(3) Department of Mathematics and Statistics, University of Ottawa, Ottawa, ON K1N 6N5, Canada

Corresponding author: Endre Csáki, [email protected]

© Akadémiai Kiadó, Budapest, Hungary 2016

Abstract We study the asymptotic behaviour of partial sums of long range dependent random variables and that of their counting process, together with an appropriately normalized integral process of the sum of these two processes, the so-called Vervaat process. The first two of these processes are approximated by an appropriately constructed fractional Brownian motion, while the Vervaat process in turn is approximated by the square of the same fractional Brownian motion.

Keywords Long range dependence · Linear process · Partial sums · Vervaat-type processes · Strong approximation · Fractional Brownian motion

Mathematics Subject Classification

Primary 60F15 · Secondary 60F17 · 60G22

1 Introduction

Let $\{\tilde\eta_j, j \ge 0\}$ be a stationary long range dependent (LRD) centered Gaussian sequence with $E(\tilde\eta_0^2) = 1$ and covariance function of the form

$$
\rho_k := E(\tilde\eta_0 \tilde\eta_k) = k^{-\alpha} L(k), \qquad k = 1, 2, \ldots, \tag{1.1}
$$

where $\alpha \in (0, 1)$, and $L(\cdot)$ is a slowly varying function at infinity. Let $G(\cdot)$ be a real valued Borel measurable function with $E(G(\tilde\eta_0)) = \mu$ and $E(G^2(\tilde\eta_0)) < \infty$. It may be expanded as


$$
G(\tilde\eta_0) - \mu = \sum_{q=m}^{\infty} \frac{J_q}{q!} H_q(\tilde\eta_0),
$$

where convergence is in $L^2$,

$$
H_q(x) := (-1)^q e^{x^2/2} \frac{d^q}{dx^q} e^{-x^2/2}, \qquad q = 1, 2, \ldots,
$$

are Hermite polynomials, and $J_q := E(G(\tilde\eta_0) H_q(\tilde\eta_0))$. The index $m$ in the above expansion is defined as $m := \min\{q \ge 1 : J_q \ne 0\}$. We then say that the Hermite rank of $G(\cdot)$ is $m$.

Consider the subordinated sequence $\{Y_j = G(\tilde\eta_j), j \ge 0\}$. Then $E(Y_j) = \mu$, $j = 0, 1, 2, \ldots$. Given (1.1) with $\alpha \in (0, 1)$, assume that the Hermite rank $m$ of $G(\cdot)$ is such that $0 < \alpha < 1/m$. Then, as $n \to \infty$, with $L(\cdot)$ as in (1.1), we have (cf. Lemma 3.1 and Theorem 3.1 of Taqqu [23], or [24]),

$$
\sigma_{n,m}^2 := \mathrm{Var}\left(\sum_{j=1}^{n} (Y_j - \mu)\right) \sim \frac{J_m^2}{m!}\, \frac{2}{(1 - \alpha m)(2 - \alpha m)}\, n^{2 - m\alpha} L^m(n), \tag{1.2}
$$

where the symbol $\sim$ stands for the indicated terms being asymptotically equal to each other.
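To illustrate the notion of Hermite rank numerically, here is a minimal Python sketch (not part of the paper; the function name, quadrature degree and tolerance are illustrative choices). It approximates $J_q = E(G(\tilde\eta_0) H_q(\tilde\eta_0))$ by Gauss-Hermite quadrature for the probabilists' Hermite polynomials defined above and returns the smallest $q \ge 1$ with $J_q \ne 0$.

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials

def hermite_rank(G, max_q=10, deg=200, tol=1e-10):
    """Find the Hermite rank of G numerically: the smallest q >= 1 with
    J_q = E[G(eta) H_q(eta)] != 0, where eta is standard normal.

    Gauss-Hermite quadrature with weight exp(-x^2/2) is used, so the
    expectation is the weighted quadrature sum divided by sqrt(2*pi).
    """
    x, w = He.hermegauss(deg)               # nodes and weights for exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)            # normalise to the N(0, 1) density
    for q in range(1, max_q + 1):
        coeffs = np.zeros(q + 1)
        coeffs[q] = 1.0                     # coefficient vector selecting H_q
        J_q = np.sum(w * G(x) * He.hermeval(x, coeffs))
        if abs(J_q) > tol:
            return q, J_q
    return None, 0.0                        # no nonzero J_q found up to max_q

# G(x) = x^2 is even, so J_1 = 0 and the rank is m = 2 with J_2 = 2;
# G(x) = x^3 has rank m = 1 with J_1 = E[x^4] = 3.
print(hermite_rank(lambda x: x ** 2))       # -> (2, 2.0) up to rounding
print(hermite_rank(lambda x: x ** 3))       # -> (1, 3.0) up to rounding
```

The quadrature degree used here integrates polynomial choices of $G$ exactly; for a general $G$ with $E(G^2(\tilde\eta_0)) < \infty$, a Monte Carlo estimate of $J_q$ would serve equally well.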

We consider an LRD Gaussian sequence that satisfies (1.1) and can be written in the form of a linear (moving average) process

$$
\eta_j = \sum_{k=0}^{\infty} \psi_k \xi_{j-k}, \qquad j = 0, 1, 2, \ldots, \tag{1.3}
$$

where $\{\xi_k, -\infty < k < \infty\}$ is a doubly infinite sequence of independent standard normal random variables, and the sequence of weights $\{\psi_k, k = 0, 1, 2, \ldots\}$ is square summable. Then $E(\eta_0) = 0$, $E(\eta_0^2) = \sum_{k=0}^{\infty} \psi_k^2 =: \sigma^2$ and, on putting $\tilde\eta_j = \eta_j/\sigma$, $\{\tilde\eta_j, j = 0, 1, 2, \ldots\}$ is a stationary Gaussian sequence with $E(\tilde\eta_0) = 0$ and $E(\tilde\eta_0^2) = 1$. If $\psi_k \sim k^{-(1+\alpha)/2}\,\ell(k)$ with a slowly varying function $\ell(\cdot)$ at infinity,
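For concreteness, the following Python sketch (not from the paper; the function name and all numerical choices are illustrative) simulates a truncated version of the linear process (1.3) with weights $\psi_k = (k+1)^{-(1+\alpha)/2}$, that is, with the slowly varying factor taken to be identically 1, and checks that the covariances implied by these weights decay at the polynomial rate $k^{-\alpha}$ of (1.1).

```python
import numpy as np

def lrd_linear_process(n, alpha, K=2000, seed=None):
    """Simulate n terms of the standardised linear process (1.3),
    eta_j = sum_{k=0}^K psi_k xi_{j-k}, truncated at lag K, with weights
    psi_k = (k+1)^(-(1+alpha)/2), so that psi_k ~ k^(-(1+alpha)/2).
    The truncation makes the long-memory behaviour approximate at lags
    comparable to K.
    """
    rng = np.random.default_rng(seed)
    psi = np.arange(1.0, K + 2.0) ** (-(1.0 + alpha) / 2.0)
    sigma = np.sqrt(np.sum(psi ** 2))            # sigma^2 = sum_k psi_k^2
    xi = rng.standard_normal(n + K)              # innovations xi_{j-k}
    eta = np.convolve(xi, psi, mode="valid")     # eta_j = sum_k psi_k xi_{j-k}
    return eta / sigma                           # mean 0, variance 1

# Usage: a standardised LRD Gaussian sample path.
eta_tilde = lrd_linear_process(10_000, alpha=0.4, seed=0)

# Check on (1.1): the covariances implied by the truncated weights,
# rho_k = sum_j psi_j psi_{j+k} / sigma^2, decay like a constant times
# k^(-alpha), so rho_k * k^alpha should be roughly constant in k.
alpha, K = 0.4, 20_000
psi = np.arange(1.0, K + 2.0) ** (-(1.0 + alpha) / 2.0)
sigma2 = np.sum(psi ** 2)
for lag in (10, 100, 1000):
    rho = np.dot(psi[:-lag], psi[lag:]) / sigma2
    print(lag, rho, rho * lag ** alpha)
```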