Stationary Processes

2.1 Basic Properties
2.2 Linear Processes
2.3 Introduction to ARMA Processes
2.4 Properties of the Sample Mean and Autocorrelation Function
2.5 Forecasting Stationary Time Series
2.6 The Wold Decomposition

A key role in time series analysis is played by processes whose properties, or some of them, do not vary with time. If we wish to make predictions, then clearly we must assume that something does not vary with time. In extrapolating deterministic functions it is common practice to assume that either the function itself or one of its derivatives is constant. The assumption of a constant first derivative leads to linear extrapolation as a means of prediction. In time series analysis our goal is to predict a series that typically is not deterministic but contains a random component. If this random component is stationary, in the sense of Definition 1.4.2, then we can develop powerful techniques to forecast its future values. These techniques will be developed and discussed in this and subsequent chapters.

2.1 Basic Properties

In Section 1.4 we introduced the concept of stationarity and defined the autocovariance function (ACVF) of a stationary time series {Xt} as

  γ(h) = Cov(Xt+h, Xt),   h = 0, ±1, ±2, . . . .

The autocorrelation function (ACF) of {Xt} was defined similarly as the function ρ(·) whose value at lag h is

  ρ(h) = γ(h) / γ(0).
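In practice these quantities are replaced by their sample versions computed from observations x1, . . . , xn, as described in Section 1.4.1. The following is a minimal sketch of those estimators in Python; it is not code from the book, the function names are illustrative, and it assumes the usual divisor-n estimator γ̂(h) = n⁻¹ Σ (x_{t+|h|} − x̄)(x_t − x̄).

```python
import numpy as np

def sample_acvf(x, h):
    """Sample autocovariance at lag h: (1/n) * sum_{t=1}^{n-|h|} (x_{t+|h|} - xbar)(x_t - xbar)."""
    x = np.asarray(x, dtype=float)
    n, h = len(x), abs(h)
    xbar = x.mean()
    return np.sum((x[h:] - xbar) * (x[: n - h] - xbar)) / n

def sample_acf(x, h):
    """Sample autocorrelation at lag h: gamma_hat(h) / gamma_hat(0)."""
    return sample_acvf(x, h) / sample_acvf(x, 0)

# Example: for simulated iid noise the sample ACF at lags h >= 1 should be
# close to zero when n is large, while sample_acf(x, 0) is exactly 1.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
print([round(sample_acf(x, h), 3) for h in range(4)])
```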

The ACVF and ACF provide a useful measure of the degree of dependence among the values of a time series at different times and for this reason play an important role when we consider the prediction of future values of the series in terms of the past and present values. They can be estimated from observations of X1, . . . , Xn by computing the sample ACVF and ACF as described in Section 1.4.1.

The role of the autocorrelation function in prediction is illustrated by the following simple example. Suppose that {Xt} is a stationary Gaussian time series (see Definition A.3.2) and that we have observed Xn. We would like to find the function of Xn that gives us the best predictor of Xn+h, the value of the series after another h time units have elapsed. To define the problem we must first say what we mean by "best." A natural and computationally convenient definition is to specify our required predictor to be the function of Xn with minimum mean squared error. In this illustration, and indeed throughout the remainder of this book, we shall use this as our criterion for "best." Now by Proposition A.3.1 the conditional distribution of Xn+h given that Xn = xn is

  N(μ + ρ(h)(xn − μ), σ²(1 − ρ(h)²)),

where μ and σ² are the mean and variance of {Xt}. It was shown in Problem 1.1 that the value of the constant c that minimizes E(Xn+h − c)² is c = E(Xn+h) and that the function m of Xn that minimizes E(Xn+h − m(Xn))² is the conditional mean

  m(Xn) = E(Xn+h | Xn) = μ + ρ(h)(Xn − μ).
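As a numerical check of this predictor, the sketch below (again in Python with NumPy; the parameter values and variable names are illustrative assumptions, not taken from the text) draws pairs (Xn, Xn+h) from the bivariate normal distribution implied by a stationary Gaussian series with mean μ, variance σ², and ACF value ρ(h), and compares the mean squared error of μ + ρ(h)(Xn − μ) with that of the constant predictor μ.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the text): mean, variance,
# and the value of the ACF at the forecast lag h.
mu, sigma2, rho_h = 1.0, 4.0, 0.6

rng = np.random.default_rng(1)
n_pairs = 100_000

# Draw (X_n, X_{n+h}) jointly Gaussian with mean mu, variance sigma2 and
# correlation rho_h, as for a stationary Gaussian series observed h apart.
cov = sigma2 * np.array([[1.0, rho_h], [rho_h, 1.0]])
xn, xnh = rng.multivariate_normal([mu, mu], cov, size=n_pairs).T

best = mu + rho_h * (xn - mu)      # conditional-mean predictor of X_{n+h}
naive = np.full_like(xnh, mu)      # constant predictor c = E(X_{n+h})

print("MSE of mu + rho(h)(X_n - mu):", np.mean((xnh - best) ** 2))
print("theoretical sigma^2 (1 - rho(h)^2):", sigma2 * (1 - rho_h ** 2))
print("MSE of constant predictor mu:", np.mean((xnh - naive) ** 2))
```

The simulated mean squared error of the conditional-mean predictor should be close to σ²(1 − ρ(h)²), the conditional variance displayed above, and strictly smaller than the error of the constant predictor whenever ρ(h) ≠ 0.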