The Covariance Matrix of the Error Vector
Assumption (iv) of the linear regression model claims the covariance matrix of the error vector ε to be Cov(ε) = σ²Iₙ with an unknown parameter σ² ∈ (0, ∞). This chapter discusses the estimation of σ² in detail, and introduces situations under which it appears to be reasonable to extend assumption (iv) to Cov(ε) = σ²V for some symmetric positive/nonnegative definite matrix V ≠ Iₙ.
5.1 Estimation of the Error Variance

Section 2.2.4 introduces the least squares variance estimator
 
σ̂² = (1/(n − p)) (y − Xβ̂)′(y − Xβ̂)

as an unbiased estimator for σ² and mentions the existence of other possible estimators like σ̂₁² and σ̂²_ML. This section deals with the estimation of σ² in more detail.

5.1.1 The Sample Variance
 
In the linear regression model y = Xβ + ε, the random variables y₁, …, yₙ can also be seen as a sample of size n, which, however, does not meet the requirements of a simple random sample [79, Definition 2.2.2], since the individual yᵢ are not identically distributed as long as not all independent variables are constant. The appropriate measure for the dispersion in a simple random sample is the sample variance
 
σ̂²_y = (1/(n − 1)) Σᵢ₌₁ⁿ (yᵢ − ȳ)².

It coincides with the least squares variance estimator for σ² in the special case that the linear regression model is the simple mean shift model, described by the equation

y = μ1ₙ + ε,
 
 J. Groß, Linear Regression © Springer-Verlag Berlin Heidelberg 2003
 
 
where 1ₙ denotes the n × 1 vector whose every element is equal to 1. In that case the ordinary least squares estimator for μ is μ̂ = (1/n) 1ₙ′y, and the vector of ordinary least squares residuals is given by ε̂ = Cy, where

C = Iₙ − (1/n) 1ₙ1ₙ′

is the centering matrix. Then

σ̂²_y = (1/(n − 1)) y′Cy

is identical to σ̂². Hence, under the simple mean shift model, the estimator σ̂²_y is unbiased for σ². If we consider σ̂²_y as an estimator for σ² under the linear regression model y = Xβ + ε with assumptions (i) to (iv), then

E(σ̂²_y) = σ² · tr(C)/(n − 1) + β′X′CXβ/(n − 1),
where tr(C) = n − 1. Hence, we have E(σ̂²_y) = σ² for every σ² ∈ (0, ∞) and every β ∈ ℝᵖ if and only if CX = 0. The latter means that the column space of X is contained in the column space of 1ₙ, which can only happen if each column of X consists of identical elements. Hence, the sample variance as an estimator for σ² is usually biased upwards in the linear regression model, meaning that the actual estimate must be expected to be greater than the true σ². Therefore, with respect to the bias, the least squares estimator σ̂² is more favorable than the sample variance σ̂²_y. This is not a surprising result, since under the linear regression model the random variables y₁, …, yₙ do not have identical expectations (unless all independent variables are constant), thus contradicting the usual assumptions for a reasonable application of the sample variance.
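A Monte Carlo sketch of this comparison (simulated data and illustrative parameter values, not from the text): with a non-constant regressor we have CX ≠ 0, so the sample variance σ̂²_y = y′Cy/(n − 1) should overshoot the true σ² in line with the expectation formula, while the least squares estimator σ̂² = ε̂′ε̂/(n − p) should stay near σ²:

```python
import numpy as np

rng = np.random.default_rng(1)

# Model y = X beta + eps with a non-constant regressor, so CX != 0
# (illustrative parameter values).
n, p = 30, 2
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
beta = np.array([1.0, 3.0])
sigma2 = 1.0

# Centering matrix C = I_n - (1/n) 1_n 1_n' with tr(C) = n - 1.
C = np.eye(n) - np.ones((n, n)) / n

# Theoretical value of E(sample variance):
# sigma^2 tr(C)/(n-1) + beta' X'CX beta / (n-1)
mean_theory = (sigma2 * np.trace(C) + beta @ X.T @ C @ X @ beta) / (n - 1)

reps = 2000
sv = np.empty(reps)  # sample variance of y
ls = np.empty(reps)  # least squares variance estimator
for r in range(reps):
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    sv[r] = y @ C @ y / (n - 1)  # equals np.var(y, ddof=1)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    ls[r] = resid @ resid / (n - p)

# Sample variance is biased upwards; least squares estimator is not.
print(sv.mean(), mean_theory, ls.mean(), sigma2)
```

For this configuration the theoretical value σ²·tr(C)/(n − 1) + β′X′CXβ/(n − 1) works out to roughly 1.83, well above σ² = 1, and the simulated means of the two estimators should land near 1.83 and 1 respectively.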