
The Gauss-Markov Estimator: Theory and Example

Extended Class Notes by Perpaolo Lexity

Abstract
This document provides a detailed exposition of the Gauss-Markov estimator
within the classical linear regression framework. The Gauss-Markov theorem is
stated and proved under standard assumptions, properties of the ordinary least
squares estimator are discussed, and a detailed numerical example is worked out
for clarity.

Contents

1 Introduction
2 The Linear Model
  2.1 Assumptions
3 Ordinary Least Squares Estimator
4 Gauss-Markov Theorem
  4.1 Proof Sketch
5 Properties of the OLS Estimator
6 Numerical Example
  6.1 Step 1: Construct the design matrix
  6.2 Step 2: Compute X^T X and X^T y
  6.3 Step 3: Calculate (X^T X)^{-1}
  6.4 Step 4: Calculate β̂
  6.5 Result
7 Residuals and Fitted Values
8 Remarks
9 Conclusion
1 Introduction
Linear regression models are foundational in statistical analysis, modeling the relationship between a response variable and explanatory variables. The Gauss-Markov theorem identifies the best linear unbiased estimator (BLUE) of the coefficients, guaranteeing minimum variance among linear unbiased estimators under specific assumptions.

2 The Linear Model


Consider the linear model

    y = Xβ + ε,

where

• y ∈ R^n is the observed response vector,

• X ∈ R^{n×p} is the design matrix of explanatory variables with full column rank p,

• β ∈ R^p is the vector of unknown regression coefficients,

• ε ∈ R^n is the vector of random errors.

2.1 Assumptions
Throughout, we assume:

1. Linearity: The model is linear in β.

2. Full Rank: The matrix X has full column rank p.

3. Zero Mean Errors: E[ε] = 0.

4. Homoscedasticity and No Autocorrelation: Var(ε) = σ^2 I_n, with σ^2 > 0 unknown.
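
A concrete instance of this setup can be simulated. The sketch below (Python with NumPy; the dimensions, coefficients, and noise level are invented purely for illustration) generates data satisfying assumptions 1-4:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 20, 2
    X = rng.normal(size=(n, p))     # Gaussian design has full column rank almost surely
    beta = np.array([1.0, -0.5])    # unknown in practice; fixed here for the simulation

    # Errors: zero mean, homoscedastic, uncorrelated, so Var(ε) = σ^2 I_n
    sigma = 0.3
    eps = sigma * rng.normal(size=n)

    y = X @ beta + eps              # the linear model y = Xβ + ε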

3 Ordinary Least Squares Estimator

The ordinary least squares (OLS) estimator

    β̂ = arg min_β ∥y − Xβ∥_2^2

solves the normal equations

    X^T X β̂ = X^T y.

Since X^T X is invertible by the full rank assumption,

    β̂ = (X^T X)^{-1} X^T y.
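
A minimal computational sketch of this estimator (Python with NumPy, on synthetic data): the normal equations are solved directly, and the result is checked against a standard least-squares routine.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 3
    X = rng.normal(size=(n, p))           # design matrix, full column rank
    beta = np.array([1.0, -2.0, 0.5])     # "true" coefficients (illustrative)
    y = X @ beta + rng.normal(size=n)     # response with noise

    # Solve the normal equations X^T X β̂ = X^T y
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

    # np.linalg.lstsq minimizes ∥y − Xβ∥_2 and must agree
    beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
    assert np.allclose(beta_hat, beta_lstsq)

Note that forming X^T X explicitly squares the condition number of the problem, which is why library routines prefer QR or SVD factorizations; the closed form above is best viewed as a theoretical device.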

4 Gauss-Markov Theorem

Theorem 1 (Gauss-Markov). Under the assumptions above, the OLS estimator β̂ is the Best Linear Unbiased Estimator (BLUE) of β. This means:

• β̂ is linear in y,

• β̂ is unbiased: E[β̂] = β,

• β̂ has minimum variance among all linear unbiased estimators. In particular,

    Var(β̂) ≤ Var(β̃)

  for any linear unbiased estimator β̃, where the inequality holds in the positive semidefinite sense.

4.1 Proof Sketch

1. Unbiasedness: Since y = Xβ + ε and E[ε] = 0, we have

    E[β̂] = (X^T X)^{-1} X^T E[y] = (X^T X)^{-1} X^T Xβ = β.

2. Variance of OLS:

    Var(β̂) = Var((X^T X)^{-1} X^T y) = (X^T X)^{-1} X^T Var(y) X (X^T X)^{-1} = σ^2 (X^T X)^{-1}.

3. Efficiency: Any linear unbiased estimator can be written as β̃ = Ay with AX = I_p. Writing A = (X^T X)^{-1} X^T + D, the condition AX = I_p forces DX = 0, and a direct computation then gives

    Var(β̃) − Var(β̂) = σ^2 D D^T ≥ 0,

where ≥ 0 means positive semidefinite, concluding that OLS has minimum variance. (See the references for the complete proof.)
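
The variance inequality in step 3 can also be seen numerically. The sketch below (synthetic design; σ taken as 1 so that variances reduce to A A^T) builds a competing linear unbiased estimator β̃ = Ay by adding a perturbation D with DX = 0, then confirms that the variance gap equals D D^T and is positive semidefinite.

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 30, 2
    X = rng.normal(size=(n, p))

    A_ols = np.linalg.inv(X.T @ X) @ X.T     # OLS weights: β̂ = A_ols y
    H = X @ A_ols                            # hat matrix, projects onto col(X)

    # Any D with DX = 0 yields another linear unbiased estimator A_ols + D;
    # projecting arbitrary rows off the column space of X guarantees DX = 0.
    M = rng.normal(size=(p, n))
    D = M @ (np.eye(n) - H)
    assert np.allclose(D @ X, 0.0)

    A_alt = A_ols + D
    gap = A_alt @ A_alt.T - A_ols @ A_ols.T  # (Var(β̃) − Var(β̂)) / σ^2
    assert np.allclose(gap, D @ D.T)         # matches the σ^2 D D^T formula
    print(np.linalg.eigvalsh(gap))           # eigenvalues ≥ 0: the gap is PSD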

5 Properties of the OLS Estimator

• Linearity: β̂ is a linear function of y.

• Unbiasedness: E[β̂] = β.

• Variance-Covariance Matrix: Var(β̂) = σ^2 (X^T X)^{-1}.

• Normality: If ε ∼ N(0, σ^2 I_n), then β̂ is normally distributed:

    β̂ ∼ N(β, σ^2 (X^T X)^{-1}).
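
These properties can be checked by simulation. The following sketch (synthetic design and coefficients; σ is known to the simulation only) refits OLS over many replicated datasets and compares the empirical mean and covariance of β̂ against β and σ^2 (X^T X)^{-1}.

    import numpy as np

    rng = np.random.default_rng(2)
    n, p, sigma = 40, 2, 0.7
    X = rng.normal(size=(n, p))
    beta = np.array([2.0, -1.0])

    # 20,000 independent datasets from the same design, OLS refit on each
    estimates = np.array([
        np.linalg.solve(X.T @ X, X.T @ (X @ beta + sigma * rng.normal(size=n)))
        for _ in range(20_000)
    ])

    print(estimates.mean(axis=0))           # ≈ β (unbiasedness)
    print(np.cov(estimates, rowvar=False))  # ≈ σ^2 (X^T X)^{-1}
    print(sigma**2 * np.linalg.inv(X.T @ X))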

6 Numerical Example
Consider data with n = 3 observations, aiming to fit a linear regression with intercept:

    y_i = β_0 + β_1 x_i + ε_i,   i = 1, 2, 3,

with data

    x = (1, 2, 3)^T,   y = (2, 3, 5)^T.

6.1 Step 1: Construct the design matrix

    X = [ 1  1 ]
        [ 1  2 ]
        [ 1  3 ].

6.2 Step 2: Compute X^T X and X^T y

    X^T X = [ 3   6 ]        X^T y = [ 10 ]
            [ 6  14 ],               [ 23 ].

6.3 Step 3: Calculate (X^T X)^{-1}

    det(X^T X) = 3 × 14 − 6 × 6 = 42 − 36 = 6,

    (X^T X)^{-1} = (1/6) [ 14  −6 ]
                         [ −6   3 ].

6.4 Step 4: Calculate β̂

    β̂ = (X^T X)^{-1} X^T y
      = (1/6) [ 14  −6 ] [ 10 ]
              [ −6   3 ] [ 23 ]
      = (1/6) [ 140 − 138 ]
              [ −60 + 69  ]
      = (1/6) [ 2 ]
              [ 9 ]
      = [ 1/3 ]
        [ 3/2 ].

6.5 Result
The estimated regression line is
    ŷ = 1/3 + (3/2) x.
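
The hand computation in Steps 1-4 can be reproduced in a few lines of NumPy; this sketch confirms β̂ = (1/3, 3/2) up to floating-point error.

    import numpy as np

    X = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])      # design matrix from Step 1
    y = np.array([2.0, 3.0, 5.0])

    XtX = X.T @ X                   # [[3, 6], [6, 14]]
    Xty = X.T @ y                   # [10, 23]
    beta_hat = np.linalg.solve(XtX, Xty)
    print(beta_hat)                 # [0.3333..., 1.5], i.e. (1/3, 3/2)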

7 Residuals and Fitted Values

Compute fitted values:

    ŷ = X β̂ = [ 1/3 + 3/2 ]   [ 1.83 ]
              [ 1/3 + 3   ] ≈ [ 3.33 ]
              [ 1/3 + 9/2 ]   [ 4.83 ].

Residuals:

    r = y − ŷ = [ 2 − 1.83 ]   [  0.17 ]
                [ 3 − 3.33 ] ≈ [ −0.33 ]
                [ 5 − 4.83 ]   [  0.17 ].
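
A self-contained sketch for this section as well: the final line checks that the residuals are orthogonal to the columns of X, which is exactly the normal-equations condition X^T (y − X β̂) = 0.

    import numpy as np

    X = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])
    y = np.array([2.0, 3.0, 5.0])
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (1/3, 3/2) from Section 6

    y_hat = X @ beta_hat       # fitted values: [11/6, 10/3, 29/6]
    r = y - y_hat              # residuals:     [ 1/6, −1/3,  1/6]
    print(X.T @ r)             # ≈ [0, 0]: residuals orthogonal to the design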

8 Remarks

• This example illustrates the explicit calculation of the Gauss-Markov estimator.

• The estimator has desirable optimality properties guaranteed by the Gauss-Markov theorem under classical assumptions.

9 Conclusion
The Gauss-Markov estimator is crucial in linear regression analysis, enabling efficient, unbiased parameter estimation. Its theoretical foundation ensures it is minimum variance among linear unbiased estimators, making it a fundamental tool in statistics and econometrics.

References
• G. A. F. Seber and A. J. Lee, Linear Regression Analysis, 2nd ed., Wiley.

• C. R. Rao, Linear Statistical Inference and Its Applications, Wiley.

• A. C. Rencher, Methods of Multivariate Analysis, Wiley.

• W. Greene, Econometric Analysis, Pearson.
