No perfect Collinearity

atandon

Member
Hi David,

I am a little confused by the "No perfect collinearity" statements on pages 66 and 67 (2012.T2.Quantitative.pdf):

1. On page 66 - The regressors exhibit perfect multi-collinearity if one of the regressors is a perfect linear function of the other regressors. The fourth least squares assumption is that the regressors are not perfectly multicollinear.

2. On page 67 - Imperfect multicollinearity is when two or more of the independent variables (regressors) are highly correlated: there is a linear function of one of the regressors that is highly correlated with another regressor

Don't points 1 and 2 sound the same? I must be missing something. Please explain the concept.

Regards,
atandon
 

David Harper CFA FRM

Subscriber
Hi atandon,

To elaborate on ahansen's illustration, there seems to be a surprising conceptual difference between perfect correlation (1.0) and even high correlation (e.g., 0.89) between the regressors. I'd offer two pointers here.
  1. Stock & Watson p. 202: "Despite its similar name, imperfect multicollinearity is conceptually quite different from perfect multicollinearity. Imperfect multicollinearity means that two or more of the regressors are highly correlated in the sense that there is a linear function of the regressors that is highly correlated with another regressor. Imperfect multicollinearity does not pose any problems for the theory of the OLS estimators; indeed, a purpose of OLS is to sort out the independent influences of the various regressors when these regressors are potentially correlated."
  2. The counter-intuitive notion that OMITTED VARIABLE BIAS actually requires, as one of its conditions, that the omitted variable is (imperfectly) correlated with an included regressor. We even expect a "forgotten" regressor to be somewhat correlated with the included regressors (a "good" thing), but not so correlated as to be a perfect linear function of them (that's perfect multicollinearity, a violation of the least squares assumptions).
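To make the distinction concrete, here is a minimal numpy sketch (the variable names and coefficients are just illustrative, not from Stock & Watson). With perfect multicollinearity, the design matrix X'X is singular, so the OLS normal equations have no unique solution; with high-but-imperfect correlation, X'X remains full rank and OLS is perfectly well defined:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x1 = rng.normal(size=n)

# Perfect multicollinearity: x2 is an exact linear function of x1.
x2_perfect = 3.0 * x1 + 1.0
X_perfect = np.column_stack([np.ones(n), x1, x2_perfect])
# X'X is singular (rank 2 instead of 3): no unique OLS solution exists.
rank_perfect = int(np.linalg.matrix_rank(X_perfect.T @ X_perfect))

# Imperfect multicollinearity: x2 is highly, but not perfectly,
# correlated with x1 (correlation is roughly 0.98 here).
x2_high = 3.0 * x1 + rng.normal(scale=0.5, size=n)
X_high = np.column_stack([np.ones(n), x1, x2_high])
rank_high = int(np.linalg.matrix_rank(X_high.T @ X_high))

# OLS still produces unique estimates despite the high correlation.
y = 1.0 + 2.0 * x1 + 0.5 * x2_high + rng.normal(size=n)
beta, *_ = np.linalg.lstsq(X_high, y, rcond=None)

print(rank_perfect, rank_high)                      # 2 3
print(round(float(np.corrcoef(x1, x2_high)[0, 1]), 3))
```

The practical cost of imperfect multicollinearity shows up not as a violated assumption but as inflated standard errors on the correlated regressors.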
Thanks,
 