Multicollinearity

liordp

New Member
Hi David,

If there is a relationship between the independent variables, but it is not a linear relationship, does the model still contain multicollinearity?

Thanks
 

T.Flockert

New Member
Hi,

No, the problem of multicollinearity only applies if one independent variable can be described as a linear combination of one (or more!) of the others.
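For instance, a quick numerical sketch (made-up numbers, just to illustrate): if one regressor is an exact linear combination of the others, the design matrix is rank-deficient and OLS has no unique solution.

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 0.0, 1.0, 3.0])
x3 = 3 * x1 - 2 * x2                      # exact linear combination of x1 and x2

# Design matrix with an intercept column: rank-deficient, so X'X cannot be inverted
X = np.column_stack([np.ones_like(x1), x1, x2, x3])
print(np.linalg.matrix_rank(X))           # 3, not 4: perfect multicollinearity
```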

Greetings
 

David Harper CFA FRM

Subscriber
Convexity,

I had to check my references (e.g., Peter Kennedy and 4 others in my library) and they agree with T.Flockert: multicollinearity refers to a linear combination of explanatory variables. In a strict sense, this must follow because multicollinearity is a violation of the CLRM assumption (Gujarati A8.6) that "no exact linearity exists between X2 and X3." In other words, it seems to me, it is tautologically true that multicollinearity implies linearity because it is defined as a violation of an assumption that refers only to a linear relationship between X2 and X3!

What I don't know, because it is beyond me but I'm sure an econometrician would know: what is the nature/implication of "non-linear dependence" between explanatory variables in the CLRM? (See how I have changed the subject and now refer to the assumption rather than the narrowly defined violation?) We know it's fine if they are orthogonal (collinearity = 0), but the CLRM assumption does not insist on orthogonality; and we know imperfect multicollinearity is not necessarily deadly...
.... so I confidently agree with T. Flockert about the narrow meaning of multicollinearity, and just add that it points us back to a narrow CLRM assumption where I am not really sure I understand why that assumption prohibits only correlation (linear dependence) rather than dependence more generally.
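For example, a quick sketch of the distinction (my own illustration, not from the references, using made-up data): take X2 = X1^2 with X1 symmetric around zero. The two are perfectly dependent, yet their linear correlation is (nearly) zero, so nothing in the usual collinearity sense is violated.

```python
import numpy as np

rng = np.random.default_rng(42)

# X1 symmetric about zero; X2 is a deterministic, non-linear function of X1
x1 = rng.normal(0, 1, 10_000)
x2 = x1 ** 2

# Pearson correlation captures only linear dependence
print(np.corrcoef(x1, x2)[0, 1])   # near zero, despite X2 being fully determined by X1
```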

David
 
Perhaps I can shed some light on this:

Multicollinearity simply refers to the situation where two or more independent variables, i.e., the X's in a multiple regression, move in unison. The extreme would be a correlation very close to either positive one or negative one.

For example, say you regressed ice cream sales on two explanatory variables, the average temperature and the density of ice cream vendors. We know intuitively that these two variables will be correlated.

Running the regression Y = B1*X1 + B2*X2 we may get a result that (using an F-test) tells us the B's together are significant, but we cannot say either beta is significant on its own (using a t-test).

Implication: multicollinearity - the variables are too closely correlated to distinguish their individual effects (the B's).

Note: The telltale sign here is a significant F-test and insignificant individual t-tests.
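If it helps, here is a rough sketch of that pattern (my own made-up numbers; the names temp, vendors, and sales are just illustrative, and statsmodels is used for the OLS fit): two nearly collinear regressors produce a highly significant joint F-test while the individual t-tests typically come out insignificant.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100

# Two nearly collinear explanatory variables (think temperature and vendor density)
temp = rng.normal(25, 5, n)
vendors = 2.0 * temp + rng.normal(0, 0.5, n)   # vendors track temperature very closely

# Sales depend on the common "heat" effect plus noise
sales = 10 + 3 * temp + rng.normal(0, 20, n)

X = sm.add_constant(np.column_stack([temp, vendors]))
fit = sm.OLS(sales, X).fit()

print(fit.f_pvalue)   # joint F-test: highly significant
print(fit.pvalues)    # individual t-tests: typically insignificant for both regressors
```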

Hope this helps.

Brian.
 

David Harper CFA FRM

Subscriber
Brian,

Thank you, I like the ice cream example (memorable metaphors are yummy!).

My read is that Convexity raised a specific question along the lines of: what if the explanatory variables are dependent but not correlated (correlation = linear dependence), does that fall under multicollinearity?
(i.e., what about non-linear dependence?)
... I agreed with T. Flockert, as the term "multicollinearity" implies linear dependence (correlation), but my literature (e.g., Gujarati) does not explicitly address the point. (I suppose it falls into the non-linear class of models but, off the cuff, I can imagine non-linear dependence finding some treatment in the multicollinearity tests/methods.) But thank you for the help, for sure!

David
 
I may be wrong, but as far as the FRM curriculum is concerned we do not have to calculate and/or interpret multicollinearity, autocorrelation in the residuals (Durbin-Watson), and serial correlation the same way we have to in CFA Level 2. Correct me if I'm wrong.

Thanks
Nik
 

David Harper CFA FRM

Subscriber
Hi Niko,

I mostly do agree, except that, technically, Brian's discussion above in regard to the F-test is assigned (not under regression, but under the significance test of two sample variances; personally, I find the F-test of a joint regression hypothesis easier to remember). So where you say we don't need to worry about "calculate," I agree. But interpretation of the F-test is on the table.

About autocorrelation, while I agree about that too in the regression context, please note that auto (serial) correlation of returns is relevant under VaR; e.g., we scale VaR with the square root rule (SRR) under an i.i.d. return assumption. Auto/serial correlation is a violation of i.i.d. such that, with positive autocorrelation, the SRR will understate VaR/volatility. This sort of thing is testable.
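To make that concrete, here is a sketch with made-up numbers (assuming an AR(1)-style structure where the lag-k autocorrelation of daily returns is rho^k): the h-day variance is sigma^2 * [h + 2 * sum_{k=1}^{h-1} (h-k) * rho^k], which collapses to h * sigma^2 (the square root rule) only when rho = 0.

```python
import numpy as np

def h_day_vol(sigma_daily, h, rho):
    """h-day volatility when the lag-k autocorrelation of daily returns is rho**k;
    rho = 0 recovers the naive square-root-rule scaling."""
    k = np.arange(1, h)
    variance = sigma_daily**2 * (h + 2 * np.sum((h - k) * rho**k))
    return np.sqrt(variance)

sigma, h = 0.01, 10
print(h_day_vol(sigma, h, rho=0.0))   # 0.0316 = sigma * sqrt(10): the SRR
print(h_day_vol(sigma, h, rho=0.2))   # larger: the SRR understates volatility/VaR
```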

Thanks, David
 