If you take the correlated prices of assets to which you refer, and generate a linear regression, the line will pass through the cloud of observations. The error is the (vertical) distance from the observation to the line; autocorrelation refers to errors that are correlated (related to each other). A condition of the classical linear regression in Gujarati is the assumption of no autocorrelation (p 170). Under this LACK of autocorrelation, those errors should plot randomly (and, combined with the other CLRM assumptions, the errors should plot i. with mean of zero, ii. with constant variance, and iii. without autocorrelation).
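A minimal sketch of this idea, with hypothetical simulated data: fit the regression line by least squares, take the residuals (the vertical distances to the line), and check that their mean is near zero and their lag-1 autocorrelation is near zero, as the CLRM assumptions require.

```python
import numpy as np

rng = np.random.default_rng(0)  # hypothetical data for illustration
x = np.linspace(0, 1, 200)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5, size=x.size)  # true errors are i.i.d.

# Fit the regression line y = a + b*x by ordinary least squares
b, a = np.polyfit(x, y, 1)
resid = y - (a + b * x)  # vertical distances from observations to the line

# Under the CLRM assumptions: mean ~ 0, and lag-1 autocorrelation ~ 0
rho1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(np.mean(resid))  # should be essentially zero (OLS with intercept)
print(rho1)            # should be small: no autocorrelation in the errors
```

If the errors were autocorrelated, `rho1` would be materially different from zero, which is the pattern tests like Durbin-Watson look for.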
Autocorrelation typically means (but not technically necessarily) a statement about a TIME SERIES. In our case (FRM), a series of returns or volatilities/variances. For example, as Linda Allen shows (Ch 2), we take a series of periodic returns and compute a volatility for some horizon; e.g., 10-day volatility. We scale this to 30-day volatility by applying the square root rule: 10-day vol * SQRT[30/10]. But this only works if the periodic returns are i.i.d.!! The "independent" in i.i.d. refers to periodic returns that are NOT autocorrelated. Specifically, today's high return does not imply tomorrow's high return (positive autocorrelation) or tomorrow's low return (negative autocorrelation). If today's return gives information about ("biases") tomorrow's return, the return series is autocorrelated. In short, lack of autocorrelation implies a random walk in the return series. So autocorrelation comes up in (at least) three ways:
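The square root rule scaling above, and why autocorrelation breaks it, can be sketched as follows (the 5% volatility and the AR(1) autocorrelation of 0.2 are hypothetical numbers for illustration):

```python
import math

vol_10d = 0.05  # hypothetical 10-day volatility of 5%

# Square root rule: variance scales linearly with horizon,
# so volatility scales with the square root of time.
vol_30d = vol_10d * math.sqrt(30 / 10)
print(round(vol_30d, 4))  # 0.0866

# But this assumes i.i.d. returns. If the periodic returns follow an
# AR(1) with lag-1 autocorrelation rho, the variance of the n-period
# sum is sigma^2 * [n + 2 * sum_{k=1}^{n-1} (n-k) * rho^k], not n*sigma^2.
rho, n = 0.2, 3  # hypothetical positive autocorrelation, 3 periods of 10 days
adj_factor = n + 2 * sum((n - k) * rho**k for k in range(1, n))
vol_30d_adj = vol_10d * math.sqrt(adj_factor)
print(vol_30d_adj > vol_30d)  # True: positive autocorrelation raises true vol
```

With positive autocorrelation the square root rule understates the longer-horizon volatility; with negative autocorrelation (mean reversion) it overstates it.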
* as an assumption (LACK OF) in classical regression
* as a real-life dynamic that violates i.i.d. and renders square root rule dubious
* as a proxy for illiquidity in Andrew Lo's test of Hedge Funds
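On the third bullet, a hedged sketch of the intuition behind Lo's use of autocorrelation as an illiquidity proxy: when an illiquid fund reports smoothed (e.g., appraisal-based) returns, the reported series picks up positive lag-1 autocorrelation even if the true returns are i.i.d. The smoothing weights and return parameters below are hypothetical.

```python
import numpy as np

def lag1_autocorr(returns):
    """Sample lag-1 autocorrelation of a return series."""
    r = np.asarray(returns, dtype=float)
    return np.corrcoef(r[:-1], r[1:])[0, 1]

rng = np.random.default_rng(1)
true_r = rng.normal(0.01, 0.02, size=120)  # hypothetical i.i.d. monthly returns

# Hypothetical smoothed (illiquid) reporting: each reported return is a
# moving average of the current and prior true return, inducing autocorrelation
smoothed = 0.5 * true_r[1:] + 0.5 * true_r[:-1]

print(lag1_autocorr(true_r))   # should be near zero (i.i.d. series)
print(lag1_autocorr(smoothed)) # should be clearly positive (illiquidity proxy)
```

High measured autocorrelation in reported hedge fund returns is therefore read as evidence of return smoothing and illiquidity, not of genuine return predictability.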