P1.T2.506. Covariance stationary time series

Nicole Seaman

Learning outcomes: Define covariance stationary, autocovariance function, autocorrelation function, partial autocorrelation function, and autoregression. Describe the requirements for a series to be covariance stationary. Explain the implications of working with models that are not covariance stationary.

Questions:

506.1. In regard to covariance stationary stochastic processes, each of the following statements is true EXCEPT which is inaccurate?

a. In time series analysis, "cycles" refer to a general, all-encompassing notion of cyclicality: any sort of dynamics with some persistence that is not captured by trends or seasonality
b. An upward trend corresponds to a steadily increasing mean, and seasonality corresponds to means that vary with the season; both of which are violations of covariance stationarity
c. Due to the stringent requirements for covariance stationarity, and a practical inability to transform non-stationary series, forecasting models avoid applying covariance stationarity in practice
d. In a stationary process, distributions depend on displacement (i.e., the difference between time subscripts) but do not depend on the time subscripts. A "strictly stationary" stochastic process requires that the multivariate distribution function, including skew and kurtosis, be stable and finite; but "weak stationarity" (aka, covariance stationarity) only requires means and covariances to be stable and finite
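To illustrate the point in statement (b) above, here is a minimal NumPy sketch (the series and the trend slope are made up purely for illustration): an upward trend means the mean of the series rises over time, so the first and second halves of the sample have visibly different means, whereas a white-noise series keeps a stable mean throughout.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
noise = rng.standard_normal(n)           # white noise: covariance stationary
trended = 0.01 * np.arange(n) + noise    # upward trend: mean increases with t

def half_means(y):
    """Compare the sample mean of the first and second halves of the series."""
    half = len(y) // 2
    return y[:half].mean(), y[half:].mean()

print(half_means(noise))    # both halves near zero: stable mean
print(half_means(trended))  # second half clearly higher: mean varies with time
```

The trended series fails the stable-mean requirement, which is why such a series must be de-trended (or differenced) before a stationarity-based model is applied.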


506.2. Each of the following is a requirement for a series to be covariance stationary (aka, weak stationarity, second-order stationarity) EXCEPT which is not a requirement?

a. The mean of the series is stable over time; E[y(t)] = μ
b. The covariance structure of the series is stable over time; cov[y(t), y(t-τ)] = γ(τ)
c. The variance of the series (i.e., the autocovariance at displacement 0) is finite
d. The autocovariance depends on time (t), but does not depend on the displacement (τ)
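As a quick numerical companion to the requirements above, here is a minimal sketch (assuming NumPy; white noise is used because it is covariance stationary by construction) of the sample autocovariance estimator: the autocovariance at displacement 0 is the (finite) variance, and for white noise the autocovariance at any nonzero displacement is near zero.

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.standard_normal(500)  # white noise: covariance stationary by construction

def sample_autocovariance(y, tau):
    """Estimate gamma(tau) = cov[y(t), y(t-tau)] from a single series."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    mu = y.mean()
    return np.sum((y[tau:] - mu) * (y[: n - tau] - mu)) / n

gamma0 = sample_autocovariance(y, 0)  # displacement 0 = the variance (finite)
gamma1 = sample_autocovariance(y, 1)  # near zero for white noise
print(gamma0, gamma1)
```

Note the estimator takes only the displacement τ as an argument, not the time subscript t; that is exactly what stationarity buys us, since the same γ(τ) applies at every point in the sample.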


506.3. What is the difference between autocorrelation and partial autocorrelation?

a. In the time series context, autocorrelation and partial autocorrelation are synonyms; i.e., there is no real difference, they are identical concepts
b. The partial autocorrelation is the correlation between y(t) and y(t-τ) multiplied by the variance of y(0), an operation which standardizes the association across cycles
c. In the time series context, partial autocorrelation is the second moment of autocorrelation; if autocorrelation is positive (negative), then partial autocorrelation must be positive (negative)
d. Autocorrelation is the typical correlation between y(t) and y(t-τ) while partial autocorrelation measures the association between y(t) and y(t-τ) after controlling for the effects of y(t-1), ..., y(t-τ+1)
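The distinction in (d) can be seen numerically with an AR(1) process (a minimal NumPy sketch; the coefficient 0.8 and the regression-based partial autocorrelation are illustrative choices). For an AR(1), the lag-2 autocorrelation is roughly 0.8² = 0.64, because y(t-2) influences y(t) indirectly through y(t-1); but the lag-2 partial autocorrelation is near zero, because once we control for y(t-1) the intermediate channel is removed. One simple way to estimate the lag-2 partial autocorrelation is the coefficient on y(t-2) in a regression of y(t) on y(t-1) and y(t-2).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3000
phi = 0.8
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t]  # AR(1): y(t) = 0.8*y(t-1) + noise

def acf(y, tau):
    """Sample autocorrelation at displacement tau."""
    y = y - y.mean()
    return np.dot(y[tau:], y[: len(y) - tau]) / np.dot(y, y)

# Lag-2 partial autocorrelation: coefficient on y(t-2) in a regression
# of y(t) on a constant, y(t-1), and y(t-2)
Y = y[2:]
X = np.column_stack([np.ones(n - 2), y[1:-1], y[:-2]])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
pacf2 = beta[2]

print(acf(y, 2))  # near phi^2 = 0.64: indirect persistence shows up
print(pacf2)      # near zero: nothing left after controlling for y(t-1)
```

This is why the partial autocorrelation function of an AR(p) process "cuts off" after lag p, a property used to identify the order of an autoregression.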

Answers here:
 