P1.T2.20.22. Stationary Time Series: autoregressive (AR) and moving average (MA) processes

yLam4028

Active Member
Hello, I am having a hard time with the MA process.

Let's say we have a set of S&P stock returns in % and we would like to model an MA(1).
Yes, we checked the ACF and PACF and assumed it is a good fit.

The model itself is:

Observed Y(t) = mean (u) + coefficient * previous error e(t-1) + current error e(t)

Example:

Actual Y | Predicted Y | Error Term
1.5%     | ?           | ?
2%       | ?           | ?

How do we get the error term?
Do we start by assuming the first predicted Y is the mean (u) and 0 for the previous error, so that the first error term = Actual - Predicted = 1.5% - (mean (u) + 0 * coefficient)?
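
In code, the recursion I have in mind looks roughly like this (just a sketch; the theta value is a placeholder, not an estimate):

```python
# Sketch of the proposed recursion: assume the pre-sample error e(0) = 0,
# predict the first observation as the mean, then back out each error as
# actual minus the one-step-ahead prediction.
actual = [1.5, 2.0]                 # observed returns in %, from the example
mu = sum(actual) / len(actual)      # sample mean as a stand-in for mean(u)
theta = 0.30                        # placeholder MA(1) coefficient

errors = [0.0]                      # e(0) = 0 by assumption
for t, y in enumerate(actual, start=1):
    predicted = mu + theta * errors[t - 1]   # one-step-ahead prediction
    errors.append(y - predicted)             # e(t) = actual - predicted
    print(f"t={t}: predicted={predicted:.4f}%  error={errors[t]:+.4f}%")
```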
 

David Harper CFA FRM

Subscriber
Hi @yLam4028 Say it were an unrealistically perfect MA(1) series such that Y(1) = 1.5, Y(2) = 2.0, Y(3) = average(1.5, 2.0) = 1.75, Y(4) = average(2.0, 1.75) = 1.8750. Then your errors (aka, innovations) are e(2) = Y(2) - Y(1) = +0.50 per your schedule (notice that's the first actual innovation I can observe); e(3) = Y(3) - Y(2) = -0.250; e(4) = Y(4) - Y(3) = +0.125. I definitely don't have time to go deep on a single use case (here is an actual code-based example of mine at https://dh-data.org/posts/2022-01-26-bt-pq-p1t220222-ar-versus-ma-process/ )
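
A quick numeric sketch of that toy construction, just to reproduce the figures above (each new value is the average of the previous two, and each innovation is the one-step change):

```python
# Build the "unrealistically perfect" toy series and its innovations.
y = [1.5, 2.0]                       # Y(1), Y(2)
for _ in range(4):                   # extend a few more steps
    y.append((y[-1] + y[-2]) / 2)    # Y(t) = average(Y(t-1), Y(t-2))

errors = [y[t] - y[t - 1] for t in range(1, len(y))]   # e(t) = Y(t) - Y(t-1)
print("series:", [round(v, 4) for v in y])       # 1.5, 2.0, 1.75, 1.875, ...
print("errors:", [round(e, 4) for e in errors])  # 0.5, -0.25, 0.125, ...
```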

... but this unrealistically perfect MA(1) converges to a mean of 1.8333, so we can fit it with an MA(1) where mu = 1.8333 and theta = 1/3, such that MA(1) = 1.8333 + 1/3*e(t-1) + e(t). You'll see that's a perfect fit; e.g., Y(4) = 1.8333 + 1/3*(-0.250) + 0.125 = 1.8750. My model is totally unrealistic b/c the fit is perfect; in practice, the fitted coefficients create a predicted series that will vary from the observed series in-sample, much like a regression line doesn't exactly match the observations. That is, we use the actual observed errors in the historical sample to inform our coefficients. While the MA(1) as a function of the error sequence is mathematically perfect (mathematically true), our observations won't match the series implied by this function because each observation does not actually behave with the same weighting. Hope that's a good start.
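
To make that contrast concrete, here is a small sketch: the first part verifies the exact representation above, and the second part (purely illustrative settings, not from this thread) estimates an MA(1), i.e., ARIMA(0,0,1) in statsmodels, on a longer simulated series, where the fitted coefficients only approximate the true ones:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Part 1: the toy series satisfies Y(t) = 1.8333 + (1/3)*e(t-1) + e(t) exactly.
y = [1.5, 2.0]
for _ in range(6):
    y.append((y[-1] + y[-2]) / 2)
e = [None] + [y[t] - y[t - 1] for t in range(1, len(y))]   # e[t] pairs with y[t]
mu, theta = 11 / 6, 1 / 3                                  # 1.8333... and 1/3
for t in range(2, len(y)):
    implied = mu + theta * e[t - 1] + e[t]
    print(f"observed {y[t]:.4f} vs implied {implied:.4f}")

# Part 2 (illustrative assumption): estimate an MA(1) on simulated data; the
# fitted constant and MA coefficient only approximate the true mu and theta,
# just as a regression line misses individual observations.
rng = np.random.default_rng(42)
n = 500
eps = rng.normal(scale=0.5, size=n + 1)
sim = mu + theta * eps[:-1] + eps[1:]        # simulated MA(1) "returns"
fit = ARIMA(sim, order=(0, 0, 1)).fit()      # MA(1) is ARIMA(0, 0, 1)
print(fit.params)                            # const, ma.L1, sigma2 estimates
```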

P.S. It occurs to me that (speaking for myself, when I haven't played with MA(1) in a while) I forgot why the moving average is such a natural function of errors when it is literally averaging the values (in your example, 1.5% and 2.0%); but the MA series is indeed a function of errors, as ChatGPT shows in the attached derivation.

[attached image: ChatGPT's derivation of the MA(1) series as a function of the errors]
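
For reference, here is one way I'd reconstruct that derivation for the toy "average the last two values" series (my own sketch, not a quote of the image), using the same definition e(t) = Y(t) - Y(t-1) as above, telescoping the one-step changes, and using the fact that the series converges to mu:

```latex
\begin{aligned}
e_t &= Y_t - Y_{t-1} = \tfrac{1}{2}\left(Y_{t-1} + Y_{t-2}\right) - Y_{t-1}
    = -\tfrac{1}{2}\, e_{t-1}, \\
\mu - Y_t &= \sum_{k > t} e_k
    = e_{t+1} \sum_{j \ge 0} \left(-\tfrac{1}{2}\right)^{j}
    = \tfrac{2}{3}\, e_{t+1} = -\tfrac{1}{3}\, e_t, \\
Y_t &= \mu + \tfrac{1}{3}\, e_t
    = \mu + e_t - \tfrac{2}{3}\, e_t
    = \mu + \tfrac{1}{3}\, e_{t-1} + e_t
    \quad \text{since } e_{t-1} = -2\, e_t,
\end{aligned}
```

which recovers exactly the mu = 1.8333 (the limit of the series) and theta = 1/3 used above.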
 