What makes a process "ARMA"?

jwavasseur

New Member
Hi everyone,
It's my first thread on this forum, so I hope I'm posting it in the right place.
Nice to meet you, by the way. I'm Julien, a French candidate preparing for the FRM (hopefully the November session), and I've been working in the Operational Risk area for 10 years.

I'm having a hard time understanding the following sentence in:
P1.T2. Quantitative Analysis
Francis X. Diebold, Elements of Forecasting, 4th Edition

p.40

"Lastly AR processes observed subject to measurement error also turn out to be ARMA processes."

I guess it's not a huge, revolutionary idea, but I do not understand the syntax (and thus the meaning) of this sentence. Could somebody help me figure it out, please?

Thanks.
 

David Harper CFA FRM

David Harper CFA FRM
Subscriber
Hi @jwavasseur Welcome, you posted correctly! :) Great question; I do not really know the basis for this assertion, and hopefully somebody else can help. If I get time later, I will try a bit of research among my several texts. Sorry! All I can say is what the assumption about "measurement error" seems to be: he seems to be saying that measurement error itself can be modeled as an MA(1).

Say AR(1) is a good model for today's variance, such that today's variance is a function of yesterday's variance. Then, analogous to the β*σ(n-1)^2 term in GARCH, the fundamental AR(1) model (letting σ^2 = X) would be given by X(t) = β*X(t-1) + e(t), where e(t) is white noise. I am not clear on why exactly this white noise term, e(t), which is already in the AR(1), does not suffice to account for the measurement error. But Diebold's assertion implies instead that measurement error contributes a lagged white noise term, α*e(t-1), in addition to the current white noise. The ARMA (i.e., AR plus measurement error) would then be given by X(t) = β*X(t-1) + α*e(t-1) + e(t).

As white noise, e(t) is N(0,σ^2), and the key feature of white noise (as Diebold shows) is zero serial correlation ("serially uncorrelated"). But the key feature of MA(1) is a first-lag autocorrelation equal to α/(1+α^2). So I have not answered your question (!) :rolleyes:, I am just interpreting the assertion, but my interpretation is this: measurement error for some reason manifests with serial correlation, and therefore introduces an MA(1) component into an AR(1) model. Intuitively this doesn't offend me; e.g., mis-measurement would seem to carry over from one step to the next rather than be independent at each step. I hope that adds something, and I hope somebody else can help! Thanks,
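
To see the assertion in data, here is a minimal simulation sketch (my own, not from Diebold's text): simulate an AR(1), observe it with additive measurement error, and fit an ARMA(1,1) with statsmodels. The parameter values (beta = 0.7, unit noise scales) are arbitrary choices of mine.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
n, beta = 5000, 0.7

# True AR(1): X(t) = beta * X(t-1) + e(t), with white noise e(t)
e = rng.normal(0.0, 1.0, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = beta * x[t - 1] + e[t]

# Observed series: X'(t) = X(t) + m(t), with i.i.d. measurement error m(t)
m = rng.normal(0.0, 1.0, n)
x_obs = x + m

# ARMA(1,1) is ARIMA with order (p=1, d=0, q=1)
fit = ARIMA(x_obs, order=(1, 0, 1)).fit()
print(fit.params)

If the assertion holds, the fitted AR coefficient should come out near beta and the fitted MA coefficient should be clearly non-zero, whereas fitting the clean series x instead should leave the MA coefficient near zero.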
 

ami44

Well-Known Member
Subscriber
I think the trick is that the error you made in the previous measurement bounces back into your current measurement.
Assume our AR(1) process looks like X(t) = β * X(t-1) + e(t) without measurement error.
Now we add a measurement error m(t) to it and call the resulting process X'(t) = X(t) + m(t).
From
X'(t) = β * X(t-1) + e(t) + m(t)
it follows that
X'(t) = β * ( X'(t-1) - m(t-1) ) + e(t) + m(t)
or
X'(t) = β * X'(t-1) - β * m(t-1) + e(t) + m(t)
and that is an ARMA process. (Strictly, the combined error term e(t) + m(t) - β * m(t-1) is not itself a single white noise, but it has the autocovariance structure of an MA(1), which is why the observed process is ARMA(1,1).)
The term m(t-1) appears because you have to subtract the previous measurement error m(t-1) from X'(t-1) to recover the true value X(t-1). From there you can then jump to X(t).

I guess that is the effect Diebold is talking about, but since I do not have his book, I'm not 100% sure.
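
If it helps, here is a quick numerical check (my own sketch, assuming standard normal e(t) and m(t)) that the substitution above is an exact identity: simulate X(t) = β * X(t-1) + e(t), set X'(t) = X(t) + m(t), and verify X'(t) = β * X'(t-1) - β * m(t-1) + e(t) + m(t) term by term.

import numpy as np

rng = np.random.default_rng(0)
n, beta = 1000, 0.7
e = rng.normal(size=n)  # white noise of the AR(1)
m = rng.normal(size=n)  # measurement error

# True process: X(t) = beta * X(t-1) + e(t)
x = np.zeros(n)
for t in range(1, n):
    x[t] = beta * x[t - 1] + e[t]
x_obs = x + m  # observed process X'(t)

lhs = x_obs[1:]
rhs = beta * x_obs[:-1] - beta * m[:-1] + e[1:] + m[1:]
print(np.allclose(lhs, rhs))  # True: the rearrangement holds exactly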
 