Conditional Expectation of MA(1)

brian.field

Well-Known Member
Subscriber
@David Harper CFA FRM

Can you explain why the Conditional Mean for an MA(1) is not 0? I see the explanation in the previous chapter regarding:

[attached: excerpt from the previous chapter on the conditional mean]

This makes sense. But, if that were the case, then the conditional mean would not be 0, correct? The excerpt below seems to indicate to me that E[e_t | Omega_t-1] = 0 and that E[e_t-1 | Omega_t-1] = e_t-1.

I don't really understand this.

[attached: textbook excerpt deriving the conditional mean of the MA(1), E(y_t | Omega_t-1) = theta*e_t-1]
 

jairamjana

Member
The mean of the disturbance term e(t), conditional on the information set omega(t-1) = {e(t-1), e(t-2), e(t-3), ...}, will be the unconditional mean of e(t). So E[e(t) | omega(t-1)] will be equal to E[e(t)]. The disturbance term, being white noise, is uncorrelated with past disturbances, and hence E[e(t) | omega(t-1)] = E[e(t)] = 0. The disturbance is also called the innovation. Anyway, for the equation part you should try re-reading the topic "general linear process" in Diebold's chapter on characterizing cycles; he starts by assuming the stronger condition of independent white noise when using these models. The second indication is simple too:
E[e(t-1) | omega(t-1)] is like saying E[e(t-1) | e(t-1)], hence it equals e(t-1).
The sentence about computers treats y(t) as computer sales and y(t-1) as the previous quarter's sales. That conditional mean will not be constant when it depends on lagged variables, unless y(t) is independent white noise.
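Spelling the algebra out, assuming the MA(1) is y(t) = e(t) + theta*e(t-1) with e(t) ~ WN(0, sigma^2):

\begin{aligned}
E(y_t \mid \Omega_{t-1}) &= E(e_t + \theta e_{t-1} \mid \Omega_{t-1}) \\
&= E(e_t \mid \Omega_{t-1}) + \theta\, E(e_{t-1} \mid \Omega_{t-1}) \\
&= 0 + \theta\, e_{t-1} \;=\; \theta\, e_{t-1},
\end{aligned}

since e_t is unforecastable white noise (its conditional mean equals its unconditional mean, 0), while e_{t-1} is already contained in Omega_{t-1} and is therefore treated as known.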

Hope it's clear
 

brian.field

Well-Known Member
Subscriber
Thank you @jairamjana

E[e_t-1 | e_t-1] = the expected value of e_t-1 given e_t-1.

This explains it.

I was thinking that the expected value of e_t-1 = 0, which is true as an unconditional mean (I think) by covariance stationarity. But if we are given the actual e_t-1, then its expected value is simply that realized value!
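A quick simulation sketch makes the distinction concrete (purely illustrative assumptions, not from the thread: theta = 0.5, sigma = 1, and a realized e_t-1 of 0.8; NumPy only):

import numpy as np

# Purely illustrative parameters (assumptions): theta = 0.5, sigma = 1
rng = np.random.default_rng(42)
theta, sigma = 0.5, 1.0
n = 1_000_000

# Unconditional view: averaging e_{t-1} over many independent realizations gives ~0
e_lag = rng.normal(0.0, sigma, n)
print("unconditional mean of e_{t-1}:", e_lag.mean())  # approximately 0

# Conditional view: once Omega_{t-1} is given, e_{t-1} is a known number
e_lag_observed = 0.8                      # hypothetical realized shock
e_now = rng.normal(0.0, sigma, n)         # e_t is still unforecastable
y = e_now + theta * e_lag_observed        # MA(1): y_t = e_t + theta * e_{t-1}
print("simulated E[y_t | e_{t-1} = 0.8]:", y.mean())  # approximately theta * 0.8 = 0.4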
 

jairamjana

Member
Just remember that it's the same for AR(1). We always assume e(t) is iid WN(1, sigma^2). The Wold decomposition theorem should be re-read multiple times, and also the general linear process.
 

brian.field

Well-Known Member
Subscriber
Along these same lines, is Variance(e_t-1 | e_t-1) = 0 since e_t-1 would be known and constant given itself? Make sense?
 

jairamjana

Member
Whether the variance is of e_t-1 or e_t, it's a constant sigma^2.

Yes, I made a typo in the e(t) white noise part: it should be 0 and sigma^2.
 

brian.field

Well-Known Member
Subscriber
Does this make sense?

Var(Y_t | Omega_t-1) = Var(e_t + theta*e_t-1 | Omega_t-1) = Var(e_t | Omega_t-1) + theta^2 * Var(e_t-1 | Omega_t-1) = sigma^2 + theta^2 * 0 = sigma^2
 

jairamjana

Member
That doesn't work

var(Y_t | omega_t-1) = E[(Y_t - E(Y_t | omega_t-1))^2 | omega_t-1] = E[(Y_t - theta*e_t-1)^2 | omega_t-1] = E[(e_t)^2 | omega_t-1] = sigma^2
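A small Monte Carlo check is consistent with this (same purely illustrative assumptions as the earlier sketch: theta = 0.5, sigma = 1, e_t-1 fixed at 0.8):

import numpy as np

# Same illustrative setup as before: theta = 0.5, sigma = 1, e_{t-1} fixed at 0.8
rng = np.random.default_rng(7)
theta, sigma = 0.5, 1.0
e_lag_observed = 0.8                           # known once Omega_{t-1} is given
e_now = rng.normal(0.0, sigma, 1_000_000)      # only remaining randomness is e_t
y = e_now + theta * e_lag_observed             # MA(1): y_t = e_t + theta * e_{t-1}

# Conditionally, only e_t varies, so var(y_t | Omega_{t-1}) = sigma^2
print("simulated var(y_t | Omega_{t-1}):", y.var())   # approximately 1.0
# For comparison, the unconditional variance of the MA(1) is sigma^2 * (1 + theta^2)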
 

jairamjana

Member
Var(aX + bY) is an unconditional form, right? So it will become a^2*Var(X) + b^2*Var(Y). I am just saying I am unsure whether it holds for the conditional variance. Await further feedback from others...
 

ami44

Well-Known Member
Subscriber
I think var(X + Y) = var(X) + var(Y) is only true if X and Y are independent. In general it is
var(a*X + b*Y) = a^2 * var(X) + b^2 * var(Y) + 2*a*b*Cov(X,Y)

This formula can be derived by writing down the definition of var() and Cov() and using the linearity of the expected value:
E(a*X + b*Y) = a * E(X) + b * E(Y)

Since the conditional expected value has the same linearity property, the general formula for var(a*X + b*Y) should also be true for the conditional variance (you have to define something like a conditional covariance).

The question of in which cases the covariance is 0 is difficult. From the independence of X and Y you cannot conclude the independence of X | Z and Y | Z.

So to come back to the original question:
var(X + Y | Z) = var(X | Z) + var(Y | Z)
is true, but only if X and Y are conditionally independent, which is not the same as ordinary independence.

I'm pretty sure that jairamjana's calculation above shows that conditional independence does in fact hold here.
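For this particular case, a sketch of that decomposition (assuming the conditional covariance is defined analogously to the unconditional one, as suggested above):

\begin{aligned}
\operatorname{var}(e_t + \theta e_{t-1} \mid \Omega_{t-1})
&= \operatorname{var}(e_t \mid \Omega_{t-1})
 + \theta^2 \operatorname{var}(e_{t-1} \mid \Omega_{t-1})
 + 2\theta \operatorname{cov}(e_t, e_{t-1} \mid \Omega_{t-1}) \\
&= \sigma^2 + \theta^2 \cdot 0 + 2\theta \cdot 0 \;=\; \sigma^2,
\end{aligned}

because, given Omega_{t-1}, e_{t-1} is a known constant, so both its conditional variance and its conditional covariance with e_t vanish. This agrees with the direct calculation above.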
 

brian.field

Well-Known Member
Subscriber
Fantastic! Both @ami44 and @jairamjana are absolutely fantastic! Thank you both! I can't believe I forgot the covariance term... if I had remembered that, I probably would have thought about conditional independence.
 