RiskMetrics (EWMA)

idalton

Director
My question has to do with lambda and the value you used in your spreadsheet and screencast. Listening to your screencast, I hear the standard value of lambda is 94%. In your spreadsheet you use 90%, and I know you put the comment (If lambda is higher (lower), the volatility update RESPONDS SLOWLY (QUICKLY) to new returns) next to your input, but as a best practice, what value should be used?

Thanks
 

idalton

Director
Another question with regard to the spreadsheet example. I took what you did in the spreadsheet and tried to use a larger dataset (252 prices), but the EWMA only assigned weights up to the 118th price; the prices after that got no weight at all. I am using a 94% lambda. Does that sound correct?
 

David Harper CFA FRM
Subscriber
Ian,

The 94% is because it's the value used by RiskMetrics (although I've been echoing that for so long, I'd love to hear if it's still true these days; 94% is an average anyhow, which varies by asset class).

But it's "merely" an empirical parameter, so i am not sure a best practice exists. Put another way, if a went to go sample some historical price series and fit the EWMA to it, I'll get something different for each asset class and time frame. Although, this smoothing/decay/persistence parameter seems to run high; e.g., I've read J.P. Morgan assumes 97%. L. Allen (assigned reading) says this estimator tends to be above 0.90 (I used 0.9 in the XLS just to exaggerate & illustrate), currencies tend to be low (nearer to 0.9, money markets nearer to 1.0.

(Since you raised it, and it's not really to your point: the marginal precision in the lambda param can be overwhelmed by the several *messy* empirical realities observed in data. I mean, we can fine-tune 94% to 97%, but if the series is regime switching, our EWMA series is going to be overwhelmed anyhow. I just raise this b/c a natural tendency can emerge from a study of the models: the tendency to think these params have natural values. I recall, years ago, working on an EVA project, and I was so excited about CAPM, sort of reveling in taking the cost of equity capital out to 2-3 decimal places. Like, I've got it, COC is 10.54%! Took it to the CFO and he laughed at me b/c he didn't believe in the CAPM in the first place. We ended up using 10% for COC, which really offended my academic sensibilities. But now I sort of feel this way about volatility. The "model precision" is only appropriate to the extent your connection btwn the model and reality is not loose in the first place. Put another way, "lambda was 94% yesterday, could change tomorrow"... okay, off my soapbox!)

David
 

David Harper CFA FRM
Subscriber
Hi Ian,

If you upload it, I'll take a look (tonight I am working hard on Monday's episode).

The EWMA series is technically infinite, so when the historical sample is smaller than infinite (always), you actually have to "true up" the gap by distributing it over the series. I think my spreadsheet illustrates that further down?

(Unless it's just a rounding thing: your values will get very small quickly, so perhaps they are merely being formatted to zero.)

David
 

David Harper CFA FRM
Subscriber
Ian,

Your 118th weight should be (1-lambda)(lambda^117). So, that's very tiny. (And that's before truing up the residual, which turns a very, very tiny # into a merely very tiny #.)
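
For example, in a quick Python sketch (just the arithmetic, nothing from the spreadsheet):

    lam = 0.94
    # raw EWMA weight on the 118th most recent squared return: (1 - lambda) * lambda^117
    w_118 = (1 - lam) * lam ** 117
    print(w_118)  # ~4.3e-05, i.e., about 0.004%; a spreadsheet will happily format that to 0.00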

David
 

idalton

Director
Thank you for your insight...very helpful! You are right about the weights: I changed the column to show more decimal places, and they do become very, very small numbers. I'm sorry to keep bugging you about this example, but I just want to make sure I understand the ins & outs.

I'm attaching a spreadsheet that has prices for 2006 on one sheet and 2007 on another sheet. My questions are:

1) Why does my volatility stay exactly the same for both 2006 & 2007 under EWMA/GARCH(1,1)? It's not the same dataset, so how could it be the same to the 3rd decimal place?

2) In your spreadsheet, the EWMA recursive & elaborate methods come up with the same volatility, but in my examples they don't, and I can't figure out what I might have done wrong.
 

David Harper CFA FRM
Subscriber
Ian,

Looks like both (1) and (2) were due to an error in column P: it was multiplying by variances instead of (weight)(period return^2). See attached. They differ now (2006 v 2007), and the recursive/elaborate match.

Also, I took the LR variance in GARCH(1,1) down to a smaller, more "realistic" number so GARCH is in the vicinity of EWMA. And, you know, a check of GARCH is to set gamma = 0 and alpha + beta = 1.0, b/c that is EWMA (EWMA is a special case of GARCH); so that output, as a test with gamma = 0, should also match EWMA.
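
If it helps, here is a minimal Python sketch of that check (hypothetical returns and seed variance, not the spreadsheet's):

    # GARCH(1,1) with gamma = 0 and alpha + beta = 1 collapses to EWMA
    lam = 0.94
    gamma, V_L = 0.0, 0.0001       # long-run weight and variance; gamma = 0 removes mean reversion
    alpha, beta = 1 - lam, lam     # alpha + beta = 1.0, per the test above

    returns = [0.010, -0.020, 0.015, -0.005, 0.025]
    var_ewma = var_garch = 0.0001  # same seed variance for both updates

    for u in returns:
        var_ewma = lam * var_ewma + (1 - lam) * u ** 2
        var_garch = gamma * V_L + alpha * u ** 2 + beta * var_garch

    print(var_ewma, var_garch)     # identical when gamma = 0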

Finally, since you've gone this far (nicely done!), the one other thing to note (and the Linda Allen chapter mentions this): the recursive and elaborate EWMA look the same, but they aren't EXACTLY the same. The recursive collapses an infinite series, which the elaborate is not (do a sum of your weights and it will be < 100%). Technically, there is that bit of residual; it just doesn't show up even to several decimal places. That's why each weight, instead of being (1-lambda)(lambda^[n-1]), should be (1-lambda)(lambda^[n-1])/(1-lambda^[total number in the series]). That extra divisor is very near to 1.0, so this slightly pluses up each weight such that the weights sum to 1.0.
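
Something like this quick Python sketch (made-up returns, oldest first) shows the two calculations side by side:

    lam = 0.94
    returns = [0.012, -0.008, 0.021, -0.015, 0.004, 0.009, -0.011]
    n = len(returns)

    # Elaborate: weight i counts back from the most recent return; the divisor
    # (1 - lam^n) trues up the truncated weights so they sum exactly to 1.0
    weights = [(1 - lam) * lam ** i / (1 - lam ** n) for i in range(n)]
    var_elaborate = sum(w * u ** 2 for w, u in zip(weights, reversed(returns)))

    # Recursive: seed with the oldest squared return, then roll forward
    var_recursive = returns[0] ** 2
    for u in returns[1:]:
        var_recursive = lam * var_recursive + (1 - lam) * u ** 2

    print(sum(weights))                  # 1.0 by construction
    print(var_elaborate, var_recursive)  # close, but not EXACTLY the same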

Thanks, David
 

idalton

Director
Thank you for correcting the mistake in my spreadsheet. I'd now like to extend my question to see how the models might handle a fixed income asset, which has two variables that affect volatility: price and L + spread. Is there a way to extend either RiskMetrics and/or GARCH(1,1) to bring both variables into the equation to estimate volatility?
 

vjoyram

New Member
David,
I have a question on page 96 of the quantitative discipline. You mention that "In EWMA, ...... a lambda that is close to one exhibits slow decay". I am a bit confused. A high lambda in this case means a higher weighting on the previous day's variance. Wouldn't this translate into a rapid decay, i.e., a quick reversion to the mean? I would appreciate your help on this matter.

many thanks.
 

David Harper CFA FRM
Subscriber
Hi vjoyram,

Yes, part of this is merely semantics, as between Allen/Hull/Jorion the lambda is referred to as the smoothing/decay/persistence parameter.

First, please note: EWMA does not mean revert; GARCH(p,q) adds the term for mean reversion. So, in EWMA the notion of decay/persistence can only relate to either (i) the change versus the previous variance (i.e., high lambda implies little change, or the last return has a smaller impact) or (ii) the ratio of weights on the series of lagged returns.

Here is what Allen says: "a lambda param closer to one exhibits a slower decay in information's relevance with less weight on recent observations"

What she means is: if lambda is 94%, then the infinite series is

weight 1 = 6%
weight 2 = 6%(94%) = 5.64%
weight 3 = 6%(94%)^2 = 5.3%

versus, say, if lambda were 50% (unrealistic):

weight 1 = 50%
weight 2 = 50%(50%) = 25%
weight 3 = 50%(50%)^2 = 12.5%

See how the first has a "slower decay" on the weights?

But it is still consistent with what you say: the "low/slow decay" is "high persistence": the high lambda is giving less weight to the most recent return (the newest "innovation") and more weight to (less decay away from) the information in the last variance.
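
A quick Python sketch of that same comparison:

    # first three EWMA weights under a high and an (unrealistically) low lambda
    for lam in (0.94, 0.50):
        print(lam, [round((1 - lam) * lam ** i, 4) for i in range(3)])
    # 0.94 -> [0.06, 0.0564, 0.053]   slow decay across the weights
    # 0.5  -> [0.5, 0.25, 0.125]      fast decay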

David
 

humheehum

New Member
Hi David - I have a question on EWMA.

On page 42, Allen writes:

“a lambda parameter closer to unity exhibits a slower decay in information’s relevance with less weight on recent observations”

I have now understood this with the example you have given above.

However, on page 45, Allen writes:

" A higher lamda not only means more weight on recent observations,it also means that our current beliefs have not changed dramatically...."

While the second part of the statement seems consistent with the slow decay, can you please explain why a high lambda (presumably closer to 1) means more weight on recent observations?

Thanks for your help
 

David Harper CFA FRM
Subscriber
Hi Ashim,

I see your point. I find that difficult, too.
It's because, on p. 45, Allen is referring to the recursive EWMA. The recursive being:
variance estimate = lambda*(previous variance) + (1-lambda)*(previous return^2)

This recursive formula above is the elegant solution to the infinite series of squared returns mentioned earlier in the thread.
In the longhand (infinite) EWMA, we only have weights multiplied by squared returns.
But, as the recursive EWMA above reduces to an elegant formula which includes the previous *variance,* on p. 45 she means the last variance when she says "observation."
So, the point is: high lambda means less weight on the most recent squared return, but more weight on the most recent variance (if using the recursive). The confusion stems from using 'observation' for both.

(It may help to note that lambda * (previous variance) in the recursive EWMA is mathematically equal to the series of weighted squared returns in the infinite-series EWMA, excluding the most recent squared return.)

IMO, the easiest way to recall this is to focus on: high lambda implies a low (1-lambda) weight on the last squared return. That's true of both the infinite series (longhand) EWMA and the recursive EWMA. And that last squared return is the new information (the new innovation). If lambda is high, we "pay less attention" to the new information, so we are more "sticky to," or more "persistent to," or "less decaying from" the previous stream of returns (which, in the recursive EWMA, is summarized in the recent variance).
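
A tiny Python check of that equivalence (made-up numbers):

    lam = 0.94
    returns = [0.012, -0.008, 0.021, -0.015]  # oldest first; the last is the new innovation
    older = returns[:-1]

    # previous variance, built longhand from the older squared returns
    prev_var = sum((1 - lam) * lam ** i * u ** 2 for i, u in enumerate(reversed(older)))

    # the infinite-series weights on those same older returns, one lag further out
    tail = sum((1 - lam) * lam ** (i + 1) * u ** 2 for i, u in enumerate(reversed(older)))

    print(lam * prev_var, tail)  # equal (up to rounding): lambda*(previous variance) = the older series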

David
 

humheehum

New Member
David - Thanks for the reply. I think I have understood what you posted. I need to re-read Allen just to make myself comfortable with the interpretation of lambda.

Thanks
Ashim
 

David Harper CFA FRM
Subscriber
Ashim

If you have Hull, Chapter 19 (6th ed.) has a cleaner explanation. And, maybe better, it shows EWMA as a special case of GARCH(1,1) where lambda = beta. IMO, the difficulty is mostly about weighting previous returns^2 versus weighting the previous variance, and how the former collapses into the latter. David
 

humheehum

New Member
David - thanks I have now understood this.

The diagram in Allen showing how weights are applied to squared returns under high and low lambda, along with re-reading your posts above, has cleared up my doubt. You were right: the confusion was coming from the use of the word observation to mean both squared returns and variance.

Thanks
Ashim
 

sam4risk

New Member
Hi David,

Is it necessary to use a lambda value of 0.94 for every asset class in every country? Can we use a lambda value other than 0.94? If yes, then what factors should be considered in deciding upon a particular lambda value? For example, if I am working on time series only from the debt market, then what approximate value of lambda should be taken for calculating the variance forecast?
 

David Harper CFA FRM
Subscriber
Hi sam,

I attached Ch 5 of the RM technical document. I'm not current on lambdas in practice, sorry; I had understood from Carol Alexander that they (decay factors) are essentially subjective, between 0.75 and 0.98 (source: C. Alexander, Vol 2).

Please note about the attached RM technical document:
1. Table 5.8 shows various "optimal" decay factors;
2. Table 5.7 takes a more interesting view: the decay factor as a function of tolerance level and size of sample (days)
... so for a given tolerance level (e.g., 1%), a certain number of days is used (458 @ 0.99). So, it seems to me this almost flips the question around, as in: "given we have (n) useful days in our data set, a lambda of X is implied"... maybe you could start with the useful (n) days to inform a lambda?
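
For example, a minimal Python sketch of that flip (the relation lambda^n = tolerance is how I read Table 5.7):

    # weight remaining beyond the last n days is lambda^n; set it equal to the
    # tolerance level and solve: lambda = tolerance^(1/n)
    def implied_lambda(n_days, tolerance=0.01):
        return tolerance ** (1.0 / n_days)

    print(round(implied_lambda(458), 3))  # ~0.99, consistent with 458 days @ 1% tolerance
    print(round(implied_lambda(151), 3))  # ~0.97: fewer useful days implies a lower lambda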

David
 

sam4risk

New Member
Thanks David,

Your reply has definitely given me some direction to work with. Can you please throw some light on the tolerance level, as in how the tolerance level is decided upon?
 

sam4risk

New Member
Hi David,

I have made an attempt to calculate a lambda value for a particular country. I used a method (the RMSE criterion) mentioned in the RiskMetrics technical document for determining the optimal decay factor. I considered 37 benchmark time series over 500 days, calculated the decay factor for each series by minimizing the RMSE, and then took a weighted average to arrive at an optimal lambda value of 0.906. Now, I need to perform backtesting on lambda = 0.906, showing that this value should be acceptable for that particular market. Could you help me with the kind of backtesting method I can perform to achieve this objective?
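
For reference, here is roughly the calculation I used, as a minimal Python sketch (the seed variance and the lambda grid are my own choices, not prescribed by the document):

    import numpy as np

    # For each candidate lambda, forecast next-day variance recursively and compare
    # the forecast to the realized squared return; pick the lambda with minimum RMSE
    def rmse_for_lambda(returns, lam):
        var = returns[0] ** 2                     # seed the forecast with the first r^2
        errors = []
        for u in returns[1:]:
            errors.append(u ** 2 - var)           # realized r^2 minus the prior forecast
            var = lam * var + (1 - lam) * u ** 2  # recursive EWMA update
        return float(np.sqrt(np.mean(np.square(errors))))

    def optimal_lambda(returns, grid=np.arange(0.85, 0.995, 0.001)):
        return min(grid, key=lambda lam: rmse_for_lambda(returns, lam))

    # e.g., returns = np.diff(np.log(prices)) for one of the 37 benchmark series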

Thanks.
 