Cope reading

shanlane

Active Member
Hello,

Cope says a few things in this chapter that do not seem to agree with other readings and even itself.

For one, he says that EVT can only be used if the same mechanism is responsible for all losses. He then says this is not realistic, but goes on to say how the GPD can be used if the data can reliably estimate the tail parameter. Isn't the GPD used in conjunction with EVT? I thought that POT was a form of EVT and that the GPD was the distribution to use with POT.

It is just really not clear when we are able to use any of these methods.

Any advice on how to think about this a bit more clearly?

Thanks!

Shannon
 

David Harper CFA FRM

Subscriber
Hi Shannon,

As I'm trying to prepare videos (and I already have a backlog of your queries: I may not be able to keep up with you on a daily basis, in the short run, FYI), I lack the time to reference Cope and reconcile, but your statement here is CORRECT in any non-quibbling sense: "I thought that POT was a form of EVT and that the GPD was the distribution to use with POT." A few thoughts (without reference to Cope; I'm not recalling his exact caveats):
  • You are correct, and Dowd conveys it maybe better than Cope: EVT is the broad discipline/sub-discipline (actually, looks like it is a branch: http://en.wikipedia.org/wiki/Extreme_value_theory)
  • Within that branch, the FRM has struggled to find readings, but Dowd reflects their line: EVT has two big approaches to parametrically characterizing the extreme tail: 1. collect the worst losses within historical time slices (the block maxima approach; considered "old fashioned"); and 2. collect losses above a threshold, without time-slicing the historical sample; i.e., the peaks-over-threshold (POT) approach. In summary, EVT approaches include BM and POT.
  • Just as a general set of conditions under the BM approach implies mathematically that the dataset ought to converge on the GEV distribution, the POT approach implies that, as the POT threshold increases, the dataset of exceedances ought to converge on a GPD distribution. Some author, somewhere, drew an analogy that I like: just as the central limit theorem (CLT) implies the sampling distribution of the sample mean converges on a normal distribution, a theorem within EVT implies the distribution of extreme losses under POT converges on a GPD distribution. If you like, EVT is the school, POT is a method (class?), and GPD is the obvious tool (among others) in the POT method.
  • Just as the normal sample mean only strictly holds up under the "unrealistic" conditions of the CLT (e.g., i.i.d. is actually unrealistic), it is consistent with the title of Cope's article that he explores the implications of violating strict assumptions. In many ways, I think this is the HUGE THEME of model risk, and we are constantly observing it: here is a model (e.g., CAPM, APT, VaR); it only works under a set of assumptions; the assumptions are not realistic; but the model is useful; how do we use the model with an awareness that technically we are violating assumptions? How far can we go with that? My favorite is simply scaling VaR: a 1-day VaR of (x) scaled to 10 days is x*SQRT(10). We just assumed i.i.d., which is not realistic.
  • At some point, the theory doesn't survive actual practice ("In theory, there is no difference between theory and practice. But, in practice, there is."); in the extreme tail, I don't perceive much attachment to the GPD distribution actually. I perceive a lot of it is actuarial: fitting distributions to actual datasets.
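To make the POT → GPD chain above concrete, here is a minimal sketch in Python: simulate heavy-tailed losses, keep only the exceedances over a high threshold, and fit a GPD to them. The Student-t data, the 95th-percentile threshold, and the seed are my own illustrative choices (not from Cope's chapter), and the square-root-of-time line at the end simply restates the i.i.d.-based VaR scaling mentioned above.

```python
# Sketch of peaks-over-threshold (POT): losses above a high threshold
# should be approximately Generalized Pareto (GPD) distributed.
import numpy as np
from scipy.stats import genpareto, t

rng = np.random.default_rng(42)
losses = t.rvs(df=3, size=100_000, random_state=rng)  # heavy-tailed sample

u = np.quantile(losses, 0.95)          # POT threshold (illustrative choice)
exceedances = losses[losses > u] - u   # losses measured above the threshold

# Fit a GPD to the exceedances; floc=0 because exceedances start at zero.
xi, loc, beta = genpareto.fit(exceedances, floc=0)
print(f"threshold u = {u:.3f}")
print(f"tail (shape) parameter xi = {xi:.3f}")  # true tail index for t(3) is 1/3
print(f"scale beta = {beta:.3f}")

# Square-root-of-time VaR scaling, which assumes i.i.d. returns:
var_1d = np.quantile(losses, 0.99)
var_10d = var_1d * np.sqrt(10)
print(f"1-day 99% VaR = {var_1d:.3f}, 10-day (scaled) = {var_10d:.3f}")
```

The fitted shape parameter xi is the "tail parameter" Cope refers to: the whole POT approach hinges on having enough exceedances to estimate it reliably.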
I hope that helps, thanks!
 

shanlane

Active Member
I am sorry for taking up so much of your time! I guess I assumed that you had a bunch of people answering questions for you, but it does not seem like that is the case.

Thanks again for all of the help!

Shannon
 

David Harper CFA FRM

Subscriber
I wish that were the case, but nope: the people who can help answer the questions (with some care and without a total disregard for accuracy) are frankly too expensive.

With a forum, I had originally hoped for more candidate/customer participation than we've historically observed (and we do get some help, like ahansens has been terrific), but I think I've learned that most candidates are either (i) just trying to comprehend what they are seeing and/or (ii) time stressed.

We are going to experiment with some awards for participation (iTunes, Amazon gift cards) to try to make it more fun to participate, but I'm not holding my breath... :)

Finally, you do ask a lot of questions but most of them have been highly instructive and useful, in my judgement, to the forum and to myself.
 

shanlane

Active Member
Thanks! I try to look things up on my own before I ask anything. Most of the time I can figure things out on my own, but when there are discrepancies between authors, or when the text I am reading is THE authority on a topic and it still does not make sense, I find that I have no other option but to bounce my questions off of you.

Thanks again!

Shannon
 

David Harper CFA FRM

Subscriber
Right, agreed. You have identified the primary weakness of the FRM (IMO): year after year, careful candidates encounter a list of perceived discrepancies (I have come to perceive it as an irony: the more careful the candidate, the more they will observe this and be frustrated by it). The CFA does not engender quite the same issue.

The silver lining, I think, is that it pushes everybody's understanding and is actually more realistic (the real world is not standardized), and I personally do not think it impacts the exam result whatsoever. Nevertheless, I do frankly think it is part of GARP's job (i.e., part of what the fees should go toward) to reduce this phenomenon with some standard definitions. Thanks,
 

shanlane

Active Member
I like the way you think. It goes back and forth between frustration and anger. As you said, the harder I work, the more frustrated I get.

Thanks again!

Shannon
 