Why is reducing bunching an advantage?

King St

New Member
In Reading 34, regarding the Hull and White approach:

Bunching is defined as tail events tending to happen in close succession.

Later, it is claimed that an advantage of the Hull and White measure is that it greatly reduces bunching.

However, here is my question:

I think bunching is a nice feature to have. People try hard to incorporate regime-switching models precisely to keep this bunching effect. Why do they argue that removing bunching is an advantage?
 

King St

New Member
I did some research on the Hull-White paper.

I think both the Schweser notes and the study notes make the bunching definition very misleading, i.e., bunching is defined as tail events tending to happen in close succession.

This definition takes the sentence out of context from the original Hull-White paper. If you read the original paper carefully, you will realize that Hull and White refer to bunching as a consequence of traditional simulation. In other words, bunching is just the thing that makes the historical simulation very inefficient. (As an aside, quasi Monte Carlo simulation would solve this problem; see the sketch below.)
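To make that aside concrete, here is a minimal sketch (my own illustration, not anything from the paper) that estimates a 95% VaR from a simple standard-normal loss model using ordinary pseudo-random uniforms versus a scrambled Sobol low-discrepancy sequence. The loss model, sample size, and use of SciPy's qmc module are assumptions for illustration only.

```python
# Minimal sketch (illustrative assumptions only): pseudo-random vs. low-discrepancy
# draws for estimating a 95% VaR under a standard-normal loss model.
import numpy as np
from scipy.stats import norm, qmc

n = 2 ** 12  # 4096 scenarios

# Traditional simulation: pseudo-random uniforms pushed through the inverse CDF.
pseudo_losses = norm.ppf(np.random.default_rng(0).random(n))

# Quasi-Monte Carlo: scrambled Sobol points mapped through the same inverse CDF,
# so the unit interval (and hence the tail) is covered more evenly.
sobol_u = qmc.Sobol(d=1, scramble=True, seed=0).random_base2(m=12).ravel()
sobol_losses = norm.ppf(sobol_u)

for name, losses in [("pseudo-random", pseudo_losses), ("Sobol", sobol_losses)]:
    print(f"{name:>13} 95% VaR estimate: {np.quantile(losses, 0.95):.4f}")
```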

I do not think this bunching has anything to do with tail events; normal events would exhibit bunching as well.

If we interpret bunching as a negative consequence of regular simulation, then it is an advantage to remove it.
 

David Harper CFA FRM

Subscriber
Hi @King St, I agree (with a qualifier) with your first post but disagree with your second.
  • With respect to your second post, I believe that Hull defines bunching as a property of a dataset, which is somewhat intuitive but non-trivial to measure. While I might quibble with some imprecision, to me he clearly means bunching to be "tail bunching" as a feature of the dataset; i.e., the observable phenomenon of consecutive tail events, or more specifically, the observance of consecutive losses in excess of a (e.g., VaR) threshold. Before we attempt to operationalize or evaluate this conceptual definition, I believe it makes sense as a dataset property, and it is correctly summarized by the paper's only quotable definition: "However, there is significant bunching; that is, tail events tend to happen in close succession rather than occurring randomly throughout the days covered by the data." (Hull page 5). BTW, I do think he means "tail bunching" only; he's just assuming that part.
  • There is then a different question: how do we measure tail bunching? For example, he illustrates how MAPE is a good measure of tail bunching. As in, his second bunched dataset exhibits "obvious" tail bunching regardless of the measure we use. We can use a good measure, or a bad measure that fails to detect it well. As I read him, bunching is not a property of a measure or a VaR approach; it is only that some measures detect (or incorporate) it better.
  • Then there is yet a different question: do we want to reflect or ignore observed tail bunching in our VaR method? I do agree that Hull seems to categorically dismiss [tail] bunching as something to be minimized in the developed VaR measure, although I think he may be a bit imprecise in implicitly wanting to preserve tail bunching if it is currently-recently occurring (which BRW would tend to do implicitly via the weighting schedule). So, I think it's actually a non-categorical denial of tail bunching.
So, to me, the logic is the following:
  1. How can we measure "[tail] bunching" in a historical dataset? (A crude illustration is sketched after this list.)
  2. Should a VaR approach reflect or minimize/ignore [tail] bunching in the historical dataset? And this question may itself be imprecise; it may need to be: How do we treat distant bunching (e.g., minimize it)? How do we treat effectively current bunching (e.g., reflect it in the metric or approach, as a regime-switching model would do)?
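As a rough illustration of question 1 (this is only my own crude sketch, not Hull's statistic), one could flag losses beyond an empirical 95% VaR threshold and count runs of consecutive exceedances; tail bunching would show up as runs longer than one. The simulated fat-tailed losses and the run-length statistic below are assumptions for illustration only (an i.i.d. series like this one will show little bunching, whereas an autocorrelated-volatility series would show more).

```python
# Crude sketch (illustrative assumptions only): detect "tail bunching" as runs of
# consecutive losses beyond an empirical 95% VaR threshold.
import numpy as np

rng = np.random.default_rng(42)
losses = rng.standard_t(df=4, size=1000)   # simulated fat-tailed daily losses
var_95 = np.quantile(losses, 0.95)         # empirical 95% VaR threshold

exceed = losses > var_95                   # boolean series of tail events
ones = exceed.astype(int)
starts = np.flatnonzero(np.diff(np.r_[0, ones]) == 1)   # indices where a run begins
ends = np.flatnonzero(np.diff(np.r_[ones, 0]) == -1)    # indices where a run ends
run_lengths = ends - starts + 1

print("total exceedances:         ", exceed.sum())
print("runs of length > 1 (bunch):", (run_lengths > 1).sum())
```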
 

King St

New Member
@David Harper CFA FRM CIPM

Thanks for your reply. I think my point in the second post is correct.

The dataset described in Hull and White's paper is not the realized dataset (i.e., real-world data). Rather, the dataset is obtained from traditional Monte Carlo simulation. I think we need to be very clear about the difference between bunching in reality and bunching from a particular simulation.

* Bunching from reality: this is something that happens in reality; if the model incorporates it (e.g., a regime-switching model), this bunching in the model will be an advantage.

* Bunching from traditional simulation (which Hull and White refer to in their paper): this bunching is sourced from using traditional simulation, which fails to incorporate low-discrepancy sequences in the random number generating process. In Hull and White's paper, since the primary concerns are the tail events (VaR estimation), of course they conveniently define bunching as tail events tending to happen in close succession. This definition is in the context of Hull and White using traditional Monte Carlo simulation.

In my opinion, if the bunching is sourced from reality and we model it, keeping the bunching will be an advantage of the model. On the other hand, if the bunching is sourced from simulation, then removing it from the model is actually an advantage, because it ensures that the 5% VaR will have 5% of the risk beyond the VaR (a crude coverage check is sketched below).
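To illustrate that coverage point with a minimal sketch (my own assumption, not from the paper): if the VaR measure is well calibrated, roughly 5% of realized losses should land beyond the rolling 95% VaR. The simulated losses and the 250-day window below are purely illustrative.

```python
# Minimal sketch (illustrative assumptions only): unconditional coverage check,
# i.e., the fraction of losses beyond the rolling 95% VaR should be near 5%.
import numpy as np

rng = np.random.default_rng(7)
losses = rng.normal(size=2000)   # simulated daily losses
window = 250                     # rolling estimation window (illustrative)

exceed = [
    losses[t] > np.quantile(losses[t - window:t], 0.95)  # loss beyond the 95% VaR?
    for t in range(window, len(losses))
]
print(f"observed exceedance rate: {np.mean(exceed):.3f} (target ~0.05)")
```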

The only point I want to make is that when it comes to defining bunching, we need to be clear about its source.
 