FRM Practice question - Binomial Test and VAR calculation

Hi David,

I kindly request your help with the question below (from the GARP 2010 practice questions). I am not able to follow how the binomial test was worked out, nor the answer given (I could not find a similar problem in the referenced reading material). Once again, many thanks for your help.

Question:
In 2006, UBS reported no exceedances on its daily 99% VaR. In 2007, UBS reported 29 exceedances. To test whether the VaR was biased, you consider using a binomial test. Assuming no serial correlation, 250 trading days, and an accurate VaR measure, you calculate the probability of observing n exceedances, for n = 0, 1, . . .
n   Prob(observing n exceedances)
0   7.9%
1   20.2%
2   25.6%
3   21.6%
4   13.6%
5   6.8%
6   2.8%
7   1.0%
8   0.3%
9   0.1%

Which of the following statements is not correct?
a. At the 5% probability level, you cannot reject that the VaR was unbiased in 2006 using a binomial test.
b. The lack of exceedances in 2006 demonstrates that UBS failed to take into account the existence of fat tails in estimating the distribution of its market risk.
c. It is difficult to evaluate the implications of the lack of exceedances if the VaR is forecasted for a static portfolio and it is compared against the trading P&L.
d. At the 5% probability level, you can reject that the VaR was unbiased in 2007 using a binomial test.

Answer: b

Explanation: A and D are correct. Using 250 days in a year, the binomial test rejects for 2006 at the 8% level and for 2007 at
less than the 1% level. C is correct since the trading P&L includes intra-day trading as well as market-making income. B is
wrong since exceedances alone tell us nothing about the existence of fat tails.


Topic: Valuation and Risk Models
Subtopic: Value-at-Risk definition and methods
Reference: Allen, Boudoukh, Saunders, Understanding Market, Credit and Operational Risk, chapters 2, 3.
 
Hi Lakish,

This question was featured in our webinar. It shows you the distribution for a correct VaR model with expected exceptions (mean) = 1% * 250 = 2.5 per year. We shouldn't be surprised if randomness (sampling variation) produced 2 or 3 or 4 exceptions; those are near enough to the mean. Big differences from the mean are what allow us to reject the null. (See the sketch below.)
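If it helps to see the arithmetic, here is a minimal sketch in Python (my own illustration, not part of the GARP question) that reproduces the distribution, assuming scipy is available:

```python
# Under the null (the VaR model is correct), each day's exceedance is
# Bernoulli with p = 1%, so the annual count of exceedances is
# Binomial(n = 250, p = 0.01), with mean n * p = 2.5 per year.
from scipy.stats import binom

n, p = 250, 0.01  # trading days, 1 - VaR confidence level
print("expected exceedances =", n * p)  # 2.5

for k in range(10):
    print(f"P({k} exceedances) = {binom.pmf(k, n, p):.1%}")
```

The output is very close to, but not identical to, the question's table (e.g., P(0) comes out to about 8.1% versus the printed 7.9%); the small gap suggests the table may have been generated with 252 rather than 250 trading days.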

The 29 exceptions are the easiest case. The probability of 9 is only 0.1%, so 29 is so far off the distribution (to the right) that its probability is essentially zero. If the mean is 2.5, it is virtually impossible to get 29 exceptions; this cannot be due merely to random sampling variation. We must reject the 29: our hypothesis that the model is correct, rather, must be the mistake.
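A quick check of that tail probability (again, just my own sketch using scipy):

```python
from scipy.stats import binom

# P(X >= 29) = P(X > 28), i.e., the survival function evaluated at 28
p_value = binom.sf(28, 250, 0.01)
print(p_value)  # effectively zero, many orders of magnitude below 1%
```

So at any conventional significance level we reject the null that the VaR model was unbiased in 2007.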

What about 0? That is perfect; you cannot get any fewer. But that can happen 7.9% of the time, and 7.9% is essentially the p-value. We could reject with 1 - 7.9% = 92.1% confidence or less, but not greater. If we soften our confidence to, say, 90%, then we can reject the zero (i.e., too few exceptions to be plausible). But as 7.9% > 5%, the zero is still a reasonably random outcome for a binomial with a mean of 2.5. We cannot quite reject the zero.
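In code, the zero-exceedance case looks like this (same hedged sketch as above):

```python
from scipy.stats import binom

# One-tailed p-value for observing zero exceedances: P(X <= 0) = P(X = 0)
p_value = binom.pmf(0, 250, 0.01)
print(f"p-value = {p_value:.1%}")                       # ~8.1% (table: 7.9%)
print(f"max rejection confidence = {1 - p_value:.1%}")  # ~91.9%
# Since the p-value exceeds 5%, we cannot reject at the 5% significance
# level: zero exceedances is a plausibly random outcome when the mean is 2.5.
```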

Hope that helps, David
 