YouTube T4-01: Three approaches to value at risk (VaR) and volatility

Nicole Seaman

Director of CFA & FRM Operations
Staff member
Subscriber
The three approaches are: 1. Parametric (aka, analytical); 2. Historical simulation; and 3. Monte Carlo simulation (MCS). The parametric approach assumes a clean function; the other two work with messy data. Historical simulation is betrayed by a histogram; MCS is betrayed by a random number generator. In terms of the PARAMETRIC APPROACH (the first of the three), the key question is, "How do we estimate current volatility?" and we can take three basic approaches to this question.
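For anyone who likes to see the classification in code, here is a minimal Python sketch of the three VaR approaches applied to the same return series at 95.0% confidence. The simulated daily returns and parameters below are made up purely for illustration; they are not from the video.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical daily returns, simulated only to have something to work with
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0005, scale=0.01, size=1_000)

alpha = 0.95  # 95.0% confidence level

# 1. Parametric (aka, analytical): fit mean and volatility, then impose the normal
mu, sigma = returns.mean(), returns.std(ddof=1)
var_parametric = -(mu + norm.ppf(1 - alpha) * sigma)

# 2. Historical simulation: read the 5.0% quantile directly off the empirical histogram
var_historical = -np.percentile(returns, (1 - alpha) * 100)

# 3. Monte Carlo simulation: simulate from an assumed model, then read the quantile
simulated = rng.normal(mu, sigma, size=100_000)
var_monte_carlo = -np.percentile(simulated, (1 - alpha) * 100)

print(f"Parametric {var_parametric:.4f} | Historical {var_historical:.4f} | MCS {var_monte_carlo:.4f}")
```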



ivan.jarmanovic

New Member
Hi David
Great video! I guess the delta-normal approach (as described in Chapter 2 of P1.T4, "Calculating and Applying VaR") would fall under the parametric classification as a non-linear model? I was struggling to reconcile the various wordings across Chapters 1 to 3 (VaR and Volatility) until I saw the overview @5:35 in the video.
Best regards
Ivan
 

David Harper CFA FRM

Subscriber
Hi @ivan.jarmanovic Glad you found help at https://forum.bionicturtle.com/thre...r-versus-worst-case-scenarios-allen-3-3.7209/ (Thank you for searching!)

Although terminologies vary, I am fond of Carol Alexander's exactness on this issue (MRA Vol 4: https://amzn.to/2tfgFIL)

First, clearly delta-normal is a parametric (aka, analytical) approach; indeed, the most common analytical/parametric approach is delta-normal VaR (and some authors appear to consider them synonymous!).

I think Linda Allen would say that the most common delta-normal VaR is better called normal linear VaR, because the risk factors are presumed normally distributed and the use of delta implies a linear relationship; e.g., if we estimate an option's price change with delta only, we approximate it linearly. However, we can use a delta-gamma VaR, which is still parametric but non-linear; a small numerical sketch follows below. I hope that's helpful,
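To make the linear versus non-linear distinction concrete, here is a small Python sketch. The Greeks and volatility are purely illustrative (not from any assigned example), and it simply evaluates both approximations at the same worst-case move in the underlying, which is the usual shortcut rather than a full delta-gamma VaR quantile:

```python
from scipy.stats import norm

# Hypothetical long call position: these inputs are illustrative only
S0 = 100.0          # underlying price
delta = 0.60        # option delta
gamma = 0.05        # option gamma
sigma_daily = 0.02  # daily volatility of the underlying
alpha = 0.99        # 99.0% confidence

# Worst-case daily move in the underlying at 99.0% confidence
dS = -norm.ppf(alpha) * sigma_daily * S0   # about -4.65

# Delta-normal (linear) approximation of the position's price change
dP_linear = delta * dS

# Delta-gamma approximation: still parametric, but non-linear in dS
dP_delta_gamma = delta * dS + 0.5 * gamma * dS**2

print(f"Delta only: {dP_linear:.2f} | Delta-gamma: {dP_delta_gamma:.2f}")
# For a long option, the positive gamma term cushions the approximated loss
```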
 

Anirudhda77

New Member
Hello David.
I calculated VaR and ES by ranking the losses and gains and then cutting off the 5% percentile to calculate VaR. Another time I used exactly the same data, calculated the mean return, standard deviation, and z-value, and calculated VaR. The two answers were very different. Is that normal? If yes, which of the two methods should I be using in general?

Thank you
 

David Harper CFA FRM

Subscriber
Hi @Anirudhda77 It's a great question. In my opinion, we must first say that neither approach is wrong. You have elegantly illustrated the difference between historical simulation VaR and the most common parametric (aka, analytical) VaR, which is normal VaR. Much can be said about this, but I'd start with (see also the sketch after this list):
  • The first approach identifies the 5.0% quantile from the raw empirical dataset. As such, it has the benefit of honoring the "messy" actual, empirical distribution; no distributional assumption is required.
  • The second approach "imposes normality" on the dataset: it fits a normal distribution to the data. That begs the question: if we are going to fit a distribution, why the normal? There are many distributions; why not fit another? Importantly, we cannot fall back on the CLT, which is a most magical justification for using the normal when applying a distribution to the sample mean, because here we are using it for a quantile in the tail. The CLT, so often useful (e.g., justifying tests of regression coefficients), does not give us air cover for the normal here. Still, maybe it's a good approximation, as prediction is a tough business! The chief advantage of imposing normality (i.e., using the dataset only to retrieve a mean and variance) is that, once we have those two parameters, we can discard the dataset and use a very convenient distribution function. I hope that's helpful,
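Here is a minimal Python sketch of the two calculations side by side. The sample is simulated from a fat-tailed Student's t just for illustration, standing in for "messy" real data:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical fat-tailed daily returns, simulated for illustration only
rng = np.random.default_rng(7)
returns = 0.01 * rng.standard_t(df=3, size=1_000)

# Historical simulation VaR: the 5.0% quantile of the raw empirical distribution
hs_var = -np.percentile(returns, 5)

# Normal (parametric) VaR: retrieve mean and standard deviation, then impose normality
mu, sigma = returns.mean(), returns.std(ddof=1)
normal_var = -(mu + norm.ppf(0.05) * sigma)

print(f"Historical simulation VaR: {hs_var:.4f} | Normal VaR: {normal_var:.4f}")
# The two numbers generally differ because the empirical distribution is not exactly normal;
# the gap tends to widen as the confidence level moves further into the tail.
```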
 

Anirudhda77

New Member
Thank you for such a detailed reply, David. You have always been helpful.
 