FRM T1. Foundation > Learning Spreadsheets: Commentary

David Harper CFA FRM
Subscriber
FRM P1 > T1 > Intro to VaR (xls): Value at Risk

Hello! We are uploading revised learning spreadsheets that I've been working to improve. I wanted to share some information about them. I am going to start at the beginning in this informal series.

In terms of a natural sequence, the first learning workbook (R1-P1-T1-Intro-VaR-v2.xlsx) is here in the Study Planner at https://learn.bionicturtle.com/topic/learning-spreadsheet-intro-var/. Within this workbook, the first sheet is T1.1-IntroVar and it introduces value at risk (VaR). Unlike most of our learning workbooks, this XLS is not linked directly to an assigned reading. That's because these value at risk (VaR) calculations are fundamental and make several appearances in the FRM syllabus. First I want to set the context. There are three broad approaches to estimating VaR: historical simulation, Monte Carlo simulation, and parametric (aka, analytical). This sheet is a parametric approach because it does not use raw data but instead assumes a normal probability distribution (although data would inform the inputs). So, I like to remind folks that we tend to assume the normal distribution because it is elegant and convenient, but parametric VaR does not necessarily imply a normal distribution. In fact, it's an FRM theme that realistic returns are heavy-tailed; i.e., non-normal. Just today this post appeared in my feed reader: "Risk management based on normality is a loser's game." At the same time, I don't mean to dismiss the justifiable use of the normal distribution when the utterly amazing central limit theorem (https://en.wikipedia.org/wiki/Central_limit_theorem) does apply. But the thing is, risk is preoccupied with loss tails, not so much the center of the distribution, alas.

The left panel (see below) in this first sheet of the workbook (again that's sheet T1.1-IntroVar within workbook R1-P1-T1-Intro-VaR-v2.xlsx) illustrates the most basic VaR approach: we specify a confidence level and a horizon (e.g., one-day 95.0%) and, given the asset value and volatility, we retrieve the %aVaR with the following: %aVaR = -µ*(Δt/T) + σ*sqrt(Δt/T)*z(α). The aVaR refers to absolute VaR, which is the loss relative to the current position (relative to today), in contrast to relative VaR, which would exclude the drift term and would be the loss relative to the future expected value. In market risk (aka, mVaR), where the horizon is often one day such that the drift effectively rounds to zero anyway, by convention we can exclude the drift (i.e., the expected return term). But it is good practice to start with absolute VaR in your thinking because basically it can't be wrong. I like to think of aVaR as return-adjusted risk (compare to risk-adjusted return) because, while rVaR is just the worst-expected loss, aVaR mitigates the worst-expected loss with expected return. Further, aVaR is consistent (imo) with risk theory: in contrast to valuation (and risk-neutral pricing), which is preoccupied with precise present value estimation, risk is concerned with the approximate future value in the loss tail, and expected return is part of the future state; e.g., consider the drift in the Merton model.
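If it helps to see the arithmetic outside of Excel, here is a minimal Python sketch of the absolute vs. relative parametric VaR above. It assumes µ and σ are per annum and the horizon is a year-fraction; the example inputs are illustrative rather than the sheet's actual values.

```python
# Minimal sketch (not the XLS itself): parametric absolute vs. relative VaR.
# Assumes mu and sigma are per annum and the horizon is a fraction of the year.
from scipy.stats import norm

def parametric_var(mu, sigma, horizon, confidence, value=1.0):
    """Return (absolute VaR, relative VaR) as positive dollar losses."""
    z = norm.ppf(confidence)                 # e.g., 1.645 at 95.0% confidence
    rvar = sigma * (horizon ** 0.5) * z      # relative VaR: excludes the drift
    avar = -mu * horizon + rvar              # absolute VaR: expected return mitigates the loss
    return value * avar, value * rvar

# e.g., one-day 95.0% VaR on a $100 position with 15% annual drift and 30% annual volatility
print(parametric_var(mu=0.15, sigma=0.30, horizon=1/250, confidence=0.95, value=100.0))
```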

The right panel only performs a relative VaR but on two assets (I wanted to keep the introduction as simple as possible). So we also need a correlation parameter. The VaR here relies on the traditional mean-variance framework (aka, MPT) taught in all basic finance courses, so you do want to understand this specifically. At the same time, you are probably aware that it requires assumptions which render it controversial.
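Below is a hedged Python sketch of the right panel's two-asset relative VaR under the mean-variance framework; the position sizes, volatilities, and correlation are illustrative assumptions, not the XLS inputs.

```python
# Illustrative sketch: two-asset relative VaR under the mean-variance (MPT) framework.
from math import sqrt
from scipy.stats import norm

def two_asset_rvar(v1, v2, sigma1, sigma2, rho, horizon_days, confidence):
    z = norm.ppf(confidence)
    # dollar variance: (v1*s1)^2 + (v2*s2)^2 + 2*v1*v2*s1*s2*rho
    dollar_var = (v1 * sigma1)**2 + (v2 * sigma2)**2 + 2 * v1 * v2 * sigma1 * sigma2 * rho
    return sqrt(dollar_var) * sqrt(horizon_days) * z

# e.g., $100 and $200 positions, daily vols of 1% and 2%, correlation of 0.30, one-day 95% VaR
print(two_asset_rvar(100.0, 200.0, 0.01, 0.02, 0.30, horizon_days=1, confidence=0.95))
```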

0426-P1-T1-VaR-Intro-2.png


Another note about terminology. The sheet refers to delta normal VaR because Jorion uses this term (so historically it's been more familiar to FRM candidates). Delta normal is a bit of overkill here, because we only have a single risk factor (the asset price). If we were instead computing the VaR of a call option on this asset, then rVaR = S*σ*sqrt(Δt/T)*Δ*z(α), where Δ is the option's Greek delta, and this would be "delta normal" because we are assuming the risk factor (the asset price) has a normal distribution and the sensitivity to the risk factor is a first partial derivative with respect to the risk factor (i.e., delta). In the case of a bond, when we use duration to estimate risk, that's also a "delta normal" approach because duration is also (a function of) a first derivative; it could fairly be called duration-normal but nobody does that (in the bond case, the risk factor is the yield, and we do tend to assume very short-term yields are normally distributed even as their long-term evolution is decidedly non-normal). So that's a long way to explain why we use delta normal: delta refers to the first-derivative nature of the sensitivity to the risk factor, and normal refers to the assumption about the risk factor. My XLS here does not have a delta. This approach is called normal linear VaR by Carol Alexander, which makes a lot of sense in the multi-asset situation because it treats VaR as a linear function of risk factor shocks. In that way, her normal linear VaR is a special case of the set of parametric linear VaR models. I hope that's helpful!
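To make the "delta normal" (and duration-normal) idea concrete, here is a short Python sketch: the position VaR is a linear, first-derivative sensitivity multiplied by the normal VaR of the risk factor. The deltas, durations, and volatilities below are assumed for illustration only.

```python
# Illustrative sketch of delta-normal VaR: linear sensitivity times the risk factor's normal VaR.
from math import sqrt
from scipy.stats import norm

def delta_normal_option_rvar(S, sigma_daily, delta, horizon_days, confidence):
    # rVaR = S * sigma * sqrt(dt) * delta * z(alpha), with delta as the option's first derivative
    return S * sigma_daily * sqrt(horizon_days) * delta * norm.ppf(confidence)

def duration_normal_bond_rvar(price, mod_duration, yield_vol_daily, horizon_days, confidence):
    # same linear logic, with (modified) duration as the first-derivative sensitivity to the yield
    return price * mod_duration * yield_vol_daily * sqrt(horizon_days) * norm.ppf(confidence)

print(delta_normal_option_rvar(S=100.0, sigma_daily=0.01, delta=0.6, horizon_days=10, confidence=0.99))
print(duration_normal_bond_rvar(price=100.0, mod_duration=5.0, yield_vol_daily=0.0010, horizon_days=10, confidence=0.99))
```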
 

emilioalzamora1

Well-Known Member
Particularly like this one:

'This approach is called normal linear VaR by Carol Alexander, which makes a lot of sense in the multi-asset situation because it treats VaR as a linear function of risk factor shocks'
 

David Harper CFA FRM
Subscriber
FRM P1 > T1 > Intro to VaR (xls): Autocorrelated returns

The second sheet (T1.1-autocorrelated-returns) in our first learning workbook (located here) illustrates (see below) the square root rule (SRR). The SRR is so common that many users don't know when it's justified; for example, I've seen authors imply that it's a byproduct of normality. But actually the SRR depends not on a distributional assumption, but rather on the assumption of independent and identically distributed (i.i.d.) returns. Consider the variance properties (https://en.wikipedia.org/wiki/Variance): σ^2(A+B) = σ^2(A) + σ^2(B) + 2*σ(A)*σ(B)*ρ(A,B). If A and B are i.i.d., then σ^2(A+B) = 2*σ^2(A), and therefore the volatility (standard deviation) of (A+B) is given by sqrt(2)*σ(A). In this way, if σ(r) is the daily volatility and the returns are i.i.d. over (n) days, then the n-day volatility = sqrt(n)*σ(r), because the n-day variance = n*σ^2(r). So we say "volatility scales with the square root of time as variance scales with time." But this depends on i.i.d. returns, which is not realistic. (This is why I always try to include the i.i.d. assumption in practice questions that utilize the SRR.)

The sheet (below) illustrates three scenarios. All share the same VaR assumptions: we are looking for the 10-day horizon 99.0% confident normal VaR when the one-day volatility is 1.0%. As shown, we typically scale the one-day to 10 days with 1.0% * SQRT(10/1) * 2.33 = 7.36%; i.e., with 99.0% confidence our worst expected loss over the ten-day horizon is 7.36%. That's shown on row 15 under all three scenarios because this row assumes i.i.d. returns. Below that we show the VaR that incorporates the autocorrelation input on row 10. The first scenario assumes zero autocorrelation: its scaling factor is 10.0 so that 1.0%*SQRT(10)*2.33 = 7.36%, which matches the result under the i.i.d. assumption. So the autocorrelation assumption gets included by replacing SQRT(10/1) with SQRT(scaling factor). The second column illustrates autocorrelation (aka, serial correlation) of +0.30 such that the 10-day VaR increases to 9.69%; we do expect an increase! (You can probably figure out what happens as the autocorrelation increases toward a perfect 1.0?) On the other hand, mean reversion is illustrated (in the third column) with an autocorrelation assumption of -0.30 such that the 10-day VaR decreases to 5.57%.
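If you want to check the arithmetic, here is a small Python sketch using the standard formula for the variance of a sum of first-order autocorrelated returns; it reproduces the 7.36%, 9.69%, and 5.57% figures above (the XLS may organize the same calculation differently).

```python
# Sketch: n-day VaR with autocorrelated returns via the scaling factor
#   scaling factor = n + 2 * sum_{k=1}^{n-1} (n - k) * rho^k
from math import sqrt
from scipy.stats import norm

def scaling_factor(n, rho):
    return n + 2 * sum((n - k) * rho**k for k in range(1, n))

def n_day_var(daily_sigma, n, rho, confidence):
    return daily_sigma * sqrt(scaling_factor(n, rho)) * norm.ppf(confidence)

for rho in (0.0, 0.30, -0.30):
    print(rho, round(n_day_var(0.01, 10, rho, 0.99), 4))   # -> 0.0736, 0.0969, 0.0557
```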

Hopefully you will notice this can impact the way we talk about per annum volatility, which is a common dimension. Say the measured daily volatility is 1.0%. Typically, this scales to 1.0% * sqrt(250/1) = 15.81% per annum, and perhaps a 99.0% VaR of 15.81% * 2.33 = 36.78%; parametric VaR is just volatility scaled by the deviate. But (as covered in Linda Allen's assigned reading in FRM P1.T4) this 36.78% should maybe come with a warning label: if returns are positively autocorrelated, this understates the one-year VaR. If returns mean-revert, this overstates the one-year VaR. I hope that's helpful!

0427-autocorrelated-returns4.png
 

rajeshtr

Member
Hi David,
I follow Linda Allen's autocorrelation in returns:
(1) Mean reversion: negative correlation in returns: we end up overestimating VaR by using the square root rule.
(2) Autocorrelation in returns: we end up underestimating VaR.


Can you clarify w.r.t. "time-varying volatility"?

Where I am confused is: in "Messages from Academic Literature on Risk Management", topic "Is square-root-of-time scaling a good idea?", they mention:
(3) If risk factors follow a GARCH process (time-varying volatility) --> we overestimate VaR by using the square root rule.
(4) If the underlying risk factors follow a jump-diffusion process --> we underestimate VaR.

w.r.t. (3) time-varying vol: it can be higher or lower --> how can it always overestimate VaR?
 

David Harper CFA FRM
Subscriber
Hi @rajeshtr Great points! My definition of time-varying volatility is periodic volatility that is not constant. As such, time-varying volatility is by definition a violation of the i.i.d. requirement above (required to scale volatility per the SRR). It might be better to say that time-varying volatility is conditional volatility that is not constant. I say that because (e.g.) the (Wiener GBM) BSM does assume a constant volatility, but GARCH(1,1) is interesting: the unconditional (aka, long run) variance is constant but the conditional variance is time-varying. As in, we do not expect tomorrow's variance to equal today's variance (even as tomorrow's has some weight pulling it toward the unconditional variance). So, the safest thing I can say about time-varying volatility (which is true of GARCH) is that it's a violation of i.i.d. but, by itself, it does not imply a persistent directional over- or under-estimation bias in n-period volatility compared to the SRR.

I see what you are referring to in Messages ("The accuracy of square-root of time scaling depends on the statistical properties of the data generating process of the risk factors. Diebold et al (1998) show that, if risk factors follow a GARCH(1,1) process, scaling by the square-root of time over-estimates long horizon volatility and consequently VaR is over-estimated."). I just took a brief look at the cited paper (here is a copy: http://trtl.bz/2qtFrxu) and I do not exactly understand the BIS claim here either, sorry. I see (i) how that paper makes a statement that seems to agree with L. Allen that scaling can go in either direction ["Moreover, we cannot claim that the scaled volatility estimates are 'conservative' in any sense; rather, they are sometimes too high and sometimes too low"] and (ii) that the specific assertion about a persistent over-estimation is due strictly to a comparison, not with data or something else, but with the Drost-Nijman formula, with which I am not familiar.

Without speaking to the Message paper, then, what I can tell you with confidence is:
  • As above, the basic requirement to scale per SRR is i.i.d.
  • Time-varying volatility is conditional variance that is not constant over time, which is a violation of the "identical" in i.i.d.; so the violation is enough to "disqualify" over-reliance on the SRR
  • There are at least two definitions of mean reversion (I have a paper somewhere in my library that gives six or seven definitions, no kidding!). My XLS above, with the input for autocorrelated returns, models mean reversion in returns by accepting a negative input; a negative here --> negative return autocorrelation (aka, negative serial correlation). This is your point (1) above and the XLS model illustrates how the SRR will overestimate n-period volatility when the returns are mean-reverting (i.e., exhibit negative autocorrelation). Surprisingly, this is not a feature of GARCH(1,1), which assumes zero correlations between returns.
  • When people refer to mean reversion in GARCH(1,1), they are referring not to reversion in returns but rather to mean reversion in the variance (or volatility, of course). This is the omega term, and per L. Allen, it is unclear whether the SRR over- or under-states the n-period volatility. For this mean reversion, it depends. If our current variance is lower than the long-run (unconditional) variance, then the SRR will underestimate because GARCH is going to produce conditional variances, on each forward day, that are increasing toward the long-run variance, which is a violation of "identical" in a specific direction (see the sketch below). Basically, GARCH violates i.i.d. not due to "independence" but rather due to "identical" because of the long-run factor. I hope that helps!
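Here is a hedged Python sketch (not from the XLS) of that last bullet: GARCH(1,1) forward conditional variances revert toward the long-run variance, so whether the SRR over- or under-states the n-day volatility depends on where today's variance sits relative to the long-run level. The parameters below are assumptions for illustration.

```python
# Sketch: GARCH(1,1) n-day volatility vs. the square root rule (SRR).
from math import sqrt

def garch_n_day_vol(current_var, long_run_var, alpha, beta, n):
    persistence = alpha + beta
    # k-day-ahead forecast: E[var_{t+k}] = LRV + persistence^k * (var_t - LRV)
    forward_vars = [long_run_var + persistence**k * (current_var - long_run_var)
                    for k in range(1, n + 1)]
    return sqrt(sum(forward_vars))   # returns assumed serially uncorrelated

current_var, long_run_var = 0.008**2, 0.010**2   # today's vol 0.8% vs. long-run vol 1.0% (assumed)
print("GARCH 10-day vol:", garch_n_day_vol(current_var, long_run_var, alpha=0.05, beta=0.90, n=10))
print("SRR   10-day vol:", sqrt(10) * 0.008)     # SRR understates when current vol < long-run vol
```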
 

David Harper CFA FRM
Subscriber
FRM P1 > T1 > Intro to VaR (xls): Simple historical simulation (HS)

The third sheet in our new P1.T1.Intro-VaR XLS is a simple historical simulation value at risk (HS VaR). My favorite definition of VaR is Dowd's super-short "VaR is a quantile." I like that because, while VaR methods and management systems can be complicated, as a measure VaR is just a feature of a distribution (like the mean or standard deviation). The hard part is specifying a reasonably accurate future distribution. Once we have the distribution specified, VaR just looks to the loss tail and asks, where is the worst expected loss at 95%, or, put another way, where does the 5.0% loss tail begin? Our distributions can be either parametric (e.g., normal) or empirical (aka, non-parametric). Simple HS is the first and most obvious non-parametric candidate (although there are many variations and refinements...).

This sheet accepts only two input assumptions (in yellow): volatility of the asset (e.g., 1.0% daily) and the VaR confidence level (e.g., 95%). It then simulates twenty days (n = 20) of returns using = NORMINV(RAND(), 0, σ), which an FRM candidate does want to understand; this is the inverse transformation of a random uniform variable, RAND(), into its corresponding normal quantile (via the inverse of the normal CDF). So the P/L is random but it's a random normal. But I did not need to assume normal; I could have inverse transformed into any distribution. This is critical FRM knowledge. The general relationship is p = F(q), where F(.) is the cumulative distribution function (CDF); i.e., p is the probability that the variable falls at or below the quantile q. If the distribution is normal, then by convention we use z for the quantile and N(.) for the normal CDF, so the normal format of this is p = N(z); e.g., 5% = N(-1.645) = NORM.S.DIST(-1.645, TRUE). An inverse transformation is N^-1(p) = z or F^-1(p) = q; e.g., N^-1(1.0%) = -2.33 = NORM.S.INV(0.01). In this way, the simulation is translating ("inverse transforming") a random uniform probability on [0, 1] into its associated normal quantile; i.e., a random (p) into its associated z, but scaled by the volatility assumption.
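Here is a tiny Python sketch of the same inverse transform, using scipy's norm.ppf() as the analog of Excel's NORMINV()/NORM.S.INV(); the seed and sample size are arbitrary.

```python
# Sketch: inverse transform of uniform random probabilities into normal returns,
# i.e., the Python analog of = NORMINV(RAND(), 0, sigma).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
sigma = 0.01                                 # 1.0% daily volatility, as in the sheet
p = rng.uniform(size=5)                      # random uniform probabilities, like RAND()
returns = norm.ppf(p, loc=0.0, scale=sigma)  # like NORMINV(RAND(), 0, sigma)

print(norm.cdf(-1.645))                      # ~0.05; i.e., 5% = N(-1.645)
print(norm.ppf(0.01))                        # ~-2.33; i.e., NORM.S.INV(0.01)
print(returns)
```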

Then the sheet sorts the returns (from the worst at top, to the best at bottom). The 95% HS VaR is represented by the horizontal lines. This is a discrete distribution, so there is more than one valid solution. Dowd's is the easiest here: the 95% VaR is the second-worst return. This to me is elegant because the worst loss is located exactly in the 5.0% tail; 5%*20 = 1.0. Linda Allen's 95% VaR is the average of the worst and second-worst (respond if you want further clarification on the logic). I do not show Jorion's approach, which would actually be to use the worst loss; we have tried to discourage this approach. Thanks!
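Here is a compact Python version of this simple HS VaR, with the same two conventions (Dowd's second-worst versus Linda Allen's average of the worst two); the seed is arbitrary, so the simulated returns will not match the screenshot.

```python
# Sketch: simple historical simulation (HS) VaR on n = 20 simulated daily returns.
import numpy as np

rng = np.random.default_rng(7)
sigma, n = 0.01, 20
returns = rng.normal(0.0, sigma, n)
sorted_losses = np.sort(-returns)[::-1]      # losses sorted worst-first

dowd_95_var = sorted_losses[1]               # second-worst: the single worst sits exactly in the 5% tail
allen_95_var = (sorted_losses[0] + sorted_losses[1]) / 2.0   # average of worst and second-worst
print(dowd_95_var, allen_95_var)
```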

0606-VaR-HS.png
 

David Harper CFA FRM
Subscriber
FRM P1 > T1 > Intro to VaR (xls): WACC and certainty-equivalent (CE) cash flows (Appendix only)

The final sheet in the Intro to VaR workbook is in the Appendix because GARP discontinued the reading. It's not required for current FRM candidates, but it's actually a very efficient introduction to some basic governance concepts. (And I think its Risk Policy Committee tool is handy; I don't see a lot of good material on the nuts and bolts of board committee roles and responsibilities.)

What I retained is the certainty-equivalent (CE) approach to discounting cash flows, in case you haven't seen it before. Here is the previously assigned FRM (P1.T1) publication, Risk Taking: A Corporate Governance Perspective. My XLS mimics the publication's example, which concerns the calculation of the weighted average cost of capital (WACC) for Grana Montero, a Peruvian company that has three businesses: software and software consulting, construction, and oil extraction.

I won't detail the procedure for estimating WACC (if you'd like to see WACC in action, many years ago I wrote a tutorial for Investopedia on economic value added, see http://www.investopedia.com/university/eva, as in my consulting life I helped introduce EVA into companies).

I did want to just highlight the certainty-equivalent cash flows; see the right-hand side of the snapshot below. The key difference is that instead of expected future cash flows, the certainty-equivalent cash flows are estimated. Whereas expected cash flows are risky, their certainty-equivalents are the cash flows that we would accept instead if they were guaranteed. For example, this firm projects $453.0 million in year 5, but of course that projection assumes a certain level of risk; in theory, investors would be indifferent between receiving this risky $453.0 versus a guaranteed $356.0. If that comparison is true, then the CE cash flows can be discounted at the risk-free rate.
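For intuition, here is a hedged sketch of the discounting identity; the discount rates below are hypothetical (they are not the publication's figures), but the point is that the risky expected cash flow discounted at a risky rate should give roughly the same present value as its certainty-equivalent discounted at the risk-free rate.

```python
# Sketch: certainty-equivalent (CE) discounting vs. traditional risky DCF.
# The WACC and risk-free rate below are assumed for illustration only.
risky_cf, ce_cf, year = 453.0, 356.0, 5      # year-5 figures cited above ($ millions)
wacc, risk_free = 0.091, 0.040               # hypothetical discount rates

pv_risky = risky_cf / (1 + wacc) ** year     # traditional DCF: risk lives in the denominator
pv_ce = ce_cf / (1 + risk_free) ** year      # CE approach: risk lives in the numerator
print(round(pv_risky, 1), round(pv_ce, 1))   # similar PVs when the CE is consistent with the risk premium
```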

What's the point of this? It separates the time value of money component (reflected in the denominator's discounting at the risk-free rate) from the risk component (which is then reflected entirely in the numerator). I think this is a very elegant complement to the traditional DCF. The risk of the firm or project can be "re-phrased" into its (minimum) guaranteed cash flows. From my perspective, as is the case with models much of the time, the purpose is not the single, final point estimate but the thinking and discussion that is provoked by deciding on certain assumptions! In this case, the question "what are our CE cash flows?" seems like an elegant way to tease out the risk of a project.

0608-ifc-ce.png
 

Amierul

New Member
Hi @David Harper CFA FRM ! Good day. I have gone through your direction on the T1.1-IntroVar. I found myself rather lost. Can you please be more specific about what to expect from the Excel sheets, i.e., from changing the values? It would be very helpful for me.
 

David Harper CFA FRM
Subscriber
Hi @Amierul The learning spreadsheets are self-service (I wouldn't know where to begin!). I'm always trying to improve them, but they implement all of the quantitative ideas. If you have a specific question, I'm always here to help you utilize them. But I don't know how to answer, "Can you please be more specific about what to expect from the Excel sheets, i.e., from changing the values?" Obviously you need Excel proficiency. Beyond that, their objective is concrete implementation of the quantitative concepts/applications. Thanks,
 

gsarm1987

FRM Content Developer
Staff member
Subscriber
Hello David, sorry if it's not the right spot; I just wanted to let you know that one of the spreadsheets isn't there for download. Talking about: https://learn.bionicturtle.com/cour...-7/topic/learning-spreadsheet-dowd-chapter-3/
 