FRM Fun 10 (Mon): JPMorgan's VaR Model

Suzanne Evans

FRM Fun 10. JPMorgan's CIO VaR

JPMorgan's first quarter filing (Q1 10-Q) originally reported a value at risk (VaR) of $67 million for its CIO desk (recall the CIO desk put on the synthetic credit derivative trade that's lost the firm ~$5.9 billion) under a new version of their VaR model. The same VaR was later restated to $129 million, when JPMorgan reinstated their old model. Colin Lokey argues that JPMorgan switched from the old to the new VaR model precisely in order to justify riskier position(s), see http://seekingalpha.com/article/713571-jpmorgan-likely-switched-to-new-model-to-pave-way-for-risky-trade.

According to the published task force report (see https://www.dropbox.com/s/l9plg31ga9zujkf/JP-Morgan-CIO_Taskforce_FINAL_0713.pdf), in early May: "VaR model analyzed and implementation errors detected; previous model reinstated."
To recap (as the above links are NOT needed to answer the question): the old VaR model produced a $129 million VaR, but JPMorgan switched to a new VaR model that produced a $67 million VaR, under otherwise identical portfolio and market circumstances.

Question:

Here is the JPM Q2 10Q: http://www.sec.gov/Archives/edgar/data/19617/000001961712000213/jpm-2012033110q.htm
  1. Among the three (3) major VaR approaches, which does JPMorgan utilize?
  2. If the agenda was indeed to lower the VaR by manipulating the risk model, how might "implementation" be exploited to lower the VaR under this approach?
 
JPMorgan was using the historical simulation method for its VaR calculation. In the meantime, due to changes in the synthetic credit portfolio, the risk of the positions increased manyfold. The calculation requires estimates of certain parameters, such as volatility, which JPM could manipulate to decrease the estimated VaR, as was reflected in the new model the firm implemented.
The old method might incorporate the volatility and other estimates from the current market situation when establishing the VaR, which the new method did not take into account; so when the old model was reinstated, the VaR actually increased from $67 million to $129 million. So a volatility-weighted historical simulation could have been the model originally employed by the firm, which might then have been modified to a simple historical simulation to lower the VaR. But as the risk of the synthetic credit instruments increased, the firm may have restored the VaR by reinstating a different implementation of the model with different parameters.
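To make that distinction concrete, here is a minimal Python sketch, not JPM's actual model: the P&L series and every parameter are made up. It contrasts simple historical simulation with a volatility-weighted (Hull-White style) variant. When recent volatility is rising, the vol-weighted version scales up old, quiet observations and reports a higher VaR, so switching to the simple version would lower the number without any change in positions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily P&L for a desk: a quiet first half-year followed by a
# more volatile second half (purely illustrative numbers, in dollars).
quiet = rng.normal(0, 0.5e6, 126)
stressed = rng.normal(0, 2.0e6, 126)
pnl = np.concatenate([quiet, stressed])   # ~252 trading days

def hs_var(pnl, alpha=0.95):
    """Simple historical simulation: VaR is the empirical loss quantile of raw P&L."""
    return -np.quantile(pnl, 1 - alpha)

def vol_weighted_hs_var(pnl, alpha=0.95, lam=0.94):
    """Volatility-weighted (Hull-White style) HS: rescale each historical P&L by
    (today's vol / vol prevailing on that day), then take the loss quantile."""
    var_t = np.empty_like(pnl)
    var_t[0] = pnl[:20].var()                       # seed the EWMA variance
    for t in range(1, len(pnl)):
        var_t[t] = lam * var_t[t - 1] + (1 - lam) * pnl[t - 1] ** 2
    sigma = np.sqrt(var_t)
    scaled = pnl * sigma[-1] / sigma                # bring old days up to today's vol
    return -np.quantile(scaled, 1 - alpha)

print(f"simple HS 95% VaR:       ${hs_var(pnl) / 1e6:5.2f}mm")
print(f"vol-weighted HS 95% VaR: ${vol_weighted_hs_var(pnl) / 1e6:5.2f}mm")
```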
 
Would add my perspective. JP Morgan based its VaR calculations for its market exposures on the historical simulation approach. The new model probably didn't take into account the apparent change (increase) in the correlations among the risk factors due to the large, outsized trades that JPM was making in the synthetic CDS market. The new model would have masked these developments and thus would have produced lower VaR numbers. Also, under normal circumstances, this would have necessitated a recalibration of the model parameters (due to the change in the market variables); however, it appears that JPM management deliberately avoided this in order to take on riskier positions.
 
Come on guys, don't be naïve:
First of all, J.P. Morgan does not use "a" VaR number or method - they utilize nonparametric, parametric, and MC-based VaR.

1) For historical VaR or some hybrid thereof, they use a one-day projection window, but the estimation window is unknown.
2) Let's not forget who came up with RiskMetrics.
3) Some of the positions can only be modeled by MC.
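For readers who want to see how those three broad families differ mechanically, here is a small Python sketch run on one simulated return series. Nothing here reflects JPM's internal implementation (which is not public); the position size, distributions, and EWMA parameter are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Hypothetical one-year history of daily portfolio returns (fat-ish tails).
returns = rng.standard_t(df=5, size=252) * 0.01
position = 100e6   # $100mm notional, illustrative only
alpha = 0.95

# 1) Nonparametric: historical simulation -- empirical loss quantile.
hs_var = -np.quantile(returns, 1 - alpha) * position

# 2) Parametric (RiskMetrics-style delta-normal): VaR = z * sigma * position,
#    with sigma from an EWMA (lambda = 0.94) of squared returns.
lam = 0.94
ewma_var = returns[0] ** 2
for r in returns[1:]:
    ewma_var = lam * ewma_var + (1 - lam) * r ** 2
param_var = norm.ppf(alpha) * np.sqrt(ewma_var) * position

# 3) Monte Carlo: draw from an assumed Student-t scaled to the sample
#    volatility, then take the loss quantile of the simulated returns.
sims = rng.standard_t(df=5, size=100_000) * returns.std()
mc_var = -np.quantile(sims, 1 - alpha) * position

for name, v in [("historical", hs_var), ("parametric", param_var), ("Monte Carlo", mc_var)]:
    print(f"{name:12s} 95% 1-day VaR: ${v / 1e6:5.1f}mm")
```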


The numbers are run multiple times a day, especially as some markets close and other markets open. They don't just push a button.
In all cases, the estimation window and how noise is filtered out were changed. Essentially, the model was calibrated so as not to give false positives; put differently, the model was overfitted.
You can manipulate this by changing the index you use as a vol measure, the estimation period, the risk factors included, and the dependence among the risk factors, i.e., how the correlations offset positions.
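As a rough illustration of just one of those levers, the estimation window, the following Python sketch (with entirely made-up P&L) shows how simply shortening the lookback so that it drops an older, volatile period lowers a historical-simulation VaR with no change whatsoever in the positions.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical two-year P&L history: the older year volatile, the recent year quiet.
older = rng.normal(0, 3e6, 252)    # volatile year
recent = rng.normal(0, 1e6, 252)   # quiet year
pnl = np.concatenate([older, recent])

def hs_var(window, alpha=0.95):
    """95% historical-simulation VaR using only the last `window` days."""
    return -np.quantile(pnl[-window:], 1 - alpha)

# Shrinking the estimation window drops the volatile year and lowers VaR.
for window in (504, 252, 126):
    print(f"window = {window:3d} days -> 95% VaR = ${hs_var(window) / 1e6:4.1f}mm")
```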
 
It is highly unlikely that the firm, being the one that helped pioneer risk management as we know it today, relied upon a single VaR model; it probably used a hybrid approach, so I agree with Alek on that point. Also, as rightly pointed out, some of the complex exposures with non-linear and path-dependent payoffs can best be modeled with MC.

Furthermore, manipulation of the models can happen on multiple levels: wrong data feeds into the model to start with, tweaking the model parameters, ignoring estimation errors and downward biases (and what I stated is only one of the possibilities in the case at hand). There is no denying that very few people know what actually happened in this case...

Regarding the estimation window and VaR method, I am just referring to JPM's Q1 report here:

The Firm calculates VaR to estimate possible losses for its current positions using historical simulation, which measures risk across instruments and portfolios in a consistent, comparable way. The Firm's VaR calculation is highly granular and incorporates numerous risk factors, which are selected based on the risk profile of each portfolio. The simulation is based on data for the previous 12 months.
 
Thanks ShaktiRathore, aadityafrm and Aleks! I awarded each of you a star, since each reply adds value. Here is the section of the Q1 where JPM discloses VaR; I hope it's interesting for candidates to at least see VaR outside of a textbook and mentioned in an SEC filing (emphasis mine):

Value-at-risk:
JPMorgan Chase utilizes VaR, a statistical risk measure, to estimate the potential loss from adverse market moves. Each business day, as part of its risk management activities, the Firm undertakes a comprehensive VaR calculation that includes the majority of its material market risks. VaR provides a consistent cross-business measure of risk profiles and levels of diversification and is used for comparing risks across businesses and monitoring limits. These VaR results are reported to senior management and regulators, and they are utilized in regulatory capital calculations. [i.e., Basel IMA approach]
The Firm calculates VaR to estimate possible losses for its current positions using historical simulation, which measures risk across instruments and portfolios in a consistent, comparable way. The Firm's VaR calculation is highly granular and incorporates numerous risk factors, which are selected based on the risk profile of each portfolio. The simulation is based on data for the previous 12 months. This approach assumes that historical changes in market values are representative of the distribution of potential outcomes in the immediate future. VaR is calculated using a one day time horizon and an expected tail-loss methodology, and approximates a 95% confidence level. This means that, assuming current changes in market values are consistent with the historical changes used in the simulation, the Firm would expect to incur losses greater than that predicted by VaR estimates five times in every 100 trading days, or about 12 to 13 times a year. However, differences between current and historical market price volatility may result in fewer or greater VaR exceptions than the number indicated by the historical simulation. In addition, based on their reliance on available historical data, limited time horizons, and other factors, VaR measures are inherently limited in their ability to measure certain risks and to predict losses, particularly those associated with market illiquidity and sudden or severe shifts in market conditions. As a result, the Firm considers other metrics in addition to VaR to monitor and manage its market risk positions.
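The disclosure pins down the key settings: historical simulation, a 12-month window, a one-day horizon, roughly a 95% confidence level, and an "expected tail-loss" methodology. A generic Python sketch of those mechanics (on simulated P&L, not JPM's data or model) might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)
pnl = rng.normal(0, 0.5e6, 252)          # hypothetical 12 months of daily P&L

alpha = 0.95
cutoff = np.quantile(pnl, 1 - alpha)     # 5th percentile of the P&L distribution
hs_var = -cutoff                         # quantile-style 95% VaR
etl = -pnl[pnl <= cutoff].mean()         # "expected tail loss": mean loss beyond the cutoff

print(f"95% 1-day VaR (quantile):     ${hs_var / 1e6:5.2f}mm")
print(f"95% 1-day expected tail loss: ${etl / 1e6:5.2f}mm")

# At 95%, losses should exceed VaR on ~5% of days: about 12-13 times in a
# 252-day trading year, exactly as the filing notes.
print(f"expected exceptions per year: {0.05 * 252:.1f}")
```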

I can't improve on the weaknesses cited above, especially as the Task Force report in my opinion is not specific; i.e., I could find nothing that explains the essential difference between their old and new models. That said, alek's overfit hypothesis looks (and smells) to me exactly like what happened; an easy way to achieve that, in this case with synthetic derivatives, would be to re-map (a little tweak) the risk factors. Also, historical simulation is a broad umbrella of approaches spanning from simple HS to bootstrapping to filtered/weighted simulations. I can't tell from the disclosure which specific HS JPM uses.

But in case it's helpful just to round out this thread on the topic of drawbacks with HS, here is Kevin Dowd on the disadvantages of historical simulation VaR: "Perhaps their [non parametric approaches] biggest potential weakness is that their results are very (and in most cases, completely) dependent on the historical data set. There are various other related problems:
  • If our data period was unusually quiet, non-parametric methods will often produce VaR or ES estimates that are too low for the risks we are actually facing; and if our data period was unusually volatile, they will often produce VaR or ES estimates that are too high.
  • Non-parametric approaches can have difficulty handling shifts that take place during our sample period. For example, if there is a permanent change in exchange rate risk, it will usually take time for the HS VaR or ES estimates to reflect the new exchange rate risk. Similarly, such approaches are sometimes slow to reflect major events, such as the increases in risk associated with sudden market turbulence.
  • If our data set incorporates extreme losses that are unlikely to recur, these losses can dominate non-parametric risk estimates even though we don’t expect them to recur.
  • Most (if not all) non-parametric methods are subject (to a greater or lesser extent) to the phenomenon of ghost or shadow effects.
  • In general, non-parametric estimates of VaR or ES make no allowance for plausible events that might occur, but did not actually occur, in our sample period.
  • Non-parametric estimates of VaR and ES are to a greater or lesser extent constrained by the largest loss in our historical data set. In the simpler versions of HS, we cannot extrapolate from the largest historical loss to anything larger that might conceivably occur in the future. More sophisticated versions of HS can relax this constraint, but even so, the fact remains that non-parametric estimates of VaR or ES are still constrained by the largest loss in a way that parametric estimates are not. This means that such methods are not well suited to handling extremes, particularly with small- or medium-sized samples."
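Several of these weaknesses stem from weighting every day in the window equally. One standard refinement, age-weighted (Boudoukh-Richardson-Whitelaw style) historical simulation, down-weights older observations so recent turbulence shows up in VaR faster. A minimal Python sketch on made-up P&L, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical 252-day P&L: a quiet year with a burst of turbulence in the last month.
pnl = np.concatenate([rng.normal(0, 1e6, 232), rng.normal(0, 4e6, 20)])

def age_weighted_var(pnl, alpha=0.95, lam=0.98):
    """Age-weighted HS: recent observations get geometrically larger probability
    weights, so fresh turbulence is reflected in VaR sooner."""
    n = len(pnl)
    w = lam ** np.arange(n - 1, -1, -1)      # oldest day gets lam^(n-1), newest gets 1
    w /= w.sum()
    order = np.argsort(pnl)                  # sort P&L, worst losses first
    cum_w = np.cumsum(w[order])
    idx = np.searchsorted(cum_w, 1 - alpha)  # weighted (1 - alpha) quantile
    return -pnl[order][idx]

equal = -np.quantile(pnl, 0.05)
print(f"equal-weight HS 95% VaR: ${equal / 1e6:4.1f}mm")
print(f"age-weighted HS 95% VaR: ${age_weighted_var(pnl) / 1e6:4.1f}mm")
```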
 
Recently I compared some risk methods, such as normal linear, historical simulation, GARCH, and CVaR, at both the 95% and 99% confidence levels with different rolling windows (100 to 1000 days) for a set of indices. Despite all the drawbacks of the HS method, backtesting showed that this model performed better at both confidence levels and for all rolling windows!
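For anyone who wants to reproduce that kind of comparison, the core of a VaR backtest is just rolling the model through the sample and counting exception days. Here is a minimal sketch for the HS leg only; the returns are simulated and the window and confidence level are illustrative, not the settings used in the study above.

```python
import numpy as np

rng = np.random.default_rng(11)
returns = rng.standard_t(df=4, size=2000) * 0.01   # hypothetical daily return series

def backtest_hs(returns, window=250, alpha=0.99):
    """Roll a historical-simulation VaR through the sample and count the days
    on which the realized loss exceeded the prior day's VaR estimate."""
    exceptions, days = 0, 0
    for t in range(window, len(returns)):
        var_t = -np.quantile(returns[t - window:t], 1 - alpha)
        exceptions += returns[t] < -var_t
        days += 1
    return exceptions, days

exc, days = backtest_hs(returns)
print(f"observed exceptions: {exc} over {days} days "
      f"(expected ~{0.01 * days:.0f} at 99%)")
```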
 
Historical simulation is a broad term.
One type of [filtered] historical simulation, for example, combines 'historical VaR' with volatility updating and filtering through some GARCH process (AGARCH/FIGARCH/EGARCH), as well as Monte Carlo for sampling distributions and projecting to the horizon, e.g., re-introducing the noise of the filtered process.
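As a concrete, heavily simplified sketch of that filtered-HS idea: filter the returns with a GARCH(1,1) whose parameters are fixed here for illustration rather than fitted by maximum likelihood, bootstrap the standardized residuals, and rescale them by the one-day-ahead volatility forecast. All numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(5)
returns = rng.standard_t(df=5, size=500) * 0.01    # hypothetical daily returns

# Filter with a GARCH(1,1) using fixed, illustrative parameters
# (a real filtered HS would estimate these from the data).
omega, a, b = 1e-6, 0.08, 0.90
sigma2 = np.empty(len(returns))
sigma2[0] = returns.var()
for t in range(1, len(returns)):
    sigma2[t] = omega + a * returns[t - 1] ** 2 + b * sigma2[t - 1]
sigma = np.sqrt(sigma2)

# Standardized residuals keep the "shape" of history with the volatility stripped out.
z = returns / sigma

# One-day-ahead volatility forecast, then bootstrap the residuals and
# re-scale them by that forecast (the Monte Carlo step of filtered HS).
sigma_next = np.sqrt(omega + a * returns[-1] ** 2 + b * sigma2[-1])
sim = rng.choice(z, size=100_000, replace=True) * sigma_next

fhs_var = -np.quantile(sim, 0.05)
print(f"filtered-HS 95% 1-day VaR (in return terms): {fhs_var:.4%}")
```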

Do the same exercise as above, but this time try benchmarking against multifractal forecasting.
 