DB LDA Approach

Hi David,

The answer to the question below is (B). Could you please shed some light on the explanation?

What is the main reason that DB used a piece-wise severity distribution instead of a single parametric distribution to model losses within BL-ET cells?

(A) Insufficient Cells
(B) Difference between body & tail
(C) Mixing internal & external data
(D) Complexity

Thanks
Rahul
 

David Harper CFA FRM

Hi Rahul,

I'm glad you posted this: this is a good example of a question where you might be able to guess.
If you need to guess, I recommend guessing with a big FRM theme in mind.
Of all those answers, the one word that might stick out is "tail": many times in the readings we have seen how difficult it is to understand (i.e., parameterize into something quantifiable) the extreme tail. And the tail contains catastrophes (low frequency, high severity), so if we are guessing, going for the answer with "tail" we could do worse :)

In the case of the DB LDA, their tail was operational losses > $50 MM. In all of their own history, they did not have enough data to inform the tail (> $50 million losses) with an empirical distribution; sort of like trying to generate a photo with only 5 pixels. The piece-wise just means they combined (or grafted on) another distribution: they used empirical data for the severity distribution up to 1 MM (internal data to 10K, then internal + external data to 1 MM; notice this is a data-shortage problem too: they have enough internal data to empirically characterize losses up to 10K, but then they need to start supplementing with external data up to 1 MM, and beyond that, historical data is insufficient). Then, when they get to the tail, there is no more data, so they must "graft on" a parametric (EVT POT) distribution.
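To make the splice concrete, here is a minimal Python sketch of what "grafting" a parametric (GPD/POT) tail onto an empirical body can look like in a Monte Carlo setting. Everything in it is invented for illustration (the pretend loss history, the EUR 1 MM splice point, and the GPD shape/scale parameters); it is not DB's actual calibration, just the mechanics of a piece-wise severity distribution:

```python
import numpy as np
from scipy.stats import genpareto

# Piece-wise ("spliced") severity: empirical body below a threshold u,
# EVT/POT (generalized Pareto) tail above it. All numbers below are
# hypothetical -- NOT Deutsche Bank's actual data or calibration.

rng = np.random.default_rng(42)

# Pretend internal + external loss history for the body, in EUR
body_losses = rng.lognormal(mean=10.0, sigma=1.5, size=5_000)

u = 1_000_000.0                    # splice point (e.g., EUR 1 MM)
p_tail = np.mean(body_losses > u)  # empirical P(loss > u)

# GPD tail for excesses over u (assumed shape/scale, since real data
# beyond the threshold is scarce -- which is the whole point)
xi, beta = 0.6, 2_000_000.0

def sample_severity(n):
    """Draw n losses from the spliced severity distribution."""
    in_tail = rng.random(n) < p_tail
    draws = np.empty(n)
    # Body: bootstrap from the empirical losses at or below u
    body_pool = body_losses[body_losses <= u]
    draws[~in_tail] = rng.choice(body_pool, size=int((~in_tail).sum()))
    # Tail: threshold u plus a GPD-distributed excess
    draws[in_tail] = u + genpareto.rvs(
        xi, scale=beta, size=int(in_tail.sum()), random_state=rng)
    return draws

sim = sample_severity(100_000)
print(f"simulated P(loss > u) = {np.mean(sim > u):.4f} (empirical {p_tail:.4f})")
```

The design point is the single threshold u: below it we trust the empirical data, above it we trust the fitted tail, and the exceedance probability p_tail glues the two pieces into one severity distribution.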

This is thematic: we typically have lots of useful data for the body (e.g., what's the mean return? what's the EL?), but risk cares about the tail, where our dilemma, by definition ("once in a lifetime"), is that we lack data.

David
 
Hi David,
Thanks for the concrete example and answer. So the answer is (B), right?

I also posted a question regarding limitations of, or areas of improvement for, the LDA. Could you please jot down your views?

Here is another one on the LDA. According to me, the answer is (A), because losses are not inflation-adjusted, and neither are they scaled up.
Could you please clarify? One of the sources has the answer as (B), and I am a bit confused because (B) looks right as well: insurance recoveries do get subtracted.

The "Relevant Loss Data Process" used by Deutsche Bank contains the following principle: "As long as data is considered relevant according to defined criteria for business activity (firm type, product and region), it will be included in the capital calculations, no matter when the loss occurred." In regard to loss data sources at this stage, all of the following ADJUSTMENTS are made to loss data except:

(A) Inflation adjustment
(B) Insurance recoveries subtracted
(C) External data mapped to internal structure
(D) FX conversion into EUR

David,
Here is another one, just to make sure of the answer.
In the DB LDA approach, all data points are regarded as a sample from an underlying distribution and therefore receive the same weight or probability in the statistical analysis. However, there are three EXCEPTIONS. All of the following are EXCEPTIONS except for:

(A) Large bank losses (scale)
(B) Split losses
(C) Old losses
(D) External losses

I read that the losses are not scaled up or down. The answer given for this is (B); I am not sure if it is the right one.


Thanks for your help & support

Rahul
 