Monte Carlo VaR

rodnymullen

New Member
Hello everyone!

I am writing a master's thesis on VaR. The main idea is to compare various methods of VaR estimation. For that purpose I used a hypothetical portfolio made of 2 stocks and 1 call option, whose value is calculated as max(St - K, 0). So far I have used Historical Simulation, EWMA, ExpMA, Monte Carlo (normally distributed), Monte Carlo (t-distributed), and Monte Carlo (lognormally distributed), all at a 95% confidence level. The theory says that Monte Carlo VaR should outperform the other methods, especially when there are non-linear instruments like call options in the portfolio. But in fact I got the opposite result: the MC (normally distributed) and MC (lognormally distributed) VaRs demonstrated the worst performance. Only MC (t-distributed with 15 degrees of freedom) was better than the others.
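In case it helps to pin down what "univariate MC VaR" means here, below is a minimal sketch in Python of the approach described above: fit a distribution to the historical portfolio returns, simulate from it, and read the 95% VaR off the simulated return distribution. The function name, parameters, and seed are illustrative, not the thesis code.

```python
import numpy as np

def mc_var_univariate(portfolio_returns, n_sims=100_000, alpha=0.95, dist="normal", df=15):
    """Univariate Monte Carlo VaR: fit a distribution to historical portfolio
    returns, simulate, and take the (1 - alpha) quantile of simulated returns."""
    mu, sigma = portfolio_returns.mean(), portfolio_returns.std(ddof=1)
    rng = np.random.default_rng(42)
    if dist == "normal":
        sims = rng.normal(mu, sigma, n_sims)
    elif dist == "t":
        # Student-t draws rescaled so their standard deviation matches sigma
        sims = mu + sigma * rng.standard_t(df, n_sims) * np.sqrt((df - 2) / df)
    elif dist == "lognormal":
        # fit the log of gross returns, simulate, convert back to simple returns
        m, s = np.log1p(portfolio_returns).mean(), np.log1p(portfolio_returns).std(ddof=1)
        sims = np.expm1(rng.normal(m, s, n_sims))
    else:
        raise ValueError(dist)
    return -np.percentile(sims, (1 - alpha) * 100)   # VaR reported as a positive number
```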

That made me think I did not implement MC VaR correctly. Maybe I should have used multivariate MC VaR for my portfolio instead of univariate MC VaR, i.e. generated correlated random variables via Cholesky decomposition, etc. What I actually did was simply simulate from the final returns of the portfolio rather than from the returns of each asset separately.
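For comparison, here is a rough sketch of what a multivariate version could look like under the setup described above (2 stocks plus 1 call valued at its intrinsic value max(S - K, 0)): simulate correlated stock returns with a Cholesky factor of the sample covariance, revalue the whole portfolio in each scenario, and take the 5th percentile of the simulated P&L. All names and inputs are hypothetical.

```python
import numpy as np

def mc_var_multivariate(returns, prices_today, shares, K, n_call, n_sims=100_000, alpha=0.95):
    """Multivariate MC VaR for 2 stocks + 1 call on stock 1, with the call
    valued at its intrinsic value max(S - K, 0).
    returns      : (T, 2) array of historical daily returns of the two stocks
    prices_today : (2,) current stock prices
    shares       : (2,) number of shares held in each stock
    K, n_call    : strike and number of calls written on stock 1
    """
    mu = returns.mean(axis=0)
    cov = np.cov(returns, rowvar=False)
    L = np.linalg.cholesky(cov)                 # Cholesky factor imposes the correlation
    rng = np.random.default_rng(0)
    z = rng.standard_normal((n_sims, 2))
    sim_ret = mu + z @ L.T                      # correlated simulated returns
    sim_prices = prices_today * (1 + sim_ret)

    # full revaluation of the portfolio in each scenario
    v_now = shares @ prices_today + n_call * max(prices_today[0] - K, 0.0)
    v_sim = sim_prices @ shares + n_call * np.maximum(sim_prices[:, 0] - K, 0.0)
    pnl = v_sim - v_now
    return -np.percentile(pnl, (1 - alpha) * 100)   # 95% VaR in currency units
```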

Here is the file with my results.

https://www.dropbox.com/s/ku6bmpjf4m1rh4l/My VAR.xlsx?dl=0

I am hoping to receive your help. Thanks =)
 

ShaktiRathore

Well-Known Member
Subscriber
Hi,
MCS is not producing the best VaR because you have assumed a normal/lognormal distribution for the option, which is not appropriate; you can assume those for the stock returns/stock prices. If you make an adjustment for the option by taking the option return as delta * stock return, you should get better results. I mean, take the change in option value as delta * the change in the underlying (the method given in the FRM readings) to get a distribution that better approximates the normal; the change in option value, being non-linear, does not fit a normal distribution.
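If it helps, here is a small sketch of that delta adjustment: approximate the option P&L in each scenario as delta times the change in the underlying, so the option leg inherits the (near-normal) distribution of the stock instead of the kinked max(S - K, 0) payoff. The Black-Scholes delta used here is just one common choice; function names and parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

def bs_call_delta(S, K, r, sigma, T):
    """Black-Scholes delta of a European call (one common way to obtain the
    delta used in the delta approximation of the option leg)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.cdf(d1)

def option_pnl_delta_approx(dS, delta):
    """Delta approximation: change in option value ~= delta * change in the
    underlying, so the option P&L is a linear function of dS."""
    return delta * dS
```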
Historical VaR, for me, is the best method to estimate the VaR of non-linear instruments like options; it should give a very good estimate. The t-distribution assumes thicker tails and therefore produces a more conservative VaR, hence a better estimate, but it is not appropriate to use here.
Thanks
 

rodnymullen

New Member
Thank you for your reply!

My thesis supervisor suggested setting delta equal to 1. That means there should be no difference between the distribution of the option returns and the distribution of any other stock when the underlying stock is above the strike price.

This is how my option returns look over time:
[attached chart: option returns over time]
So basically, every fluctuation of the stock price below the strike price doesn't enter the portfolio returns.

However, now I am not sure whether that is correct. My intuition tells me it is too simplified and far from reality. Maybe this is the crucial point that makes my results inconsistent.
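For what it's worth, the delta = 1 (above the strike) / delta = 0 (below the strike) rule is the delta of the intrinsic value max(S - K, 0); a quick comparison against a smooth Black-Scholes delta shows where the two differ most, namely near the strike. All parameter values below are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

K, r, sigma, T = 100.0, 0.01, 0.25, 0.5     # illustrative option parameters
S = np.linspace(60, 140, 9)                  # a range of underlying prices

intrinsic_delta = (S > K).astype(float)      # the 0/1 rule: 0 below K, 1 above
d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
bs_delta = norm.cdf(d1)                      # Black-Scholes delta for comparison

for s, di, db in zip(S, intrinsic_delta, bs_delta):
    print(f"S = {s:6.1f}   intrinsic delta = {di:.0f}   BS delta = {db:.2f}")
```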

Thanks!
 