The choices were given in terms of ranges. I don't think I saw any 2.7. I saw 1.75 to 2.5; that was the closest, so I chose that... hm.
Yes it was, and I selected some range... now I'm unsure of what I marked.
I strongly felt that none of the options was correct, as the return on the 40 M portfolio was 8% and on the 60 M portfolio was 9%, which gives a weighted average return of 8.6%. Hence at least one of the options had to have a mean of 100 × 1.086, i.e. the midpoint of the +/- sigma range. I checked for it but did not find it, and then decided to guess "A" and move ahead.
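For reference, the arithmetic I had in mind (assuming the 40 M / 60 M split and the 8% / 9% returns are as I remember them):

\[ 0.4 \times 8\% + 0.6 \times 9\% = 8.6\% \quad\Rightarrow\quad 100 \times 1.086 = 108.6 \]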
Plulutes, Babak083,
I too took Nov 13 Part I and, although I found it tricky, I nevertheless found May 14 trickier, especially in the Quants area.
Regarding your comments, I couldn't agree more: (i) no shame in re-sitting; (ii) I have benefited more studying the 2nd time round, especially with the BT material (compared to Schweser the first time round).
I too skipped a number of questions (10-12), hoping to have time to review them at the end, which I didn't. I too wish I had more time to review in greater detail. What I found most frustrating, on a personal level, was that I skipped questions which were bread and butter to me during the mocks but I just couldn't answer on the day, i.e. my calculations were incorrect. Referring to percentages, I think I answered 50% solidly (let's say I got 40 correct), 30% were reasonable guesses (let's say I got 15 of these correct), and 20% were flat-out guesses (let's say I got 5 of these correct).
In terms of knowledge and answering questions, I did do better than in Nov 13 but, as mentioned, I just didn't have time to review the skipped questions, which I think will cost me.
I have benefitted immensely from the BT material, but feel that based on the outcome of Nov 13, I still haven’t done enough.
Regards
Wolf
I think the correct answer was 1,600. First time I saw a question like this; I did it with some logic... 800.
That's a tough one, as the time has to be at a minimum and it takes the minimum number of scenarios... I was confused too.
Hi, it was at least 30% harder than in Nov 2013.
Hi, candidates who also sat for the Part I Nov 2013 exam: how would you compare Nov 2013 and May 2014? I think this exam was more difficult than the November exam.
I think the correct answer was 1,600. First time I saw a question like this; I did it with some logic.
The distance from the mean in the new confidence interval was 4 times smaller than in the previous one. That's why I also think the number of scenarios should be 200 × 8 = 1,600 to reduce it 4x. But I picked 800...
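For what it's worth, the general scaling here (independent of the exact numbers in the question) is that the standard error, and hence the half-width of the confidence interval, shrinks with the square root of the number of scenarios:

\[ SE = \frac{\sigma}{\sqrt{n}} \quad\Rightarrow\quad \frac{SE_{\text{new}}}{SE_{\text{old}}} = \sqrt{\frac{n_{\text{old}}}{n_{\text{new}}}} \]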
I might not have checked well... I could be wrong here, but I think I saw 2 options:
one with a mean of 100 +/- (2 × sigma),
and the second with a mean of (100 × 1.086) +/- (2 × sigma).
Gosh... I am now getting the feeling that my memory may no longer be reliable.
Even mine was 2.7.
I too got 2.7.
For this question I took the correlation as a litmus test: since the square of the correlation equals R-squared, R-squared came to 0.09, which means the statistical significance of the independent variables is too low; accordingly, the F-test is not statistically significant.
There was one question where they gave the computed t-stats for B1 and B2 (1.64 and 2.33) and asked to compute the F-stat and determine whether it is significant at 95% (reject or accept the null hypothesis that B1 = 0 and B2 = 0). I don't know how to compute the F-stat from B1 and B2, so I guessed that if B1 and B2 are both significant at 95%, then the F-stat is also significant at 95%. Is that correct?
On the efficient estimator, the formula as per the book is the one with the lowest std/sqrt(time). I'm confused whether it should be the total time or the time for one scenario. I finally selected option C, the one with the lowest std dev per scenario.
If I am correct, there actually was an exercise on this in David's set, which is what I based myself on. Basically, if I remember correctly, you need to choose the one for which SE times sqrt(t) is lowest. Stdev, number of scenarios and total time to run the scenarios were given. So I calculated the SE for each as stdev/sqrt(# scenarios run), then multiplied by sqrt(time needed to run all scenarios) and took the minimum of that. However, I'm not 100% sure whether I needed to use sqrt(time for all scenarios) or sqrt(time for one scenario).
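One identity that helps with the total-time vs. per-scenario question (a general relation, not the exam's specific numbers): since the total run time equals the number of scenarios times the time per scenario,

\[ SE \cdot \sqrt{T_{\text{total}}} = \frac{\sigma}{\sqrt{n}} \cdot \sqrt{n \, t_{\text{scenario}}} = \sigma \sqrt{t_{\text{scenario}}} \]

so using total time is equivalent to comparing stdev times the square root of the time per scenario; the number of scenarios drops out.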
And I thought of you for the crack spread question indeed. However, the one you explained was 1 output from 2 inputs; here it was 1 input for 2 outputs. Despite spotting that difference, I didn't find the correct answer either. I came up with 5, whereas the options were -10 and 10.
On the barbell one, I didn't get that one either. I'm getting the feeling there were some questions which many, many candidates won't get.
Never really thought of it that way, but that's absolutely correct! Scares me a little bit, but I remain hopeful.
Maybe... I didn't read it that way though. If it were today, then the text should have simply read "by the end of the day".
C.
Was that the one with the 25% standard deviation or with 30%? If you remember...
On the efficient estimator, the formula as per the book is the one with the lowest std/sqrt(time). I'm confused whether it should be the total time or the time for one scenario. I finally selected option C, the one with the lowest std dev per scenario.
Had to skip 7 or 8 questions while doing the exam because I had no idea how to begin with them. Got 40 minutes left at the end of the exam to have a second look, so time-management-wise I'm happy I could finish the entire exam without having to skip questions. Wonder how other candidates did time-wise? I felt most people at my testing site were able to complete the exam, contrary to expectation.
Content-wise, I guess many of the questions were to be expected. Maybe somewhat more qualitative questions than expected (there were 2 on the new country risk chapter, btw). For future candidates, I do think it's important to stress the GARP practice exams. There were at least 3 questions which were literally copy/paste. I do think that's a nice gesture by GARP, to "reward" candidates with thorough preparation.
There is a formula to calculate the F-stat from the individual t-stats and the correlation. I knew it existed, but I personally skipped it when I was studying because I thought it was an unreasonable formula to expect candidates to know. If GARP truly wants to be "practice-oriented", I really think these kinds of questions are ridiculous. In practice, you'll get the p-value. In that sense, I liked the more qualitative questions from QA more (like the interpretation of the scatterplot and the R² implications). Anyway, applying the formula to 1.64, 2.33 and a 0.3 correlation gives an F-stat of 7-something, so the answer should be significant if I'm correct.
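For anyone curious, the formula I'm referring to is the one in Stock & Watson for a two-restriction joint test, with rho the correlation between the two t-statistics (I'm quoting it from memory, so please double-check it against the text):

\[ F = \frac{1}{2}\,\frac{t_1^2 + t_2^2 - 2\,\hat{\rho}_{t_1,t_2}\, t_1 t_2}{1 - \hat{\rho}_{t_1,t_2}^{\,2}} \]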
I found that a bit strange too, indeed. I did: lambda^23 = 1/2, so lambda ≈ 0.97. Then 0.03 × 0.97^4 was one of the given answers.
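Spelled out, the steps I took (assuming, as I remember it, a 23-period half-life and that the question asked for the EWMA weight (1 − lambda) × lambda^4):

\[ \lambda^{23} = \tfrac{1}{2} \;\Rightarrow\; \lambda = 0.5^{1/23} \approx 0.97, \qquad (1-\lambda)\,\lambda^{4} \approx 0.03 \times 0.97^{4} \approx 0.027 \]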
On the forward rates: I believe the currency was quoted in reverse (in David's exercises EUR/USD was always given as e.g. 1.2; GARP now gave it as 1/1.2). That's probably why the order of the rates was inverted too. I converted the rate to the "normal" notation, did the "normal" calculation and then converted back. Maybe a bit of a detour, but I felt safest that way, and the answer was in the list.
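The parity relation I have in mind (written generically, not with the exam's numbers, for a spot quote S expressed as units of currency A per unit of currency B):

\[ F = S\,\frac{1 + r_A}{1 + r_B} \quad\Rightarrow\quad \frac{1}{F} = \frac{1}{S}\,\frac{1 + r_B}{1 + r_A} \]

which is why flipping the quote to 1/S also flips which interest rate goes on top.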
Was that the one with the 25% standard deviation or with 30%? If you remember...
1. Yes, I got 0.85% for that one as well. I used a different method to derive it, a tree method, as I learned it at uni.
Can anyone confirm the answers to the following 2 questions?
1. The probability question, the one at the beginning, saying the joint probability of A and B is 5 percent and the conditional probability of B given A is 0.5: what is the probability of neither happening?
I selected 85%, as the unconditional probability of A would be 10%, i.e. 0.05 / 0.5, and neither A nor B = 1 - 0.05 - 0.10 = 0.85 (spelled out a bit more below).
2. The total number of payments on that ISDA confirmation question, with quarterly floating payments and fixed payments: the options were 5, 14 and 21. I selected 14; I don't know if it's right or wrong.
Can anyone please confirm?
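To spell out question 1 a bit more: the part I'm confident of is backing out P(A) from the joint and conditional probabilities; for the "neither" part you would in general also need P(B):

\[ P(A) = \frac{P(A \cap B)}{P(B \mid A)} = \frac{0.05}{0.5} = 0.10, \qquad P(\text{neither}) = 1 - P(A) - P(B) + P(A \cap B) \]

so 0.85 would follow if P(B) also worked out to 0.10, which I can't fully remember from the question.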