Reducing the chance of making a Type I error

Janda66

New Member
Hey there,

I was just wondering: when you reduce the level of significance, from 5% to 1% for example, does that also reduce the chance of making a Type I error in a hypothesis test?
Also, if you repeat the same test many times to gain more information about a certain data set, will that also reduce the chance of making a Type I error?

I know that repeating the test with a larger sample size will reduce it, but I'm not sure about the others.

Thanks a lot!
 

ShaktiRathore

Well-Known Member
Subscriber
A Type I error is rejecting the null hypothesis when it is actually true, at a given level of significance. Alpha, the significance level, is the probability of committing a Type I error. On the distribution curve, points falling in the 5% rejection region cause us to reject the null: the larger the rejection region, the greater the chance that a sample point lands in it even when the null is true, and thus the greater the probability of incorrectly rejecting a true null. If the level of significance is reduced from 5% to 1%, the rejection region shrinks, which lowers the chance that a point falls into it under a true null, and with it the probability of an incorrect rejection. Thus the chance of committing a Type I error decreases as the significance level alpha is reduced.
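To see this concretely, here is a minimal simulation sketch (assuming, purely for illustration, a two-sided one-sample z-test on N(0,1) data with the null true): the empirical rejection rate tracks alpha, so cutting alpha from 5% to 1% cuts the Type I error rate accordingly.

```python
import numpy as np
from scipy import stats

# Minimal sketch: simulate a two-sided one-sample z-test when the null
# hypothesis is TRUE (data really are N(0,1)) and count how often we reject.
rng = np.random.default_rng(42)
n, n_trials = 30, 200_000

# z-statistic for each trial: sample mean scaled by sqrt(n) (known sigma = 1)
z = rng.standard_normal((n_trials, n)).mean(axis=1) * np.sqrt(n)

for alpha in (0.10, 0.05, 0.01):
    z_crit = stats.norm.ppf(1 - alpha / 2)  # two-sided critical value
    rate = np.mean(np.abs(z) > z_crit)      # empirical rejection rate
    print(f"alpha = {alpha:.2f} -> empirical Type I error rate ~ {rate:.4f}")
```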

thanks
 

David Harper CFA FRM

Subscriber
I agree with Shakti. I think your phrasing is tautological, in a good way: we design (decide) the significance (α) level and, in doing so, we make a decision about the probability of making a Type I error. For example, to lower the significance level from 5% to 1% is to decide on a 1% probability of a Type I error; and the price is a higher probability of a Type II error (which, I don't think, we can similarly target so easily).

fwiw, my best source on the particulars of this, is http://stats.stackexchange.com/
.... for example, http://stats.stackexchange.com/ques...-the-definitions-of-type-i-and-type-ii-errors
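To make the trade-off concrete, here is a minimal sketch (assuming, purely for illustration, a two-sided one-sample z-test with known sigma = 1, n = 30, and a hypothetical true mean of 0.3): as alpha is lowered, beta rises.

```python
import numpy as np
from scipy import stats

# Sketch of the alpha/beta trade-off for a two-sided one-sample z-test,
# assuming (illustratively) known sigma = 1, n = 30, true mean = 0.3.
n, true_mean = 30, 0.3
shift = true_mean * np.sqrt(n)  # the z-statistic is ~ N(shift, 1) under this truth

for alpha in (0.10, 0.05, 0.01):
    z_crit = stats.norm.ppf(1 - alpha / 2)
    # beta = P(z-statistic lands inside the acceptance region | this true mean)
    beta = stats.norm.cdf(z_crit - shift) - stats.norm.cdf(-z_crit - shift)
    print(f"alpha = {alpha:.2f} -> beta ~ {beta:.3f} (power ~ {1 - beta:.3f})")
```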
 

chiyui

Member
Hi Janda66,

Since you're asking about this issue, let me try to tell you a bit more.

The probability of a Type I error is exactly equal to the significance level (call it alpha, as usual), so we can set it as we like. If one is willing, for whatever reason, to take a higher risk of committing it, one simply chooses alpha equal to 10%; if not, one lowers the choice to 1% or even less.

But it's not that easy in the case of a Type II error.
The probability of a Type II error (call it beta, as usual) increases when we decrease alpha, and vice versa. But by how much? That depends on the true value of the unknown parameter you're testing. In other words, beta is a function of the unknown parameter.

If we knew the form of this function as well as the true value of the unknown parameter, we could certainly calculate the exact value of beta. But that is rarely the case in reality, which means we usually don't know how much risk we're taking of committing a Type II error.
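To illustrate, here is a sketch (assuming, purely for illustration, a two-sided one-sample z-test with known sigma = 1, n = 30, and alpha = 5%): once you posit a true mean, beta can be computed exactly, and each assumed truth gives a different beta.

```python
import numpy as np
from scipy import stats

# Sketch: beta as a function of the (unknown) true mean, for a two-sided
# one-sample z-test with known sigma = 1. Parameters are illustrative.
n, alpha = 30, 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)

for true_mean in (0.1, 0.25, 0.5, 1.0):
    shift = true_mean * np.sqrt(n)  # mean of the z-statistic under this truth
    # P(z-statistic falls inside (-z_crit, z_crit) | this true mean)
    beta = stats.norm.cdf(z_crit - shift) - stats.norm.cdf(-z_crit - shift)
    print(f"true mean = {true_mean:.2f} -> beta ~ {beta:.3f}")
```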

That's why people tend to say "do not reject the null hypothesis" rather than "accept the null hypothesis".
If you accept it, you immediately expose yourself to the risk of committing a Type II error, and people don't like to take that risk because they don't know its probability.
But if you merely fail to reject it, you can fall back on saying "not rejecting it doesn't mean accepting it", something like that. Then you can go further and say "we need further investigation in order to determine whether we should really accept it or not". This is a game of language.

But in some cases people really do have to accept the null hypothesis, especially when they have to make a decision and act on it (if you're asking me why, well, I don't know either). In one word, "just do it!"
In that case, either they pay some money, time, resources, or other costs to make "further investigation in order to determine...", or they can only "just do it!"

Hope the comment helps your understanding.
 

Tika

New Member
Thank you, friends, for a good discussion. Yes, friends, we try to reduce the Type I error by reducing the significance level from 5% to 1%. When we try to reduce the Type I error, the Type II error will increase automatically. So when calculating the sample size we fix the confidence level at 95%, leaving a 5% chance of error (alpha) out of 100. And at the same time we use the acceptable error "d" in the formula n = z^2*p*q / d^2. What is d here? We accept an error like 5% or 10%; what is it? And we say we fix one error and try to reduce the other. Usually which error do we fix, and how, and which error do we try to reduce, and how do we reduce it?
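To show where the numbers go in this formula, here is a small sketch with illustrative values, taking d as the allowable margin of error (the half-width of the confidence interval for p), e.g. estimating p to within +/- 5 percentage points:

```python
import math
from scipy import stats

# Sketch of the proportion sample-size formula n = z^2 * p * q / d^2.
# d is the allowable margin of error (half-width of the confidence
# interval for p). All values below are illustrative.
conf_level = 0.95                  # 95% confidence -> alpha = 0.05
alpha = 1 - conf_level
z = stats.norm.ppf(1 - alpha / 2)  # two-sided critical value, ~1.96
p = 0.5                            # anticipated proportion (worst case, maximizes n)
q = 1 - p
d = 0.05                           # estimate p to within +/- 5 percentage points

n = math.ceil(z**2 * p * q / d**2)
print(f"required sample size n = {n}")  # ~385 for these inputs
```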
 