A drug company just spent a billion dollars to develop a new drug to cure zits, and you analyze the data for them in a competent and honest way and tell them "hey, there's no statistically significant difference between ZitsBeGone and whipped cream". You think they'll shrug their shoulders and walk away? They'll want more than that. They'll say "how do you know you're right?" And in fact, maybe they are right: you accepted the null hypothesis, but that doesn't mean it's correct.
So inherent in your results are errors. There are two types of errors:
Type I error: $H_0$ is true but we reject it. As an example, you get 100 heads in a row, which is exceedingly unlikely for an unbiased coin (that is, if the null hypothesis is correct), so you reject it. However, it turns out that the coin is unbiased after all. Weird, huh? What's the probability of getting that wrong? It's just the probability of that outcome under $H_0$, which in this case is $(1/2)^{100} \approx 8 \times 10^{-31}$ (the short simulation below makes this concrete). In this case you were exceedingly unlucky.
Type II error: we reject $H_a$ and accept $H_0$, even though $H_a$ is correct. Just because we didn't find statistical significance doesn't mean the effect isn't there. As an example, that billion-dollar drug actually did work, but by some fluke you didn't see an effect. Now 500 other groups have seen that it works great. You got egg on your face, but you know you did the right thing. You were unlucky. That can happen too.
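To make both error rates concrete, here is a minimal Python sketch (mine, not from the notes): it computes the $(1/2)^{100}$ probability for the coin example and then simulates both kinds of mistake. The significance threshold $\alpha = 0.05$, the repeated 100-flip experiments, and the assumed 55%-heads "true effect" are illustrative choices, not anything stated above.

```python
import math
import random

random.seed(0)

def p_value_heads(k, n):
    """One-sided p-value: chance of k or more heads in n flips of a fair coin."""
    return sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n

# The coin example above: 100 heads out of 100 flips.
print(p_value_heads(100, 100))        # (1/2)**100, roughly 7.9e-31

alpha, n_flips, n_trials = 0.05, 100, 10_000   # illustrative choices

# Type I error: the coin really is fair, but we reject H0 whenever the
# p-value drops below alpha, so we are wrong a fraction of the time that
# is close to (and at most) alpha.
false_rejections = 0
for _ in range(n_trials):
    heads = sum(random.random() < 0.50 for _ in range(n_flips))
    if p_value_heads(heads, n_flips) < alpha:
        false_rejections += 1
print(false_rejections / n_trials)    # roughly 0.04-0.05

# Type II error: the coin (or the drug) really does have an effect -- here
# an assumed 55% chance of heads -- but with only 100 flips we usually fail
# to reject H0, just like the unlucky drug trial in the text.
missed_effects = 0
for _ in range(n_trials):
    heads = sum(random.random() < 0.55 for _ in range(n_flips))
    if p_value_heads(heads, n_flips) >= alpha:
        missed_effects += 1
print(missed_effects / n_trials)      # the Type II error rate; large here
```

The point of the two loops is just that both mistakes are built into the procedure: the first rate is set by wherever you put $\alpha$, and the second depends on how big the real effect is and how much data you collect.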