So it is in the nature of statistics that you can goof. If your α is set to 0.05 and you do 20 experiments, there's a pretty good chance (about 64%, since 1 − 0.95²⁰ ≈ 0.64) that you'll get at least one type 1 error. For example, suppose it's really true that chocolate ice cream does zip for colds, and there are 100 experiments being done to test this. Around 5 of these will find a statistically significant result and publish it. What happens to all the others? You think they'll get published? Why would anyone think chocolate ice cream would do anything for colds in the first place? So why waste journal space on that? But the studies that do find an effect will get published, because the result is so unexpected. The experimenter is likely to get at least a lifetime of free ice cream (preferably one of the better brands). Was the experimenter cheating? No, they were just unlucky. Well, maybe in this case, they were lucky.
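To make this concrete, here's a minimal simulation sketch in Python. The setup is hypothetical (the group sizes, the 7-day average cold duration, and the use of a t-test are all assumptions for illustration): it runs 100 experiments in which ice cream truly has no effect, tests each at α = 0.05, and counts how many come out "significant" anyway.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_experiments = 100
n_per_group = 30  # hypothetical sample size per arm

false_positives = 0
for _ in range(n_experiments):
    # The null is true by construction: both groups' cold durations
    # are drawn from the same distribution (mean 7 days, sd 2).
    ice_cream = rng.normal(loc=7.0, scale=2.0, size=n_per_group)
    control = rng.normal(loc=7.0, scale=2.0, size=n_per_group)
    _, p = stats.ttest_ind(ice_cream, control)
    if p < alpha:
        false_positives += 1

print(f"{false_positives} of {n_experiments} null experiments were 'significant'")
# Expect around alpha * n_experiments = 5 false positives.
```

From the inside, each of those roughly 5 "significant" experiments looks exactly like a real discovery.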
If you think about it, there are always a lot of people testing weird and strange hypotheses. Most of the time you'd expect them to find nothing. They may not all be on chocolate ice cream, but they're on something, maybe guanabana juice. Because it's fairly standard to set α = 0.05 as the cutoff for statistical significance, this implies that 1 in 20 of these studies will find an effect that's not really there, and that one will likely be published. If it's strange enough, or if there's market potential, it might get picked up by the press. So that's another reason not to believe everything you read in the papers.
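As a quick back-of-the-envelope check (plain arithmetic, assuming nothing beyond independent tests at level α = 0.05), the chance that at least one of n null studies turns up "significant" is 1 − (1 − α)ⁿ:

```python
alpha = 0.05

# P(at least one false positive in n independent null tests at level alpha)
# is 1 - (1 - alpha)**n; the expected count is simply alpha * n.
for n in (1, 5, 20, 100):
    p_any = 1 - (1 - alpha) ** n
    print(f"n={n:3d}: P(at least one) = {p_any:.2f}, expected count = {alpha * n:.1f}")
# n=20 already gives about 0.64, and n=100 gives about 0.99.
```

So with enough people testing enough strange hypotheses, some spurious "discoveries" are essentially guaranteed, and those are the ones the journals and the press see.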