Jeffreys-Lindley Paradox

Via Amy Perfors at Harvard's Social Science Statistics Blog, I learned of the Jeffreys-Lindley paradox in statistics. The paradox is that, with a large enough sample, the same data can yield a p-value very close to zero, so that a frequentist test rejects the null hypothesis, while a Bayesian analysis of those data assigns the null a posterior probability close to one. You can read a very in-depth explanation of the paradox here.
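Here's a toy numerical sketch of that tension (my own illustration, not from the linked post, using an assumed known sigma of 1 and an assumed N(0, 1) prior on the effect under the alternative): hold the observed z-score, and thus the p-value, fixed while the sample size grows, and watch the Bayes factor swing toward the null.

```python
# Minimal sketch of the Jeffreys-Lindley paradox (toy example, assumed settings).
# Test H0: mu = 0 vs. H1: mu != 0 for normal data with known sigma = 1,
# with a N(0, tau^2) prior on mu under H1. The observed z-score is held
# fixed at 2.576 (two-sided p ~ .01) while n grows.

import numpy as np
from scipy.stats import norm

sigma = 1.0   # known population SD (assumed)
tau = 1.0     # prior SD on mu under H1 (assumed)
z = 2.576     # fixed observed z-score (two-sided p ~ .01)

for n in [10, 100, 10_000, 1_000_000]:
    se = sigma / np.sqrt(n)           # standard error of the sample mean
    xbar = z * se                     # the sample mean that gives this z-score
    p_value = 2 * norm.sf(abs(z))     # frequentist two-sided p-value

    # Marginal likelihoods of xbar under each hypothesis:
    #   H0: xbar ~ N(0, se^2)
    #   H1: xbar ~ N(0, tau^2 + se^2)   (mu integrated out against its prior)
    m0 = norm.pdf(xbar, loc=0.0, scale=se)
    m1 = norm.pdf(xbar, loc=0.0, scale=np.sqrt(tau**2 + se**2))
    bf01 = m0 / m1                    # Bayes factor in favor of H0

    print(f"n={n:>9,}  p={p_value:.4f}  BF01={bf01:8.2f}")
```

With these assumed settings the p-value sits at about .01 for every n, while the Bayes factor goes from modestly favoring the alternative at n = 10 to favoring the null by roughly 36 to 1 at n = 1,000,000.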

Unlike Perfors, I don't find this either surprising or worrisome. While I'd never heard of the paradox before (it's really pretty cool, if you're into statistics or Bayesian reasoning), everyone who's taken a statistics course understands the perils of large sample sizes. The fact is, two different groups, even when drawn from the same population, are by definition composed of two different sets of individuals. As a result, for any measure influenced by random variation, the group means will almost never be exactly equal, and with a large enough sample size that difference will be statistically significant. Since everyone is aware of this, I can't imagine it's a problem. If it looks like someone's using a sample that's too large, so that any significant differences he or she might find are likely to be theoretically and practically uninteresting, people will pick up on it, either through effect size calculations or through subsequent research.
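For concreteness, here's a quick simulation of that point (a sketch of my own, with made-up numbers): two groups whose true means differ by a trivial 0.02 standard deviations, sampled half a million times each. The t-test calls the difference highly significant; the effect size says it's nothing.

```python
# Toy simulation (made-up numbers): a trivially small true difference
# becomes "highly significant" once the sample is large enough, while
# the effect size (Cohen's d) stays negligible.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n = 500_000                                         # per-group sample size
group_a = rng.normal(loc=0.00, scale=1.0, size=n)
group_b = rng.normal(loc=0.02, scale=1.0, size=n)   # true difference: 0.02 SD

t_stat, p_value = ttest_ind(group_a, group_b)

# Cohen's d: mean difference divided by the pooled standard deviation
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"p = {p_value:.2e}, Cohen's d = {cohens_d:.3f}")
# Typically prints a p-value far below .001 with d near 0.02 --
# statistically significant, practically uninteresting.
```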

I just noticed this post of yours, so feel compelled to comment. :)

I definitely agree with you that to the extent people keep effect sizes in mind, this isn't a worrisome result; what I'm more worried about -- and failed to say in my post -- is related to, for lack of a better word, "meta" cognitive science, or sociology of science. Because (as we know from much cognitive science research) people tend to think categorically, and because a significance level gives a nice "category" to fit results in, even if effect size is reported and it's small, it's easy to just notice and remember the significance level. This tendency is made worse by the fact that sometimes there is no accepted notion of how big an effect is "interesting," in the same way that there are accepted p-value thresholds. Thus, if we often run subjects until getting a significant p-value -- even if we report effect size -- what ends up staying in memory is just the result and the knowledge that it was significant. It might be better to stop collecting data earlier, thus possibly overlooking findings with small effect sizes, in order to just focus on and pinpoint the interesting and robust results.

Honestly, I don't think this is a big worry in practice, at least for a lot of reported work. I made the post mainly because I think the paradox is cool and wanted to talk about it. :) But effect size does matter, for many diverse reasons, and the more salient we make this point, the more often we emphasize it, the less I worry about being led astray because of cognitive factors like those I detailed in the last paragraph.

By Amy Perfors (not verified) on 02 Nov 2006 #permalink

Hi Amy, thanks for commenting. I agree with your point about cognitive factors, though I think that's where further research sorts things out. However, like you, I mostly wrote this post because I thought the paradox was cool.