About a month ago, we were told that theory is dead. That was the thesis of Chris Anderson's article in Wired. Anderson argues that rather than testing hypotheses using the scientific method, scientists are now merely generating loads of data and looking for correlations. The article was a bit muddled, but that's Anderson's main point . . . I think.
Well, now the Times Higher Education has published an article by Tim Birkhead in which he argues the opposite (In praise of fishing trips). Birkhead says that the scientific establishment is too attached to hypothesis testing, which makes funding agencies reluctant to approve projects designed to collect data without testing a priori hypotheses. Here's the thrust of his argument:
The scientific research councils seem to be obsessed by hypothesis testing. Many times I have heard it said by referees rejecting a proposal: "But there was no hypothesis." The research council modus operandi now seems to be the h-d [hypothetico-deductive] method, but taken to an almost ludicrous extreme - researchers basically have to know what they are going to find before putting in a research application (in truth, they need to have done the research, even though they will not actually say so, to write a convincing account of what they intend to do). That's hardly the best way of doing science. The other problem, and it is a much more serious one, is that by knowing what you will find out, science becomes trivially confirmatory and inherently unlikely to discover anything truly novel.
So, which one is it? Have we abandoned hypothesis testing? Or are we so attached to hypothesis testing that we overlook novel projects that would explore underdeveloped areas of research?
I suspect that depends on which type of grant you've recently had rejected.
I am a Popperian believer in hypothesis testing. When I make a general collection of fishes from some stream in Venezuela, my hypothesis is: "Someday someone will find some of these specimens useful in testing a hypothesis." My hypothesis has been supported on a number of occasions.
I think hypotheses help organize your efforts. We all know that the results never end up being exactly what was proposed, so the characterization of research as essentially a priori has never been realistic. Lots of new data get generated and new hypotheses are put forth as the work progresses. If you can't generate and adequately justify some set of hypotheses that relate to the data you want to gather, then you run a real risk of wasting effort, and you should either try harder or find a new line of inquiry.
"Let's collect some data" is a shot in the dark. Why did you collect THAT set of data? Was it totally random? If it's not a completely random set of data that you're hoping to collect, then CONGRATULATIONS, your efforts are informed by theory, and you are likely testing a hypothesis as we speak.
All that's left is an odd situation: you've collected data, you didn't do it randomly, yet you're unable to determine what aspect of theory your data addresses. Imagine our surprise when the grant is denied.
Tim is probably correct - most NIH grants that are approved are for carefully planned experiments that leave little room for fishing expeditions and crazy ideas. That said, we shouldn't all go into the business of mindless Big Biology (collecting tons of data for the sake of collecting tons of data) and completely ditch hypothesis-driven research.
BTW, there is quite a bit of money for non-hypothesis-driven big biology - the type that Anderson advocates for - but little money for small-biology-type fishing expeditions.
I'm not that well informed about exactly which grants get funded, not having risen high enough to sit on study sections. From my own experience, though, many labs still run large data-cloud collection studies, whether or not those studies are explicitly funded within their grants.
I don't foresee us NOT being able to fund these novel data collection approaches. We will still be collecting massive reams of data without specific hypotheses (just look at the proliferation of DNA microarray screens and experiments).
And regardless of what correlations or findings come from analysis of the "cloud", I think that it will always come back down to the level of hypothesis testing. I cannot see us as a society or as scientific individuals just accepting important findings and discoveries based on a p-value alone. I think we will always want to take it back to individual corroborating experiments.
Or perhaps I'm just naive.
I think using a hypothesis to focus and organize a study design is quite valid. It is not the only way to do so, however. Creating a set of objectives, while similar to a hypothesis, does not pick sides, as it were; it simply delineates the focus of the study design. I think it is especially valid when you honestly do not know the outcome (such as when no baseline data have been taken). This could, theoretically at least, speed up science. But whatever works, right?
Hello
Find "Pierre Trémaux"
Trémaux on species: A theory of allopatric speciation (and punctuated equilibrium) before Wagner
http://philsci-archive.pitt.edu/archive/00003806/
Have we abandoned this blog? Or are we so attached to this blog that we overlook novel posts that would explore underdeveloped areas of this blog?
I think Tim Birkhead is right on point. A hypothesis is much more a statement of what you already know than a tool of discovery. Hypothesis-driven research limits both the scientist and the valuable funding that is given to scientists to uncover new findings.
While hypothesis-driven research clearly has a place - especially for driving home a point that is so far only circumstantial, and when a clear purpose exists for proving that point - there should be other legitimate avenues for doing science: for example, when attempting to affect policy, drive business, or substantiate a key breakthrough. Refusing to think outside the hypothesis, however, innately limits junior scientists and less mature research projects. Clearly, structure is needed in research, but if scientists are handcuffed to the Scientific Method for their entire careers, they inherently limit their ability to reach their true potential.