Well, I got this (some days ago; I got backlogged):
As one of the more highly trafficked climate blogs on the web, I’m seeking your assistance in conducting a crowd-sourced online survey of peer-reviewed climate research. I have compiled a database of around 12,000 papers listed in the 'Web Of Science' between 1991 and 2011 matching the topic 'global warming' or 'global climate change'. I am now inviting readers from a diverse range of climate blogs to peruse the abstracts of these climate papers with the purpose of estimating the level of consensus in the literature regarding the proposition that humans are causing global warming. If you’re interested in having your readers participate in this survey, please post the following link to the survey:
http://survey.gci.uq.edu.au/survey.php?c=5RL8LWWT2YO7
The survey involves rating 10 randomly selected abstracts and is expected to take 15 minutes. Participants may sign up to receive the final results of the survey (de-individuated so no individual's data will be published). No other personal information is required (and email is optional). Participants may elect to discontinue the survey at any point and results are only recorded if the survey is completed. Participant ratings are confidential and all data will be de-individuated in the final results so no individual ratings will be published.
The analysis is being conducted by the University of Queensland in collaboration with contributing authors of the website Skeptical Science. The research project is headed by John Cook, research fellow in climate communication for the Global Change Institute at the University of Queensland.
I'm posting it here so anyone can see, though doubtless you've already seen it elsewhere. I'm dubious about the virtue of surveys to establish stuff though. In fact... why don't I go off and do this one? <goes off... I'm back!> At the end, I got:
Of the 10 papers that you rated, your average rating was 3.2 (to put that number into context, 1 represents endorsement of AGW, 7 represents rejection of AGW and 4 represents no position). The average rating of the 10 papers by the authors of the papers was 3.1.
That wasn't so hard, because 8/10 were "implicit endorsement" and 2 were "neutral" in my view.
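As a check on the arithmetic: if the categories map onto the 1-7 scale with "implicit endorsement" at 3 and "neutral" at 4 (the endpoints are given in the results blurb above; the value of 3 for implicit endorsement is my assumption, though it does reproduce the reported score), a minimal sketch:

```python
# Sketch of the score arithmetic, assuming the category-to-number mapping
# below (1 = endorses AGW, 4 = no position, 7 = rejects AGW are given in the
# survey blurb; "implicit endorsement" = 3 is an assumption).
RATING = {"implicit endorsement": 3, "neutral": 4}

my_ratings = ["implicit endorsement"] * 8 + ["neutral"] * 2
average = sum(RATING[r] for r in my_ratings) / len(my_ratings)
print(average)  # 3.2 -- matches the average the survey reported back
```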
Refs
* It’s true: 97% of research papers say climate change is happening - this, or something similar, has been published in several places; I've lost track.
* David Appell doubts.
* So does KK but his last bit is wrong.
Who cares? Science isn't done by the number of papers that seem to support a certain position, nor by how many amateurs rank it as important.
This is a meaningless exercise by John Cook (no surprise there).
David, it's about public perception. There already has been a survey/ranking of papers (to be published in ERL soon, according to Skeptical Science), and the current survey is thus a comparison between expert opinion of papers and the public perception of such papers. Nothing meaningless about that, especially since those who have commented on their results all seem to rank the papers as less clearly in support of AGW (as in "AGW is real") than the authors/experts did. I would not be surprised if the same applies to various other fields.
What was interesting was that most of the papers I was presented with took global warming as a given, and studied some effect. So there was only one paper that came out and said, "It's warming and we are causing most of it". Most of them just said, "It is warming, and here are two different models of ocean heat, and we look at how they differ". So it was mostly implicit stuff.
Anyway I scored 3.1, but the authors scored themselves as 2.3. Which means they think they blamed people for global warming more than I think they did.
[I think that's a problem with this kind of analysis. On their scoring system it's clear they consider "this paper says it's 'umans" as a "higher" score than take-it-as-given. But in a sense that's the wrong way round -W]
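For concreteness, the two reader/author pairs reported so far in this thread (3.2 vs 3.1 in the post above, 3.1 vs 2.3 in the comment above) can be compared the obvious way; everything beyond those two pairs is my own framing:

```python
# Paired comparison of reader (abstract-only) ratings against author
# self-ratings, on the 1-7 endorsement scale. Only the two pairs reported
# in this thread are real data.
pairs = [
    (3.2, 3.1),  # this post's reader average vs the authors' average
    (3.1, 2.3),  # the commenter above: reader 3.1 vs authors 2.3
]

diffs = [reader - author for reader, author in pairs]
mean_diff = sum(diffs) / len(diffs)
# Positive means readers rated the papers as endorsing AGW *less* clearly
# than the authors themselves did.
print(f"mean (reader - author): {mean_diff:+.2f}")  # +0.45
```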
Another major problem with the survey, judging by the posting at SkS anyway, is that the authors are rating their papers, whereas the survey takers are rating only the abstracts.
While this may not exactly be an apples and oranges type of thing, it isn't an apples to apples thing either.
"Science isn’t done by the number of papers that seem to support a certain position"
Actually, in a sense, this is precisely how science is done. In one's tiny corner of a scientific field, one tries to find something new... but in order to do so, one needs to take as given most of the rest of the scientific universe. How does one decide what the rest of the scientific universe says? Well, by the preponderance of the literature, usually. Sure, sometimes it is going to be wrong, but for the big ideas that is very rare (or at least, the big, oft-repeated ideas usually remain pretty useful, and it is only in a few edge cases that they might need updating... e.g., Newton's laws to relativity, or Darwinism to modern evolutionary biology, etc.)
In any case, the original survey was useful (e.g., to show, yet again, that the majority of scientists in climate-related fields believe that human-induced climate change is real and that there isn't a lot of controversy... "teach the controversy" being a very successful meme in certain circles). My impression is that this "repeat" of the survey is to give people a chance to see for themselves that the original survey was done reasonably well. Now, if _I_ were to have designed it, I would not only report what the paper's authors scored their papers at, but what the Cook et al. analysis was (as the latter would be more directly comparable to our own experience of only seeing the abstracts). But Cook et al. left a comment in their blog saying that we'd have to wait for the paper itself to come out to see how their analysis matches the self-analysis by the paper authors...
Re: Marco's point. So far it seems to be the consensus of those who have taken the survey that the authors rated their papers as more conclusive than the general public. Realistically that should probably just be interpreted as meaning that scientists aren't good communicators, but it will be interesting to see how this is spun. On one side you'll probably have people saying "Look! Scientists aren't being alarmist. They're actually being forced to tone down their language to get published to the point that they're overly cautious." On the other side you will no doubt have people claiming that scientists' own papers can't justify their hysteria.
Given the vastly different interpretations of fairly straightforward statements like Phil Jones' comment about the lack of statistically significant warming, I think it might also be interesting to investigate how the bias of respondents affects their ratings. You could provide one link on "consensus" blogs and a different link on "denier" blogs and see how the rankings of the same papers differ based on which link respondents came from, as sketched below. If you wanted to be a little more robust you could throw in a few introductory questions asking respondents to rank their agreement with several statements regarding the IPCC's position. A basic science literacy test might be fun too. :)
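On the analysis side, that sort of referral-tagged comparison is straightforward; a sketch with entirely invented source names, paper IDs and ratings:

```python
from collections import defaultdict

# Hypothetical responses: (referral source, paper id, 1-7 rating).
# All values invented; the point is only the grouping logic.
responses = [
    ("consensus_blog", "paper_42", 3),
    ("consensus_blog", "paper_42", 2),
    ("denier_blog",    "paper_42", 5),
    ("denier_blog",    "paper_42", 4),
]

# Mean rating of each paper, split by which blog's link the rater followed.
by_group = defaultdict(list)
for source, paper, rating in responses:
    by_group[(paper, source)].append(rating)

for (paper, source), ratings in sorted(by_group.items()):
    print(paper, source, sum(ratings) / len(ratings))
```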
MikeG, they did provide different links for the two classes of blogs (I think they were different for each blog). Of course that led to some rather amusing "this must be some scheme to trick us" thinking. Do I see "Unbounded Recursion, Conspiracist Ideation and Climate Denial" in the near future?
@David Appell
"Who cares? Science isn’t done by the number of papers that seem to support a certain position, nor by how many amateurs rank important.
This is a meaningless exercise by John Cook (no surprise there)."
It's an interesting exercise. There are numerous papers on AGW, or related to it. The 'average person in the street' has no idea just what is worth reading, if they want to do some reading. No harm, could be useful. For some reason, though, he's being crucified. That says more about the survey than anything else.
The attacks on John are just McIntyre vs Mann redux. Of course denial first occurs as a farce and then as another farce.
OT sorry.
Just wondering if you have seen this paper:
http://arxiv.org/pdf/1305.0629v1.pdf
What is your impression? A significant flaw in sea ice models? Or insignificant? Or a misunderstanding of the situation?
Is it a sensible reason to suggest that perhaps data extrapolations should be trusted more than model projections, even if only over fairly short periods?
[Sea ice is never off topic. I haven't seen it, but having just skimmed the start, I'm very dubious. Those nice people at the UKMO do care about conservation, and if their sea-ice scheme wasn't conserving mass I think they would have noticed. And I don't think they are alone in that -W]
W: GCM modelers do care about conserving mass and energy... but they do so best within the bounds of one module. E.g., the atmosphere will make sure everything is conserved, as will the ocean, but sometimes things get tricky at the interface (see the flux adjustments back in the 1990s). Now, the ocean and atmosphere have been pretty well coupled, but I'd only be a little surprised to find out that the sea ice module may not perfectly conserve mass when included in the bigger system... still, I'd definitely want to hear from a real GCM modeler, and not just a couple of Nordic physicists.
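The check being alluded to is conceptually simple: over any interval, the change in the sea-ice component's total mass should equal the time-integrated fluxes across its boundaries. A toy version, with every name and number invented:

```python
# Toy mass-budget residual for a sea-ice component. A modelling centre that
# "cares about conservation" would monitor something like this; all names
# and numbers here are invented for illustration.
def mass_budget_residual(mass_before, mass_after, freezing, melt, export, dt):
    """Conservation error (kg) over one step: d(mass) minus net flux * dt."""
    net_flux = (freezing - melt - export) * dt  # kg/s integrated over dt -> kg
    return (mass_after - mass_before) - net_flux

# A perfectly conserving step: 1.0e8 kg/s net gain over an hour = 3.6e11 kg.
err = mass_budget_residual(
    mass_before=1.0e15, mass_after=1.0e15 + 3.6e11,
    freezing=2.0e8, melt=0.5e8, export=0.5e8, dt=3600.0,
)
print(f"residual: {err:.1e} kg")  # ~0; a persistent nonzero value is a leak
```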
WC, MMM,
I am sure modelers do care about conservation of mass and energy.
I also would like to hear from more than just a couple of Nordic physicists which is why I was asking.
Suppose Hibler, back in 1979, implemented a scheme which for thick ice growth assumed the volume of thin ice was negligible, and then made different assumptions for thin ice growth. If the results were sensible and appropriate within a perennial ice pack, and the growth rate was tuned to match observations within a perennial ice pack, how often would the assumptions of such a scheme be checked to see whether different conditions could cause problems? Frequently? Or is it possible that modelers would just think "Hibler got good results and it calculates rapidly; let's not reinvent the wheel, we'll just use that scheme"?
If somebody does look at the code and thinks it looks odd, do they just go back to Hibler, see the arguments laid out confirming that this is what the code is supposed to do, think "that's OK then" and get on with something else? Or do they question whether circumstances are changing?
It is hard to judge whether this is likely to be a significant flaw without some feel for what actually happens. Hence asking a modeller; or maybe someone who has moved on to other things could give a feel for the situation without treading on too many toes that might be important to them.
PIOMAS references Hibler as the scheme used, so it seems plausible that lots of models repeated the decision to use that scheme.
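To make the worry concrete, here is a toy thickness-dependent growth parameterization (emphatically not Hibler's actual equations, just the shape of the problem): tune the constant against a thick perennial pack and the thin-ice cap never binds, so observations there tell you nothing about whether the thin-ice assumption is sensible.

```python
# Toy illustration only -- NOT the Hibler (1979) scheme. Thermodynamic growth
# rate falling off with thickness, with an ad-hoc cap for thin ice.
C = 0.05      # tuning constant (m^2/day), picked to fit a thick perennial pack
G_MAX = 0.10  # cap on thin-ice/open-water growth (m/day), essentially arbitrary

def growth_rate(h):
    """Ice growth rate (m/day) for thickness h (m)."""
    if h <= 0.0:
        return G_MAX              # open water: capped maximum growth
    return min(C / h, G_MAX)      # thick ice: ~C/h, the cap never binds

print(f"h=3.0 m: {growth_rate(3.0):.4f} m/day (set by C, which is what got tuned)")
print(f"h=0.2 m: {growth_rate(0.2):.4f} m/day (set entirely by the arbitrary cap)")
```

In a mostly-thin pack the second regime dominates, which is exactly the circumstance the original tuning never tested.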
John Cook has an article published at The Conversation web site.
http://theconversation.com/its-true-97-of-research-papers-say-climate-c…
The Conversation "is an independent source of news and views, sourced from the academic and research community and delivered direct to the public."
Now extended to the UK.
http://theconversation.com/how-cold-has-it-really-been-in-the-northern-…
Hello Doctor Connolley. Long time lurker and first time commenter here, although I'm a regular over at Respectful Insolence. I saw a news article and wondered if it related to this post.
http://www.timeslive.co.za/scitech/2013/05/16/scientific-papers-overwhe…
Julian, not Dr. Connolley here, but the answer is "yes and no". The study referenced in the newspaper is the study prior to the survey referenced here: the published work used a selection of people strongly interested in the field to rate the abstracts (and also the authors of about 2000 of those papers). The survey looks at how a broader audience rates those same papers. It reproduces the work with a different methodology, and has the potential to identify papers (or rather, abstracts of papers) that are open to a broader interpretation. The latter will require a few tens of thousands of participants, I guess.
Thanks Marco.
A bit off topic, and a bit late in the day, but John Wettlaufer isn't "just" a "Nordic physicist". He is (or was?) also at Yale:
http://physics.yale.edu/wettlaufer
and coauthor of things like "Nonlinear threshold behavior during the loss of Arctic sea ice"
http://arxiv.org/pdf/0812.4777v1.pdf