Is a shift away from peer review cause for concern?

Today, Inside Higher Ed has an article about the recent decline of peer-reviewed papers authored by professors in top-five economics departments in high-profile economics journals. A paper by MIT economics professor Glenn Ellison, "Is Peer Review in Decline?," considers possible explanations for this decline, and the Inside Higher Ed article looks at the possible impacts of this shift.

The alternative threatening the peer reviewed journals here is the web, since scholars can post their papers directly to their websites (or blogs) rather than letting them languish with pokey referees. But I think the issues here go beyond the tug-of-war between old media and new media and bring us to the larger question of just what is involved in building new scientific* knowledge.

One concern Ellison raises is that well-established economics professors who use their websites to post their new papers are tapping into a large audience (potentially much larger than they would have just publishing in the journals in their field), and they can get their work out quickly, but in the process they are circumventing the quality-control function peer review is supposed to play. The Inside Higher Ed article quotes Ellison:

"What I think is potentially problematic is if more top authors start withdrawing from the peer-reviewed journals, then the peer-reviewed journals become a less impressive signal of the quality of the paper."

"What I worry about," Ellison said, "is you get to a point where you can't make a reputation for yourself by publishing in the peer-reviewed journals. That locks in today's elite."

Ellison says that the economists opting out of publishing in the peer-reviewed journals tend to be at the top of their profession, not those climbing the academic ladder and trying to make a name for themselves. Indeed, Ehud Kalai, a professor at Northwestern University's Kellogg School of Management and editor of Games and Economic Behavior, points out that the internet won't be putting the journal publishers out of business just yet:

"The other thing that's a bit puzzling in this whole theory, it seems to me, is that with this explosion of information on the Internet, peer review has become even more needed because there are so many more papers," Kalai said, adding that the number of economics journals has exploded in recent years. "They're just multiplying like mad. If there is a trend not to publish, why are so many starting them?"

Of course, my interest in this story has less to do with the particular dynamics at work in the tribe of academic economists and the sorts of strategies to which these dynamics give rise than with the larger issue of how a scientific community understands the process of building and communicating knowledge.

It's part of the standard picture of science that you can't say you've built knowledge about a piece of the world until your results and interpretations have withstood the scrutiny of others who are working to understand the same piece of the world. Here's how I described peer review in an earlier post:

The reviewer, a scientist with at least moderate expertise in the area of science with which the manuscript engages, is evaluating the strength of the scientific argument. Assuming you used the methods described to collect the data you present, how well-supported are your conclusions? How well do these conclusions mesh with what we know from other studies in this area? (If they don't mesh well with these other studies, do you address that and explain why?) Are the methods you describe reasonable ways to collect data relevant to the question you're trying to answer? Are there other sorts of measurements you ought to make to ensure that the data are reliable? Is your analysis of the data reasonable, or potentially misleading? What are the best possible objections to your reasoning here, and do you anticipate and address them?

While aspects of this process may include "technical editing" (and while more technical scrutiny, especially of statistical analyses, may be a very good idea), good peer reviewers are bringing more to the table. They are really evaluating the quality of the scientific arguments presented in the manuscript, and how well they fit with the existing knowledge or arguments in the relevant scientific field. They are asking the skeptical questions that good scientists try to ask of their own research before they write it up and send it out. They are trying to distinguish well-supported claims from wishful thinking.

Methodologically, peer review puts scientists into a dialogue with other scientists and presses them to be more objective. It's not enough that you're convinced of your finding -- you have to convince someone who has officially assumed the role of the guy trying to find problems with your claims.

But, as we've discussed before, there are ways in which peer review as it happens on the ground departs from the idealized version of peer review:

In many instances, the people peer reviewing your manuscripts may well be your scientific rivals. Even if peer review is supposed to be anonymous, in a small enough sub-field people start recognizing each other's experimental approaches and writing styles, making it harder to keep the evaluation of the content of a manuscript objective. And, peer reviewing of manuscripts is something working scientists do on top of their own scientific research, grant writing, teaching, supervision of students, and everything else -- and they do it without pay or any real career reward. (This is not to say it's only worth doing the stuff you get some tangible reward for doing, but it can end up pretty low in the queue.)

Some of this may explain why the typical submission-to-publication interval for economics papers is now around three years.

Is peer review an indispensable step that certifies a finding as "knowledge" before it's disseminated? Or is it a force that just slows the release of knowledge that a vibrant research community could be using as the foundation for more knowledge?

One of the things I find striking about Ellison's comments is their focus on the score-keeping aspect of the tribe of academic economists. In some ways, it sounds like these journals are primarily of value in building reputations, rather than communicating knowledge. Top professors who are drifting away from the journals are undercutting the prestige of those journals, thus hurting the prospects for an up-and-coming economist who hopes a publication in one of these journals will boost her reputation. Those who are currently the elites in the tribe are locked into their elite positions -- positions that ensure that the papers they publish on their own websites will get plenty of attention within the tribe.

I'm not denying that the score-keeping dynamic is a real feature of one's life in the academic world. However, the fact that the well-established scholars in a scientific community can get more of a hearing for their ideas based on the authority they've built up from prior work doesn't mean that the scientific community should start accepting arguments from authority. Indeed, one of the features that is supposed to make science different from other human activities is resistance to arguments from authority. (And, as we admire the long and productive careers of the established scholars in our fields, we shouldn't forget where crackpots come from.)

Another thing that's odd here is that the internet has been heralded as a democratizing force, bringing information to more people and letting more of those people enter a conversation about that information -- yet Ellison sees signs that the internet is entrenching existing hierarchies in economics. Perhaps this is inevitable when there is such an explosion of information; the only sensible plan is to get your information from a reliable source, rather than having to evaluate it all your own self. I wonder to what extent peer review may already be lulling readers, making them think, "Someone has already put this paper through the wringer, so I don't need to be so skeptical myself as I read it." Could it be that the caution scholars bring to papers published on the internet (without peer review) might better engage the skeptical faculties that, ideally, should always be running?

Finally, if it would be better for the scientific community if papers weren't endpoints, serving mostly to add another notch to your CV, but rather parts of ongoing conversations meant to advance the knowledge of the community, could there be a real advantage to quick dissemination of findings coupled with something like peer review that happens in the open, as part of the conversation?

I'm especially interested to hear what the open science folks think about these questions.
_______
* For the sake of argument, let's stipulate that economics counts as a science.

I believe the most important weeder in the traditional peer-review system is not the reviewer. It's the editor who decides which submitted papers to send through peer review. There is usually a far greater quality difference between a turned-down manuscript and one sent to review than between the original version of an accepted manuscript and the peer-reviewed version that gets published.

Also, scientific consensus on an issue forms only after an important contribution has been published, often years afterwards if it's a controversial one. Whether this contribution has been peer-reviewed or not is of little interest at that point as most published papers are simply forgotten: what matters is if the piece was interesting enough for anybody to pay attention to it.

So my answer is this: first we decide if a piece of research is interesting. Then we investigate if it's dependable. Only at the second stage does peer review do any good, providing some clues. So peer-reviewing most humdrum contributions is really a waste of everybody's time.

The journal Economic Inquiry says the largest problem is "the gradual morphing of the referees from evaluators to anonymous co-authors". The journal recently announced an experiment:

an author can submit under a 'no revisions' policy.... I will ask referees: 'is it better for Economic Inquiry to publish the paper as is, versus reject it, and why or why not?' This policy returns referees to their role of evaluator. There will still be anonymous reports.

More details available at the journal.

RfP brings up an excellent point. I have never had (nor do I know of anyone who has ever had) papers accepted as-is. If your paper is accepted, it's accepted "with revisions." This generally means that the science is sound enough that you won't have to go back to the lab; the requested changes will fall either in the data analysis, data presentation, or the writing. Reviewers frequently provide a laundry list of things they don't like. While many of these are genuine concerns that lead to a stronger and more concise presentation, in my experience, a significant fraction of the suggested revisions are simply stylistic points, or worse yet, avoidable misunderstandings arising from superficial reading.

So, rather than give the reviewer so much leeway to rewrite your manuscript, what if, as RfP mentioned, we simply ask them: would you accept as is or reject outright? Barring this either/or approach, I think the review process might benefit from increased emphasis on prioritizing the suggested edits. For example: could we get reviewers to provide the "top five" changes they would most like to see before the manuscript is in print? Such an approach should urge the reviewer to genuinely consider what is most important, while providing the authors with a more manageable amount of higher quality feedback for revision. Those are my two cents...

PLoS has an option to rate others' papers and give small commentary. I know PLoS peer reviews papers before publication, but try imagining an online community of scientists surrounding a journal without a huge amount of peer review before something gets posted.

The ability for other scientists (really, anyone who takes the time to read the paper) to instantly rate and give feedback on an article could help the review process.

An old open source "law" for programming is described on Wikipedia as follows:

Eric S. Raymond states that "given enough eyeballs, all bugs are shallow". More formally: "Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone."

Instead of a small number of peer reviewers taking a long time to make sure a paper is top quality, a large number of other scientists reading the paper could do the trick. If there's a flaw, it should be found by someone without a huge amount of hassle - if it's hard to read or composed poorly, it will get low ratings.

In a hypothetical magic world of open access and instant online feedback like this, great papers will float to the surface with high ratings, and poor papers will sink to the bottom with low ratings.

By Taylor Murphy (not verified) on 26 Jul 2007

"A paper by MIT economics professor Glenn Ellison, 'Is Peer Review in Decline?,' considers possible explanations for this decline, and the Inside Higher ed article looks at the possible impacts of this shift."

Could it be that this all arises out of the fact that economics is highly mathematized bullshit?

By PhysioProf (not verified) on 26 Jul 2007

I like the idea of open source peer review, but there's no substitute for pre-publication screening. We need some indicator that a piece is actually worth the time to give it a critical reading.

On the other hand, pre-prints, non-reviewed articles and random notes are really important in keeping up to date, but here the indicator that they are worth the time to read is usually the status of the author.

I'm not sure blurring the lines is useful - in the same way you wouldn't cite Wikipedia, you don't cite pre-prints and other non-reviewed material unless it's exceptional.

Peer review is tough work, done today for neither pay nor glory. Doing peer review is part of the expected payment for having one's own work reviewed and published. Most scientists see it as an unpleasant chore that just saps their time and energy. No wonder so many reviews are seen as superficial and nit-picky.

Glory for anonymous reviewers, no matter how important the paper and no matter how crucial the reviewer's input, is unlikely. Pay is another matter.

A solution, at least for scientific work done under contract, would be to have part of the contract payment earmarked to pay for competent independent peer review. Once such a system is in place there would be plenty of opportunity to examine ideas such as having authors and editors rate reviewers, or whether editors should allow some scientists to produce far more reviews than new science.

But the idea is simple: Review is a job that is just as important as science. If you're working for someone else, you should expect to be paid, and your employers should be willing to pay you. And the corollary is that if you're being paid, those paying you have a right to expect good work and you have an obligation to provide it.

By Walter Faxon (not verified) on 26 Jul 2007

I've worked in several settings, and the best reviews I've ever had were internal reviews at a government agency. (Best = most thorough/useful, least competitive/biased.)

In that system reviewing isn't anonymous, but it works because reviewing is part of everyone's job description. That means the reviewer gets paid for the time, and is expected to put a lot of time and effort into it. Best of all, both the reviewer's and the author's bosses read the review to assess its thoroughness and fairness. You can't get away with a scant, inattentive, or biased review precisely because everyone DOES know who you are.

I realize that's not practical in many settings. As a compromise, I like that some journals list the reviewers in the journal (which is the first time the author sees the names). Others give the reviewer the option "May we tell the author your name?" Both those options put more peer pressure on the reviewer. People know if the reviewer OKs a lousy paper or trashes a great paper, or if someone never agrees to do reviews.

By cartagena (not verified) on 26 Jul 2007

Aaron Barlow posted along related lines here in a post entitled Why Can't We Do It Backwards?

I think open-source peer review has a lot of promise, but it will probably need to go through some experimentation before it develops a form that is both useful to science and "works" within the characteristics of the web. I do think that there needs to be some fairly well-understood method of "signaling" to indicate the status of the work - so that both the author and readers/reviewers get the advantage of the early vetting of ideas and work - but with a clear indication of the state and how/if it should be cited. I am thinking here of something like the conventions used in software (alpha, beta or even major and minor releases). And in general, I think some of the lessons from the Open Source Software community would be valuable to look at; however, Open Source Software is a much "narrower" domain with a more restrictive set of "solutions".

I will go out on a limb and predict that this will become the dominant form of "peer review" within 5-10 years. As noted, this might or might not further entrench existing reputational hierarchies. Throwing things out into the Marketplace of Scarce Attention does often seem to result in "winners take nearly all" type results. (For a good discussion re: The Liberal Blogosphere see this post, Finally, The Myth of a Flat Blogosphere, at The Republic of T.) Mechanisms to persuade more senior researchers and practitioners to review the work of those less well-known or experienced would probably need to be created - maybe it becomes an activity that is a community expectation for getting tenure or further promotion/recognition. I think it will ultimately "win" because of its flexibility and potential for efficiency and speed. A downside is that it might lead to more "unfinished" work - although maybe some of it does not need or deserve "finishing". And there is some potential for the "Wikipediafication" of scientific knowledge.

Perhaps it's because IEEE publications like IEEE Computer Graphics and Applications attract more amateur authors and referees, but in my one experience refereeing a special issue on a subject I knew something about, I found most of my questions and suggestions had to do with answering the reader's question, "why should I read this article?" -- which just as often involved the author answering the question, "why did I do this research?"

Of course, they didn't necessarily tell the truth. I remember one article that I suggested a subtitle for that probably increased its readership by an order of magnitude (well, I like to tell myself); the true answer was that the author was a test engineer (a brilliant one, as it happened) and would have done this work without salary.

It seems to me that editing for the purpose of stripping the pocket protector off an article and thereby getting people to want to read it is work that can't be done by open source approaches.

As I said, my refereeing experience is very limited (the thing I mentioned and a couple of years of refereeing papers for a symposium). What do some folks who referee regularly have to say about the value, if any, they think they add to papers?

No way should reviewers get paid. They already get the benefit of seeing a paper before publication. The idea is that a person in the same field as the manuscript would read the paper if it were in print anyway, so they may as well read it pre-publication and advise the authors and editor. It's not a big deal. After all, reading, thinking, and writing are ostensibly what intellectuals and scholars are supposed to do... even in the sciences.

I don't think lazy reviewers giving superficial readings is the root of the problem. After all, if scientists won't carefully read a manuscript in their own field for review, that likely means that they don't carefully read any papers. After all, there's neither public recognition nor monetary reward for critically evaluating published manuscripts either.

The problem with peer review is that scientists are generally lazy and habitually read everything superficially. Most scientists are reluctant scholars who resent reading, thinking, and writing even about the work in their own field. When they do manage to stir themselves to evaluate their colleagues' research they expend as little mental effort as possible and turn in poor reviews.

By Herb West (not verified) on 26 Jul 2007

Some of this may explain why the typical submission-to-publication interval for economics papers is now around three years.

To be honest, if I were in a discipline where it took 3 years to get your work into a traditional journal, and I knew I could get the right people reading and engaging with it, I'd be sorely tempted to just dump my research on the web too.

...the conventions used in software (alpha, beta or even major and minor releases)...
-JP Stormcrow

But with software, the user can almost always test it her/himself to determine if it is evolved enough to build on. This is often less than immediately possible with many results "under construction" as described in scientific communications.

By Super Sally (not verified) on 27 Jul 2007

When I review papers I divide my comments into serious issues, minor quibbles and thinking points, and head them up as such. This should make it clearer to the author and the editor what I thought was important.

It should be kept in mind, though, that an author can always refuse to comply with something a referee has said if they think it is bad advice. I do this vaguely regularly with my papers and thus far it hasn't cost me a publication, although I tend to say something like:
"I have not done x as y requested, because of w; if the editor feels this is an issue I am happy to further revise the paper."

There is going to be peer review. The question is whether this will happen before publication or after. The idea that somehow we will have a certain set of experts "locked in" because they decide to circumvent the peer review process is a bit far-fetched. People may read papers by famous authors that have been published outside of the peer review process, but if said papers are rubbish, they're not going to be cited at all.

Herb West says:

The problem with peer review is that scientists are generally lazy and habitually read everything superficially. Most scientists are reluctant scholars who resent reading, thinking, and writing even about the work in their own field. When they do manage to stir themselves to evaluate their colleagues' research they expend as little mental effort as possible and turn in poor reviews.

Speak for yourself. Geez, slag an entire profession, why don't you?

As someone with a spousal relation to economics, I suspect one of the reasons for this occurring is the utter dysfunctionality of the economics publishing process. After an article is accepted for publication (sometimes a year after submission) there is a 1-2 year period before the article is actually "published." It's usually online during most of that period, but the delay is insane. If there are respected places to publish "working papers" much faster online, why bother with the process? I suspect if the rest of the process were smooth and efficient, the top economists would have no problem with peer review.

And, in defense against PhysioProf's "Could it be that this all arises out of the fact that economics is highly mathematized bullshit?":
Every field has its bull, including economics, but there's also a lot of good research. My general opinion is that much of economics could actually be considered a broad subspecialty of applied statistics.

It's interesting and revealing that no one has discussed the other functions of peer review. I guess that professional scientists like to believe that this is all for the good of science, and for "building knowledge". It's also bad PR for the scientific community to criticize peer review too much. After all, science's credibility is built upon the perception of an infallible quality-control system.

But the whole system of peer-reviewed publication is also a gate-keeping scheme for a scientific community, and a way for it to structure itself with a suitable hierarchy. You are not part of a community if you haven't published at least one paper, and to do that you need to get past the reviewers, and the editors. Once you've published enough, you may be granted reviewer status, but that is entirely up to the journal editors. If you want to achieve higher status, you have to be yourself an editor, and you can start by being associate editor. Those are all steps in a scientific community's hierarchy.

Peer-reviewed publications are a valuable commodity; they give you access to grants. So there is a "market" for them. The mechanism of peer review regulates that market. We publish not so much to "build knowledge", but mostly to build our CVs!

Furthermore, as I mentioned above, the peer-review system is also a PR tool for scientists to maintain their credibility in society. That credibility is also essential to maintain the public funding of scientists.

So if anyone wants to understand why there is so much inertia to change the anonymous peer-review system, one has to take into account those other functions as well. Open peer review, as is easily achievable with internet publishing, threatens to dissolve the power structure of the scientific communities. That is why there is so much resistance to it.

This is not to say that peer-review does not serve as some sort of quality control. But it's obviously not the best quality control one can get. What I'm saying is that it won't change easily because it serves other useful social functions, albeit tacitly, and probably unconsciously for most scientists.

By Francois Ouellette (not verified) on 28 Jul 2007

But with software, the user can almost always test it her/himself to determine if it is evolved enough to build on. This is often less than immediately possible with many results "under construction" as described in scientific communications.

I think you are right, and to some degree it indicates that in science, more than in software, you do generally ultimately resort to an "argument from authority". I am not saying it in a negative sense, just that most who use the results do not test them (at least not fully - the logic will generally be at least tested informally by "users"). But the positive characteristic of "science" is that the authority is ideally derived not just from the author, but from some manner of review by other experts and practitioners, and more importantly is challengeable by an open and "agreed to" process. (In fact this is the area where in my opinion many scientists are the least "scientific" - giving up on things in which they have a vested interest. ... I imagine science would progress much more slowly if people lived (or were intellectually active at least) for several hundred years.)

In thinking more about this, I think one of the main concerns is laymen (especially politically-motivated laymen) picking up on partial works and using them "incorrectly". (Of course that happens to final results a lot, but at least the author should be able to defend the work in that case.) Even if an adequate system of signaling the "scientific status" is worked out among practitioners - along with conventions on how to reference and "use" preliminary work - it will not necessarily be understood or adhered to by the general public and media. In some ways language can let us down here. "Y said X" where Y is a scientific authority can be "truthful" even if X is some wild hypothesis they were testing among peers, or a draft of something that is later shown (and admitted) to be wrong.

But concerns or not, I think it is going to happen, and sooner rather than later.