Journosplaining 101

Over at National Geographic's other blog network, Ed Yong offers a guide for scientists talking to journalists. Like everything Ed writes about scientists and journalists, this was immediately re-tweeted by 5000 people calling it a must-read. Also like nearly everything Ed writes about scientists and journalists, some of it kind of rubbed me the wrong way.

Given our respective areas of interest, there's approximately zero chance that Ed will ever contact me to ask my opinion of a paper, but I want to push back on a few of these, anyway. Because, in the end, scientists aren't responding in what Ed considers a sub-optimal manner because we're clueless idiots; there are good reasons why we do the things we do, just as there are good reasons why he does the things he does.

My main gripe is with the list of things Ed doesn't find useful, particularly:

1) A summary of what the paper showed. Around half of comments start with this. I don’t need it. I already know what the paper showed, or will have talked to someone else who explained it.

This bugs me, because I'm a person who will start off any comments I make with a capsule summary of the paper. Not because I think journalists are clueless-- quite the contrary. If I thought the person who contacted me for a comment was an idiot, I would pretend not to have received the message. Life is too short to talk to idiots when I don't have to.

I start comments on a research paper with a summary, because I think that's useful information. For that matter, I start off my written reviews of NSF proposals with a short summary, and those are going to people who I know damn well can and will read the proposals themselves. But I think it's important, because what I see as the crucial elements of a particular proposal or paper may not be the same as what somebody else sees as the crucial elements. And that can lead to confusion unless both parties in the conversation know exactly what the other thinks they're talking about. The best way to avoid that confusion is to tell the other person what you're talking about, even if that takes a few extra seconds of their time to skim.

And if there is a difference between my summary and the summary from whoever else explained it, that's useful information. It tells you that either there's some disagreement about the real importance of a paper, or at worst that one of the people you've talked to is wrong, and their comments should be discounted or at least de-emphasized.

I also find listing summaries as "not useful" kind of annoying when paired with these items from the "useful" list:

4) The past. The paper will probably have a paragraph that crushes decades of earlier work. You will know all of that; I won’t have had time to read it. So tell me: How does this new discovery fit with what has come before? Is this based on a radical new approach? A long slog? Something that people in the field have been anticipating? Is it just reinventing the wheel?

5) The present. Have other people found similar things? Contradictory things? Is this one of many such studies, or something truly original? If this is, say, a new approach to fighting malaria, how does it compare to all the other approaches people are investigating?

So, I need to explain the past and present context of the work -- background the journalist won't have had time to read -- without summarizing what the paper under discussion did. That's more or less impossible.

Indeed, most of the summary information I would provide would be to answer exactly these points. When I say "The authors of this paper did A, B, and C," it's usually in order to make a contrast between this paper and that one, whose authors did X, Y, and Z.

I'm also a little dubious about the push for really strongly worded comments, especially when combined with "I’m not here to present people with the totality of your views, so what you say will almost certainly end up getting cut and distilled." Yes, I understand that you're not going to deliberately misrepresent anything, but knowing that anything I send will be "cut and distilled" makes it seem more important, not less, that I include qualifications and equivocations, so as to reduce the possibility of accidental misrepresentation.

And there's also the fact that most research isn't all that superlative, even the stuff that gets picked up by journalists. The vast majority of research papers are... interesting. There's very little utter crap published, and there are very few world-changing breakthroughs. Basic statistics, if nothing else, ought to tell you that much. If you're getting wishy-washy quotes from scientists, full of "boilerplate adjectives," it's probably because their actual opinion of the work in question is kind of equivocal. Particularly if they're not the ones who did it.

There's only so much you can inflate your comments about a fairly cool but not utterly amazing new paper for media consumption. At some point, it becomes deceptive. If you actually want the real opinions of experts on new research that's being published, you need to understand that a lot of the time, their real opinion is kind of boring. If you only want strong adjectives, call the PI's university press office.

Ultimately, though, what rubs me the wrong way about this is a sense that the ways scientists talk to journalists are wasting the journalists' time, which they would otherwise be using to do Important Journalism. Which bugs me because each party in one of these conversations is doing the other a favor by having the conversation at all. Yes, journalists are helping to boost the profile of scientists and science in general, but they're also taking up time that the scientists could be using to do Important Science. And really, the concrete benefit of being quoted in the newspaper is pretty minimal for a scientist, almost certainly not worth the time it takes to provide the material for the quote. And the loss to a journalist of having to skim the occasional summary paragraph or complaint about citations seems pretty minimal, especially since they're getting these comments for nothing.

Now, this is not to say that I'm totally against everything Ed has to say-- a bunch of the advice is good, though I'm a little boggled that some of it is necessary. And on those occasions when I've been contacted by journalists looking for comment, I give my opinion as clearly and forthrightly as I can. And I'll keep a few of these points in mind in the future, particularly the idea of being more specific about what future research is needed.

But as usual in these sorts of discussions, I think there needs to be a little more recognition that both sides have reasons for doing what they do the way they do. Far too many online discussions of scientists and journalists overemphasize the things that scientists "need" to do differently to accommodate journalists, with little reciprocity. This is just another lecture about what one side needs to do, when what we really need is a more mutual dialogue.

When I read this:

"C) I genuinely want to know what you make of the paper. I am not just trying to fill my story with a random cutaway quote to make it look like I did my job and asked around.
D) I’m not here to present people with the totality of your views, so what you say will almost certainly end up getting cut and distilled. BUT, I won’t do that in a way that misquotes or misrepresents you. If you say, “I’m fascinated by this approach but I think it has serious flaws”, I won’t cut that to “I’m fascinated”. I’m a journalist; I’m not making a movie poster."

I almost fell off my chair. I have had quite a few interactions with science journalists over the past couple of years, some of whom are quite well respected. As in the Oppenheimer quote I posted on G+ yesterday (https://plus.google.com/106829223435837926328/posts/CThaPowXtNw), I consider it my duty to explain the science as accurately as possible in these interactions, and that is going to mean a lot of qualifications and equivocations. That means that my answer is necessarily going to get paraphrased, which I understand. However, every single time a journalist has done that, they have badly mangled the meaning of what I have said by taking quotes out of context. I also notice, by the way, that my quotes are often bumped in favour of those by people who are notorious for oversimplifying and playing into the hype machine.

Now, it is possible that I am just very bad at explaining things, but it often seems to me like I am being misrepresented, either deliberately because the journalist wants to tell a simpler story or accidentally because they failed to understand what I said. This is especially true when their interview with me has involved a very long email exchange or a lengthy phone conversation. It is of course possible that the foundations of quantum mechanics are just too mindbendingly weird for a nonspecialist to ever understand anything, but I don't really believe that.

Through these interactions, I have come to the conclusion that unless you can sum up what you want to say accurately in at most two sentences and do the journalist's job for them by giving them only those two sentences, it is not worth the bother. This means that I vehemently disagree with Ed Yong's advice to give a lot of detail. I am now terrified of doing that, and I will treat this as an adversarial game until my experience suggests that I should do otherwise.

By Matt Leifer (not verified) on 22 May 2013 #permalink

I suspect that one of the effects at play here is that it's substantially easier to do life-science journalism than fundamental physics journalism. When you're writing about medical treatments, cognitive science, or unusual animal genitalia, there's a lot less background required for both the reader and the writer. It's easier to make people understand why they should care about some new finding involving animals or people than it is to get across why quantum foundations is interesting. That frees up some words for extra details or qualifications, so the scientists are less likely to feel their words have been mangled beyond recognition.

I had exactly the same thought. I totally believe that science communication is important and necessary, etc. I talk to most journalists if I get requests (unless they seem to be affiliated with somewhat fishy websites, etc.). But it does take up quite some time, and the payoff for me is tiny, if there is any. So can we please have a list from the scientists' side?

My first point would be: Tell me what you know already. Because that will save me a lot of time while I'm trying to figure out at what level of technical detail I'm supposed to talk to you. Background knowledge varies greatly among journalists. My next point would be: Tell me what you want from me and what you want it for. Because it's happened to me numerous times that I spent an hour or longer explaining the purpose of some study, just to then see that the people who were quoted instead were the well-known guys in the field, while I was the idiot helping to get the facts straight. I can't say this increases my enthusiasm for engaging in such interactions. And I'm really tired of hearing journalists blaming their editors for anything and everything.

"I won’t have had time to read it."

What? He's a journalist but he won't have had time to read what he considers important information about what he's writing? Not even one little paragraph? Maybe he ought to be in a different line of work.

And, besides, it's not the interviewee's job to know what to tell the reporter, it's the reporter's job to ask the right questions. The interviewee can always add information if the reporter fails to know enough to ask the right question.

More of the usual confusion. A whole lot of other people appear to think that scientists are lousy communicators, and indeed, a whole lot of scientists agree, and there are workshops, meetings, and even, shudder, blogs devoted to self-improvement, or not. This goes into the file under missing the point.

It's not that scientists are or are not lousy communicators (say that and Eli will lock you in a room with Richard Alley, for example), but that journalists are lousy communicators. It's their effing (emphasis added) job, and they are screwing it up to a fare-thee-well. But, of course, everyone knows that it's the scientists' fault. That, or Obama.

By Eli Rabett (not verified) on 25 May 2013 #permalink