Last week's spat between Nicholas Carr and Steven Pinker generated a lot of attention – and, happily, delivered a couple of the more lucid framings yet of the debate over whether digital culture makes us shallow, as Carr argues in his new book, or simply represents yet another sometimes-distracting element that we can learn to deal with, as Pinker countered in a Times Op-Ed last Thursday.
I sympathize with both arguments; I see Carr's point but feel he overplays it. I find digital culture immensely distracting. I regularly dive down rabbit holes on my computer, iPhone, and iPad, taking wandering, shallow paths much like those Carr describes. Yet I remember getting distracted by other things – newspapers, magazines, favorite books I'd already read, tennis matches, conversations with neighbors – as a young adult in the dark, dark pre-Internet era. So back then, instead of reading tweets and blog posts when I should have been writing my book(s), I reread some favorite passage about Eric Shipton exploring Nepal, watched Wimbledon, or phoned my sister to see how grad school was going. As Pinker notes,
distraction is not a new phenomenon. The solution is not to bemoan technology but to develop strategies of self-control, as we do with every other temptation in life.
I agree. Twitter indeed offers endless, easy, and lasting distraction; it calleth as I compose. But 20 years ago, so too called The Sporting News, the New York Review of Books, my tennis racket, my binoculars, my bicycle, and my Gibson ES-345 – along with a stack of books I hadn't read, several bookshelves full I had, and all the people I could find to talk to if I took a long enough walk. I didn't work any more steadily or deeply back then than I do now, once I get going. But now I am far less isolated socially and intellectually, even living in rinky-dink Montpelier, than I was back then living in large university towns. I don't mean to dismiss Carr's concerns altogether. But I side with Pinker and Jonah Lehrer in being skeptical that the Net is working a fundamental, singular, bad, bad voodoo on how we think.
I bring to this a bit of history: a year or 18 months ago, I had several discussions with an editor (at Wired, of all places; this was going to be a sort of anti-Wired piece) about doing a story exploring a more tightly constrained version of Carr's argument. I would flesh out the notion that consuming digital culture – even just reading words on the net rather than on the page – likely wired the brain differently than reading print did. I pitched the story because I wondered whether that was happening to me; reading on the web felt different, and perhaps it shaped brain and cognitive development accordingly.
Perhaps things have changed since then, but at the time we decided against doing the story, because in a couple of days of surveying the literature and phoning people who study reading from a neuroscientific point of view ... well, folks, I could not find anyone with data showing such rewiring. Yes, people were doing the sorts of fMRI studies that showed the brain activates a bit differently when reading the web or following links than it does when reading print; they showed, in other words, that the experience is different. But no one had data showing the sort of change I would consider "rewiring" – that is, showing that reading on the web, or immersing oneself in digital culture in general, actually created a different course of brain or cognitive development. Again, possibly things have changed since; perhaps I'd find those studies if I read Carr's book – though for what it's worth (quite a bit, in my book), Jonah Lehrer did read it and came to the same conclusion I did: the data doesn't clear the bar.
So went my own Shallows argument 12 or 18 months ago: to the round file. I started with my own feeling that the web was rewiring my brain – and failed to find data supporting my own dark suspicions.
But oh wait – I got distracted. I want to address here not so much the heart of the Pinker-Carr argument as one particular argument Carr used in his response to Pinker that I found off-key – not so much because it didn't apply (though it doesn't, for reasons we'll see), but because it leans on a false dichotomy that I think we need to lay to rest. I refer to this:
Pinker, it's important to point out, has an axe to grind here. The growing body of research on the adult brain's remarkable ability to adapt, even at the cellular level, to changing circumstances and new experiences poses a challenge to Pinker's faith in evolutionary psychology and behavioral genetics. The more adaptable the brain is, the less we're merely playing out ancient patterns of behavior imposed on us by our genetic heritage.
Wuh-oh, trouble: Carr casts a strong opposition here between inherited cognitive powers and in-play adaptability, genes and plasticity.* On a fine scale and at most proximate range, of course, he's attacking Pinker's "faith in evolutionary psychology and behavioral genetics," and perhaps that's all Carr means here – that Pinker objects because Pinker feels threatened, and Pinker feels threatened because he's wed to a false dichotomy about nature-or-nurture. Yet Carr himself seems intimately tied to the same dichotomy when he writes, "The more adaptable the brain is, the less we're merely playing out ancient patterns of behavior imposed on us by our genetic heritage." He seems to be saying not that Pinker is wrong to draw the contrast, but that he's on the wrong side of the debate.
And so Carr sets adaptability against genetic heritage. Carr has stronger arguments, and I think he needs to set this one aside. For the most vital part of the "genetic heritage" he cites is the very adaptability or plasticity he likes to emphasize. We're successful (as a species, and generally as individuals) precisely because our brains learn readily and – as Carl Zimmer points out nicely in a recent essay – both brains and genes fluidly adapt to a surprising range of environments and challenges. Adaptability exists not despite our genes but because of them.
Nick Carr is a bright guy, and I suspect that at some level, perhaps many levels, he recognizes this. Indeed, in the very next paragraph of his piece he notes that, to understand human thought,
we need to take into account both the fundamental genetic wiring of the brain – what Pinker calls its "basic information-processing capacities" – and the way our genetic makeup allows for ongoing changes in that wiring.
This clearly recognizes that genes underlie our behavioral and neural plasticity. Yet Carr's earlier language, about the brain's adaptability being incompatible with a recognition of our genetic heritage, ignores it. He seems to insist on a false split between nature and nurture.
Possibly I'm misreading him here. Possibly he misspoke. But I suspect that Carr – hardly alone in doing so – is expressing a nature-v.-nurture framework for pondering human thought and behavior that, though deeply ingrained, is being proven false by the highly fluid interplay between genes and experience that researchers are uncovering. Possibly he does so just to make a point; certainly that's the way he deploys the idea here. And goodness knows, among the attractions of the nature-or-nurture debate is that it lets you argue incessantly about a dichotomy that even your own argument betrays as false.
For what it's worth, Louis Menand, reviewing Pinker's The Blank Slate in 2002, accuses Pinker of the same muddle.
Having it both ways [that is, sometimes insisting that nature trumps nurture, and at other times citing nurture's power to override nature] is an irritating feature of "The Blank Slate." Pinker can write, in refutation of the scarecrow theory of violent behavior, "The sad fact is that despite the repeated assurances that 'we know the conditions that breed violence,' we barely have a clue," and then, a few pages later, "It is not surprising, then, that when African American teenagers are taken out of underclass neighborhoods they are no more violent or delinquent than white teenagers." Well, that should give us one clue. He sums the matter up: "With violence, as with so many other concerns, human nature is the problem, but human nature is also the solution." This is just another way of saying that it is in human nature to socialize and to be socialized, which is, pragmatically, exactly the view of the "intellectuals."
The nature-or-nurture debate exerts a strong pull. I'm tempted to say it seems to be in our genes. Yet while resolving puzzles is in our genes, the nature-nurture debate is not; it's prominent and perennially hot because each side offers a seemingly viable explanation of behavior, and, even more important, because it carries the horrid legacies of racism, the Holocaust, and 20th-century eugenics. It's as much political as it is scientific. But we're at a place where science, anyway, would let us set it aside.
*Unlike Pinker or my friend Vaughan Bell, I don't find neuroplasticity a dirty word. Though it's often used badly and sloppily, neuroplasticity, along with plain old plasticity, provides a useful shorthand for reminding us that both our brains and our behavior are more malleable and changeable than the neuroscience and psychology of a couple decades ago recognized. It also reminds us – implies, anyway – that some of us are more mentally and behaviorally plastic and capable of change than others.
I appreciate that you come down on the so-called Lehrer/Pinker side of the debate; I lean towards the other side. But here is where I agree with you (as I have pointed out elsewhere in my writing on the matter): the data that Nick Carr marshals in support of his argument is incomplete. And here is the difference between Carr's argument and that of his critics: while the data set may be incomplete, Carr actually cites the strong scientific data in his arguments!
Let's begin with Pinker. Nowhere in his influential op-ed piece in the NY Times was there any reference whatsoever to scientific studies showing that the 'ecosystem of interruption technologies' was not affecting the way we use our brains. About the best he can do is to say that the state of modern science is good and innovation is moving along briskly, and to conclude from this observation that the internet is a boon rather than a threat to our cognitive apparatus. In truth, it seems as if Pinker is treating the weight of his public persona as somehow giving him the right to determine the veracity of an issue on which he has essentially no academic bona fides.
Jonah Lehrer's book review, again in the New York Times, does cite some data, but it is only on strong ground when it comes to studies of the effects of video games (which, I hasten to note, are immersive rather than distracting, and are not the focus of Carr's book) on the brain. But the effects of multitasking are another story, and here Lehrer made a misstep: trotting out the 2009 study by Small et al. regarding fMRI images of the brains of naive and experienced internet users, and concluding from it that Google is "exercising the very mental muscles that make us smarter." I don't want to come down too hard on Gary Small's pilot study, but frankly it was not the best science going. Even Small admits this right in the summary (in case one lacks the attention span to read the entire paper), saying that "the present findings must be interpreted cautiously in light of the exploratory design of this study."
The problem with both Lehrer's and Pinker's pieces is not that they represent an opinion; that is always welcome. It is that they do a disservice to both science and their readers when they present opinion in the absence of data (Pinker) or based upon very weak science (Lehrer), and they do so in the so-called paper of record. Tsk. Tsk.
Finally, neither of them addresses the core issue: the distinction between information and deep thinking. To the best of my knowledge, this is a very difficult thing to measure, and yet it is the core of what Carr argues in The Shallows. It is perhaps time to put our minds to discerning such things, rather than tossing verbal hand grenades without the data to back them up.
Peter, pleased to hear from you. I think in a way I'm with you regarding the state of the science, even if we differ on what the obscured answer is. To boil it down: while subjective experience can make it feel as if the Internet and its interruptions are rewiring our brains and making us more shallow, there's not enough data at this point to confirm that, so we should stick with the null hypothesis. And the null is that this is not – hype about phones "that change everything" notwithstanding – something that fundamentally changes the way our brains work. Data may someday prove otherwise. But at this point, it's not there, to my eye.
That said, I think it's perfectly valuable to raise these concerns and that it's an interesting area to research; we'll almost certainly find things worth knowing, even if we don't get a clean answer to the Shallows question.
I should take this opportunity to let everyone know that the newish group blog Mr. Reiner contributes to, Neuroethics at the Core, is marvelously worth adding to your regular reads.
I'm of many minds with regard to the plethora of ideas you present here. For one, I like that there are intellectual powerhouses railing against one another in a public forum. It's enjoyable to watch as closely held theories slam up against alternative, unproven interpretations. In particular, I appreciate that someone is willing to wade into the breach of the seemingly unassailable world of neuroscience. For that alone I say, "Huzzah, Nicholas Carr."
To his credit, Carr has tried to compete using the methodology prescribed by the science he questions, by citing research sources. But he's been caught out by the scientific community for introducing – zounds! – his personal perceptions. This gives me pause, precisely because if a person's perceptions are dismissed, we find ourselves flailing in the deep waters of scientism. It seems that Carr's argument is being unfairly answered thus: you don't have enough science on your side; therefore, your argument has been rendered mere opinion. Except that the other side of the argument doesn't have enough science on its side, either.
Let's step back and ask ourselves: where would science be without personal intuition, undefinable inklings, unclassifiable creative perception? This is how new science is discovered: the mental leap without a net. Can everything really be initially quantifiable, reducible? Economics seems to be a science based on this idea, but recently even economists have begun to recognize that what we think we know about science (and ourselves) is limited by what we presume to know. Consider the exciting domains resulting from emergence, phase change, and epiphenomena.
So I don't buy the hard-science canard: if science can't measure it, then it doesn't exist or count. To me, this is the much larger and more problematic canard. When we have learned to measure what consciousness is, or when we've defined the quantitative nature of emotion, or when we've discovered every last living thing on the planet – then we can have this discussion again. Until then, we should honor the intellectual magpie within each of us: bright and curious, and casting about for the shiny bits of life to build a nest of theories.
Carla,
I think you're confusing hypotheses and discoveries here. Certainly, intuition is responsible for a great many hypotheses; no one is denying that. But scientific discoveries only result from testing those hypotheses; it doesn't matter how "perceptive" your hypothesis is if there's no data to support it.
Also, economics is a poor example of a science. Most schools of thought there start with untested axioms of human behavior and work from there, so it's no surprise that their predictions rarely match reality.
And who mentioned something even remotely like this? You're tilting at windmills.
Hello!
[neuroplasticity] a useful shorthand for reminding us that both our brains and our behavior are more malleable and changeable than the neuroscience and psychology of a couple decades ago recognized
It's probably worth bearing in mind that while the popular belief is that we used to think the brain was 'fixed', this belief is, in itself, part of the neuroplasticity hype.
There are references to extensive neuroplasticity throughout the last century of neuroscience: for example, reports of extensive re-organisation of function after hemispherectomies completed in the 1930s, and review lectures on 'Growth and Plasticity in the Nervous System' in the 1950s. Cajal studied neuroplasticity in relation to mental function, as did Broca and even Jules Cotard way back in the 1800s. Neurologists since the profession began have known that people recover functions initially lost after brain damage.
In fact, the only fundamentally new revelation about neuroplasticity to arrive in the last two decades has been the discovery that the adult brain can form new neurons – likely the least important form of plasticity, as it contributes relatively little to change and recovery in mental function in comparison to other mechanisms in the brain.
After looking, I can't find good evidence in the scientific literature that we used to think the brain was 'fixed' or 'set', and this stereotype seems to have been championed as part of the global enthusiasm for neuroscience (largely in popular writing, it has to be said – e.g. Doidge makes lots of claims about past 'assumptions' while providing virtually no evidence that they were common).
Of course, we are constantly learning more about how the brain changes but the idea that we've suddenly rejected the past belief in a 'fixed' brain is stretching the truth.
I would completely agree with your reasoning against Carr's argument by citing the distractions you experienced in the "dark pre-Internet era," but I think the main point Carr is trying to make is that the shrinking amount of time between our distractions is causing us to become shallow. You mentioned playing the guitar and phoning your sister as distractions, but these "distractions" require you to focus on one activity for a long period of time – which in turn becomes a focused endeavor. Therefore, the problem of information confusion never arises.
I think Carr was worried that because we scan through multiple items – or screens of information – so rapidly, we spend very little time focusing on our assigned task, or even on our distraction. And because we are not centering on one task, whether it be the actual task or the distraction, we are losing our ability to retain important information.
Chris, good point; I suspect I've shorted Carr a bit in just the manner you note.
Nevertheless, it's not necessarily the case that I'd be any more deeply engaged in the guitar playing or phone calls of the late '80s than I am in reading Twitter feeds now. You can argue that those activities perhaps allow deeper engagement; but that doesn't mean I'd do it, or that I did it. The point again returns to the existence, always, of distractions, and of the need to resist or punch through them to think deeply. Again: I think Carr has a point – but, as Pinker points out, it's easy to press it too far, and to forget that we've always had to get through distractions to think and work seriously.
And the Net of course has its upside. It may indeed make it harder (though hardly impossible) to set aside the time and mindspace to read and think deeply. Yet it brings so much context and allows so much intellectual engagement – it can so readily let you put your work and reading and thoughts not just into context but into a broad, current, proximate conversation – that those benefits themselves both deepen and sharpen one's reading and, if you're fierce about working seriously, one's work as well.