Jeffrey Rosen on the Neuro-law revolution

Jeffrey Rosen has an excellent piece in the NYTimes magazine about the increasing use of neurological arguments in the courts:

One important question raised by the Roper case was the question of where to draw the line in considering neuroscience evidence as a legal mitigation or excuse. Should courts be in the business of deciding when to mitigate someone's criminal responsibility because his brain functions improperly, whether because of age, in-born defects or trauma? As we learn more about criminals' brains, will we have to redefine our most basic ideas of justice?

Two of the most ardent supporters of the claim that neuroscience requires the redefinition of guilt and punishment are Joshua D. Greene, an assistant professor of psychology at Harvard, and Jonathan D. Cohen, a professor of psychology who directs the neuroscience program at Princeton. Greene got Cohen interested in the legal implications of neuroscience, and together they conducted a series of experiments exploring how people's brains react to moral dilemmas involving life and death. In particular, they wanted to test people's responses in the f.M.R.I. scanner to variations of the famous trolley problem, which philosophers have been arguing about for decades.

The trolley problem goes something like this: Imagine a train heading toward five people who are going to die if you don't do anything. If you hit a switch, the train veers onto a side track and kills another person. Most people confronted with this scenario say it's O.K. to hit the switch. By contrast, imagine that you're standing on a footbridge that spans the train tracks, and the only way you can save the five people is to push an obese man standing next to you off the footbridge so that his body stops the train. Under these circumstances, most people say it's not O.K. to kill one person to save five.

"I wondered why people have such clear intuitions," Greene told me, "and the core idea was to confront people with these two cases in the scanner and see if we got more of an emotional response in one case and reasoned response in the other." As it turns out, that's precisely what happened: Greene and Cohen found that the brain region associated with deliberate problem solving and self-control, the dorsolateral prefrontal cortex, was especially active when subjects confronted the first trolley hypothetical, in which most of them made a utilitarian judgment about how to save the greatest number of lives. By contrast, emotional centers in the brain were more active when subjects confronted the second trolley hypothetical, in which they tended to recoil at the idea of personally harming an individual, even under such wrenching circumstances. "This suggests that moral judgment is not a single thing; it's intuitive emotional responses and then cognitive responses that are duking it out," Greene said.

"To a neuroscientist, you are your brain; nothing causes your behavior other than the operations of your brain," Greene says. "If that's right, it radically changes the way we think about the law. The official line in the law is all that matters is whether you're rational, but you can have someone who is totally rational but whose strings are being pulled by something beyond his control." In other words, even someone who has the illusion of making a free and rational choice between soup and salad may be deluding himself, since the choice of salad over soup is ultimately predestined by forces hard-wired in his brain. Greene insists that this insight means that the criminal-justice system should abandon the idea of retribution -- the idea that bad people should be punished because they have freely chosen to act immorally -- which has been the focus of American criminal law since the 1970s, when rehabilitation went out of fashion. Instead, Greene says, the law should focus on deterring future harms. In some cases, he supposes, this might mean lighter punishments. "If it's really true that we don't get any prevention bang from our punishment buck when we punish that person, then it's not worth punishing that person," he says. (On the other hand, Carter Snead, the Notre Dame scholar, maintains that capital defendants who are not considered fully blameworthy under current rules could be executed more readily under a system that focused on preventing future harms.)

Read the whole thing. (It's long, but it's worth it.)

I don't really have that much to say other than I agree that the issue of culpability -- to what extent organic changes rob us of our will and hence of our culpability for crime -- is going to be one of the most important legal issues of the next century. Most of the claims I have heard made about imaging technology and brain scans are significantly overblown; we can do a lot less than these people suggest. However, that doesn't mean we won't eventually be able to do fantastic and invasive things.

In other words, even someone who has the illusion of making a free and rational choice between soup and salad may be deluding himself, since the choice of salad over soup is ultimately predestined by forces hard-wired in his brain.

If so, wouldn't our choices about retribution, rehabilitation, and deterrence be similarly predestined?

The whole "you are your brain function" thing seems to me like more of a tautology than a basis for prescription. People experience conversion and learning all the time, from simple things like thinking a the pea is under the middle shell to complex ones like deciding to devote their lives to some particular deity or system of behavior. Those changes are no doubt reflected in their subsequent brain function.

Saying that our behavior is the product of our previous experiences (some apparently chosen, some not) and our genetic endowment is nothing new. Dressing it up in fancy language about the brain doesn't really get you any closer to the underlying arguments about agency and choice.