Elections are weighing heavily on our minds. In three short months, America will see the race between Barack Obama and John McCain come to a head, while we in Britain will probably have a general election within the next few years. Some people, of course, will vote based on long-held loyalties to a specific political party, but many of us are more malleable in our choices. What affects the choices of these undecided voters?
We are given to viewing ourselves as rational beings and, as such, we'd like to think that our choices are fuelled by objective and careful deliberation. So we pay attention to media coverage, we read up on policies and we listen to debates, and only then, having gathered as much information as we can about the various options, do we make a choice. That's how it plays out in our heads, but according to a new study, the reality may be quite different.
Silvia Galdi at the University of Padova, Italy, has found evidence that the final verdicts of undecided decision-makers are only weakly related to their conscious preferences and more strongly influenced by unconscious associations and biases. In many cases, when people claim that they are undecided, they have secretly made up their minds, unbeknownst even to themselves.
For example, a British voter sitting on the fence might unconsciously be inclined to vote for David Cameron because they view Gordon Brown as dour, or might lean the other way because of a prejudice against the Tory party. Likewise, and more unfortunately, an American voter might side with John McCain because of unconscious racial prejudices against black people.
By their very nature, these unconscious associations aren't easy to find, but psychologists have a tool for doing so - implicit association tests. Volunteers are shown a series of words or images and must classify them into one of two categories by pressing assigned keys. For example, they might have to distinguish good words (happy, joy) from bad words (anger, hate) and white faces from black faces. At first, the categories are presented separately and then in various combinations. So in one trial, you might be asked to press one key for good words and black faces and the other key for bad words and white faces.
The idea is that people perform the task more quickly and more accurately if the combination of categories matches their unconscious associations. So if people have a hidden prejudice against black people, they would be quicker at trials where black faces and bad words share a key than at those where black faces are twinned with good words. If you want to see these tests in action, Harvard have a large range online and I'd highly recommend having a go yourself.
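For the curious, here's a minimal sketch in Python of how reaction times from such a test might be boiled down to a single number. The function name, the figures and the simplified scoring are all made up for illustration - real IATs use a more involved procedure, and this is not the analysis from the study discussed below - but it captures the gist: slower responses on the "mismatched" pairing count as evidence of an implicit association.

```python
import statistics

def implicit_association_score(congruent_rts, incongruent_rts):
    """Toy IAT-style score from reaction times (in milliseconds).

    congruent_rts:   correct-response times when the key pairing matches the
                     hypothesised association (e.g. white faces + good words).
    incongruent_rts: correct-response times for the reversed pairing.

    A positive score means the respondent was slower on the reversed pairing,
    which is read as a hint of an implicit association.
    """
    mean_difference = statistics.mean(incongruent_rts) - statistics.mean(congruent_rts)
    # Scale by the overall spread so that people who are simply slow or fast
    # across the board aren't mistaken for being more or less biased.
    pooled_sd = statistics.stdev(congruent_rts + incongruent_rts)
    return mean_difference / pooled_sd

# Illustrative data: roughly 100 ms slower on the mismatched pairing
congruent = [620, 580, 650, 600, 610]
incongruent = [740, 700, 760, 720, 690]
print(round(implicit_association_score(congruent, incongruent), 2))
```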
Probing the unconscious
Galdi unveiled the true influence of these hidden biases by interviewing 129 residents of Vicenza, Italy, about the proposed expansion of a US army base nearby. At the time, the expansion was a burning issue in the media and strongly polarised the opinions of local Italians. Galdi asked all the interviewees outright whether they were in favour of enlarging the base, against it or undecided. She also asked them a set of 10 questions that probed deeper into the conscious reasons behind their decisions, be they environmental, economic, social or political.
So much for their conscious beliefs. To bring out their unconscious ones, Galdi gave them a variation of the implicit association test where they had to classify pictures of the base, as well as positive and negative words, as quickly as possible.
All the interviewees returned to repeat the tests one week later. Among those who were previously undecided about the base, Galdi found that their conscious beliefs had little bearing on their later choices. Instead, it was their unconscious biases that had the greater influence; they predicted which way the interviewees' decisions would swing a week later, as well as any changes in direction in their conscious beliefs.
The tests show that the unconscious beliefs of these swing-voters were influential enough to sway their future decisions. Even though they said (and most probably believed) that they were undecided, they had to some extent already made up their minds.
People who had already made up their minds behaved differently. In their brains, unconscious associations held little sway and it was their conscious reasons that predicted their future choices. In fact, these reasons even predicted changes in their unconscious associations - their conscious beliefs were strong enough that, over time, they hardened into a sort of mental reflex.
Mind made up
So the minds of the decided resolutely stay their course, but those of the undecided are surprisingly affected in ways they are unaware of. But what then of the painstaking deliberation process that many people go through to make decisions? Is there any point to lists of pros and cons, a reading of reviews, or a careful gathering of balanced viewpoints? Based on other studies, Galdi suggests that in many cases, these acts merely serve to confirm and support decisions that have already been unconsciously made.
The idea is that your inner biases affect which bits of information you pay attention to, and they affect the way you interpret any data you do take in. Your secret preference for candidate A over candidate B (secret even to yourself) predisposes you to process new information in a way that favours A over B. In time, your unconscious favouritism becomes a conscious preference, but your Eureka moment is the result of a long-term manipulation by silent puppet-masters.
These are lessons that political pundits and pollsters might care to heed. As elections near, vast acres of forest are felled in order to print the results of opinion polls and untold amounts of glucose are burned by analysts poring over the results in a vain attempt to understand why people voted the way they did. But surely, this picture cannot be a complete one. It relies solely on the conscious reasons that people offer during interviews, reasons that we now appreciate are often elaborate fictions. And people who claim they are undecided may well be telling the truth as far as they're concerned, but actually be far from it.
At this point, many of you are probably scrolling for the comment box in protest. You might argue that the interviewees were just deliberately trying to hide their views and biases, especially if they were unpopular or taboo. Certainly, it's a valid alternative way of construing the results. But Galdi's interpretation doesn't stand alone - it sits alongside a slew of other studies, which show that we are often in the dark about our own decision-making processes and how they are influenced.
The hidden iceberg
Take the work of Petter Johansson's group on "choice blindness". In 2005, they asked a group of volunteers to choose which of two women they found most attractive based on photos. They were then shown their selected image and asked for the reasons behind their choice. But in some trials, the deft experimenters performed a sleight-of-hand that swapped the photos and presented the volunteers with the photo they had rejected.
Amazingly, 75% of people failed to notice the switch. Even more amazingly, the duped volunteers had no problems in explaining the choices they didn't actually make. "She's radiant," said one. "I like earrings," he continued. The most telling result was the fact that there weren't any differences between the volunteers' reasons for their real choices and the reasons for the choices they didn't actually make. Both types of rationale contained the same level of detail and were expressed with the same confidence and emotion. Like the undecided voters in Galdi's study, the duped men in Johansson's were also unaware of their own unawareness and they happily justified their "preferences" to themselves.
Studies like these show that conscious decision-making, as we know it, is just the tip of a psychological iceberg, with the bulk of the process operating out of view. As Timothy Wilson and Yoav Bar-Anan write in a related editorial, it means that "we are often strangers to ourselves". But why should we be so "unaware of our unawareness"?
For a start, this hidden world of unconscious processing makes for a more efficient thinking machine by relieving us of the burden of micro-managing every trivial decision. But Wilson and Bar-Anan also suggest that our mental lives already seem so rich and saturated with information that we find it hard to conceive of levels of processing that we can't see. As an example, they tell a nice story:
"One of us was recently driving on a Californian coast highway when he saw a sign indicating that a nearby beach was a haven for elephant seals. He and his wife stopped and saw five gigantic seals sunbathing on the beach, and after observing them for a few minutes, they turned to go, satisfied that they had had the prototypical elephant seal experience. It was only when they looked down the beach that they realised that htey had gone to the wrong overlook - a mere 50 yards away there were hundreds of seals sleeping, playing and snuggling.
Unfortunately, when it comes to human introspection, there is no overlook from which to see the vast contents of the adaptive unconscious. We are left with the illusion that the few "elephant seals" we can see - the feelings and thoughts that are conscious - are the entirety of our mental life."
References: Science doi:10.1126/science.1160769 and doi:10.1126/science.1163029
Comments
I think there is tremendous confusion (or maybe just plain denial) about this subject. Well-educated people are aware that they should attempt to dispassionately determine the truth, and only then let emotion help decide which outcomes they like. They then assume that that is how they really think. And even worse, they assume that most people use that method as well. The reality is that our unconscious emotional processing usually selects for us, and then we use reason to justify the results. Only a few people are reasonably successful at trying to dispassionately determine truth (or likely consequences) without their thinking being seriously affected by their (mostly hidden) biases.
This strikes me as a lot less revolutionary and paradigm-shaking than you (or the researchers) are making it out to be.
The people who had already arrived at decisions through some decision-making process (informed, no doubt, by both conscious and non-conscious factors) internalized those decisions into their mental framework such that their non-conscious reactions came to reflect their conscious choices. Isn't that just, you know, learning? Didn't we already know that our decisions and actions reshape how we see the world? Over 2000 years ago Aristotle talked about how habit (ethos) becomes character (êthos) over time with repetition of similar actions in similar circumstances, so this is hardly a new insight into the mysteries of human cognition.
Let's go back and look at the other side of the results. One could talk about the undecided people "already having made up their minds" even though they haven't made a conscious decision, but I'd say that's a very biased way of stating the result. These people already have a character and world-view, surely. And don't we generally expect people to make decisions which spring from their prior experiences, or rather from the character and inclinations that their prior experiences and choices have shaped? Even though there is a process of conscious deliberation involved in making a choice and acting (the action in this case being voting), we surely don't think that people are blank slates before they begin - or finish - any particular deliberation process. And if they aren't blank slates, that means they have inclinations towards and biases against the various options before they come to a conscious decision. And, gosh, if we find a clever way to examine inclinations and biases directly, it turns out that we can find them. So what? (FYI, Aristotle also wrote about how acting from conscious deliberation is different from - but still rooted in - reacting straight from our inclinations.)
This study isn't exposing any deep or surprising insights into flawed human decision-making. At best it's telling us more about the workings of processes we already grasped in our ordinary reflection on our own and others' behavior - which is still valuable, but not revolutionary. Perhaps what is being exposed is the falsity of some idealization of dispassionate, disinterested, "just the facts ma'am" decision-making procedure driven by pure abstract reason, but no serious investigator into human nature has ever really believed humans made decisions that way most of the time. "Reason is, and ought only to be the slave of the passions," David Hume famously wrote. Heck, even champions of ultra-abstract pure reason like Immanuel Kant recognized it as an ideal that is unrealizable in the ordinary course of human life. Even traditional economists, the biggest purveyors of "rational man" mythology as far as I can tell, have generally hedged their flawed "rational choice" perspective on human behavior by saying that it was only in aggregate that such rational decision-making could be observed, not necessarily in actual individual people making decisions.
Shattering myths that no one truly embraced in the first place isn't really all that myth-shattering. I get the feeling that the researchers in question are strongly motivated - perhaps not entirely consciously - to frame their research in this way to generate more interest in their work.
My conclusion here is that somebody should make a program that lets you put in your own items and self-administer an IAT, to be told what decision you should make... that would make decision making much easier =D
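For what it's worth, a very rough console toy along those lines can be sketched in Python. The items and categories below are invented, keyboard timing through input() is far too crude for a real IAT, and nobody should let it tell them what to decide - but it shows the shape of the idea:

```python
import random
import time

def run_block(items, left_categories, right_categories):
    """Show each (word, category) item; press 'f' for a left-hand category,
    'j' for a right-hand one. Returns the mean response time in seconds for
    correct answers, or NaN if none were correct."""
    response_times = []
    random.shuffle(items)
    for word, category in items:
        prompt = f"{word}  [f = {'/'.join(left_categories)}, j = {'/'.join(right_categories)}]: "
        start = time.perf_counter()
        answer = input(prompt).strip().lower()
        elapsed = time.perf_counter() - start
        correct = (answer == "f" and category in left_categories) or (
            answer == "j" and category in right_categories
        )
        if correct:
            response_times.append(elapsed)
    return sum(response_times) / len(response_times) if response_times else float("nan")

# Put in your own items: (stimulus, category) pairs
items = [
    ("joy", "good"),
    ("hate", "bad"),
    ("holiday in Rome", "option A"),
    ("holiday in Oslo", "option B"),
]

a_with_good = run_block(list(items), ["good", "option A"], ["bad", "option B"])
b_with_good = run_block(list(items), ["good", "option B"], ["bad", "option A"])
print(f"A-with-good block: {a_with_good:.2f}s on average; B-with-good block: {b_with_good:.2f}s")
print("If one pairing is consistently slower, that might hint at which option you already favour.")
```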