In my last post, I traced a debate over the evolution of language. On one side, we have Steven Pinker and his colleagues, who argue that human language is, like the eye, a complex adaptation produced over millions of years through natural selection favoring communication between hominids. On the other side, we have Noam Chomsky, Tecumseh Fitch, and Marc Hauser, who think scientists should explore some alternative ideas about language, including one hypothesis in which practically all the building blocks of human language were already in place long before our ancestors could speak, having evolved for other functions. In the current issue of Cognition, Pinker and Ray Jackendoff of Brandeis responded to Chomsky, Fitch, and Hauser with a long, detailed counterattack. They worked their way through many features of language, from words to syntax to speech, that they argued show signs of adaptation in humans specifically for language. The idea that almost all of the language faculty was already in place is, they argue, a weak one.
Chomsky, Fitch, and Hauser have something to say in reply, and their response has just been accepted by Cognition for a future issue. You can get a copy here. They argue that Pinker and Jackendoff did not understand their initial paper, created a straw man in its place, and then destroyed it with arguments that are irrelevant to what Chomsky, Fitch, and Hauser actually said.
It was exactly this sort of confusion about language that Chomsky, Fitch, and Hauser believe has dogged research on its evolution. The first step to resolving this confusion, they argue, is to categorize the components of language. They suggest that scientists should focus on two categories, which they call the Faculty of Language Broad (FLB), and the Faculty of Language Narrow (FLN). FLN includes those things that are unique and essential to human language. FLB includes those things that are essential to human language but are not unique. They might be found in other animals, for example, or in other functions of the human mind.
Chomsky, Fitch, and Hauser argue that we don't actually know yet what belongs in FLN. The only way to find out is to explore the human mind and the minds of animals. But they argue that the road to an understanding of how language evolved must start here. Simply calling all of language an adaptation is a vague and fruitless statement, and one that leaves biologists and linguists unable to work together.
In their effort to portray language as a monolithic whole utterly unique to humans, Pinker and Jackendoff offer up evidence that Chomsky, Fitch, and Hauser consider beside the point. Consider the fact that the human brain shows a different response to speech than to other sounds. Chomsky, Fitch, and Hauser argue that you can't use the circuitry of the human brain as a simple guide to the evolution of its abilities. After all, some people who suffer brain injuries can lose the ability to read while retaining the ability to write. It would be silly to say that this is evidence that natural selection has altered the human brain because reading provides some reproductive advantage. Animals, Chomsky, Fitch, and Hauser argue, are a lot better at understanding the features of speech sounds than Pinker and Jackendoff give them credit for. In fact, they claim that Pinker and Jackendoff are behind the curve, relying on research that's years out of date. Given all that's been discovered about animal minds, Chomsky, Fitch, and Hauser argue that we should assume that any feature of language can be found in some animal until someone shows that it is indeed unique to humans.
There's a lot that's fascinating in all of the papers I've described in these two posts, but I find them frustrating. Pinker and Jackendoff may have erected a straw man to attack, but I think they can to some extent be forgiven. The 2002 paper by Chomsky, Fitch, and Hauser was murky, and their new paper, which is supposed to clarify it, is a bit of a maze as well. Consider the "almost-there" hypothesis, which they offered up in their 2002 paper. It's conceivable that FLN contains only one ingredient--a process called recursion, which I describe in my first post. If that's true, the evolution of recursion may have brought modern language into existence. On the one hand, Chomsky, Fitch, and Hauser claim to be noncommittal about the almost-there hypothesis, saying that we don't yet know what FLN actually is. On the other hand, they claim there is no data that refutes it. Doesn't sound very noncommittal to me.
I'm also not sure how meaningful the categories of FLB and FLN are. Consider the case of FOXP2, a gene associated with human language. Chomsky, Fitch, and Hauser point out that other animals have the gene, and that in humans its effects are not limited to language (it's important in embryo development, too). So it belongs in FLB, because it's not unique enough to qualify for FLN.
It is true that other animals have FOXP2, but in humans, it has undergone strong natural selection and is significantly different from the versions found in other animals. And just because it acts on the human body in other ways doesn't mean that natural selection couldn't have favored its effect on human language. Chomsky, Fitch, and Hauser grant that features of language that belong to FLB may have also evolved significantly in humans. But if that's true, then deciding exactly what's FLN and what's not doesn't seem to have much to offer in the quest to understand the evolution of human language.
For now, the main effect these papers will have will probably be to guide scientists in different kinds of research on language. Some scientists will follow Pinker and Jackendoff, and try to reverse-engineer language. Others will focus instead on animals, and will probably find a lot of new surprises about what they're capable of. But until they come to a better agreement on what adaptations are, and the best way to study them, I don't think the debate will end any time soon.
Thank you. I have a knee-jerk response to go with Chomsky and Hauser and against Pinker based on the stuff they did before (and some stuff you pointed out in your two posts), but I will try to remain agnostic until I actually read all of the papers you link to.
Question: Is this debate really bimodal? Aren't there other versions? Where is Fodor on this issue? Or Terry Deacon? Cognitive ethologists?
I'm also curious about the other options. It seems to me like there isn't enough evidence to support either side's claims.
Yes, maybe it's a false dichotomy. Have you managed to get a take on Juliette Blevins's recently announced research?
http://www.mpg.de/english/illustrationsDocumentation/documentation/pressReleases/2005/pressRelease200502151/presselogin/
Overall, a balanced presentation of a case where, surely, the devil (and the truth) is in the details.
As an epistemological aside, however, a lack of refuting/disconfirming data should not incline critical minds toward commitment to anything but suspended judgment. Chomsky and friends are rationally entitled to be noncommittal about the almost-there hypothesis.
Thanks for a clear and readable summary of current thinking. It's interesting to see how the alignments change: my initial reaction to Pinker's work was "Nice, but he's so Chomskyan!" - now we see them on opposite sides of a different fence.
This quote:
On the one hand, Chomsky, Fitch, and Hauser claim to be noncommittal about the almost-there hypothesis, saying that we don't yet know what FLN actually is. On the other hand, they claim there is no data that refutes it. Doesn't sound very noncommittal to me.
Seems strange to me. Is this not the agnostic's position? (I do not know if there is a God. Also, I do not know of any proof that there is NOT a God. So, I take no position.)
-- Quentin
Thanks for summarising the concepts in such simple but effective language. I must say it was really gratifying to understand things on the first go.
Language isn't simply one thing, as language deficits clearly show. Those who have lost the ability to speak may still be able to swear or sing. Swearing in single words or short phrases most probably evolved directly from animal calls; indeed, the calls recruit words but are still mostly just calls (the words are meaningless, and the expression and context provide the only utility).
One can lose word recognition (Aphasia) and continue to communicate in words. Losing the ability to interpret the emotional content of words seems to be a bigger defect (Agnosia).
If language evolved as a block, then numerous systems and subsystems must have evolved in sync and had some selectable utility at every step.
Homo erectus had a Broca's area but insufficient lung control for speech (judging by the small spinal cord). This should be sufficient evidence for asymmetric evolution of language ability.
Having the ability to speak and to recognise words and their meaning is not sufficient for *communication* to proceed, as Pepperberg's study of the African grey parrot Alex seems to indicate.
What are we doing when we communicate? As most of our communication is small talk, we can rule out the exchange of information as an essential and ever-present component. What is a dialogue between two or more people if they do not exchange any new information? Most communication establishes and maintains the links and connections between modules of a bigger mind, just as much of the activity of neurons is to maintain linkages to adjacent and distant others.
The first and most fundamental call/sound/language usage in children merely announces 'I am here' and 'I am ready for communication'. In adults we can add 'I am ready to connect to the database, the shared knowledge, the shared memory, the shared vision, the shared experience, the shared method', etc.
The inverting of the function of the brain, from a standalone modular computer responding only to environmental cues to a node or module in a larger cognitive structure, is a singular change, but language is only one of the results. Art, music, dance and other activities, all of which can happily exist without language, also result from this ability to flip between the two modes: a cognitive structure with modules, and a module in a cognitive structure.
Kind Regards,
Robert Karl Stonjek
The differential preservation of swearing is indeed interesting. There's gotta be some reason why swearing typically consists of "repressed" material--socially disapproved references to bodily functions, inappropriate and implausible sex acts with proscribed kin, blasphemy, and the like. What do these categories of expression have in common? Why do they seemingly occupy a separate channel or pipeline? Is there something that can be learned from the "social proscription" aspect of these utterances? Why are there (at least) two separate channels or modalities for socially-approved social-interaction speech (including "small talk," which of course some people have difficulty with) and socially-disapproved speech? What could the evolutionary advantages have been for SEPARATELY promoting and conserving these different modalities?
Steve,
Swear words usually have no functional literal content in the context in which they are uttered. But swear words, regardless of the language in which they are formed, all have the common feature of being the strongest words emotionally, i.e. they evoke the strongest emotional reaction. When we swear, it is the emotional reaction that comes first, followed by a word with the nearest match for that emotional amplitude, but little else.
Words with positive connotations are also used in swearing, e.g. God, Jesus, and Heavens, which have in common with words like 'shit' only the emotional amplitude they are capable of evoking.
Swearing, in single words and stock phrases, does not require grammar, word recognition, or the sequencing of word utterances, the loss of any of which can frustrate the language ability. It is most likely that the equivalent of swearing occurred in pre-language humans, and that the swearing mechanism merely recruits appropriate words when they become available.
You might think: 'Now, judging by the angle of the nail, the size of the hammer head, and the arc of my swing, I should be able to strike the head with sufficient force by... no, that's not quite right. I must have miscalculated.' If you hit your finger with a hammer, you might say 'damn it, I missed'; hit it harder and you may yell 'damn!'; even harder and you just scream; really hard and you remain momentarily silent. These are separate layers, which evolved from the last mentioned to the first mentioned, and are lost in the opposite order. Further, after the most intense response, we tend to ascend, given sufficient time, to the most moderate form.
Kind regards,
Robert Karl Stonjek
Again, RKS, interesting stuff. As you say, emotionally expressive content can certainly include simple emotionally-positive content--like cheering on the local sports team--Yippee! Yahoo! or what have you. I agree that these expressions don't require much in the way of complicated conception, grammar, etc. Linguistically, I grant you, these expressions don't seem to be much advanced beyond grunts, roars, shrieks, and cries.
But aren't there brain aberrations (I'm probably not using the PC term--no disrespect is intended toward anyone impacted by the disorder), like Tourette's, for example, where the "positive" emotional expressions are not unleashed in anything like an equal manner? Where it's difficult not to conclude that there is some interaction between the type of emotional expressions that are preserved (or preferentially "unleashed") and some sort of social approval/disapproval register or overlay?
If there is some sort of interplay between these untoward expressions and the brain's internalization of societal expectations, then I'm not as persuaded that the emotional expression "channel" is as clearly pre-language, pre-society as you seem to be suggesting.
As I think on it further, though, I suppose the "damage" in Tourette's could be to a part of the brain that has to do with appropriate social affect, rather than anything directly to do with a particular language (or amplified-emotional-expression) "channel." So perhaps that doesn't make or break either of our points.
Still a fascinating area!
In his book The Man Who Mistook His Wife for a Hat, at the opening of chapter 10, "Witty Ticcy Ray", Oliver Sacks eloquently describes Tourette's syndrome thus:
In 1885 Gilles de la Tourette, a pupil of Charcot, described the astonishing syndrome which now bears his name. 'Tourette's syndrome', as it was immediately dubbed, is characterised by an excess of nervous energy, and a great production and extravagance of strange motions and notions: tics, jerks, mannerisms, grimaces, noises, curses, involuntary imitations and compulsions of all sorts, with an odd elfin humour and a tendency to antic and outlandish kinds of play. In its 'highest' forms, Tourette's syndrome involves every aspect of the affective, the instinctual and the imaginative life; in its 'lower', and perhaps commoner, forms, there may be little more than abnormal movements and impulsivity, though even here there is an element of strangeness. It was well recognised and extensively reported in the closing years of the last century, for these were years of a spacious neurology which did not hesitate to conjoin the organic and the psychic. It was clear to Tourette, and his peers, that this syndrome was a sort of possession by primitive impulses and urges: but also that it was a possession with an organic basis--a very definite (if undiscovered) neurological disorder.
I came at these issues from a different direction, presented in my book What is Thought? (MIT Press, 2004). Rather than start with language, I attempted to understand thought and its evolution. Turing gave compelling arguments that whatever is happening in the brain can be represented as the execution of a computer program. But execution of a computer program is pure syntax. How can it have meaning, or understanding? And how can you evolve a program capable of solving new problems never encountered before?
Research in computational learning theory over the last 20 years suggests the following proposal: if you find a compact enough program that behaves well enough in a complex environment, the only way that can happen is for the program to exploit underlying simple structure in the environment. The only way the code will be so compact yet so powerful is if it is modular, with modules corresponding to real concepts and being reused in multiple computations in different ways. Such modules, and the program as a whole, will be so constrained (in order to be so compact) that they will correctly compute how to solve new computational challenges they have not previously seen.
Now the obvious "compact program" here is the genome. The genome is quite compact, smaller than the source code for Microsoft Office when you strip out the junk. The brain is 100 million times bigger. Complexity theory tells us that learning is a hard problem, requiring vast computation. Yet we learn too fast. The only way this can happen is that evolution has already done most of the work, building into the genome inductive biases that allow us to learn automatically and fast. You can learn meaningful things from a single presentation, but it would be almost impossible for you to learn meaningless things. What meaning means is that your learning is constrained by previous modules.
So the mind is a huge modular program, built through this Occam's razor on numerous modules essentially coded into the genome. What then is language? Well, once you assign labels to modules, you can communicate programs (or, more precisely, guide a listener to construct a program). Metaphor is a manifestation of this reuse of modular code: you spend, borrow, waste, and invest time because you think about time by reusing a module for valuable resource management. This explains how children learn words so fast and effortlessly: they already have the computational modules, and all they are doing is attaching labels; the concepts are incredibly salient because of the Occam structure--they are meaningful and the program is highly constrained. It also indicates how you can put sentences together in infinite ways: the Occam procedure builds the modules to be reusable, and the whole point is that by finding such a compact structure, you generalize to all kinds of new problems.
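As a purely illustrative sketch of what "reusing a module" could look like, here is a toy Python example (the allocate function, its inputs, and its numbers are invented for illustration; none of this is taken from What is Thought?):

# Toy sketch: one "resource management" module reused for money and for time,
# the way a metaphor reuses a single computational module in a new domain.
def allocate(budget, demands):
    """Greedily spend a limited resource on the most valuable demands first.
    Works identically whether the resource is dollars or hours."""
    plan, remaining = [], budget
    for name, cost, value in sorted(demands, key=lambda d: d[2] / d[1], reverse=True):
        if cost <= remaining:
            plan.append(name)
            remaining -= cost
    return plan, remaining

# The same module handles a budget of dollars...
print(allocate(100, [("rent", 80, 100), ("movies", 30, 10), ("books", 20, 30)]))
# ...and a budget of hours ("spending" time), with no new machinery.
print(allocate(8, [("sleep in", 2, 3), ("write paper", 5, 20), ("meetings", 4, 5)]))

The point of the sketch is only that attaching the label "spend" to the same underlying routine in a new domain is cheap, which is the claimed reason metaphors and new word meanings come so easily.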
An alternate view held by some linguists seems to be that words enable thought, and that animals are incapable of thought, for example about objects not present. I think the evidence for this is weak and the evidence against it is strong; e.g., introspection denies it (plenty of mathematicians proclaim they don't think verbally, and some people have been observed who lost verbal ability temporarily or permanently through epilepsy or lesions, yet could still reason). But moreover, I don't understand how it could be possible. To have meaning, the words must summon the modules implementing the computations. But then it is the computational modules that do the actual work. It is possible that the discovery of language was made possible by the discovery of a new method of interface among modules, but after examining this question at some length in What is Thought? I concluded that the data seems explainable without postulating such, so the principle of simplicity militates against it.
This picture can also explain the divergence between human and animal cognition solely through language as a communicative medium. Recall that discovery of meaningful computational modules is a hard computational problem, requiring extensive search. Animals can, more or less, only discover new modules within a single lifetime. Humankind, through our ability to guide listeners to construct programs, has discovered over generations a more powerful programming superstructure built on top of the concepts coded in the genome. A review of many differences in cognitive abilities finds they can all be naturally explained in this way: for example, human theory of mind seems built in this way on top of subroutines already present in plovers and chimpanzees. Our more powerful theory of mind is built on discoveries made over generations and communicated to children through bedtime stories, fiction, and studying Shakespeare in school.
This does raise the question of why language took so long to evolve, if the computational structure was in place and all that was necessary was to attach labels. This could be explained if evolution was stuck in a potential well from which it couldn't readily escape. A particular proposal of such a well, by Martin Nowak and collaborators, was the discovery of digital encoding (sentences made of words, words of phonemes). Nowak et al. have argued that until you are expressing many concepts, it is fitter to adopt an analog encoding: you can't use lots of words till you discover digital encoding, and you can't discover digital encoding till you use lots of words, so evolution was stuck. What is Thought? surveys their proposal (in my context) and also makes a second one: you can't start using words till somebody else is prepared to learn them, and he can't be prepared to learn them till you are using them. It is conceivable that language was launched by a single pair of proto-human Einsteins who conceived the plan of naming modules and learning the names. Of course, once proto-humans started using language, we no doubt would have evolved to use it better. For example, What is Thought? surveys the evidence for specific grammar adaptations as a case study of inductive bias programmed into the genome, and even considers how the Baldwin Effect may have been involved.
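To see why digital encoding only pays off once many concepts are in play, here is a back-of-envelope Python illustration; the numbers are arbitrary assumptions for the sake of the sketch, not parameters from Nowak et al.'s model:

# Back-of-envelope sketch (arbitrary assumed numbers, not Nowak et al.'s model):
# a holistic ("analog") system needs one acoustically distinct call per concept,
# while a combinatorial ("digital") system builds words from a small phoneme
# inventory, so its vocabulary grows exponentially with word length.
phonemes = 10      # assumed size of the phoneme inventory
word_length = 3    # assumed number of phonemes per word

digital_vocabulary = phonemes ** word_length   # 10**3 = 1000 possible words
holistic_vocabulary = phonemes                 # roughly one reliably distinct call per concept

print(f"digital: {digital_vocabulary} possible words from {phonemes} phonemes")
print(f"holistic: about {holistic_vocabulary} reliably distinct calls")
# With only a handful of concepts to express, the holistic system is the simpler
# one to maintain; the exponential payoff of digital encoding appears only once
# many concepts are needed -- which is the potential well described above.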
Eric Baum http://www.whatisthought.com
Has Chomsky ever once had a response to criticism (in linguistics or politics) that wasn't some variant of "You didn't understand what I said"?
Perhaps it should have occurred to him by now that the only thing more muddled than his thinking is his writing.
ahem, rant off.
In other news, this isn't what an agnosia is:
"Losing the ability to interpret the emotional content of words seems to be a bigger defect (Agnosia)."
Agnosias involve deficits in object perception and recognition. I'm not sure there's a concise word for deficits in emotion recognition.
Sorry, I missed the 'Tonal' from 'Tonal Agnosia'.
'Tonal Agnosia' is exactly what I said it is - the loss of recognition of the emotional content of speech. Although the expression refers to the loss of recognition of the tonal variations in speech, which is probably what first came to the attention of medical experts, people with Tonal Agnosia are also unable to deduce the emotional content of written work.
Robert
I am currently reading "The First Idea" by Stanley I. Greenspan and Stuart G. Shanker. Their proposed schema for the evolutionary development of concepts and language goes through the stages of learning in succession the abilities "to attend, interact with others, engage in emotional and social signaling, construct complex patterns, organize information symbolically, and use symbols to think."
This is a popular book rather than an academic work and I am not a professional in these matters. Can any of you experts out there tell me if there is any value in their ideas?
Ah, thanks for the clarification Robert.
I'd never heard of tonal agnosia before. Interesting stuff.