You may have noticed that Cognitive Daily hasn't exactly been living up to its name recently. During the summer vacation season, we travel quite a bit, so it's difficult to maintain our usual pace of posting. But along the way we've collected some great photos, and we'll try to share a few of them with you when it's relevant. For example, take a look at this picture I took in Maine about a week ago:
It's a lovely section of the Maine coast, readily identifiable by anyone as "shoreline." Earlier in the summer we vacationed with our nieces on the North Carolina coast, just a few hours' drive from where we live:
Once again, this is clearly a shoreline, but it's quite different from the Maine coast. Many studies have offered evidence that recognizing a general image category like "shoreline" uses a different mental process than recognizing a specific image -- the Maine coast photo versus the North Carolina coast photo. The most convincing evidence comes from a 1999 study by C.J. Marsolek. Marsolek showed people pictures of many different items and asked them to name them. Then the same people saw pictures in either the left or right visual field. Some of these pictures were identical to the first set, but others were in the same category (like "piano" or "dog") but a different exemplar (a grand versus an upright piano; a Pekingese versus an Irish setter). Objects in the left visual field (which corresponds to the right hemisphere of the brain) were recognized better when they were exactly the same as previously viewed photos rather than different examples of the same category. So the right hemisphere seems to be better at processing specific examples, while the left hemisphere is better at processing general categories.
So is this difference in processing general versus specific examples found only in the visual system, or does it extend to all our senses?
Julio González and Conor T. McLennan adapted Marsolek's procedure and played a variety of sounds over headphones to 24 undergraduates. First the students listened to 24 different 1- to 6-second sound clips (bagpipe, jackhammer, monkey, and so on) and had to identify each one. Then they heard 24 shortened clips, each just 0.75 seconds long. Eight were clips they had heard before, eight were different exemplars of the same sound categories, and eight were completely new sounds. Half the time the sound was played through the left ear, and half the time it came through the right ear. Once again, they were asked to identify the sounds. Here are the results:
The graph shows the accuracy on the second test, with the shortened clips. When the sounds were played through the left ear, the students were significantly more accurate at recognizing the identical sounds they had heard before than at recognizing different exemplars of the same sound category. But when the sounds were played through the right ear, there was no significant difference in accuracy between identical clips and new exemplars of previously heard categories.
When the task was made more difficult, the results were even more dramatic. This graph shows the results for a new group of students who had to identify the shortened clips while a masking sound was being played in their other ear:
González and McLennan say that both of these experiments demonstrate that the brain identifies sounds using two different processes, just like the visual system. One system identifies the general category of a sound, and the other identifies particular, specific sounds. The general system is housed in the left hemisphere (which primarily processes sounds coming through the right ear), and the specific system is housed in the right hemisphere.
González, J., & McLennan, C. T. (2009). Hemispheric differences in the recognition of environmental sounds. Psychological Science, 20(7), 887-894. PMID: 19515117
That was awesome. What more can I say?
It is an interesting viewpoint.
Years ago, I read a paper which said that males and females understand each other through different message-decoding processes.
When you look at an orchestra, the instruments on the left are mostly the violin section, where massing is used to create a large homogeneous soundfield, along with the timpani and other percussion (defined not so much by note as by timbre and effect). But the instruments on the right are characterized by several similar instruments with more nuanced differences - viola, cello and bass, and various horns. Where both general and specific sound differentiation are needed - such as with soloists - these are centered on the soundstage. Most fans of classical music would immediately recognize a stereo system with the left and right channels reversed and would probably find it disconcerting.
I wonder how often popular music has been mixed this way - with strings, synths, and rhythm guitar on the left, and lead guitar and primary instruments on the right? Vocals are almost always mixed to the center, as are the kick drum and bass, mostly because of the nature of stereo reproduction. But it would be interesting to see if there is a correlation between successful popular music and where on the soundstage the instruments are panned in the mix, and whether we subconsciously prefer songs mixed a certain way, much as we expect a classical-music soundstage to have a particular soundfield.
"Duck Bagpipe" is either a great name for a band, or some very odd, messy, and potentially dangerous musical instrument.
So, if my boss tells me to do something and she's stood to my right I get the gist (and vaguely do as I'm told) but if she's stood to my left I'll follow her instructions more accurately? ;)
When I hear music in my head (I don't know if "hearing" is the right word, because I don't hear it with my ears but it is more real than if I just "imagined" it), the music seems to exist in a space that is above and on the right side of my head.
Marcia Dream:
I believe to "hear" music in one's head is to 'audiate.'
What if the right ear is more conservative ("no sound is like the one heard before") and the left ear is more permissive ("every sound is like the one heard before")? That would produce a similar result (though perhaps the control condition shows that this is a less likely explanation).