FOR most of us, the ability to navigate our environment depends largely on the sense of vision. We use visual information to note the location of landmarks, and to identify and negotiate obstacles. These visual cues also enable us to keep track of our movements, by monitoring how our position changes relative to landmarks and, when possible, to our starting point and final destination. All of this information is combined to generate a cognitive map of the surroundings, on which successful navigation of that environment later depends.
Despite the importance of vision for navigation, congenitally blind people - those born blind - can still generate neural representations of space. Exactly how is unclear, but they are thought to use a combination of touch, hearing and smell, and some are even known to use echolocation. Spatial navigation in the congenitally blind has therefore been thought to involve different brain networks from those engaged in sighted people. A team of Danish researchers now report, however, that the mechanisms underlying spatial navigation in the blind are much the same as those in sighted people, thanks to the brain's remarkable ability to reconfigure itself.
The new study, by Ron Kupers, Daniel-Robert Chebat and their colleagues, involved two separate experiments using the same navigation task. In the first, 10 congenitally blind and 10 blindfolded sighted participants spent 4 days learning route navigation and route recognition tasks using the BrainPort tongue-display unit, a sensory substitution device which translates visual images into patterns of touch sensations applied to the tongue.
The principle of sensory substitution was established in the 1960s by Paul Bach-y-Rita, who demonstrated it with a "tactile-vision" device consisting of an old dentist's chair with hundreds of vibrating solenoid stimulators incorporated into the back rest. These acted a bit like pixels, generating a tactile representation of images from a television camera, which was accurate enough to enable blind people to discriminate between objects. This is possible because of the brain's ability to re-route sensory information along novel pathways, one form of the phenomenon referred to as neuroplasticity.
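To make the "pixels" analogy concrete, here is a minimal sketch of the idea in Python. This is not the BrainPort's actual software; the grid size, number of intensity levels and function name are all assumptions made for illustration. A camera frame is downsampled to a coarse grid, with one stimulation intensity per tactor:

```python
import numpy as np

def frame_to_tactor_grid(frame, grid_shape=(20, 20), levels=8):
    """Downsample a grayscale camera frame to a coarse grid of
    stimulation intensities, one value per tactor/electrode.

    frame      : 2-D array of pixel brightness values (0-255)
    grid_shape : resolution of the stimulator array (hypothetical)
    levels     : number of distinct stimulation intensities (hypothetical)
    """
    h, w = frame.shape
    gh, gw = grid_shape
    # Trim the frame to a multiple of the grid size, then average the
    # pixels falling inside each grid cell ("pixels" become tactors).
    cells = frame[: h - h % gh, : w - w % gw]
    cells = cells.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Quantize the average brightness into a few stimulation levels.
    return np.round(cells / 255 * (levels - 1)).astype(int)

# A bright vertical bar in the camera image becomes a column of
# strongly stimulated tactors on the grid.
frame = np.zeros((200, 200))
frame[:, 90:110] = 255
print(frame_to_tactor_grid(frame))
```

In a real device, each grid value would drive the corresponding vibrator or electrode; the point is simply that a crude image survives the mapping well enough for the brain to interpret.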
In the navigation task, the participants learned to navigate their way through two virtual routes delivered to their tongues by the device, using the arrow keys on a computer keyboard. At the end of each training day, they were asked to draw each of the virtual routes, to verify that they had generated a cognitive map. In the route recognition task, the participants were automatically guided through the routes by the computer program, and then had to indicate which of the routes had been presented to them via the tongue-display unit. Overall, there was no difference in performance between the two groups - both the blind and the sighted participants successfully learned both of the routes, and their drawings became increasingly accurate with each training day.
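As a rough illustration of how such a task might be organised (this is not the authors' software; the routes, grid size and function names below are invented), a virtual route can be represented as a sequence of moves on a grid, with each step rendered as a frame for the display, and route recognition reduces to matching the presented path against the stored routes:

```python
import numpy as np

# Two hypothetical routes, each a sequence of unit moves on a grid.
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
ROUTES = {
    "route_A": ["right", "right", "down", "down", "right"],
    "route_B": ["down", "right", "up", "right", "down"],
}

def walk(route, grid_shape=(8, 8), start=(0, 0)):
    """Trace a route on the grid, yielding one binary frame per step.
    Each frame marks the current position - the 'dot' the participant
    would feel moving across the tongue display."""
    r, c = start
    for move in ROUTES[route]:
        dr, dc = MOVES[move]
        r, c = r + dr, c + dc
        frame = np.zeros(grid_shape, dtype=int)
        frame[r, c] = 1
        yield frame

def recognise(frames):
    """Route recognition: recover the path from the presented frames
    and return the name of the matching stored route, if any."""
    path = [tuple(np.argwhere(f)[0]) for f in frames]
    for name in ROUTES:
        if path == [tuple(np.argwhere(f)[0]) for f in walk(name)]:
            return name
    return None

presented = list(walk("route_B"))
print(recognise(presented))  # -> route_B
```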
When, however, they repeated the route recognition task while lying in a brain scanner, important differences in the brain activation patterns were observed. In the blind participants, route recognition produced strong activation of several distinct sub-regions of the visual cortex, and of the right parahippocampus, which is known to contain cells involved in spatial navigation. In contrast, the sighted participants exhibited no increase whatsoever in visual cortical or parahippocampal activity; instead, the task led to activation of various frontal cortical areas. In the second experiment, 10 more sighted participants were trained to perform the same route recognition task without blindfolds. They didn't use the tongue-display unit either - the virtual routes were presented to them on a computer screen instead. When they repeated the task in the brain scanner, the activation pattern observed was very similar to that seen in the blind participants in the first experiment.
Although there are numerous studies of how the brain is reorganized following sensory loss, this is one of only a small handful that use functional neuroimaging to investigate the neural basis of spatial navigation in the congenitally blind. One interesting finding is that the hippocampus itself, which is known to contain at least four cell types involved in spatial navigation, was not activated in any of the participants. This may be because the hippocampus is more important for the encoding of cognitive maps than for their subsequent retrieval: the participants spent 4 days learning the routes, by which time their cognitive maps were probably strongly encoded. There is also some evidence that the hippocampus encodes spatial information in relation to external cues, whereas the parahippocampus encodes it in relation to one's own movements. The frontal cortical activation observed in the sighted participants suggests that they may use a different navigational strategy, one that involves decision-making.
Because of the limited resolution of the tongue-display unit, the routes used in the navigation tasks were simplified versions of computerized mazes that lacked the usual environmental features, such as landmarks. It is possible, therefore, that the tasks were not demanding enough, but solving them did involve generating cognitive maps, and they were made more difficult by the fact that the sensory information was tactile rather than visual. The study therefore provides evidence that spatial navigation in the absence of vision depends upon the parahippocampus and visual cortex. The findings also suggest that cognitive maps can develop in the complete absence of visual experience, because the visual cortex is capable of processing spatial information received through non-visual senses such as touch.
Related:
- Human grid cells tile the environment
- Developmental topographagnosia
- Where do you think you are? A brain scan can tell
- Mice navigate a virtual reality environment
Kupers, R., et al. (2010). Neural correlates of virtual route recognition in congenital blindness. Proc. Natl. Acad. Sci. DOI: 10.1073/pnas.1006199107.
Bach-y-Rita, P. & Kercel, S. W. (2003). Sensory substitution and the human-machine interface. Trends Cogn. Sci. 7: 541-546.
"Exactly how is unclear, but it is thought to be by using a combination of touch, hearing and smell, and some are even known to use echolocation."
All we actually are echolocator. Try to observe how our sounds go and come from the near surfaces. You do function in that way. No matters if you are not aware.
The actual "blindness" from neuroscientists to accept echolocation as universal, may come from centuries trying to divorce themselves from their body, from their senses and sensibility. Neuroscientists seemed "buried" in the Plato cavern called: "isolated brain". But brain is in permanent and open connection within your body, your senses, and the world.
Fortunately, embodied embedded cognition is changing that, towards a sharing transsubjectivity where the observer, the subject, becomes, at different levels, the brain, the body, and the ecosystem.
Symbiodiversity research group and ISMA association have developed several technological tools to enhance all that human "hidden" powers: Copylife code, Infimonikal mathematical system, MimouX co-operating system, and Mokoputomoko server...