Congenitally blind people have been taught to perceive body shape and posture through “soundscapes” that translate images into sound, a study published today in Current Biology reports.
Vision often dominates our perception of the world. While people with visual impairments have long used other senses, such as touch, to perceive the world around them, those who are blind from birth have had limited experience of external body shapes.
With the use of sensory substitution devices (SSDs) – technologies that give one sense access to features of the world usually experienced through another – researchers Ella Striem-Amit and Amir Amedi from the Hebrew University of Jerusalem have greatly extended the possibilities of perception for the blind.
“The idea is to replace information from a missing sense by using input from a different sense,” Dr Amedi explained. “It’s just like bats and dolphins use sounds and echolocation to ‘see’ using their ears.”
“Body shape and posture convey a lot of social information regarding people’s identity, emotions and intents,” Ms Striem-Amit added. “When you see someone standing or walking you can tell a lot by their body language, and that information is entirely unavailable to the blind.”
Hearing is ‘seeing’
Researchers trained a group of congenitally blind individuals to use the visual-to-auditory sensory-substitution algorithm vOICe, which conveys shapes by topographically translating images into sound (see video below for a demonstration using facial expressions).
“Imagine for instance a diagonal line going down from left to right; if we use a descending musical scale — going on the piano from right to left – it will describe it nicely,” Ms Striem-Amit said. “And if the diagonal line is going up from left to right, then we use an ascending musical scale.”
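The mapping Striem-Amit describes can be sketched in code. The toy encoder below is an illustration of the general principle only, not the actual vOICe algorithm: it scans a small binary image column by column from left to right over time, and each bright pixel contributes a sine tone whose pitch rises with the pixel's height in the image. A diagonal line going up from left to right therefore comes out as an ascending sweep; all parameter values here are arbitrary choices for the sketch.

```python
import math

def image_to_soundscape(image, duration=1.0, sample_rate=8000,
                        f_min=200.0, f_max=2000.0):
    """Toy vOICe-style encoder (illustrative only).

    Scans the image column by column over `duration` seconds.
    Each bright pixel in a column adds a sine tone; the top row
    maps to f_max, the bottom row to f_min.
    Returns a list of audio samples in the range [-1, 1].
    """
    rows, cols = len(image), len(image[0])
    samples_per_col = int(duration * sample_rate / cols)
    samples = []
    for c in range(cols):
        # Frequencies active in this column (higher pixel = higher pitch).
        freqs = [f_min + (f_max - f_min) * (rows - 1 - r) / (rows - 1)
                 for r in range(rows) if image[r][c]]
        for n in range(samples_per_col):
            t = n / sample_rate
            s = sum(math.sin(2 * math.pi * f * t) for f in freqs)
            samples.append(s / max(len(freqs), 1))  # keep within [-1, 1]
    return samples

# A diagonal line rising from left to right: the lone bright pixel
# climbs one row per column, so the tone sweeps upward in pitch.
diagonal = [
    [0, 0, 0, 1],
    [0, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
]
soundscape = image_to_soundscape(diagonal)
```

The resulting sample list could be written out with the standard-library `wave` module to hear the sweep; the real vOICe system performs a far richer translation, encoding brightness as loudness as well.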
Participants were first taught to perceive simple dots and lines and gradually learned to distinguish more complex images.
With an average of 70 hours of training, approximately ten hours of which was devoted to the recognition of silhouettes and outlines, participants were not only capable of perceiving the presence of a human form, but were also able to recognise and imitate its exact posture.
(Video demonstration: ‘Visual’ acuity of the congenitally blind using visual-to-auditory sensory substitution: http://commons.wikimedia.org/wiki/File:%E2%80%98Visual%E2%80%99-Acuity-of-the-Congenitally-Blind-Using-Visual-to-Auditory-Sensory-Substitution-pone.0033136.s001.ogv)
The recognition of body shapes “has a great social function,” Mirko Farina from Macquarie University said, one that allows for “richer and deeper personal interactions and certainly gives proficient congenitally blind SSD users a sense of independence”.
The most difficult challenges for SSDs to overcome, though, are depth perception and tracking movement quickly in real time.
Mr Farina said the fact that congenitally blind people “can now be taught to perceive body shapes and postures (presumably in dynamical situations/environments) is an excellent preliminary step, which gives us hope that we will eventually be able to fulfil the promise of full depth perception and fast tracking”.
Awakening a blind spot
The study also found that once the participants could retrieve body shapes via “soundscapes” the part of the brain responsible for vision was activated, rather than the auditory cortex.
Researchers compared blind participants’ perception of SSD body shape information with a group of normally sighted subjects’ visual perception and found they both had activation of the extrastriate body area (EBA).
The defining feature of the EBA is its body shape selectivity function, as opposed to recognition of objects, faces or textures.
Hence, the study suggests that body shape analysis is enabled by a specific region of the brain regardless of sensory input.
This supports the view that the brain is sensory-independent and task-selective.
“We’re beginning to understand the brain is more than a pure sensory machine,” Dr Amedi said. “It is a highly flexible task machine.”
Mark Williams, from Macquarie University, said the study suggests that “areas traditionally thought to be visual areas and parts of the ‘visual’ cortex (also known as the occipital cortex) may in fact be multi-modal (involved in audition, for example).

“Of course it is always difficult to interpret data from individuals who have developed without a particular sense, as we don’t know how closely their brains really resemble a ‘normal’ brain.”
Similarly, Tamara Watson, from the University of Western Sydney, noted that the paper did not test whether participants already recruited the EBA to process body shape information experienced naturally through other senses, such as touch. “I would think that it’s not the case that this area of the brain has been sitting idle,” she said.
Nevertheless, Dr Watson said: “We shouldn’t be held back by thinking about the brain as separate modules that serve isolated purposes when we want to develop new assistive technologies.”