The monologues in our minds could one day be converted into language, according to researchers who have succeeded in decoding electrical activity in the area of the brain that recognises sounds.
The development could allow scientists to hear the imagined speech of people who have suffered strokes or paralysis and cannot talk, say the researchers at the University of California, Berkeley.
Their work is published today in the journal PLoS Biology.
By analyzing the pattern of activity in an area of the brain called the superior temporal gyrus, the team was able to create spectrogram representations that they could then use to reconstruct words that patients listened to in conversation.
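A spectrogram is simply a picture of how much energy a sound carries in each frequency band over time. The sketch below is not the authors' code; it is a minimal, hedged illustration of the kind of representation involved, computing a spectrogram of a synthetic tone (standing in for a spoken word) with a short-time Fourier transform:

```python
import numpy as np

fs = 16000                             # sample rate in Hz (assumed for illustration)
t = np.arange(0, 1.0, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t)    # a synthetic 440 Hz tone in place of real speech

def spectrogram(x, n_fft=256, hop=128):
    """Magnitude short-time Fourier transform: frequency content over time."""
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq bins, time frames)

spec = spectrogram(audio)
# In the study, a model predicted a representation like `spec` from electrode
# recordings, and a sound was then resynthesized from that prediction.
```

In the actual experiments, the direction was reversed: rather than computing the spectrogram from audio, the models estimated it from electrical recordings, which is what made reconstruction of the heard words possible.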
“This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig’s disease and can’t speak,” said one of the authors, Robert Knight, Professor of Psychology and Neuroscience at Berkeley. “If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit.”
But the chief author of the paper, Brian Pasley, a post-doctoral researcher, cautioned that the research was based on sounds that patients actually heard, not sounds they imagined in their heads. For the research to enable people to control a speech prosthetic, the same principles would have to hold for someone who is merely imagining speech, he said.
But he added that there was evidence that perception and imagery were “pretty similar in the brain. If you can understand the relationship well enough between the brain recordings and sound, you could either synthesize the actual sound a person is thinking, or just write out the words with a type of interface device.”
Patients who took part in the research heard a single word, and Dr Pasley then used two computational models to predict the word based on electrode recordings from their brains.
The more accurate model was able to reproduce a sound close enough to the original word that the researchers could “correctly guess the word better than chance”.
Professor Knight said that the “computational model can reproduce the sound the patient heard and you can actually recognize the word, although not at a perfect level.”
The ability of the research team to guess the word in more cases than not represented a “fascinating and very significant” breakthrough, said Professor Geoffrey Donnan, director of Florey Neuroscience Institutes in Melbourne.
The research was important “because it focused not just on the primary auditory cortex, where messages go directly to the brain, but on the association areas of the brain where those messages are interpreted”.
“We’ve always been able to detect messages going in large conduits from the sensory systems to the brain - you can hear all sorts of electrical activity going down those routes - but this is the first time that we’ve really tapped directly into the decoding centre of the brain itself.”