
Understanding language influences how brains process sound

Image caption: Can’t hear you. (Credit: Nemo's great uncle)

We live in a society that is fascinated by the brain – how it works, how it differs across individuals or changes over our lifetimes and how it makes us different to other species. And over the years, several “brain myths” have taken hold as popularly held facts about the relationship between the brain, intelligence and personality.

One of these is built around the notion that the two sides of the brain – the left and right hemispheres – perform different functions. People will claim they are relatively more “right-brained” because they are creative or musical, while others say they are more “left-brained” because they are logical or mathematical. It’s a compelling idea, and one that sometimes makes it all the way into the science sections of major booksellers.

Image caption: More fun on the right. (Credit: TZA)

But although some people show more creative tendencies and some have a greater aptitude for mathematics, there is no established basis for the claim that this has anything to do with tapping into one side of the brain or the other.

Beneath this myth, however, cognitive neuroscientists – who study the relationships between brain function and aspects of cognition – do have considerable interest in measuring and describing specific hemispheric asymmetries. And my own research area, the cognitive neuroscience of speech processing, features one of the most prominent and long-standing of these debates.

A tale of two halves

In the 19th century, Paul Broca and Carl Wernicke described patients who had difficulties processing speech after a brain injury. In post-mortem examinations, these difficulties were associated with injuries to the left hemisphere of the brain: patients who showed severe difficulties producing speech had damage in the frontal lobe, while those with speech comprehension difficulties had injuries to parts of the temporal lobe. This was a very strong indication that the left hemisphere was dominant when it came to processing speech.

However, when neuroscientists first began to explore brain activity in healthy, living brains a century later, they noticed that the temporal lobes in both hemispheres responded to heard speech. Some took this to mean that the two sides of the brain play different roles in the perception of sounds: in their view, the left hemisphere was more strongly associated with processing the rapid timing information important for understanding speech, and it was this specific preference for these acoustic properties that underpinned its dominance for processing spoken language.

But in another view – which is the one I share – the left-hemisphere preference for processing speech reflects the fact that speech is a linguistic signal. The left hemisphere is dominant in processing sounds that are understood linguistically, regardless of their specific acoustic properties. And there is compelling evidence from studies showing that the left hemisphere is engaged differently depending on how the listener perceives a sound, rather than on its acoustic content.

Your mother or a horse?

Some interesting research using functional MRI (fMRI) compared the brains of native Mandarin speakers with people who had learned the language later in life. Mandarin is a tonal language that uses pitch changes on words to indicate different meanings, so different tones on the syllable “ma”, for example, can make the difference between you saying “mother” or “horse”. Using fMRI scans to measure the flow of blood around the brain, the team found that only native speakers showed a left-hemisphere sensitivity to tonal cues that were linguistically informative.

Image caption: I think you have me confused with your mother. (Credit: Mike Burns)

A more recent paper published in Brain and Language also explored how brain responses differ, this time across dialects. The researchers compared speakers of standard Japanese, which uses a type of pitch accent to distinguish between words, with speakers of a non-standard dialect that doesn’t have this cue. It’s an intriguing comparison (and an advance on the Mandarin studies above) because both groups were fully fluent native speakers of Japanese. And because standard Japanese predominates in the national media, the non-standard dialect speakers will likely have been exposed to its pitch accent cues throughout their lives.

In the study, participants’ brain responses were recorded using near infra-red spectroscopy, a technique that uses the reflection of infra-red light to detect changes in blood flow – and so activity – around the brain. By comparing responses in the left and right hemispheres, the authors showed that standard Japanese speakers had a stronger left-hemisphere response to changes in pitch accents, while non-standard dialect speakers showed no difference between left and right. But when they listened to synthetic non-speech signals that contained the same pitch accent contrast, the non-standard dialect speakers showed a greater right-hemisphere response, and the standard dialect speakers showed no hemispheric asymmetry.
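To make that left-versus-right comparison concrete, here is a minimal Python sketch, with entirely made-up numbers (this is not the study's actual pipeline), of a laterality index – a common way of quantifying hemispheric asymmetry from paired left- and right-hemisphere response amplitudes.

```python
import numpy as np

def laterality_index(left, right):
    """LI = (L - R) / (L + R): positive values mean a stronger
    left-hemisphere response, negative a stronger right one."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    return (left - right) / (left + right)

# Hypothetical response amplitudes (arbitrary units) for five listeners
left_resp = [0.82, 0.75, 0.90, 0.68, 0.79]
right_resp = [0.61, 0.70, 0.64, 0.66, 0.58]

li = laterality_index(left_resp, right_resp)
print(li.round(3))          # per-participant asymmetry
print(li.mean().round(3))   # a mean above zero suggests a left-hemisphere bias
```

A group mean index above zero would correspond to the kind of left-hemisphere bias the standard Japanese speakers showed for pitch accents.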

What does this all mean? It’s certainly very interesting that you might be able to see brain differences between two groups of people who speak native variants of the same language. It’s worth mentioning, though, that each group’s data was analysed separately rather than in a direct statistical comparison, which is what we would need in order to actually claim a difference between the groups. Still, the study does suggest different brain responses to acoustic cues that are linguistically meaningful in one dialect group but not the other.
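To illustrate that statistical point, here is a small sketch, again with invented numbers, of the kind of direct comparison that would be needed. Finding an effect that is significant in one group but not the other does not, by itself, show the groups differ; the group difference has to be tested directly, for example with a permutation test on the two groups’ laterality indices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical laterality indices for the two (invented) groups
standard = np.array([0.15, 0.22, 0.10, 0.18, 0.25, 0.12])       # standard-dialect speakers
nonstandard = np.array([0.02, -0.05, 0.04, -0.01, 0.03, 0.00])  # non-standard speakers

# Directly test the between-group difference, rather than running
# a separate significance test within each group.
observed = standard.mean() - nonstandard.mean()
pooled = np.concatenate([standard, nonstandard])
n = len(standard)

# Build a null distribution by shuffling group labels
null = []
for _ in range(10_000):
    perm = rng.permutation(pooled)
    null.append(perm[:n].mean() - perm[n:].mean())

p = np.mean(np.abs(null) >= abs(observed))
print(f"group difference = {observed:.3f}, permutation p = {p:.4f}")
```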

And it’s yet another piece of evidence suggesting that it’s whether we perceive a sound as language, not just the sound itself, that engages the left side of our brain when it comes to processing and understanding speech.
