How robust are languages learned in childhood but disused later in life? A new study by researchers at McGill University and the University of Montreal has found that the forgotten birth language of adoptees can apparently leave its traces in the brain, many years after the adoption has taken place.
International adoption is on the rise in Western countries. Children who were born and initially raised in one linguistic environment are transplanted to a family and community that uses a new language. Typically, the transition to this new language is made astonishingly quickly: the first language is usually forgotten within months, sometimes despite the adoptive parents’ best efforts to provide opportunities for the child to retain the birth language. Surprisingly, this pattern is seen not only in infants but also in children who were as old as nine at the time of adoption.
What adults remember
As adults, adoptees are often unable to recognise even the most basic vocabulary of the language to which they were exposed in childhood. In one study, young adults in France who were born in Korea and adopted between the ages of three and nine were played sentences in several different languages; they were unable to determine which of the utterances were in Korean, as opposed to Japanese, Polish and other languages. Brain scans of the same Korean adoptees revealed that their neural activity while listening to their birth language did not differ from that of French speakers who had never been exposed to Korean – as it would have if the early exposure had left a neurological “imprint”.
The idea that the birth language – the language you once used and understood, the language your biological parents spoke, the language that you may associate with your heritage – has vanished entirely from your brain is an intriguing thought, and a troubling one for many adoptees.
Can we not at least assume that the birth language will be easier for an adoptee to re-learn than it is for others to acquire it from scratch? This is what is known as the “Savings Paradigm” – the assumption that even information that is apparently forgotten is still there somewhere, and only needs re-activating.
The findings so far are not encouraging: while studies have shown that adoptees may have a minor advantage in learning to discriminate and produce some of the speech sounds of their birth language (and in some cases no advantage at all), early language exposure does not appear to lead to better performance on grammar or vocabulary.
Sounds stick out
Children learn the phonology of their birth language at a very early age: within months the infant’s perceptual system becomes attuned to those speech sounds that can make a meaningful difference as opposed to those that do not. For example, in English the difference between “l” and “r” is phonemic, meaning that the words “road” and “load” mean different things, and English infants will quickly come to discriminate these sounds.
In some other languages, there is no such distinction between “l” and “r”, and children growing up exposed to such a language have lower sensitivity to this contrast. Languages such as Chinese, furthermore, use the intonation contour of the syllable to make meaning, so that the word “ma” can have different meanings depending on whether it carries an intonation contour that is high-level (“mother”), high-rising (“hemp”), low-dipping (“horse”) or high-falling (“scold”).
This is the contrast tested in the recent Canadian study. Chinese adoptees in Montreal, who were adopted around ten years ago, before they were two years old, and who now speak only French, were compared both with monolingual Francophone Canadian children and with French-Chinese bilingual children. Unlike in the study of the Korean adoptees in France, these children did not listen to actual (meaningful) speech, but to nonsense syllables pronounced with different intonation contours, as well as to humming with the same intonation pattern.
Only an ear for intonation?
Under these conditions, even though no actual linguistic processing was taking place, brain scans revealed that the adopted children seemed to recruit the same brain regions to process the intonational patterns as the Chinese-French bilinguals did – areas typically associated with the processing of tonal languages. The monolingual French children, meanwhile, showed an entirely different pattern of neurological activation.
This suggests that the early adoptees retain the knowledge that intonation patterns can be used to distinguish lexical meanings. For the monolingual French children, this type of information is more typically associated with utterance-level information, such as distinguishing the statement “John is here” from the rising-intonation question “John is here?”, and they consequently recruit different brain areas for processing it.
Does the fact that the early exposure to Chinese has left such a demonstrable trace in the brain mean that these children will find it easier to re-learn this phonological contrast, with which learners with no childhood exposure to Chinese typically struggle?
A study conducted at the Max Planck Institute for Psycholinguistics in Nijmegen in the Netherlands suggests that it may not be so straightforward. While Chinese adoptees in the Netherlands are better than monolingual Dutch children at producing these tones, they have no advantage in perceiving, and thus discriminating, them – not even after being trained on the task. Notably, the speakers in this study had been separated from their birth language for only about half as long as the participants in the Canadian study.
Taken together, these studies suggest that a birth language may indeed leave a persistent trace in the brain, but that the advantage for speakers who were exposed to a language at an early age is probably limited to relatively narrow phonological features and does not include the wider areas of lexicon or grammar.