tag:theconversation.com,2011:/uk/topics/speech-2192/articlesSpeech – The Conversation2024-03-13T12:28:30Ztag:theconversation.com,2011:article/2248122024-03-13T12:28:30Z2024-03-13T12:28:30ZSlowed speech may indicate cognitive decline more accurately than forgetting words<figure><img src="https://images.theconversation.com/files/580833/original/file-20240310-30-y6u8ju.jpg?ixlib=rb-1.1.0&rect=21%2C0%2C4723%2C3165&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/image-senior-man-thinking-set-question-271407230">tomertu/Shutterstock</a></span></figcaption></figure><p>Can you pass me the whatchamacallit? It’s right over there next to the thingamajig. </p>
<p>Many of us will experience “lethologica”, or difficulty finding words, in everyday life. And it usually becomes more prominent with age. </p>
<p>Frequent difficulty finding the right word can signal changes in the brain <a href="https://www.geriatric.theclinics.com/article/S0749-0690(22)00058-1/abstract">consistent</a> with the early (“preclinical”) stages of Alzheimer’s disease – before more obvious symptoms emerge. However, a <a href="https://doi.org/10.1080/13825585.2024.2315774">recent study</a> from the University of Toronto suggests that it’s the speed of speech, rather than the difficulty in finding words, that is the more accurate indicator of brain health in older adults.</p>
<p>The researchers asked 125 healthy adults, aged 18 to 90, to describe a scene in detail. Recordings of these descriptions were subsequently analysed by artificial intelligence (AI) software to extract features such as speed of talking, duration of pauses between words, and the variety of words used. </p>
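The kinds of features described above – speaking rate, pauses between words and lexical variety – can be illustrated with a short Python sketch. The timestamped-transcript format and the function below are illustrative assumptions for exposition, not the study’s actual analysis pipeline.

```python
# Illustrative sketch (not the study's software): deriving speech features
# from a word-level timestamped transcript, assumed to be a list of
# (word, start_seconds, end_seconds) tuples in spoken order.

def speech_features(words):
    """Return speaking rate, mean pause length and lexical variety."""
    n = len(words)
    # Total speaking time: from the onset of the first word to the
    # offset of the last word.
    total_time = words[-1][2] - words[0][1]
    words_per_min = n / total_time * 60
    # Pauses: silent gaps between the end of one word and the start
    # of the next.
    pauses = [b[1] - a[2] for a, b in zip(words, words[1:]) if b[1] > a[2]]
    mean_pause = sum(pauses) / len(pauses) if pauses else 0.0
    # Lexical variety as a simple type-token ratio (unique words / total).
    ttr = len({w.lower() for w, _, _ in words}) / n
    return {
        "words_per_min": words_per_min,
        "mean_pause_sec": mean_pause,
        "type_token_ratio": ttr,
    }

# A tiny hypothetical scene description with timestamps.
sample = [("the", 0.0, 0.2), ("boy", 0.3, 0.6), ("reaches", 1.1, 1.6),
          ("for", 1.7, 1.9), ("the", 2.0, 2.1), ("cookie", 2.2, 2.8)]
print(speech_features(sample))
```

In practice, the timestamps themselves would come from automatic speech recognition, and the real feature set is far richer, but the principle is the same: quantify how fast and how fluently someone talks, not just which words they choose.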
<p>Participants also completed a standard set of tests that measure concentration, thinking speed, and the ability to plan and carry out tasks. Age-related decline in these “executive” abilities was closely linked to the pace of a person’s everyday speech, suggesting a broader decline than just difficulty in finding the right word. </p>
<p>A novel aspect of this study was the use of a “picture-word interference task”, cleverly designed to separate the two steps of naming an object: finding the right word and instructing the mouth to say it aloud. </p>
<p>During this task, participants were shown pictures of everyday objects (such as a broom) while being played an audio clip of a word that is either related in meaning (such as “mop” – which makes it harder to think of the picture’s name) or which sounds similar (such as “groom” – which can make it easier). </p>
<p>Interestingly, the study found that the natural speech speed of older adults was related to their quickness in naming pictures. This highlights that a general slowdown in processing might underlie broader cognitive and linguistic changes with age, rather than a specific challenge in memory retrieval for words.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/wfLP8fFrOp0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Alzheimer’s explained.</span></figcaption>
</figure>
<h2>How to make the findings more powerful</h2>
<p>While the findings from this study are interesting, finding words in response to picture-based cues may not reflect the complexity of vocabulary in unconstrained everyday conversation. </p>
<p>Verbal fluency tasks, which require participants to generate as many words as possible from a given category (for example, animals or fruits) or starting with a specific letter within a time limit, may be used with picture-naming to better capture the “tip-of-the-tongue” phenomenon. </p>
<p>The tip-of-the-tongue phenomenon refers to the temporary inability to retrieve a word from memory, despite partial recall and the feeling that the word is known. These tasks are considered a better test of everyday conversations than the picture-word interference task because they involve the active retrieval and production of words from one’s vocabulary, similar to the processes involved in natural speech.</p>
<p>While verbal fluency performance does not significantly decline with normal ageing (as shown in a <a href="https://doi.org/10.1186/s13643-022-02018-y">2022 study</a>), poor performance on these tasks can indicate neurodegenerative diseases such as Alzheimer’s. </p>
<p>The tests are useful because they account for the typical changes in word retrieval ability as people get older, allowing doctors to identify impairments beyond what is expected from normal ageing and potentially detect neurodegenerative conditions.</p>
<p>The verbal fluency test engages various brain regions involved in language, memory, and executive functioning, and hence can offer insights into which regions of the brain are affected by cognitive decline.</p>
<p>The authors of the University of Toronto study could have investigated participants’ subjective experiences of word-finding difficulties alongside objective measures like speech pauses. This would provide a more comprehensive understanding of the cognitive processes involved. </p>
<p>Personal reports of the “feeling” of struggling to retrieve words could offer valuable insights complementing the behavioural data, potentially leading to more powerful tools for quantifying and detecting early cognitive decline.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/daily-fibre-supplement-improves-older-adults-brain-function-in-just-three-months-new-study-224885">Daily fibre supplement improves older adults’ brain function in just three months – new study</a>
</strong>
</em>
</p>
<hr>
<h2>Opening doors</h2>
<p>Nevertheless, this study has opened exciting doors for future research, showing that it’s not just what we say but how fast we say it that can reveal cognitive changes. </p>
<p>By harnessing natural language processing technologies (a type of AI), which use computational techniques to analyse and understand human language data, this work advances previous studies that noticed subtle changes in the spoken and written language of public figures like <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6922000/">Ronald Reagan</a> and <a href="https://pubmed.ncbi.nlm.nih.gov/15574466/">Iris Murdoch</a> in the years before their dementia diagnoses. </p>
<p>While those opportunistic reports were based on looking back after a dementia diagnosis, this study provides a more systematic, data-driven and forward-looking approach.</p>
<p>Rapid advances in natural language processing will allow for the automatic detection of language changes, such as a slowed speech rate. </p>
<p>This study underscores the potential of speech rate changes as a significant yet subtle marker of cognitive health that could aid in identifying people at risk before more severe symptoms become apparent.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/could-many-dementia-cases-actually-be-liver-disease-222779">Could many dementia cases actually be liver disease?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/224812/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Claire Lancaster receives funding from the Economic and Social Research Council and Sussex Partnership NHS Foundation Trust to investigate speech-based markers of neurodegenerative disease. </span></em></p><p class="fine-print"><em><span>Alice Stanton receives funding from the Economic and Social Research Council and Sussex Partnership NHS Foundation Trust to investigate speech-based markers of neurodegenerative disease.</span></em></p>A new study suggests that talking speed is a more important indicator of brain health than difficulty finding words.Claire Lancaster, Lecturer, Dementia, University of SussexAlice Stanton, PhD Candidate, Dementia, University of SussexLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2206192024-01-31T12:24:33Z2024-01-31T12:24:33ZWhat inner speech is, and why philosophy is waking up to it<figure><img src="https://images.theconversation.com/files/571609/original/file-20240126-25-ei5wf3.jpg?ixlib=rb-1.1.0&rect=0%2C15%2C2556%2C1582&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/es/image-vector/set-bubble-speech-white-empty-space-1535722826">Hunia Studio/Shutterstock</a></span></figcaption></figure><p>It is quite rare for philosophers to start investigating a new area, and a lot of the questions they explore have been around since ancient times. However, there is something they have only begun to look at closely in the last 15 years or so, which sits at the intersection of psychology and philosophy: inner speech.</p>
<p>Also known as the internal monologue, inner speech is the voice we hear in our minds when thinking or reading. Surprisingly, empirical research has found that <a href="https://escholarship.org/uc/item/93p4r8td">not everyone has this inner voice</a>, though the majority of us do. </p>
<p>Science and psychology have given it plenty of attention. We have known for over a century that the inner voice – especially when reading text – is accompanied by <a href="https://www.jstor.org/stable/1412271?origin=crossref">tiny movements of the larynx</a>, showing a clear link between “internal” and “external” speech.</p>
<p>Philosophers have occasionally thought about inner speech before. The well-known behaviourist <a href="https://www.britannica.com/biography/Gilbert-Ryle">Gilbert Ryle</a> saw it as playing a key role in what philosophers call “self-knowledge”. We learn about others by listening to what they say, and in his seminal 1949 book, <a href="https://www.google.es/books/edition/The_Concept_of_Mind/FHJ4AgAAQBAJ?hl=en&gbpv=0">The Concept of Mind</a>, Ryle suggested that we are able to do the same to ourselves by “eavesdropping” on our own inner speech. </p>
<p>The phenomenon has made an appearance in other philosophical contexts, but it has not, until recently, been a topic of sustained attention in the field. Philosophers are now realising that psychology can only explain it up to a point: there are certain aspects of inner speech that can only be addressed by distinctively theoretical thinking.</p>
<h2>Psychology vs philosophy</h2>
<p>Inner speech has received a lot more attention from psychologists than philosophers over the years. Soviet psychologist <a href="https://www.britannica.com/biography/L-S-Vygotsky">Lev Vygotsky</a> was a very influential figure on the subject. </p>
<p>Vygotsky noted – as we have all undoubtedly seen – that children of a certain age often speak to themselves aloud, but that they gradually stop as they grow older. He suggested that inner speech develops as this practice fades. According to Vygotsky, inner speech is simply external speech that has been internalised.</p>
<p>Many philosophers agree, but some see the phenomenon differently, as there are not, as far as we know, any other activities that we can perform both internally and externally. Some philosophers have thought that inner speech might not actually be speech but a mental representation of it.</p>
<p><a href="https://www.amacad.org/person/ray-s-jackendoff">Ray Jackendoff</a>, for example, has suggested that we are imagining what speech sounds like when we produce inner speech, but doing so in a way which imitates how we would express ourselves if we were speaking aloud. We are not actually speaking, but simulating speech.</p>
<p>This is purely theoretical reasoning, but it does not aim to challenge or disprove psychological approaches. On the contrary, it enriches empirical research by adding a valuable new perspective.</p>
<h2>Talking to ourselves?</h2>
<p>One question we can answer, at least partly, is why we produce inner speech, even though no one else can hear it. There are a number of benefits.</p>
<p>Putting our thoughts into words can help to clarify them and make them more precise. Sometimes we can only work out what we really think by saying it aloud. We often speak to others – or perhaps write our ideas down – to try to solve a problem or deal with emotions. Producing inner speech helps us to develop our own thoughts in a similar way.</p>
<p>There may be other benefits too. Making an existing thought or belief conscious by expressing it internally can help to advance a process of reasoning, even on everyday matters. “If I’m home by 6:30, I can cook dinner by 7:30,” you might say in inner speech. But this prompts the further thought, “Oh, but the game starts at 7. I’d better get takeaway instead.”</p>
<p>These answers, however, still leave a question open: are we actually talking to ourselves in the same way we talk to others? Or are we just talking?</p>
<h2>Controlling the voice in your head</h2>
<p>Another area with room for philosophical thinking is the question of whether producing inner speech is an action, or something that just happens.</p>
<p>When we physically speak aloud, it is typically an action: we can choose to do it, or not do it. The same cannot be said for inner speech, which is often unprompted, or even intrusive and undesired.</p>
<p>It can actually be hard to silence our internal monologue, and doing it at will is all but impossible. See for yourself, right now: concentrate on trying to think of nothing and stop producing inner speech. You will probably, paradoxically, find yourself producing more, and further efforts will only make it harder. Conditions such as <a href="https://pubmed.ncbi.nlm.nih.gov/7870507/">stress</a>, <a href="https://www.psychologytoday.com/intl/blog/depression-management-techniques/201604/rumination-a-problem-in-anxiety-and-depression">anxiety or depression</a> also have proven psychological links to inner speech.</p>
<p>We can decide to produce a particular piece of inner speech – to “say” a word in our minds – but it often seems to happen without us doing anything at all.</p>
<h2>What is an action?</h2>
<p>In my <a href="https://dialnet.unirioja.es/servlet/articulo?codigo=7672716">research</a>, I have argued that producing inner speech is almost never an action, though the question of what makes something an action is itself a topic of philosophical debate.</p>
<p><a href="https://www.jstor.org/stable/2024676">One prominent theory</a> holds that actions are things that we can try to do, or that require effort. Producing inner speech often requires no effort, and as we have seen, we even struggle to stop it. This seems to indicate that it isn’t something we try to do, but that it just “happens”. </p>
<p>Other theories of action yield a similar result: inner speech almost never fits the definition.</p>
<p>A huge amount of philosophical work has been done on the subject of conscious experience in general. However, philosophers have not always paid attention to specific mental phenomena. Inner speech is a unique kind of conscious experience, which seems to involve a typically external activity – speaking – taking place in the mind. Investigating it will undoubtedly lead us down fascinating paths in years to come.</p><img src="https://counter.theconversation.com/content/220619/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Daniel Gregory's María Zambrano Postdoctoral Fellowship is funded by the European Commission's Next Generation EU package, via the Spanish Ministry of Universities. He is also a member of the Inner Speech in Action: New Perspectives research project, which receives funding from the Spanish State Research Agency and the Spanish Ministry of Science and Innovation (grant number PID2020-115052GA-Ioo).</span></em></p>We are constantly talking to ourselves, but our internal monologues have received surprisingly little attention from philosophers, until now.Daniel Gregory, María Zambrano Postdoctoral Fellow, Universitat de BarcelonaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1977512024-01-11T07:09:39Z2024-01-11T07:09:39ZWhy AI software ‘softening’ accents is problematic<p><a href="https://www.voanews.com/a/accent-masking-software-aims-to-smooth-call-center-interactions/7252799.html">“Why isn’t it a beautiful thing?”</a> a puzzled Sharath Keshava Narayana asked of his AI device masking accents.</p>
<p>Produced by his company, Sanas, the recent technology seeks to “soften” the accents of call centre workers in real-time to allegedly shield them from bias and discrimination. It has sparked widespread interest both in the <a href="https://abc7news.com/sanas-voice-technology-silicon-valley-startup-accent-remover-translator/12162646/">English-speaking</a> and <a href="https://www.ouest-france.fr/leditiondusoir/2022-09-02/ce-logiciel-qui-gomme-les-accents-dans-la-voix-des-teleoperateurs-fait-polemique-voici-pourquoi-933e7c7f-96eb-498e-b444-f4753f9019f5#:%7E:text=La%20start%2Dup%20am%C3%A9rican%20Sanas,new%20technology%20surrect%20the%20controversy.">French-speaking world</a> since it was launched in September 2022. </p>
<p>Not everyone is convinced of the software’s anti-racist credentials, however. Critics contend it plunges us into a <a href="https://hal.archives-ouvertes.fr/hal-03831544">contemporary dystopia</a> where technology is used to erase individuals’ differences, identity markers and cultures. </p>
<p>To understand their concerns, we could do worse than to review what constitutes an accent in the first place. How can accents be suppressed? And in what ways does ironing them out bend far more than sound waves? </p>
<h2>How artificial intelligence can silence an accent</h2>
<p>“Accents” can be defined, among other ways, as a set of oral cues (vowels, consonants, intonation, etc.) that contribute to the more or less conscious formation of hypotheses about a speaker’s identity (geographical or social, for example). An accent may be described as regional or foreign depending on the frame of reference. </p>
<p>With start-up technologies typically akin to black boxes, we have little information about the tools deployed by Sanas to standardise our way of speaking. However, we know most methods aim to at least partially transform the structure of the sound wave in order to bring certain acoustic cues closer <a href="https://www.cairn.info/la-phonetique--9782130653356-page-58.htm">to a perceptual criterion</a>. The technology tweaks vowels and consonants, along with parameters such as rhythm, intonation and stress. At the same time, it seeks to preserve as many vocal cues as possible so that the original speaker’s voice remains recognisable, as with <a href="https://ircamamplify.com/realisations/cloning-vocal-pour-thierry-ardisson/"><em>voice cloning</em></a>, a process that can also result in <a href="https://www.20minutes.fr/high-tech/2831107-20200729-le-deepfake-audio-la-nouvelle-arnaque-tendance-des-hackers"><em>vocal deepfake</em></a> scams. These technologies make it possible to dissociate what is speech-related from what is voice-related.</p>
<p>The automatic, real-time processing of speech poses technological difficulties, the main one being the quality of the sound signal to be processed. Software developers have managed to overcome them by drawing on <a href="https://www.science-et-vie.com/definitions-science/deep-learning-69467.html"><em>deep learning</em></a>, <a href="https://www.rts.ch/info/sciences-tech/12796888-supprimer-les-accents-dune-voix-peut-la-rendre-plus-comprehensible.html">neural networks</a> and <a href="https://www.cairn.info/revue-francaise-de-linguistique-appliquee-2007-1-page-71.htm">large databases of speech audio files</a>, which make it possible to better manage uncertainties in the signal.</p>
<p>In the case of foreign languages, Sylvain Detey, Lionel Fontan and Thomas Pellegrini identify <a href="http://www.atala.org/sites/default/files/article-tap-didactique_21092017.pdf">some of the issues inherent in the development of these technologies</a>, including that of which standard to use for comparison, or the role that speech audio files can have in determining them. </p>
<h2>The myth of the neutral accent</h2>
<p>But accent identification is not limited to acoustics alone. Donald L. Rubin has shown that listeners can <a href="https://www.jstor.org/stable/40196047">recreate the impression of a perceived accent</a> simply by associating faces of supposedly different origins with speech. In fact, absent these other cues, listeners are <a href="http://glottopol.univ-rouen.fr/telecharger/numero_31/gpl31_03avanzi_boulademareuil.pdf">not so good at recognising accents</a> that they do not regularly hear or that they might stereotypically picture, such as German, which many associate with <a href="https://www.youtube.com/watch?v=-_xUIDRxdmc">“aggressive” consonants</a>.</p>
<p>The wishful desire to iron out accents to combat prejudice raises the question of what a “neutral” accent is. Rosina Lippi-Green points out that <a href="https://www.taylorfrancis.com/books/mono/10.4324/9780203348802/english-accent-rosina-lippi-green">the ideology of the standard language</a> – the idea that there is an unmarked way of expressing oneself – holds sway over much of society but has no basis in fact. <a href="https://luminosoa.org/site/chapters/e/10.1525/luminos.148.c/">Vijay Ramjattan</a> further links recent colossal efforts to develop accent “reduction” and “suppression” tools to the neoliberal model, under which people are valued for the skills and attributes assigned to them. Contemporary capitalism perceives language as a skill, and the “wrong accent” is therefore said to lead to reduced opportunities. </p>
<p>Intelligibility thus becomes a pretext for blaming individuals for their lack of skills in tasks requiring oral communication, according to <a href="https://journals.sagepub.com/doi/full/10.1177/0261927X19884619">Janin Roessel</a>. Rather than forcing individuals with “an accent” to reduce it, researchers such as <a href="https://www.jbe-platform.com/content/journals/10.1075/jslp.20038.mun">Munro and Derwing</a> have shown that it is possible to train listeners to adapt their aural abilities to phonological variation. What’s more, it is not up to individuals to change, but up to public policies to better protect those who are discriminated against on the basis of their accent: a phenomenon known as <a href="https://accentism.org/">accentism</a>.</p>
<h2>Delete or keep, the chicken or the egg?</h2>
<p>In the field of sociology, Wayne Brekhus calls on us to pay specific attention to the invisible, weighing up what isn’t marked as much as what is, the “lack of accent” as well as its reverse. This leads us to reconsider the power relations that exist between individuals and the way in which we homogenise the marked: the one who has (according to others) an accent. </p>
<p>So we are led to Catherine Pascal’s question of <a href="https://www-cairn-info.bases-doc.univ-lorraine.fr/revue-management-des-technologies-organisationnelles-2019-1-page-221.htm">how emerging technologies can hone our roles as “citizens” rather than “machines”</a>. To “remove an accent” is to value a dominant type of “accent” while neglecting the fact that other co-factors will participate in the perception of this accent as well as the emergence of discrimination. “Removing the accent” does not remove discrimination. On the contrary, the accent gives voice to identity, thus participating in the phenomena of humanisation, group membership and even empathy: the accent is a channel for otherness.</p>
<p>If technologies such as AI and <em>deep learning</em> offer us untapped possibilities, they can also lead to a dystopia where dehumanisation overshadows priorities such as the common good or diversity, as spelt out in the <a href="https://www.unesco.org/fr/legal-affairs/unesco-universal-declaration-cultural-diversity">UNESCO Universal Declaration on Cultural Diversity</a>. Rather than hiding accents, it seems necessary to make recruiters aware of how accents can contribute to customer satisfaction, and for politicians to take up this issue.</p>
<p>Research projects such as <a href="https://prosophon.atilf.fr/">PROSOPHON at the University of Lorraine (France)</a>, which bring together researchers in applied linguistics and work psychology, aim to make recruiters more aware of their responsibilities in terms of <a href="https://journals.sagepub.com/doi/full/10.1177/0261927X19884619">bias awareness</a>, but also to empower job applicants “with an accent”. By asking the question “Why isn’t this a beautiful thing?”, companies like Sanas remind us why technologies built on internalised oppression don’t make people happy at work.</p>
<p class="fine-print"><em><span>Grégory Miras does not work for, consult for, own shares in or receive funding from any organisation that would benefit from this article, and has declared no affiliation other than his research institution.</span></em></p>While AI now allows us to erase accents, is this really a good idea? Besides, who doesn’t have an accent?Grégory Miras, Professor of Language Didactics, Université de LorraineLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2165112023-10-31T20:28:46Z2023-10-31T20:28:46ZHow to improve your communication with someone with a speech impairment<figure><img src="https://images.theconversation.com/files/556922/original/file-20231031-15-tb5wy6.jpg?ixlib=rb-1.1.0&rect=0%2C38%2C6459%2C4254&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A young girl learning how to use a speech-generating device.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><iframe style="width: 100%; height: 100px; border: none; position: relative; z-index: 1;" allowtransparency="" allow="clipboard-read; clipboard-write" src="https://narrations.ad-auris.com/widget/the-conversation-canada/how-to-improve-your-communication-with-someone-with-a-speech-impairment" width="100%" height="400"></iframe>
<p>October marked <a href="https://isaac-online.org/english/what-is-aac/">alternative and augmentative communication (AAC)</a> awareness month. AAC includes all means of communication that a person may use <a href="https://www.asha.org/public/speech/disorders/aac/">besides talking</a>. Low-tech methods include means of interaction like hand gestures, facial movements, or pointing, while more high-tech tools might include a speech generating device accessed through pointing or a joystick, eye-tracking, or even a brain-computer interface.</p>
<p>British physicist <a href="https://theconversation.com/the-technology-that-gave-stephen-hawking-a-voice-should-be-accessible-to-all-who-need-it-93418">Stephen Hawking</a> was long the most famous person associated with AAC, using an advanced computer system to generate sentences and speech. American actor <a href="https://news.northeastern.edu/2022/06/07/a-i-clones-val-kilmers-voice-in-top-gun/">Val Kilmer</a> is another well-known person who has used AAC. Kilmer suffered irreparable damage to his voice due to throat cancer. However, in the latest installment of the <em>Top Gun</em> film franchise, artificial intelligence was used to “clone” the actor’s voice.</p>
<p>In 2006, 1.9 per cent of the Canadian population <a href="https://www150.statcan.gc.ca/n1/pub/89-628-x/89-628-x2007002-eng.pdf">self-identified as having a speech disability</a>. Unfortunately, this was the last time Statistics Canada included speech disability in the Canadian census, which makes it difficult to gather more recent data on the number of people in Canada with impaired speech. </p>
<h2>Need for more acceptance</h2>
<p>Speech impairments can occur at a young age with disabilities such as cerebral palsy or autism spectrum disorder, but can also manifest later in life as a result of progressive disorders such as motor neuron disease, throat cancer, muscular dystrophy or strokes. </p>
<p>Increased acceptance of the use of AAC technologies in general society <a href="https://www.queensu.ca/aac-caa/summary-outcomes">can enhance the quality of life for people with speech impairment</a> by increasing autonomy, leading to more positive social interactions, better engagement in education and confidence in employment.</p>
<p>The <a href="https://www.canada.ca/en/employment-social-development/programs/accessible-canada.html">Accessible Canada Act</a> recognizes communication as a priority area, while the <a href="https://www.ohchr.org/en/instruments-mechanisms/instruments/convention-rights-persons-disabilities">United Nations Convention on the Rights of Persons with Disabilities</a> promotes the rights of autonomy, safety and social participation, and recognizes communication as a human right. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/tpAGll5PDlY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Alternative and augmentative communication includes all means of communication that a person may use besides talking.</span></figcaption>
</figure>
<h2>Tackling stigma</h2>
<p>However, even if people have access to AAC technology, they can still face stigma and exclusion. Here are some things we can all do to be more inclusive of people with impaired speech: </p>
<p><strong>Start with basic respect.</strong> Understand that cognition and lack of verbal speech are not correlated. Many people with speech impairments have no cognitive deficits at all and are just as intelligent as anyone else. They want others to be more patient and understanding of speech disabilities. In social situations, they might often be underestimated and treated as children even though they are capable and competent. Show them respect, even though they may sound different when they talk. </p>
<p>Pre-programmed sentences on a tablet or speech generating device do not suggest that the person is incapable of developing those ideas. They may have spent 20 minutes typing out those messages in an attempt to meet the fast-paced environment in which we all live.</p>
<p><strong>Take time to listen.</strong> Individuals with speech impairment may need to type out phrases one letter at a time. Some may use a smartphone or iPad with a texting app, while others use an eye-tracking device or brain-computer interface to select letters using an on-screen keyboard. Be patient and wait for the person to speak.</p>
<p><a href="https://www.queensu.ca/aac-caa/sites/ascwww/files/uploaded_files/Final%20Report%20for%20project%20017325523-478738%20Recommendations%20and%20Guidelines%20for%20AAC.pdf">As one occupational therapist noted</a>, “[A problem] I often find some of my clients run into is not being given enough time to get their message written down. They’re composing it and the communication partner might not realize they need to give them a little extra time.” A conversation may require you to pause, ask a question and wait for an answer. Stop, think, be patient and understanding.</p>
<p>In addition, it’s important to realize that the use of some AAC technologies can be tiring. To use systems that rely on eye movements, for example, an individual must focus and is unable to use other means of communication such as emotional expression at the same time. Recognize that shorter conversations may be better. Perhaps try communicating by email or text. Let the person respond in their own time.</p>
<p><strong>Be an advocate.</strong> People with speech impairments too often must advocate for themselves. If you are planning a conference or hiring for a position, ask what accommodations might be beneficial rather than relying on the individual to request them. Provide advance notice of conversation topics or questions. Engage people with speech impairments in social events. If you see someone passing judgment, speak up.</p>
<p>Technology is improving, and maybe one day people with impaired speech will be able to communicate with the same ease as those without. But until then, being a friend to people with speech impairments means being patient and listening.</p>
<p class="fine-print"><em><span>Claire Davies is a member of the Canadian Accessibility Network and collaborates with the International Society of Augmentative and Alternative Communication. She receives funding from the Government of Canada’s Accessibility Standards Canada, the Social Sciences and Humanities Research Council and Natural Sciences and Engineering Research Council.
</span></em></p>Increased acceptance of the use of alternative and augmentative communication technologies in general society can enhance the quality of life for people with speech impairment.Claire Davies, Associate Professor of Mechanical and Materials Engineering, Building and Designing Assistive Technology Lab, Queen's University, OntarioLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2106142023-08-23T12:26:45Z2023-08-23T12:26:45ZWhy somepeopletalkveryfast and others … take … their … time − despite stereotypes, it has nothing to do with intelligence<figure><img src="https://images.theconversation.com/files/543736/original/file-20230821-17-z0kidy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">French, Spanish and Japanese are spoken faster than German, Vietnamese and Mandarin, with English somewhere in the middle.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/middle-age-woman-holding-stopwatch-isolated-royalty-free-image/1143706747">Aaron Amat/iStock/Getty Images Plus</a></span></figcaption></figure><p>Pop culture abounds with examples of very fast talkers. There’s the <a href="https://www.youtube.com/watch?v=Kem32fM24VU">Judy Grimes character</a> played by Kristen Wiig on “Saturday Night Live,” or <a href="https://www.youtube.com/watch?v=GVNoWwzIiV0">that guy from the 1980s</a> who did commercials for <a href="https://www.youtube.com/watch?v=zLP6oT3uqV8">Micro Machines</a> and <a href="https://www.youtube.com/watch?v=NeK5ZjtpO-M">FedEx</a>. Of course, there are also extremely slow talkers, like the <a href="https://www.youtube.com/watch?v=HHKwnUa3txo">sloth in “Zootopia”</a> and the <a href="https://www.youtube.com/watch?v=6ntzFuFgXlM">cartoon basset hound Droopy</a>.</p>
<p>Real-life fast talkers are staples in some professions. <a href="https://www.youtube.com/watch?v=Ea7gn8hhEFA">Auctioneers</a> and <a href="https://www.youtube.com/watch?v=qyqLCS2LAbg">sportscasters</a> are known for their rapid delivery, though the <a href="https://www.youtube.com/watch?v=wz9F07Mwaa8">slower commentary in golf</a> shows there is a range for different sports.</p>
<p>As <a href="https://scholar.google.com/citations?hl=en&user=0Ip912sAAAAJ&view_op=list_works&sortby=pubdate">professors of English</a> who <a href="https://scholar.google.com/citations?hl=en&user=7wvy13gAAAAJ&view_op=list_works&sortby=pubdate">study linguistic variations</a>, we know that how fast a person speaks is a complicated phenomenon. It depends on a range of factors, including the types of words used, the language spoken, regional differences, social variables and professional needs.</p>
<h2>Different countries, different speeds</h2>
<p><a href="http://www.doi.org/10.1017/S0954394509990093">Speech rate</a> refers to the speed at which a speaker verbalizes “connected discourse” – essentially anything more than a sentence. It is measured by counting segments of sound and the pauses in a specific time frame. Typically, these segments are counted as syllables. Remember clapping syllables in elementary school? SYL-LA-BLES.</p>
<p>Linguists have discovered that humans vary their speech rate within sentences across all languages. For example, most people <a href="https://doi.org/10.1073/pnas.1800708115">slow their speech down before saying nouns</a>. Researchers have also found that <a href="https://www.jstor.org/stable/23011654">languages have different speech rates</a> when speakers read aloud. French, Spanish and Japanese were shown to have high average speech rates – with close to eight syllables spoken per second. German, Vietnamese and Mandarin exhibited slower rates – with about five syllables per second. English was in the middle, with an average rate of 6.19 syllables per second. </p>
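The rates quoted above come from a simple measurement: count the syllables in a stretch of connected speech and divide by the speaking time. A minimal Python sketch of that calculation (the syllable counts and durations here are invented for illustration, not data from the studies cited):

```python
# Hypothetical utterances: (syllable_count, duration_in_seconds).
# Figures like ~6.19 syllables/second for English are averages of
# exactly this kind of measurement across many speakers.
utterances = [
    (62, 10.0),
    (31, 5.0),
    (45, 7.5),
]

def speech_rate(syllables, seconds):
    """Speech rate in syllables per second."""
    return syllables / seconds

total_syllables = sum(s for s, _ in utterances)
total_seconds = sum(t for _, t in utterances)
average_rate = total_syllables / total_seconds

print(round(average_rate, 2))  # 6.13 for these made-up numbers
```

By this yardstick, the made-up speaker above would sit near the reported English average, slower than French or Japanese and faster than German or Mandarin.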
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/NeK5ZjtpO-M?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Actor John Moschitta Jr. used his motormouth in FedEx and Micro Machines ads in the 1980s.</span></figcaption>
</figure>
<p>There is also global variation within the dialects of a language. In English, for example, one study found that <a href="https://pure.psu.edu/en/publications/speech-rates-of-new-zealand-english-and-american-english-speaking">New Zealanders spoke the fastest</a>, followed by British English speakers, then Americans and finally Australians. </p>
<h2>Stereotypes don’t hold up</h2>
<p>Many people have expectations and assumptions about different speech rates within English dialects. For example, there’s the <a href="https://www.taylorfrancis.com/chapters/mono/10.4324/9781351033824-4/southern-planets-paulina-bounds-jennifer-cramer-susan-tamasi">often-observed “drawl” of those living in the U.S. South</a>. The term drawl denotes a slower, drawn-out speaking pace. And, indeed, some research supports this perception. One study found that participants in western North Carolina <a href="https://www.doi.org/10.1017/S0954394509990093">spoke more slowly</a> than participants in Wisconsin. </p>
<p>Other research has demonstrated that some Southerners may speak more slowly only in certain contexts – for example, they may pause more often <a href="https://doi.org/10.1016/0167-6393(94)90039-6">when reading aloud</a>. And <a href="https://doi.org/10.1057/9781137291448">certain elongated vowels</a> in southern American dialects can also slow down the speech rate. This can be heard in the pronunciation of “nice” as something like “nahhce.”</p>
<p>Some people assume that all Southerners are slow talkers who exhibit these features. This is perhaps due, at least in part, to the perpetuation of stereotypes and caricatures in popular media, such as <a href="https://www.youtube.com/watch?v=c7qhVJIPfck">Cletus, the stereotyped hillbilly</a> from “The Simpsons.”</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/c7qhVJIPfck?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Cletus is a slow-talking and stereotyped hillbilly from “The Simpsons.”</span></figcaption>
</figure>
<p>But it’s important to recognize that language also varies within regions, including the U.S. South. For example, a study involving North Carolinians found that speakers in western and central North Carolina <a href="https://doi.org/10.1057/9781137291448_5">spoke more slowly than those in the state’s eastern and southern parts</a>. And some North Carolinians spoke about as fast as Ohioans – suggesting the stereotype of the slow-talking Southerner doesn’t always hold up.</p>
<h2>Age, gender and other variables</h2>
<p>Sex and gender may also influence speech rates, although results have been conflicting here, too. Some research shows that <a href="https://www.doi.org/10.1017/S0954394509990093">men speak faster than women</a>, while <a href="https://www.doi.org/10.1016/j.wocn.2011.02.006">other studies</a> find <a href="https://doi.org/10.1080/08824099009359851">no significant difference</a> in speech rate between genders. </p>
<p>The demographic variable that seems to have the <a href="https://doi.org/10.1057/9781137291448">most significant and consistent impact</a> is age. We speak slowly when we are children, speed up in adolescence and speak our fastest in our 40s. Then we slow down again as we reach our <a href="https://www.doi.org/10.1121/1.3459842">50s and 60s</a>.</p>
<p>While geography, gender and age may affect speech rates in certain cases, context plays a role as well. For example, <a href="https://www.academia.edu/80176903/Drone_Prosodics_as_Tradeoff_for_Working_Memory_Resources_Evidence_from_Play_by_Play_Sports_Commentary">certain professions use oral formulaic traditions</a>, meaning there’s a framework script when performing those jobs. An average person can speak about as <a href="https://doi.org/10.1017/S0047404500010381">fast as an auctioneer</a> – 5.3 syllables per second – when saying something they’ve said many times before.</p>
<p>However, auctioneers use certain patterns of speech that make it seem like they speak incredibly quickly. They pause infrequently and repeat the same words often. They also use unfamiliar phrasings and rhythms, which force listeners to keep processing what was said long after the auctioneer has moved on to the next topic. And auctioneers have a constant rate of articulation – meaning they rarely stop talking.</p>
<p>While recognizing differences in speech rates can help people to better understand linguistic, cultural and professional identities, it also has technological and other applications. Think of how <a href="https://ijece.iaescore.com/index.php/IJECE/article/view/17795">computer scientists</a> must program Alexa and Siri to both produce and recognize speech at different rates. Speaking more slowly can also <a href="https://doi.org/10.2307/3587015">improve listening comprehension</a> for beginner and intermediate language learners.</p>
<p>Perhaps the most valuable takeaway when considering speech rate variation is the fact that linguistic perceptions don’t always match up with reality. This is a perspective <a href="https://www.routledge.com/Teaching-Language-Variation-in-the-Classroom-Strategies-and-Models-from/Devereaux-Palmer/p/book/9781138597952#">we often emphasize</a> in our own work because linguistic stereotypes can lead to assumptions <a href="https://www.routledge.com/Teaching-English-Language-Variation-in-the-Global-Classroom-Models-and/Devereaux-Palmer/p/book/9780367630256">about a person’s background</a>. </p>
<p>Recent studies of <a href="https://doi.org/10.4324/9781351033824">perceptions of U.S. dialects</a> confirm that, despite variation in speech rates within regions, people persist in labeling large regions of the South as “slow” and the North and Midwest as “fast.” Moreover, these evaluations are also typically associated with <a href="https://doi.org/10.1057/9781137291448_2">negative stereotypes</a>. Slow talkers are often assumed to be less intelligent or competent than fast talkers, while very fast talkers can be seen as less truthful or kindhearted. </p>
<p>There is no inherent connection between the rate of speech and levels of intelligence, truthfulness or kindness. Language use differs for all sorts of reasons, and differences are not deficiencies.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Language, geography, age and other factors can all affect how fast a person talks. But sometimes, these perceived differences are only in the listener’s head.Michelle Devereaux, Professor of English and English Education, Kennesaw State UniversityChris C. Palmer, Professor of English, Kennesaw State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2049722023-08-21T12:27:19Z2023-08-21T12:27:19ZPresidential pauses? What those ‘ums’ and ‘uhs’ really tell us about candidates for the White House<figure><img src="https://images.theconversation.com/files/542833/original/file-20230815-19-dgdqtx.jpg?ixlib=rb-1.1.0&rect=0%2C4%2C3178%2C1851&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Up for debate? </span> <span class="attribution"><a class="source" href="https://www.gettyimages.co.uk/detail/news-photo/this-combination-of-pictures-created-on-october-22-2020-news-photo/1229229316?adppopup=true">Brendan Smialowski/Jim Watson/Morrry Gash/AFP via Getty Images</a></span></figcaption></figure><p>Nine. That is the number of “uhs” that former President Barack Obama uttered in a period of two minutes during a 2012 presidential debate. Other Obama “uh” counters, such as <a href="https://www.ling.upenn.edu/people/liberman">University of Pennsylvania linguist Mark Liberman</a>, clocked him as using “uhs” and “ums” – hesitation markers known as “filled pauses” in linguistspeak – <a href="https://languagelog.ldc.upenn.edu/nll/?p=35174">roughly every 19 words</a> during one interview. </p>
<p>By comparison, former President Donald Trump rarely uses them at all – as infrequently as once every 117 words.</p>
<p>Considering that Obama’s skill as an <a href="https://psmag.com/news/the-effectiveness-of-obamas-oratory">orator garners high praise</a>, while Trump’s eloquence is less often lauded, what’s to be made of this great, uh, imbalance? </p>
<p>In ordinary circumstances, maybe not too much. </p>
<p>But heading into the Republican presidential primary debates, which <a href="https://edition.cnn.com/2023/08/10/politics/first-republican-debate-who-has-qualified/index.html">kick off on Aug. 23, 2023</a>, you can bet some viewers and political commentators will be poring over every utterance of the candidates for clues about how they might perform as nominee of the party. </p>
<p>And going into the 2024 presidential race, expect more scrutiny of Biden’s speech as a reflection of his competency, along the lines of the newspaper columnist who <a href="https://nypost.com/2022/05/06/when-bidens-own-people-dont-trust-him-to-speak-we-have-no-real-president/">dismissed the president as</a> the “wonderful Wizard of Ahs and Ums.”</p>
<h2>So who is prone to ‘umming’?</h2>
<p>But what if a bit of hesitation turns out to be not such a bad thing? </p>
<p>In my work <a href="https://www.unr.edu/english/people/valerie-fridland">as a linguist</a> and author of “<a href="https://abc.nl/book-details/like-literally-dude/$9780593298329">Like, Literally, Dude: Arguing for the Good in Bad English</a>,” I uncovered surprising evidence that filled pauses are not the mark of incompetence and inarticulateness they are often held to be. In fact, research suggests filled pauses often aid understanding. Studies into their use also reveal why we utter them and who is more prone to using them.</p>
<p>For example, <a href="https://research.rug.nl/en/publications/variation-and-change-in-the-use-of-hesitation-markers-in-germanic">research on languages ranging from English to Dutch, German, Danish and Norwegian</a> has shown that “uhs” are more often uttered by men and older people, while “ums” are <a href="https://core.ac.uk/download/pdf/76394539.pdf">the up-and-coming trend among women and those who don’t remember a time before TikTok</a>. </p>
<p>And then there <a href="https://languagelog.ldc.upenn.edu/nll/?p=14015">are the geographical preferences</a>. Southerners and New Englanders tend to “uh,” while Midwesterners prefer “um” – at least when tweeting.</p>
<p>Perhaps even more surprising, as someone’s education level and socioeconomic status go up, <a href="https://doi.org//10.1075/ijcl.16.2.02tot">research suggests</a> so does their rate of “umming” and “uhing.”</p>
<h2>Deliberate debate device</h2>
<p>Nonetheless, filled pauses have long been treated as the bane of public speaking and a mark of anxiety.</p>
<p>Yet psycholinguists who study speech hiccups <a href="https://doi.org/10.1016/S0010-0277(02)00017-3">suggest much the opposite</a>: Filled pauses are less about our speech struggles and more about signaling upcoming linguistic and semantic complexity. That is, “ums” and “uhs” emerge because we are doing more work in terms of planning and executing the next thing we need to say. </p>
<p>What this means is that filled pauses are found to most often occur right before speakers describe <a href="https://doi.org/10.1111/j.1749-818X.2008.00068.x">more abstract or difficult concepts</a> or when they use <a href="https://doi.org/10.1177/002383097902200301">less familiar or uncommon words</a>. “Ums” and “uhs” also increase when speakers <a href="https://doi.org/10.1177/002383096500800302">start a sentence</a>, since <a href="https://doi.org/10.1006/cogp.1998.0693">they are mapping out the whole sentence structure</a>.</p>
<p>Their use also increases when <a href="https://psycnet.apa.org/doiLanding?doi=10.1037%2F0022-3514.60.3.362">there are a number of competing word options to choose from</a>, like when selecting among novel and politically advantageous adjectives to describe the health of the economy or an aging opponent.</p>
<figure class="align-center ">
<img alt="A man in a suit looks downward." src="https://images.theconversation.com/files/542975/original/file-20230816-22-3y9yj7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/542975/original/file-20230816-22-3y9yj7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=377&fit=crop&dpr=1 600w, https://images.theconversation.com/files/542975/original/file-20230816-22-3y9yj7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=377&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/542975/original/file-20230816-22-3y9yj7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=377&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/542975/original/file-20230816-22-3y9yj7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=473&fit=crop&dpr=1 754w, https://images.theconversation.com/files/542975/original/file-20230816-22-3y9yj7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=473&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/542975/original/file-20230816-22-3y9yj7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=473&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Is Republican presidential hopeful Ron DeSantis an ‘ummer’?</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.co.uk/detail/news-photo/florida-governor-and-2024-republican-presidential-hopeful-news-photo/1561386818?adppopup=true">Sergio Flores/AFP via Getty Images</a></span>
</figcaption>
</figure>
<p>In short, they are used in places where harder thinking is required. These are exactly the linguistic challenges that politicians face when answering debate questions requiring complex terminology and strategic word choices. </p>
<p>Sometimes “ums” and “uhs” simply buy a speaker processing time to figure out what to say when they are uncertain. Taking a verbal pause instead of a silent one makes it crystal clear that one still intends to contribute to the conversation – particularly vital in a debate where floor time is the equivalent of political gold.</p>
<h2>‘Uh … I’m talking here!’</h2>
<p>Remarkably, in addition to helping speakers come up with what they want to say, “ums” and “uhs” also do a listener a service by alerting them to the fact that <a href="https://doi.org/10.1016/S0010-0277(02)00017-3">there’s going to be a delay</a> and cues them to listen up because <a href="https://link.springer.com/article/10.3758/BF03194926">something harder to comprehend</a> is coming their way.</p>
<p>This signaling helps listeners understand what you are saying. That’s because, even past our teenage years, we are still fairly lazy listeners. Adding in an “um” or “uh” can help tear the listener away from their iPhone or other distractions and alert them to the fact that something new and difficult is coming up.</p>
<p>For instance, if we had been having a conversation about dogs and I started a new sentence by saying “The daw …,” <a href="https://doi.org/10.1111/j.0956-7976.2004.00723.x">psycholinguistic evidence</a> tells us that your brain goes right to “dog” without even waiting to hear the rest of the word. But what if I were actually going to say “donkey”? Then you’d be thrown for a loop. But if I first inserted a filled pause, such as “the, uh, donkey,” listeners are much quicker to identify the new word in the sentence, as the “uh” seems to <a href="https://link.springer.com/article/10.1023/A:1021980931292">alert us to expect something unexpected</a>.</p>
<p>Another plus? The listener would be more likely to recall that we talked about donkeys later on, as a preceding filled pause has also been shown to have a positive effect on <a href="https://doi.org/10.1016/j.cognition.2006.10.010">word recognition and recall</a>. </p>
<h2>The ponderous pause</h2>
<p>So, why such a bad rap for a speech feature that signals deep thinking and helps listeners comprehend what people are saying? </p>
<p>Probably because of the company it keeps. Filled pauses have often been grouped with other features of what is termed “disfluent” speech, such as repetitions, slips of the tongue and restarts, such as “wh-what?” </p>
<p>The <a href="https://pep-web.org/browse/document/SE.006.0000A">Freudian view</a> of such speech tics as symptoms of unconscious worries and desires drove much of the early research on such features. Though <a href="https://pep-web.org/browse/document/SE.006.0000A">early psychological research</a> did not find that filled pauses strongly correlated with anxiety, the stigma stuck around and affects regular people and presidents alike.</p>
<p>For instance, Biden has <a href="https://www.nytimes.com/2019/10/30/us/politics/joe-biden-debate-gaffes.html">been called out</a> for his combined filled pauses, repetitions and restarts, which have been blamed on a number of factors ranging from age-related confusion to public-speaking anxiety. </p>
<p>While it is true that <a href="https://doi.org//10.1037/a0019424">older speakers tend to use more filled pauses</a> than younger speakers, which could be <a href="https://www.psychologytoday.com/us/blog/language-in-the-wild/202107/language-comprehension-and-the-aging-brain">related to age-related decline</a> in working memory, Biden <a href="https://www.politico.com/newsletters/west-wing-playbook/2023/01/10/normalizing-stutters-bidens-and-his-own-00077269">also has a stutter</a>, which can affect filled pause use in ways that make it hard to compare his use of them with other presidents. </p>
<p>The reality is, like it or not, we all populate our pauses from time to time. As can be seen in the Obama vs. Trump filled-pause rates, we also have a unique signature pause pattern. In other words, some of us are, to put it in the words of <a href="https://link.springer.com/article/10.1007/BF02175503">one pause researcher</a>, “heavy ummers,” while others are “um-avoiders.” </p>
<p>What doesn’t change, however, is that they signal cognitive heavy lifting ahead.</p>
<p>So, as we head into the season of presidential stumping and debate, perhaps we can look past the pause when deciding how to weed out the good candidates from the bad.</p>
<p class="fine-print"><em><span>Valerie M. Fridland does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Long treated as a sign of anxiety or a delaying tactic, ‘filled pauses’ are a linguistic trick to signal that what you are about to say might be complicated.Valerie M. Fridland, Professor of Linguistics, University of Nevada, RenoLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2033792023-05-08T12:18:16Z2023-05-08T12:18:16ZWhat is that voice in your head when you read?<figure><img src="https://images.theconversation.com/files/523888/original/file-20230502-991-yd36r7.jpg?ixlib=rb-1.1.0&rect=2%2C64%2C1600%2C1261&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Reading becomes faster when you don't have to say each word out loud.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/propaganda-conceptual-illustration-royalty-free-illustration/1148108285?phrase=brain+speaking+illustration&adppopup=true">Gary Waters/Science Photo Library via Getty Images</a></span></figcaption></figure><figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=293&fit=crop&dpr=1 600w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=293&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=293&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=368&fit=crop&dpr=1 754w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=368&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=368&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><em><a href="https://theconversation.com/us/topics/curious-kids-us-74795">Curious Kids</a> is a series for children of all ages. If you have a question you’d like an expert to answer, send it to <a href="mailto:curiouskidsus@theconversation.com">curiouskidsus@theconversation.com</a>.</em></p>
<hr>
<blockquote>
<p><strong>What is that voice in your head when you read? – Luiza, age 14, Goiânia, Brazil</strong></p>
</blockquote>
<hr>
<p>When you first begin reading, you read out loud. </p>
<p>Reading aloud can make the text easier to understand when you’re a beginning reader or when you are reading something that’s challenging. Listening to yourself as you read <a href="https://www.carnegielearning.com/blog/5-benefits-reading-aloud/">helps with comprehension</a>.</p>
<p>After that, you might “<a href="https://www.alamo.edu/siteassets/nvc/academics/tutoring-services/reading-and-english-lab/reading-resources/strategies01.pdf">mumble read</a>.” That’s when you mumble, whisper or move your lips as you read. But this practice slowly fades as your reading skills develop, and you start to read silently “in your head.” That’s when your inner voice comes into play.</p>
<p>As experts in <a href="https://scholar.google.com/citations?user=rtqNMWcAAAAJ&hl=en">reading</a> and <a href="https://scholar.google.com/citations?user=XdcULRkAAAAJ&hl=en">language</a>, we see this transition from reading out loud to silently all the time. It’s a normal part of the development of reading skills. Usually, kids are good at <a href="https://www.dreambox.com/resources/blogs/what-is-silent-reading-fluency-and-how-educators-help-students">reading silently</a> by the fourth or fifth grade.</p>
<p>The shift from reading out loud to reading silently is very similar to how kids develop thinking and speaking skills.</p>
<p>Young children often speak to themselves as a way to think through challenges. <a href="https://www.verywellmind.com/lev-vygotsky-biography-2795533">Lev Vygotsky</a>, a Russian psychologist, called this “private speech.” And kids aren’t the only ones who talk to themselves. Just watch an adult try to put together a new vacuum cleaner. You might hear them muttering to themselves as they try to understand the assembly instructions.</p>
<p>As kids become better thinkers, they shift to talking inside their heads instead of out loud. This is called “inner speech.” </p>
<p>Once you’re a good reader, it’s a lot easier to read silently. Reading becomes faster because you don’t have to say each word. And you can jump back to reread parts without disrupting the flow of reading. You can even skip over short familiar words.</p>
<p>Silent reading is more flexible, and it allows you to focus on what’s most important. And it’s during silent reading that you may discover your inner voice. </p>
<h2>Developing an inner voice</h2>
<p>Hearing an inner voice while reading is relatively common. In fact, one study found that <a href="https://doi.org/10.1111/sjop.12368">4 in 5 people</a> say they often or always hear an inner voice when they read silently to themselves.</p>
<p>It’s also been suggested that there are <a href="https://bookriot.com/what-does-your-inner-narrator-sound-and-look-like/">many types</a> of inner voices. Your inner voice might be <a href="https://doi.org/10.1371%2Fjournal.pone.0025782">your own</a>: It might sound just like your spoken voice. Or it might assume a different tone or timbre altogether. </p>
<p>A study of <a href="https://www.theatlantic.com/national/archive/2011/07/hearing-voices-your-head-normal-while-reading/353390/">adult readers</a> found that the voice you hear in your head may change depending on what you are reading. For example, if the lines in a book are spoken by a specific character, you may hear that character’s voice in your head.</p>
<p>So, fear not if you start hearing a bunch of voices in your head when you dive into a book – it means you’ve already become a skilled silent reader.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Called your ‘inner voice,’ it develops along with your reading skills.Beth Meisinger, Associate Professor of Psychology, University of MemphisRoger J. Kreuz, Associate Dean and Professor of Psychology, University of MemphisLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2042232023-04-25T14:06:58Z2023-04-25T14:06:58ZDobble: what is the psychology behind the game?<figure><img src="https://images.theconversation.com/files/522365/original/file-20230421-16-7hgsp2.jpg?ixlib=rb-1.1.0&rect=8%2C0%2C5982%2C3997&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Dobble is a card game with rules that make it sound easier than it actually is.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/dobble-card-game-kids-billereaquitainefrance-08232021-2029552409">Ana Belen Garcia Sanchez/Shutterstock</a></span></figcaption></figure><p>Following birthday and Christmas presents, families often have a glut of new games to learn and play. Many of these games involve computers or games consoles, but with concerns about children’s <a href="https://www.forbes.com/health/family/how-much-screen-time-kids/">screen time</a> there has been a recent <a href="https://www.washingtonpost.com/business/2022/12/24/board-game-popularity/">increase</a> in the popularity of traditional board and card games.</p>
<p>One non-electronic card game that has made its way into our homes is Dobble. It’s a game of observation, articulation and speed that was first released in France in 2009. </p>
<p>While the <a href="https://www.petercollingridge.co.uk/blog/mathematics-toys-and-games/dobble/">mathematics</a> behind the workings of this game is interesting, as cognitive psychologists we were also fascinated by the underlying cognitive processes that make this simple game so absorbing and challenging to play.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/VTDKqW_GLkw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">How does Dobble work mathematically?</span></figcaption>
</figure>
<p>The aim of the game is to be the first player to get rid of all their cards by discarding them one at a time into a central pile. Players do that as soon as they can identify, and announce, the single common symbol between the card in their hand and that on top of the pile. </p>
<p>Players must be quick, as the top card changes every time an opponent matches and discards one of their cards before you do. There are 55 cards, each containing eight symbols out of a possible 57. And in any pair of cards, exactly one symbol matches. </p>
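<p>That matching property is no accident: a deck like this can be generated from a finite projective plane of order 7, which yields 57 cards of eight symbols each, any two sharing exactly one symbol (the retail game ships 55 of them). A minimal sketch of one standard construction – the function name <code>dobble_deck</code> is our own, purely illustrative:</p>

```python
from itertools import combinations

def dobble_deck(n=7):
    """Cards of a projective plane of order n (n prime): n*n + n + 1 cards,
    n + 1 symbols per card, and any two cards share exactly one symbol."""
    deck = []
    # "ordinary" lines y = m*x + b over the field of n elements,
    # each extended with the point-at-infinity for its slope m
    for m in range(n):
        for b in range(n):
            deck.append([x * n + (m * x + b) % n for x in range(n)] + [n * n + m])
    # vertical lines x = c, each extended with a shared vertical-infinity point
    for c in range(n):
        deck.append([c * n + y for y in range(n)] + [n * n + n])
    # the line at infinity: all the infinity points together on one card
    deck.append(list(range(n * n, n * n + n + 1)))
    return deck

deck = dobble_deck(7)
assert len(deck) == 57 and all(len(card) == 8 for card in deck)
# every pair of cards has exactly one symbol in common
assert all(len(set(a) & set(b)) == 1 for a, b in combinations(deck, 2))
```

<p>Because two distinct lines in a projective plane always meet in exactly one point, every pair of cards is guaranteed a single match – which is precisely what makes the visual search in the game both possible and hard.</p>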
<p>The first task in the game is to visually search the symbols on both the card in your hand and that on the top of the central pile to find the single match. Colour, size and location are typical <a href="https://psycnet.apa.org/record/1996-04250-015">cues</a> we use when searching. But this task is more difficult than it seems due to the number and variety of symbols. Their shared features sometimes give rise to false alarms when scanning quickly. For example, the lips, heart, maple leaf and fire symbols are all red in colour. </p>
<p>The fact the target items will likely be of a different size and orientation on each card also means that we <a href="https://books.google.co.uk/books?hl=en&lr=&id=HktnDAAAQBAJ&oi=fnd&pg=PA26&dq=perception+object+match+different+orientation&ots=wBLoJvKx-H&sig=IqWZ-6H9R4HpaOscqGXlbkjA3W4#v=onepage&q&f=false">perceive</a> the same symbol slightly differently. So a match is more difficult to identify. </p>
<p>Unlike, for example, <a href="https://www.independent.co.uk/arts-entertainment/books/news/where-s-the-brains-behind-wally-6261459.html">Where’s Wally?</a>, where the object of the search is clearly defined, with Dobble we do not know on any round which item we are searching for. Indeed, this will be different for each player. </p>
<p>The task requires dividing attention by searching two visual scenes in parallel, while also holding in memory the symbols you have viewed on one card for comparison with those on the other. </p>
<p>We may <a href="http://matt.colorado.edu/teaching/highcog/fall8/m3.pdf">switch</a> between different strategies such as scanning the symbols on both cards in the hope that the match will just “pop out”. Or we may adopt a more structured approach where we peruse each symbol in turn. </p>
<p>When demands on attention are high, we are more likely to suffer <a href="https://books.google.co.uk/books?hl=en&lr=&id=Z2Sz7YgWIpQC&oi=fnd&pg=PA55&dq=inattentional+blindness+divided+attention&ots=2rrM836Idb&sig=IPiM1lTPKa-JlXAJ9QUHbZHmPvw#v=onepage&q=inattentional%20blindness%20divided%20attention&f=false">inattentional blindness</a>. That’s the phenomenon of “looking but not seeing”, whereby the item we are fixating on does not receive enough attention for us to actually notice it.</p>
<h2>Say the name</h2>
<p>Once you have found the matching symbol you must quickly announce what it is before placing your card down on the pile. This again sounds simple, but, just like producing the correct word in everyday speech, it requires the <a href="https://mybrainware.com/blog/brainware-safari-cognitive-skills-development-and-learning-to-read/">processes</a> of linking the desired concept – the symbol on the cards – with the name that represents it. </p>
<p>Also, you have to ensure that you select the appropriate word, for example saying “tortoise” rather than “turtle”. Plus you must select the correct sounds to utter that word, before finally saying it out loud. In the urgency of the game, you may find these processes don’t happen as quickly as you want them to.</p>
<figure class="align-center ">
<img alt="A pair of hands holds a collection of round cards. There is another pile of round cards on the table beneath the hands." src="https://images.theconversation.com/files/522130/original/file-20230420-18-n8asch.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C6016%2C4016&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/522130/original/file-20230420-18-n8asch.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/522130/original/file-20230420-18-n8asch.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/522130/original/file-20230420-18-n8asch.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/522130/original/file-20230420-18-n8asch.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/522130/original/file-20230420-18-n8asch.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/522130/original/file-20230420-18-n8asch.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Dobble - it’s not as easy as saying what you see.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/verona-italy-february-2nd-2021-detail-1921867253">Claire Adams/Shutterstock</a></span>
</figcaption>
</figure>
<p>Once you have correctly articulated the matching symbol and played your card, the whole process starts again. Given the low chance of that previous symbol being the next correct match, you must inhibit (stop yourself thinking about) this recent item – its name, its location, even its colour – so that you can be open to a new search. However, you must not inhibit it completely as there is still a chance it could appear next. </p>
<p>Inhibition is also required if your opponent calls out a symbol on their card first. Even if you were about to articulate a match, you must now inhibit this vocalisation and instead restart the search for a new pairing since the reference card in the centre has now changed. This ability to switch between searches and inhibit unwanted information is one of a number of <a href="https://www.tandfonline.com/doi/abs/10.1080/13803395.2010.533157">“executive”</a> organisational cognitive processes that help us in the planning and coordination of activities.</p>
<h2>Under stress</h2>
<p>And of course, all of this occurs under time pressure. Stress can increase when it seems your opponent is discarding their cards quicker. We know that increased stress levels impair our <a href="https://www.researchgate.net/profile/Tony-Buchanan/publication/320312261_Tip_of_the_Tongue_States_Increase_Under_Evaluative_Observation/links/59e0d8b1aca2724cbfd5e271/Tip-of-the-Tongue-States-Increase-Under-Evaluative-Observation.pdf">word-finding ability</a>, attention to information, inhibition of responses and <a href="https://pubmed.ncbi.nlm.nih.gov/28690203/">ability to adapt</a> to changing circumstances. All of those are vital to performing well in Dobble. </p>
<p>The bad news for parents is that many of the processes we have described <a href="https://link.springer.com/content/pdf/10.1038/s41598-020-80866-1.pdf">decline</a> as we get older, meaning that children may have the competitive edge at Dobble.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Dobble is a card game that originated in France in 2009. It involves observation, articulation and speed.Nick Perham, Reader in Applied Cognitive Psychology, Cardiff Metropolitan UniversityHelen Hodgetts, Reader in Applied Cognitive Psychology, Cardiff Metropolitan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2006032023-03-01T12:31:35Z2023-03-01T12:31:35ZAmerican man developed an Irish accent after getting prostate cancer – foreign accent syndrome explained<figure><img src="https://images.theconversation.com/files/512626/original/file-20230228-24-wakqx8.jpg?ixlib=rb-1.1.0&rect=26%2C8%2C5825%2C3755&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/man-talking-alphabet-letters-coming-out-769827094">pathdoc/Shutterstock</a></span></figcaption></figure><p>An American man <a href="https://casereports.bmj.com/content/16/1/e251655">developed an Irish accent</a> following treatment for metastatic prostate cancer. The man was in his 50s and had never been to Ireland. </p>
<p>The accent was described as “uncontrolled”, meaning the man couldn’t stop talking with an Irish brogue, even if he tried. He continued speaking this way until his death.</p>
<p>This is the first time a person has developed “foreign accent syndrome” linked to a prostate cancer diagnosis. And it is <a href="https://casereports.bmj.com/content/16/1/e251655">only the third case</a> of foreign accent syndrome linked to cancer – the others were breast cancer and brain cancer.</p>
<p>Foreign accent syndrome usually happens as a <a href="https://pn.bmj.com/content/16/5/409">result of brain damage</a>, such as from a stroke. Stroke can cause different types of speech and language disorders, but foreign accent syndrome is one of the more unusual ones. </p>
<p>Other causes of the syndrome are changes to the structure of the brain, such as tumours, encephalitis (inflammation of the brain), multiple sclerosis and neurodegenerative disorders such as dementia.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/RbYXXyMb8I0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Woman on This Morning, ITV, with foreign accent syndrome.</span></figcaption>
</figure>
<p>The condition was first described by <a href="https://en.wikipedia.org/wiki/Pierre_Marie">Pierre Marie</a>, a French neurologist, in 1907. Marie described the case of a man who originally spoke French with a Parisian accent, but after a stroke, he started speaking with a regional French accent from the area of Strasbourg in France. </p>
<p>To date, around 200 cases of foreign accent syndrome have been reported in clinical studies, making it quite a rare speech disorder. Perhaps the best-known case is when <a href="https://www.nme.com/news/music/george-michael-14-1264722">George Michael briefly spoke with a West Country accent</a> when he came out of a coma following a bout of pneumonia in 2011. The singer was from North London.</p>
<p>The condition can be distressing for patients because they lose an important personality characteristic that is expressed by their accent. The impact of the condition was reported in 1947 by the Norwegian neurologist Monrad-Krohn: he <a href="https://doi.org/10.1093/brain/70.4.405">described a Norwegian lady</a> who had suffered a serious head injury in a bombing raid during the second world war. As a result of this damage, she spoke Norwegian with a German accent, and this was quite problematic in postwar Norway.</p>
<p>She was often refused service in shops because people thought she was German. Being identified as a foreigner all the time and being questioned about it can be very distressing. The effect may be so serious that some patients apply unusual methods to find peace of mind. We have heard of a lady with the syndrome saying that she enjoyed staying in hotels because it is very natural to hear a foreign accent in a hotel environment, so it goes unnoticed.</p>
<h2>Psychological causes</h2>
<p>Apart from damage to the central nervous system, foreign accent syndrome can also be caused by psychological factors such as extreme stress. We have identified “<a href="https://www.frontiersin.org/articles/10.3389/fnhum.2016.00168/full">psychogenic foreign accent syndrome</a>” as a separate type of foreign accent syndrome. In 2005, researchers were contacted by a native Dutch speaker who had a heavy and persistent French accent after suffering intense stress as a result of almost being hit by a car. Detailed <a href="https://doi.org/10.1155/2005/989602">neurological investigations</a> did not reveal any brain abnormalities, but psychological tests identified important psychological issues. She only fully returned to her original Dutch accent after ten years.</p>
<p>Another version of this condition is “mixed foreign accent syndrome”. These patients first develop a foreign accent because of brain damage and then try to change their word use to create a more convincing “foreign” personality. This was noticed by researchers at the <a href="https://doi.org/10.1080/02699200400026900">University of Central Florida</a> who saw an American patient who developed a British accent following a stroke and who started using British English words like lift (instead of elevator) and mum (instead of mom). </p>
<p>The patient explained that it was easier for her to allow people to believe that she was from England, rather than trying to explain that her accent was the result of a stroke – although she insisted that her use of “Briticisms” was not under her conscious control.</p>
<p>Full recovery from the accent change is difficult and often requires intensive speech therapy for a long time. But there have been cases of fairly quick recovery.</p>
<p class="fine-print"><em><span>Johan Verhoeven received funding from the Leverhulme Foundation. </span></em></p><p class="fine-print"><em><span>Stefanie Keulen received funding from Research Council of the Vrije Universiteit Brussel (2013-2017) and the Research Foundation Flanders (2017-2021).</span></em></p>There have only been around 200 reported cases of foreign accent syndrome since it was first reported in 1907.Johan Verhoeven, Professor of Experimental Phonetics, City, University of LondonStefanie Keulen, Assistant Professor/Research Leader, Vrije Universiteit BrusselLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1945192022-12-13T19:02:14Z2022-12-13T19:02:14ZNaur, yeah: Australia, you’re performing linguistic magic when you pronounce the two-letter word ‘no’. Here’s why<p>Have you ever thought about your pronunciation of the word “no”? If you say it out loud now, can you sense the movement of your tongue and lips as you form the “o” sound? You may notice there’s a lot to the pronunciation of the word in an Australian accent.</p>
<p>Clips of Australians saying this short, two-letter word have been trending on TikTok over the last year, with listeners fascinated by its pronunciation. </p>
<p>Speakers from outside Australia are also having a go at pronouncing the word themselves. Interestingly, when they write it out, they spell the word “naur”. </p>
<p>So, what is it people are hearing in the Aussie “no”, and why do they think there is an “r” sound at the end?</p>
<p><iframe id="tc-infographic-796" class="tc-infographic" height="400px" src="https://cdn.theconversation.com/infographics/796/aab8f1e855b9b287e829953a620ccc0c017909c5/site/index.html" width="100%" style="border: none" frameborder="0"></iframe></p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/oi-were-not-lazy-yarners-so-lets-kill-the-cringe-and-love-our-aussie-accent-s-111753">Oi! We're not lazy yarners, so let’s kill the cringe and love our Aussie accent(s)</a>
</strong>
</em>
</p>
<hr>
<h2>What sorts of sounds make up speech?</h2>
<p>To be able to understand what is happening in an Australian pronunciation of the word, we need to first have a look at some of the elements of speech. Words are made up of vowels and consonants, and vowels themselves can be long or short. </p>
<p>Try saying out loud these words with long vowels: keep, dawn, far, soon and curl. Now these words with short vowels: cat, bed, hut, kid, nod and put. </p>
<p>Short and long vowels are all examples of monophthongs, vowels that have one single vowel element from start to finish. </p>
<p>Another category of vowels is diphthongs. These are vowels that have two distinct elements in one syllable. Words such as loud, prize, bay and void all contain diphthongs. </p>
<p>If we focus on the word “void”, try mouthing this word slowly as you say it out loud, and you may be able to sense your lips starting rounded in the shape of “aw” and then spreading to the shape of “ee”. Even though there are two distinct shapes within the vowel, the entire sound comprises one syllable, so it is called a diphthong. </p>
<p><iframe id="tc-infographic-797" class="tc-infographic" height="400px" src="https://cdn.theconversation.com/infographics/797/8ff299d2fa0129c180c79f15fb346ce0c24b17ac/site/index.html" width="100%" style="border: none" frameborder="0"></iframe></p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-aussie-accent-is-drink-related-thats-just-a-hangover-from-our-cultural-cringe-49956">The Aussie accent is drink-related? That's just a hangover from our cultural cringe</a>
</strong>
</em>
</p>
<hr>
<h2>Okay, so what about the word ‘no’?</h2>
<p>What can happen in the word “no” is that the vowel becomes a triphthong – meaning there are three distinct elements to the vowel sound within one syllable. </p>
<p>While some Australian speakers would pronounce “no” as a diphthong, starting on “oh” as in dog and ending on “oo” as in put, others begin with an unstressed “a” (the sound at the end of the word “sofa”), then move to the “oh” and then “oo”.</p>
<p>Triphthongs are far less common; we don’t hear them often, which could be why the sound stands out to listeners.</p>
<p>You might be wondering how a speaker comes to pronounce “no” as a triphthong, when other words with the same vowel (such as boat, cone, loaf and oak) are pronounced as diphthongs. This could occur because the word “no” is an example of what linguists call an open syllable, meaning it has no consonant at its close. This allows the speaker to lengthen the vowel and draw it out – a feature we love in different Australian accents!</p>
<p><iframe id="tc-infographic-798" class="tc-infographic" height="400px" src="https://cdn.theconversation.com/infographics/798/31a5e3d181a3d02a7bc0325557c30f7911886eea/site/index.html" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>In actor-training, we view vowels and consonants as having two different roles in language: vowels are the emotional components of words, and consonants are the intellect. In a word like “no”, a lot of emotion and feeling can be conveyed in the vowel, allowing a variety of meaning to come through in its pronunciation. </p>
<p>Just think of how many meanings the word “no” can have, from a polite “No” to an emphatic “No!”, to an unsure or contemplative “Noooo”. You would say the word in hundreds of different ways every week. Using intonation, modulation and emphasis, the word is given meaning depending on how you say it. </p>
<h2>But where does the ‘r’ come in?</h2>
<p>To return to the spelling that has taken off on TikTok – why do people think they hear an “r” at the end of an Australian pronunciation? </p>
<p>It could be that the listener is linking the sound to ones they have in their own accent. Another possibility is that when an Australian speaker holds the final part of the triphthong (the short “oo” as in “put”), their tongue may be moving closer to the roof of their mouth, beginning to sound like an “r”. However, they wouldn’t be going there consciously, and it may not feel anything like an “r” to them! </p>
<p>It’s important to note there are many varieties of Australian accents and not every speaker would pronounce “no” in the ways discussed here. Social media has created new platforms for sharing the voices of everyday speakers, not just those trained for media, stage, or screen. We’re now hearing different accent varieties that otherwise may not be heard by a global audience. </p>
<p><iframe id="tc-infographic-799" class="tc-infographic" height="400px" src="https://cdn.theconversation.com/infographics/799/1f8fbc493d7035c0ad7ade71c21d7e6d354df882/site/index.html" width="100%" style="border: none" frameborder="0"></iframe></p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/curious-kids-why-do-aussies-have-a-different-accent-to-canadians-americans-british-people-and-new-zealanders-94725">Curious Kids: Why do Aussies have a different accent to Canadians, Americans, British people and New Zealanders?</a>
</strong>
</em>
</p>
<hr>
<p class="fine-print"><em><span>Amy Hume does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>There’s a lot to the pronunciation of the word ‘no’ in an Australian accent.Amy Hume, Lecturer In Theatre (Voice), Victorian College of the Arts, The University of MelbourneLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1925482022-12-12T13:36:16Z2022-12-12T13:36:16ZDo accents disappear?<figure><img src="https://images.theconversation.com/files/499828/original/file-20221208-14190-uu5no9.jpg?ixlib=rb-1.1.0&rect=27%2C352%2C6079%2C2821&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Speech patterns.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/diverse-cultures-international-communication-royalty-free-image/1390317952?phrase=language&adppopup=true">Bobboz via Getty Images</a></span></figcaption></figure><p>In Boston, there are <a href="https://www.cbsnews.com/boston/news/boston-accent-endangered-growing-population-language-expert-marjorie-feinstein-whittaker-david-wade/">reports of people pronouncing the letter “r</a>.” Down in Tennessee, people are <a href="https://www.youtube.com/watch?v=bMggeVfS6j8">noticing a lack of a Southern drawl</a>. And Texans have <a href="https://www.seattletimes.com/nation-world/are-texans-losing-their-distinctive-twang/">long worried about losing their distinctive twang</a>.</p>
<p>Indeed, around the United States, communities are <a href="https://www.cnn.com/2022/05/03/health/regional-american-accents-wellness/index.html#:%7E:text=What%20I%20came%20to%20find,at%20a%20very%20slow%20pace.&text=The%20significance%20of%20evolving%20accents,used%20to%20in%20the%20past.">voicing a common anxiety</a>: Are Americans losing their accents?</p>
<p>The fear of accent loss often emerges within communities that face demographic and technological changes. But on an individual level “losing one’s accent” is also part of a profit-driven industry, with <a href="https://doi.org/10.1002/9781405198431.wbeal0004">accent reduction services</a> promising professional and personal benefits to clients who change their speech by ironing out any regionalisms or foreign pronunciations.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/rLwbzGyC6t4?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Boston has one of the most famous – and often-parodied – American accents.</span></figcaption>
</figure>
<p>But is it really possible to lose one’s accent? <a href="https://scholar.google.com/citations?user=FKWfqK0AAAAJ&hl=en">Linguistic researchers</a> <a href="https://scholar.google.com/citations?user=7wvy13gAAAAJ&hl=en">like us</a> suggest the answer is complicated – no one becomes truly “accentless,” but accents can and do change over time.</p>
<p>To us, what’s more interesting is why so many people believe they can lose their accent – and why there are such differing opinions about why this may be a good or bad thing.</p>
<h2>Is there a ‘standard’ accent?</h2>
<p>It’s best to think of an accent as a distinct, systematic, rule-governed way of speaking, including sound features such as intonation, stress and pronunciation.</p>
<p>Accent is not a synonym for dialect, but it’s related. Dialect is an umbrella term for the way a community pronounces words (phonology), creates words (morphology), and orders words (syntax).</p>
<p>Accent is the phonological part of a dialect. For example, when it comes to the Boston dialect, a key feature of its accent is <a href="https://repository.upenn.edu/cgi/viewcontent.cgi?article=1009&context=pwpl">r-deletion, or r-dropping</a>. This occurs most frequently after certain vowels, so that a phrase like “far apart” could be pronounced like “fah apaht,” with the “r” sound vocalizing, or turning into a vowel. This results in a longer vowel pronunciation in each word.</p>
<p>Many people believe that there is a single standard way of speaking in each country, and that this perceived standard is inherently the best form of speech. However, linguists often point out that the concept of a standard accent is better understood as <a href="https://www.taylorfrancis.com/chapters/mono/10.4324/9780203348802-5/standard-language-myth-rosina-lippi-green">an idealization rather than a reality</a>. In other words, no one speaks “standard English”; rather, it is an imagined way of using language that exists only in grammar and style books.</p>
<p>One reason linguists agree there is no one true standard is that, through the years, there have been multiple supposed standards, such as <a href="https://www.encyclopedia.com/humanities/encyclopedias-almanacs-transcripts-and-maps/network-standard">Received Pronunciation in the U.K. and Network Standard in the U.S.</a> – think of a newsreader’s cadence in a <a href="https://www.youtube.com/watch?v=amgzdqbdsHQ">1950s BBC newsreel</a>, or Kent Brockman’s <a href="https://www.youtube.com/watch?v=W4jWAwUb63c">on “The Simpsons</a>.”</p>
<p>The idea of a standard changes over time and place. There has never been a single standard that’s been fully agreed upon – and broadcast outlets across the spectrum have never consistently held to those standards anyway.</p>
<p>Even so, this idea of a standard accent is powerful. An episode of NPR’s podcast “Code Switch” tells the story of <a href="https://www.npr.org/transcripts/636442508">Deion Broxton</a>, who in recent years applied for jobs as a broadcasting reporter but was repeatedly turned down because of his Baltimore accent.</p>
<p>Many other workplace and educational environments similarly perpetuate the idea that nonstandard accents are less appropriate, or even inappropriate, in certain professional spaces. Scholars have found that Southern U.S. accent features are more accepted in <a href="https://doi.org/10.1177/2378023121999161">government, law and service-oriented workplaces than in the technology sector</a>. The acceptability of nonstandard accents may correlate with differences in class and culture, with newer or higher-prestige industries expecting more standard speech in the workplace.</p>
<h2>What is accent leveling?</h2>
<p>The pressure to sound standard is one force that can lead to what linguists describe as “<a href="https://doi.org/10.1515/flin.1998.32.1-2.35">dialect leveling</a>” or “accent leveling.” This occurs when there is a loss of diverse features among regional language varieties. For example, if a U.S. Southerner feels social or economic pressure to shift from pronouncing the word “right” with one vowel – sounding like “raht” – to make it sound like “ra-eeyt” with a diphthong (two vowel sounds), they may be diminishing their use of <a href="https://www.babbel.com/en/magazine/united-states-of-accents-southern-american-english">a common marker for Southern speech</a>. This is technically not accent loss, but rather accent change. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/UcxByX6rh24?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A guide to U.S. accents.</span></figcaption>
</figure>
<p>But accent leveling can also be motivated by language contact, when speakers of different dialects come into regular interaction because of migration and other demographic mobility. In areas that have experienced high levels of immigration in recent decades, people have often pointed to the mixing of different languages and accents as driving the loss of traditional, distinctive speech patterns.</p>
<p>Although modern conveniences such as cars, highway systems and the internet make moving and interacting across distances easier than ever before, accent leveling due to human geography is not new. As the U.S. South became more industrial in the late 19th century, and people moved into bigger communities, <a href="https://benjamins.com/catalog/veaw.g18.24bai">an accent leveling occurred</a>, resulting in some of the features we now say are distinctly Southern. We see this in, for example, the <a href="https://www.acelinguist.com/2020/01/the-pin-pen-merger.html">pin/pen merger</a>. Before 1875, vowels before nasal sounds like “m” and “n” in words such as “pin” and “pen” were pronounced differently. But some Southern speakers in the late 19th century began to pronounce “pen” and “pin” identically, with this merger generally spreading throughout Southern American English in the first half of the 20th century.</p>
<p>A <a href="https://benjamins.com/catalog/veaw.g18.24bai">similar trajectory occurred</a> with other Southern accent features, such as the shifting of the diphthong in “right” to a single vowel sound closer to “raht” and <a href="https://repository.upenn.edu/cgi/viewcontent.cgi?article=1815&context=pwpl">the spread of Southern drawl</a> – with lengthening of vowels, in which words such as “that” are pronounced more like “thaa-uht.”</p>
<p>As long as humans continue moving and time keeps passing, accent change will continue happening, too.</p>
<h2>Why people fear accent loss</h2>
<p>Many people fear accent loss because <a href="https://doi.org/10.1177/1360780418816335">language is intimately tied to identity</a>. But when considering the connection between language and identity, it is worth distinguishing genuine concerns about dialect loss from more irrational fears about language change. </p>
<p>In a broader sense, <a href="https://onlinelibrary.wiley.com/doi/10.1002/9781119147282.ch3">the spread of American English on a global scale</a>, and its economic and social effects, can lead to the loss of local identities, traditions and languages. There are similar concerns about loss of regional accents in the U.S.</p>
<p>Linguists argue that <a href="https://doi.org/10.2307/417058">dialect death should be taken seriously</a>. It results in the loss of diverse cultures and intellectual traditions. Because language is so important to identity, some communities around the world have made deliberate efforts to <a href="https://www.degruyter.com/document/doi/10.1515/ijsl.2005.2005.175-176.193/html">revitalize dialects</a> that have been dying, such as the rural Valdres dialect of Norwegian. This variety experienced a resurgence thanks to <a href="https://doi.org/10.1111/j.1548-1395.2012.01116.x">a dialect popularity contest</a> held by a radio network in Norway.</p>
<p>Similarly, in the U.S. there have been efforts to revitalize particular dialects of Indigenous languages, such as the <a href="https://shareok.org/handle/11244/44895">Skiri and South Band dialects of the Pawnee language in Oklahoma</a>, and to embrace varieties such as <a href="https://www.routledge.com/African-American-English-Structure-History-and-Use/Mufwene-Rickford-Bailey-Baugh/p/book/9780367760687">African American English</a>.</p>
<p>The successes of language revitalization and maintenance can be applauded without suggesting that all types of language change must be resisted. There is a difference between powerful social and economic forces compelling a shift in one’s accent and the natural shifting of language due to regular interactions among people from different backgrounds and regions. </p>
<h2>Embracing accents, embracing change</h2>
<p>When people talk of “accent loss,” it is always good to explore the shifting demographics of the area to question whether the accent is truly being lost, whether it is changing or whether it is being maintained alongside many other accents new to the region.</p>
<p>For example, when students at our school, Kennesaw State University in Georgia, were <a href="https://www.youtube.com/watch?v=SZRQA0zpqmY">recently asked why the Southern accent was changing</a>, several noted the number of people from the North who are moving to the Atlanta metro area. </p>
<p>When people move from one region to another, our desire to communicate effectively can lead to <a href="https://doi.org/10.1002/9781444318159.ch10">accommodating one another’s accent</a>, producing slight shifts in how we speak and at times even adopting features of one another’s accents. </p>
<p>With time, these shifts become normalized, and new accent features can emerge. </p>
<p>But such accent evolution isn’t something that should cause concern.</p>
<p>Linguistic accommodation allows for better communication among individuals and groups from different geographic locations and across different spaces and cultures – a thing to celebrate and not automatically fear.</p><img src="https://counter.theconversation.com/content/192548/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Many people fear the disappearance of the unique way some communities speak. But accent loss is a complicated notion and embracing both language variation and change can be an important social goal.Chris C. Palmer, Professor of English, Kennesaw State UniversityMichelle Devereaux, Associate Professor of English Education, Kennesaw State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1868772022-08-08T12:20:46Z2022-08-08T12:20:46ZWhen was talking invented? A language scientist explains how this unique feature of human beings may have evolved<figure><img src="https://images.theconversation.com/files/475710/original/file-20220722-19-z3alh3.jpeg?ixlib=rb-1.1.0&rect=0%2C0%2C5734%2C3828&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Humans are the only animals that express their thoughts in full sentences.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/son-whispering-into-fathers-ear-royalty-free-image/1270752418?adppopup=true">Oliver Rossi/DigitalVision via Getty Images</a></span></figcaption></figure><figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=293&fit=crop&dpr=1 600w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=293&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=293&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=368&fit=crop&dpr=1 754w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=368&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=368&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><em><a href="https://theconversation.com/us/topics/curious-kids-us-74795">Curious Kids</a> is a series for children of all ages. If you have a question you’d like an expert to answer, send it to <a href="mailto:curiouskidsus@theconversation.com">curiouskidsus@theconversation.com</a>.</em></p>
<hr>
<blockquote>
<p><strong>When was talking invented? – Albert R., age 12, Florida</strong></p>
</blockquote>
<hr>
<p>The truth is, no one knows for sure when talking was “invented.” It’s a big mystery. But as <a href="https://www.socsci.uci.edu/%7Erfutrell/">a language scientist</a> for 15 years, I can tell you our best guess about when people started talking to each other using language, and how we think it got started.</p>
<h2>Human language and how long it’s been around</h2>
<p>Talking is an activity unique to <em>Homo sapiens</em>, our species. In <a href="https://www.jstor.org/stable/24940617">every culture where most people can hear</a>, people talk with spoken language. And in groups where lots of people are deaf – as in <a href="https://en.wikipedia.org/wiki/Village_sign_language">certain</a> <a href="http://sandlersignlab.haifa.ac.il/html/html_eng/pdf/EMERGING_SIGN_LANGUAGES.pdf">villages</a> where a lot of people are born deaf for genetic reasons – or in Deaf communities throughout the world, people talk with their hands, using sign languages. There are <a href="https://www.littlepassports.com/blog/world-community/the-many-languages-of-sign-language/">lots of different sign languages</a>, just as there are lots of different spoken languages.</p>
<p>Birds sing songs. <a href="https://theconversation.com/when-dogs-bark-are-they-using-words-to-communicate-153345">Dogs bark</a>, and cats meow. But these forms of communication are simple compared with human language. An animal might make 10 different sounds, for example, but an adult human knows <a href="https://doi.org/10.1098/rstb.2009.0213">more than 20,000 words</a>. Additionally, we’re the <a href="https://web.stanford.edu/class/linguist197a/hockett60sciam.pdf">only animal</a> that expresses thoughts in full sentences. Because language is unique to humans and so different from anything else in the animal kingdom, researchers don’t really think language was invented; instead we think it evolved during human beings’ evolution from other apes. </p>
<p>So to find out when talking started, you have to look back to when humans first evolved. Scientists believe humans as we know them today likely <a href="https://doi.org/10.1126/sciadv.aao5961">evolved around 300,000 years ago</a>. Some of our evolutionary ancestors like <em>Homo erectus</em> and cousins like the Neanderthals <a href="https://doi.org/10.1016/j.cobeha.2018.01.001">may have had language too</a>, but researchers don’t know for sure.</p>
<figure class="align-center ">
<img alt="A chalk drawing of monkey to human evolution" src="https://images.theconversation.com/files/476369/original/file-20220727-11735-ejyizd.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/476369/original/file-20220727-11735-ejyizd.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=397&fit=crop&dpr=1 600w, https://images.theconversation.com/files/476369/original/file-20220727-11735-ejyizd.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=397&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/476369/original/file-20220727-11735-ejyizd.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=397&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/476369/original/file-20220727-11735-ejyizd.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=499&fit=crop&dpr=1 754w, https://images.theconversation.com/files/476369/original/file-20220727-11735-ejyizd.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=499&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/476369/original/file-20220727-11735-ejyizd.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=499&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Scientists believe that ancestors to modern humans may have used speech too.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/evolution-royalty-free-image/163746345?adppopup=true">altmodern/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>What’s amazing is that for almost all of that time, all people did with language was talk; there wasn’t any reading or writing until <a href="https://sites.utexas.edu/dsb/tokens/the-evolution-of-writing/">roughly 5,000 years ago</a>, which is recent compared with how long modern humans have been around. For almost all of the time that humans existed on planet Earth, no one read a book or a sign, or wrote down their name.</p>
<p>People started writing things down <a href="https://sites.utexas.edu/dsb/tokens/from-accounting-to-writing/">so they could keep track of accounts</a>. For example, if Farmer Joe owed Farmer Jill three sheep, then they would draw a picture of a sheep and write down three marks. Eventually these little pictures turned into hieroglyphics and then into the letters that we use today to write down all kinds of things like grocery lists and poems and stories. </p>
<h2>Where talking comes from</h2>
<p>Another question you might wonder about is where talking comes from. Before people used language, how did they communicate with each other? Did they just make sounds at each other as animals do? The truth is, we don’t know the answer here either. But there are two main theories. </p>
<p><a href="https://webspace.ship.edu/cgboer/langorigins.html">The first theory</a> is that language started with people making different sounds, mostly imitating the things around them, like animal calls, nature sounds and the sounds of tools. Eventually they started using these sounds to talk to each other. They might make the sound of whooshing wind to talk about the weather or imitate the sound of a bird to tell a friend that there was a bird nearby. Then over hundreds of thousands of years, those sounds turned into words that people began to learn as part of their language. At some point, people started stringing the words together to form sentences.</p>
<p><a href="https://mitpress.mit.edu/books/origins-human-communication">The other main theory</a>, which is a more recent idea, is that people started off by gesturing – pointing at things with their hands, imitating actions using their bodies and making faces. Eventually these gestures turned into a full sign language. This process continues today in villages where lots of people are deaf. If a lot of deaf people who don’t know a sign language come together, <a href="https://www.pbs.org/wgbh/evolution/library/07/2/l_072_04.html">they will spontaneously invent one</a> within a few years. </p>
<p>This theory guesses that after developing sign languages, people eventually started making sounds along with their gestures. At some point, they switched to mostly making sounds that became words instead of just using their bodies. The reason they switched to making sounds, the theory goes, is that talking out loud lets you communicate with someone even when you can’t see them. </p>
<p>Big questions like this let all of us explore what it means to be human beings. Only humans have language, and so figuring out where language comes from is a way to figure out where we come from too.</p>
<hr>
<p><em>Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to <a href="mailto:curiouskidsus@theconversation.com">CuriousKidsUS@theconversation.com</a>. Please tell us your name, age and the city where you live.</em></p>
<p><em>And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.</em></p><img src="https://counter.theconversation.com/content/186877/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Richard Futrell does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A language scientist explains that talking was never invented but has evolved over hundreds of thousands of years.Richard Futrell, Associate Professor of Language Science, University of California, IrvineLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1859792022-07-19T13:17:08Z2022-07-19T13:17:08ZFace masks affect how children understand speech differently from adults – new research<figure><img src="https://images.theconversation.com/files/474590/original/file-20220718-16-c16rv4.jpg?ixlib=rb-1.1.0&rect=0%2C8%2C5463%2C3628&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/happy-black-teacher-her-students-wearing-1798363270">Drazen Zigic/Shutterstock</a></span></figcaption></figure><p>While mask-wearing is no longer required in many locations, it remains in use as a way to limit the spread of COVID-19. One of the criticisms of masks has been that they make communication more difficult. A recent report by the <a href="https://www.gov.uk/government/publications/evidence-summary-covid-19-children-young-people-and-education-settings">UK Department for Education</a>, for example, suggests that mask-wearing during the pandemic caused communication difficulties in classrooms. </p>
<p>However, our <a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2022.879156/abstract">new research</a> shows that for people without hearing and language difficulties, the effects of face masks on the understanding of speech are in fact mild. </p>
<p>Although face masks slow down our understanding of speech, they rarely lead to misunderstandings. Masks also do not affect our understanding in all situations. They generally only have an effect when the topic of the conversation is unpredictable. </p>
<p>Twenty-six children (aged eight to 12) and 26 adults without hearing or language difficulties took part in our study. We showed them videos of a person speaking while wearing a cloth face mask and asked them to repeat back the last word of each sentence they had heard. This allowed us to measure how quickly and how accurately people understand face-masked speech. </p>
<p>As well as testing our participants’ understanding of masked versus non-masked speech, we also manipulated the video in order to test the audio and visual effects of the mask separately. This meant that, for instance, the video showed a non-masked speaker but played audio recorded with the mask on. </p>
<p>We found that children process masked speech up to 8% less accurately and 8% more slowly than normal speech, while adults process masked speech up to 6.5% less accurately and 18% more slowly. </p>
<p>In general, adults responded to speech faster than children in the study – about 23% faster (148 milliseconds) when listening to face-masked speech and 29% faster (176 milliseconds) when listening to normal speech. Adults’ highly efficient processing of normal speech could be one reason why the effect of face masks on their speed is more pronounced.</p>
<h2>The impact of face masks</h2>
<p>Face masks change our use of language in two ways. <a href="https://theconversation.com/the-science-of-how-you-sound-when-you-talk-through-a-face-mask-139817">They change what a speaker sounds like</a> and may give the impression that their speech is muffled. Most masks also block the view of the speaker’s lips.</p>
<p>Surprisingly, our research shows that the way masks change the sound when we speak affects children more than the visual obstruction of the speaker’s lips. The reason for this could be that <a href="https://onlinelibrary.wiley.com/doi/10.1111/j.1460-9568.2011.07685.x">children are not as good</a> at combining visual information with sound as adults are when hearing and seeing a speaker. As a result, seeing the speaker’s lip movements while hearing masked speech does not improve how accurately they understand what is being said. </p>
<p>This is different from adults, who find masked speech more difficult to understand because of the unique combination of visual blocking and sound changes. We found that acoustically muffled masked speech does not affect adults’ understanding when they can see the speaker’s lip movements. Similarly, concealing the speaker’s mouth does not have an effect when the speech sound is clear. However, most masks conceal the mouth and change the speech sound at the same time.</p>
<h2>What we’re talking about matters</h2>
<p>Interestingly, the topic of conversation matters. Face masks affect our understanding less when we can anticipate what our conversation partner is going to say. </p>
<figure class="align-center ">
<img alt="Two people wearing masks sat on park bench" src="https://images.theconversation.com/files/474599/original/file-20220718-71797-l5lav4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/474599/original/file-20220718-71797-l5lav4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/474599/original/file-20220718-71797-l5lav4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/474599/original/file-20220718-71797-l5lav4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/474599/original/file-20220718-71797-l5lav4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/474599/original/file-20220718-71797-l5lav4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/474599/original/file-20220718-71797-l5lav4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The combination of blocked visuals and muffled sound affects how adults understand masked speech.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/photo-young-attractive-couple-sitting-on-1726550038">Irene Castro Moreno/Shutterstock</a></span>
</figcaption>
</figure>
<p>This is because knowing the conversation context helps us to understand language quickly and effortlessly. For example, in the sentence “For your birthday I baked this cake”, the words “birthday” and “baked” are related in meaning to the last word “cake” and often occur together. Our brains can use this information to predict what a speaker is going to say. </p>
<p>Our study shows that giving this type of contextual information reduces the difficulties in understanding masked speech. When given high contextual information, both children and adults process masked speech only 1% less accurately than normal speech. This explains why communicating with masks causes difficulties in some situations but not others. </p>
<p>While there have been fears that masks would affect children’s learning, in the classroom teachers use many techniques that increase contextual information. They design lessons in a way that builds upon students’ existing knowledge and use images, keywords and written text. All of these techniques support children’s understanding of what is being said and help them to compensate for face mask effects. </p>
<p>Listeners also use other clues that reduce mask effects. For example, most masks do not cover the upper part of the face. This is good news because seeing the speaker’s eyes and upper face helps us to understand <a href="https://theconversation.com/the-science-of-how-you-sound-when-you-talk-through-a-face-mask-139817">masked speech better</a>. As a result, our comprehension of language is remarkably robust.</p>
<p>However, participants in our study did not have any <a href="https://theconversation.com/face-masks-are-a-challenge-for-people-with-hearing-difficulties-137423">hearing or speech difficulties</a>, and only listened to an adult speaker <a href="https://cognitiveresearchjournal.springeropen.com/track/pdf/10.1186/s41235-021-00314-0.pdf">in quiet conditions</a>. We don’t know how mask wearing has affected children’s communication with their peers, or its impact on other aspects of their learning and <a href="https://www.mind.org.uk/information-support/coronavirus/mask-anxiety-face-coverings-and-mental-health/">wellbeing</a>.</p><img src="https://counter.theconversation.com/content/185979/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Julia Schwarz has received funding from the Cambridge Language Sciences Incubator Fund, the ESRC, and the Gates Cambridge Trust.</span></em></p>New research explores how face masks affect our understanding of speech.Dr. Julia Schwarz, PhD Candidate in Linguistics, University of CambridgeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1868672022-07-18T12:15:12Z2022-07-18T12:15:12ZBabies can learn language sounds in the first few hours of being born – new research<figure><img src="https://images.theconversation.com/files/473850/original/file-20220713-12-pfxiim.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">They are soaking up everything you say.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/african-american-mother-playing-adorable-little-1572216073">Art_Photo/Shutterstock</a></span></figcaption></figure><p>We often think of babies as blank canvases with little ability to learn during the first few weeks of life. But babies actually start processing language and speech incredibly early. Even while in the womb, they learn to discern voices, along with some speech sounds. At birth, they already prefer speech sounds over other types of <a href="https://pubmed.ncbi.nlm.nih.gov/17286838/">non-language sounds</a>.</p>
<p>But exactly how the baby brain learns to process complex language sounds is still a bit of a mystery. In our recent study, published in Nature Human Behaviour, we uncovered details of this mind-bogglingly speedy learning process – <a href="https://www.nature.com/articles/s41562-022-01355-1">starting in the first few hours of birth</a>.</p>
<p>We collaborated with a neonatal research team in China, who fitted babies’ heads with a small cap covered in sophisticated light-emitting devices designed to measure tiny changes in oxygen levels in the babies’ brains. Detectors in the cap could help us determine which areas of the brain were active over time. </p>
<p>The procedure, which is entirely safe and painless, was carried out within three hours of the babies being born. It only required the baby to wear a small elastic cap while minute infrared lights (essentially heat radiation) were shone through the head. This fits with the common practice in many cultures to wrap newborns in a close-fitting blanket to pacify them – easing the transition from the comfort of the womb to the wild world of autonomous physical existence.</p>
<p>Within three hours of being born, all babies were exposed to pairs of sounds that most researchers would predict they should be able to distinguish. This included vowels (such as “o”) and these same vowels played backwards. Usually, reversed speech is very different from normal (forward) speech, but in the case of isolated vowels, the difference is subtle. In fact, in our study, we found that adult listeners could only distinguish between the two instances 70% of the time.</p>
<p>What surprised us was that newborns failed to differentiate between forwards and backwards vowels immediately after birth: we found no difference between the brain signals collected in each case during the first three hours after birth. In hindsight, we should not have been so surprised, considering how subtle the difference was. </p>
<p>However, we were stunned to discover that after listening to these sounds for five hours, newborns started differentiating between these forwards and backwards vowels. First, their response to forwards vowels became faster than to backwards vowels. And after a further two hours, during which they mostly slept, their brain responded to forwards vowels not only faster but also more strongly compared with babies trained with different vowels or babies who remained in silence.</p>
<p>This means that in the first day of life, it takes only a few hours for the baby’s brain to learn the subtle difference between natural and slightly unnatural speech sounds. </p>
<p>We were further able to see that brain regions of the superior temporal lobe (a part of the brain associated with auditory processing) and of the frontal cortex (involved in planning complex movements) were involved in processing the vowel sounds, especially in the left hemisphere. That’s similar to the pattern that underpins language comprehension and production in adults.</p>
<p>And, even more fascinating, we were able to detect cross-talk (communication between different brain areas) between these regions in the babies who were exposed to speech sounds, but not in those who had not experienced any training. In other words, the neurons of the trained babies were having a “conversation” across the brain in a way that was not seen in babies who remained in silence during the same period.</p>
<figure class="align-center ">
<img alt="Close up of young father holding his newborn baby son" src="https://images.theconversation.com/files/474252/original/file-20220715-12-vda9cd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/474252/original/file-20220715-12-vda9cd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/474252/original/file-20220715-12-vda9cd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/474252/original/file-20220715-12-vda9cd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/474252/original/file-20220715-12-vda9cd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/474252/original/file-20220715-12-vda9cd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/474252/original/file-20220715-12-vda9cd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">It is beneficial to talk to newborns.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/close-young-father-holding-his-newborn-511881799">Halfpoint/Shutterstock</a></span>
</figcaption>
</figure>
<p>Newborns probably benefit directly from being talked to from the very first moments they have left the womb. Clearly, “nurture” – the changing of the mind by the environment – starts on day one.</p>
<h2>Babies aren’t pre-programmed</h2>
<p>We can also consider these findings in the context of a trendy concept in neuroscience today, namely <a href="https://www.sciencedirect.com/science/article/abs/pii/S221194931200004X">embodiment theory</a>. Embodiment is the idea that our thoughts and mental operations are not pre-programmed, or operated mysteriously by some inherited genetic code, but rather build upon direct experience of the world around us, through the sensory channels that start operating from birth, such as hearing, seeing, tasting, smelling and touching.</p>
<p>Even though our brain has a predisposition to learn, based on organisation and function defined by the genetic code inherited from our parents, it is also able to sense the environment from the moment we are born, and this immediately begins to shape our internal representations of the world around us.</p>
<p>I would suggest that you not only talk to your baby but also share with them all sorts of sensory experiences of the world as soon as they are in your arms – be it exposing them to music, letting them smell flowers or showing them objects or views they’ve never seen before. By encouraging more varied experiences, you give the baby brain new avenues to grow and develop, and probably more creative abilities for the future.</p><img src="https://counter.theconversation.com/content/186867/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Guillaume Thierry is supported by the Polish National Agency for Academic Exchange (NAWA) under the NAWA Chair Programme (PPN/PRO/2020/1/00006) and he is also affiliated with the Faculty of English at Adam Mickiewicz University, Poznań, Poland.</span></em></p>Babies who remain in silence hours after birth have different brains to those who listen to sounds.Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1817712022-05-20T12:13:52Z2022-05-20T12:13:52ZWhat makes us subconsciously mimic the accents of others in conversation<figure><img src="https://images.theconversation.com/files/464014/original/file-20220518-17-6a6giq.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5352%2C3739&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">When you imitate the speech of others, there's a thin line between whether it's a social asset or faux pas.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/new-politics-convention-chicago-by-franklin-mcmahon-news-photo/526979648?adppopup=true">Franklin McMahon/Corbis via Getty Images</a></span></figcaption></figure><p>Have you ever caught yourself talking a little bit differently after listening to someone with a distinctive way of speaking? </p>
<p>Perhaps you’ll pepper in a couple of y’all’s after spending the weekend with your Texan mother-in-law. Or you might drop a few R’s after binge-watching a British period drama on Netflix.</p>
<p>Linguists call this phenomenon “<a href="https://www.sciencedaily.com/releases/2022/03/220308120147.htm">linguistic convergence</a>,” and it’s something you’ve likely done at some point, even if the shifts were so subtle you didn’t notice. </p>
<p>People tend to converge toward the language they observe around them, whether it’s <a href="https://doi.org/10.1016/0010-0277(94)90048-5">copying word choices</a>, <a href="https://doi.org/10.1037/0033-2909.134.3.427">mirroring sentence structures</a> or <a href="https://doi.org/10.1016/j.wocn.2011.09.001">mimicking pronunciations</a>.</p>
<p><a href="https://scholar.google.com/citations?user=GWJGP9AAAAAJ&hl=en">But as a doctoral student in linguistics</a>, I wanted to know more about how readily this behavior occurs: Would people converge based on evidence as flimsy as their own expectations of how someone might sound?</p>
<p>Three years of experimentation and an entire dissertation later, I had my answer, which was <a href="https://www.linguisticsociety.org/sites/default/files/Wade%20Lg%20article.pdf">just published</a> in the academic journal Language.</p>
<p>People do, in fact, converge toward speech sounds they expect to hear – even if they never actually hear them.</p>
<h2>What, exactly, is convergence?</h2>
<p>But before getting into the specifics, let’s talk about what convergence is and how it’s related to other speech adjustments like <a href="https://doi.org/10.1002/9781405166256.ch13">code-switching</a>, which refers to alternating between language varieties, or <a href="https://books.google.com/books?hl=en&lr=&id=cTPUrGpvHs0C&oi=fnd&pg=PA235&dq=rickford+mcnair+knox+&ots=TYFzbWpMrr&sig=lT_lmKj4qKiJ6wnWFEk1uoYWF2o#v=onepage&q=rickford%20mcnair%20knox&f=false">style-shifting</a>, which happens when a person uses different linguistic features in different situations. </p>
<p>Convergence refers to the shifts people make to their speech to approximate that of those around them. This is an intentionally broad definition meant to encompass all sorts of adjustments, whether intentional or inadvertent, prominent or subtle, or toward entire dialects or particular linguistic features.</p>
<figure class="align-center ">
<img alt="Drawing of people seated at a bar." src="https://images.theconversation.com/files/464025/original/file-20220518-19-w2t15y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/464025/original/file-20220518-19-w2t15y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=444&fit=crop&dpr=1 600w, https://images.theconversation.com/files/464025/original/file-20220518-19-w2t15y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=444&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/464025/original/file-20220518-19-w2t15y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=444&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/464025/original/file-20220518-19-w2t15y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=558&fit=crop&dpr=1 754w, https://images.theconversation.com/files/464025/original/file-20220518-19-w2t15y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=558&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/464025/original/file-20220518-19-w2t15y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=558&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">When people chat with one another, certain sounds and word choices will converge.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/americans-in-chicago-watch-richard-nixons-trip-to-china-on-news-photo/526989396?adppopup=true">Franklin McMahon/Corbis via Getty Images</a></span>
</figcaption>
</figure>
<p>You could imitate aspects of speech you actually observe. Or maybe you throw in some words you think kids these days use, only to have your use of “bae” and “lit” be met with teenage eye rolls. </p>
<p>Code-switching or style-shifting can also be examples of convergence, as long as the shift is toward an interlocutor – the person you’re talking to. But people can also shift away from an interlocutor, and this is called “<a href="https://www.researchgate.net/publication/269710388_The_Language_of_Intergroup_Distinctiveness">divergence</a>.”</p>
<p>Code-switching and style-shifting can occur for other reasons, too, like how you feel, what you’re talking about and how you want to be perceived. You might drop your G’s more and say things like “thinkin’” when reminiscing about a prank you played in high school – but switch to more formal speech when the conversation shifts to a new job you’re applying to.</p>
<h2>Are expectations enough to alter speech?</h2>
<p>To determine whether people converge toward particular pronunciations they expect but never actually encounter, I needed to start my investigation with a feature that people would have clear expectations about. I landed on the “I” vowel, as in “time,” which in much of the southern U.S. is pronounced more like “Tom.” This is called “<a href="https://doi.org/10.1080/03740463.2005.10416086">monophthongization</a>,” and it is a hallmark of Southern speech.</p>
<p>I wanted to know whether people would produce a more Southern-like “I” vowel when they heard someone speak with a Southern accent – and here’s the crucial part – even if they never heard how that person actually pronounced “I.”</p>
<p>So I designed an experiment, disguised as a guessing game, in which I got more than 100 participants to say a bunch of “I” words. </p>
<p>In the first part of the game, they read a series of clues on their computer screen – things like, “this U.S. coin is small, silver, and worth 10 cents.” </p>
<p>Then they named the word being described – “dime!” – and I recorded their speech. </p>
<p>In the second part of the game, I had participants listen to clues read by a noticeably Southern-accented talker and instructed them to respond in the same way. By comparing their speech before and after hearing a Southern accent, I could determine whether they converged.</p>
<p>Using <a href="https://www.fon.hum.uva.nl/praat/">acoustic analysis</a>, which gives us precise measurements of how participants’ “I” vowels sound, I observed that Southerners and non-Southerners alike did, in fact, shift their “I” vowels toward a slightly more Southern-like pronunciation when listening to the Southern-accented talker. </p>
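<p>Acoustic measurements of this kind are typically taken as first and second formant (F1/F2) values at the vowel's onset and glide. As a rough illustration only (the study used Praat, and the numbers below are invented for the sketch, not taken from its data), one common way to quantify monophthongization is the distance the vowel travels through F1/F2 space – a diphthongal "I" glides a long way, a monophthongal Southern "I" barely moves:</p>

```python
# Hypothetical sketch of quantifying monophthongization from formant
# measurements. All frequencies below are invented example values in Hz;
# they are not measurements from the study described in the article.
from math import hypot

def trajectory_length(onset, glide):
    """Euclidean distance in F1/F2 space between a vowel's onset and glide.

    Smaller values indicate less formant movement, i.e. a more
    monophthongal ("Tom"-like) pronunciation of the "I" vowel.
    """
    (f1_onset, f2_onset), (f1_glide, f2_glide) = onset, glide
    return hypot(f1_glide - f1_onset, f2_glide - f2_onset)

# Invented example measurements for the vowel in "dime":
diphthongal = trajectory_length((750, 1300), (450, 2100))    # clear glide
monophthongal = trajectory_length((750, 1300), (700, 1400))  # little movement

# The monophthongal token travels a much shorter distance.
assert monophthongal < diphthongal
```

<p>Comparing such a distance for each speaker's "I" words before and after exposure to the Southern-accented talker is one way a before/after convergence effect like the one described could be detected.</p>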
<p>They never actually heard how the Southerner produced this vowel, since none of the clues contained the “I” vowel. This means they were anticipating how this Southerner might say “I,” and then converging toward those expectations.</p>
<p>This was pretty clear evidence that people converge not just toward speech they observe but also toward speech they expect to hear. </p>
<h2>Social asset or faux pas?</h2>
<p>What does this say about human behavior? </p>
<p>For one, it means that people perceive accents as coherent collections of different linguistic features. Hearing accent features X and Y tells people to expect accent feature Z, because they know X, Y and Z go together. </p>
<p>But it’s not just that people passively know things about others’ accents. This knowledge can even shape your own speech.</p>
<p>So why does this happen? And how do those on the receiving end perceive it?</p>
<p>First, it’s important to point out that convergence is usually very subtle – and there’s a reason. Overly exaggerated convergence – sometimes called <a href="https://books.google.com/books/about/Accommodation_Theory.html?id=s_jVSAAACAAJ">overaccommodation</a> – can be perceived as mocking or patronizing.</p>
<p>You’ve probably witnessed people switch to a slower, louder, simpler speech style when talking to an elderly person or a nonnative speaker. This type of over-the-top convergence is often based on assumptions about limited comprehension – and it can socially backfire. </p>
<p>“Why are they talking to me like I’m a child?” the listener might think. “I understand them just fine.”</p>
<figure class="align-center ">
<img alt="Drawing of woman speaking to elderly woman in bed." src="https://images.theconversation.com/files/464020/original/file-20220518-17-395nb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/464020/original/file-20220518-17-395nb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=505&fit=crop&dpr=1 600w, https://images.theconversation.com/files/464020/original/file-20220518-17-395nb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=505&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/464020/original/file-20220518-17-395nb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=505&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/464020/original/file-20220518-17-395nb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=634&fit=crop&dpr=1 754w, https://images.theconversation.com/files/464020/original/file-20220518-17-395nb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=634&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/464020/original/file-20220518-17-395nb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=634&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Overly exaggerated convergence can be perceived as mocking or patronizing.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/nurse-helping-a-senior-citizen-with-breakfast-by-franklin-news-photo/526990218?adppopup=true">Franklin McMahon/Corbis via Getty Images</a></span>
</figcaption>
</figure>
<p>For expectation-driven convergence – which, by definition, is not rooted in reality – such a faux pas might be even more likely. If you don’t have an actual speech target to converge toward, you might resort to inaccurate, simplistic or stereotyped ideas about how someone will speak. </p>
<p>However, subtler shifts – in what might be called the “sweet spot” of convergence – can have a number of benefits, from social approval to more efficient and successful communication. </p>
<p>Consider a toddler who calls their pacifier a “binky.” You’d probably be better off asking “where’s the binky?” and not “where’s the pacifier?” </p>
<p>Reusing the terms our interlocutors use is not just cognitively easier for us – since it takes <a href="https://psycnet.apa.org/record/2014-04570-010">less effort to come up with a word we just heard</a> – but it often has the added benefit of making communication easier for our partner. The same could be said for using a more familiar pronunciation.</p>
<p>If people can anticipate how someone will speak even sooner – before they utter a word – and converge toward that expectation, communication could, in theory, be even more efficient. If expectations are accurate, expectation-driven convergence could be a social asset.</p>
<p>That’s not to say that people necessarily go around consciously making these sorts of calculations. In fact, <a href="https://philpapers.org/rec/PICTAM">some explanations</a> for convergence suggest that it is an unintentional, automatic consequence of speech comprehension.</p>
<p>Regardless of why convergence happens, it’s clear that even beliefs about others play a major role in shaping the way people use language – for better or for worse.</p><img src="https://counter.theconversation.com/content/181771/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Lacey Wade receives funding from the National Science Foundation. </span></em></p>We often imitate styles of speech we hear – what’s known as ‘linguistic convergence.’ But a researcher wanted to see if we alter our speech based on the mere expectation of how someone will sound.Lacey Wade, Postdoctoral Researcher, University of PennsylvaniaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1817032022-04-29T12:21:31Z2022-04-29T12:21:31ZGilbert Gottfried and the mechanics of crafting one of the most memorable voices of all time<figure><img src="https://images.theconversation.com/files/460333/original/file-20220428-12-yk51sc.jpg?ixlib=rb-1.1.0&rect=13%2C8%2C2982%2C1985&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Marlee Matlin covers her ears as Gottfried performs during the Comedy Central Roast of Donald Trump in 2011.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/ComedyCentralRoastofDonaldTrump/9f6393d4b209436788f830ae5dfd82cf/photo?Query=gilbert%20gottfried&mediaType=photo&sortBy=arrivaldatetime:desc&dateRange=Anytime&totalCount=154&currentItemNo=131">AP Photo/Charles Sykes</a></span></figcaption></figure><p>Though Gilbert Gottfried’s voice has alternatively been described as “<a href="https://variety.com/2022/film/news/gilbert-gottfried-dead-dies-comedian-aladdin-1235231387">shrill</a>,” “<a href="https://www.looper.com/132868/whatever-happened-to-gilbert-gottfried/">annoying</a>” and “<a href="https://www.cnn.com/2022/04/12/entertainment/gilbert-gottfried-death/index.html">grating</a>,” you can’t say it isn’t memorable.</p>
<p>Gottfried, <a href="https://www.nytimes.com/2022/04/12/arts/gilbert-gottfried-dead.html">who died on April 12, 2022</a>, didn’t naturally sound this way. Watch him perform as a cast member on the sixth season of “<a href="https://www.youtube.com/watch?v=idtrUge0wAQ">Saturday Night Live</a>,” and you’ll hear a voice that sounds downright angelic by comparison. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/idtrUge0wAQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Gilbert Gottfried’s brief run as a cast member on ‘Saturday Night Live’ occurred before the development of his signature voice.</span></figcaption>
</figure>
<p>But as he developed his comic persona, that distinctive sound made its way into his performances in stand-up comedy, advertising, television and film – perhaps most famously as Iago in “<a href="https://www.imdb.com/title/tt0103639/?ref_=nv_sr_srsg_0">Aladdin</a>,” Mr. Peabody in “<a href="https://www.imdb.com/title/tt0100419/?ref_=nv_sr_srsg_2">Problem Child</a>” and as a <a href="https://www.youtube.com/watch?v=j8PGzYwTsqM">squawking duck</a> in advertisements for the insurance giant Aflac. </p>
<p>Clearly, Gottfried figured out how to create a character whose personality and voice were perfectly in sync – a particularly valuable skill for actors, one that requires a combination of technique and instinct.</p>
<h2>The smooth operators</h2>
<p>In 2001, the Center for Voice Disorders at Wake Forest University <a href="https://newsroom.wakehealth.edu/News-Releases/2002/01/Americans-Speak-Out-Select-the-Best-and-Worst-Voices-in-America-In-Online-Polling">surveyed Americans</a> asking them who possessed the best and worst voices. The actors with the three best voices were James Earl Jones, Sean Connery and Julia Roberts. </p>
<p>The worst? Leading the pack was Fran Drescher of “<a href="https://www.imdb.com/title/tt0106080/">The Nanny</a>” fame, followed by Roseanne Barr and – you guessed it – Gilbert Gottfried.</p>
<p><a href="https://sc.edu/study/colleges_schools/artsandsciences/theatre_and_dance/our_people/directory/tobolski_erica.php">As a voice specialist</a> who teaches acting, voice and speech, I work with students and clients who often want to sound more like Connery and Roberts, and less like Gottfried.</p>
<p>Three distinct subsystems are involved in vocal production: the larynx, <a href="https://medlineplus.gov/ency/imagepages/19708.htm">or voice box</a>, which houses the vocal folds; the lungs and diaphragm, which power breathing; and the vocal tract, where sounds resonate.</p>
<p>Speaking well involves a mix of understanding this vocal anatomy, utilizing proper breathing techniques and learning how to speak without excess tension. Collectively, these elements are known as <a href="https://voicefoundation.org/health-science/voice-disorders/anatomy-physiology-of-voice-production/the-voice-mechanism/">the voice mechanism</a>. </p>
<p>If a student or client comes into a session seeking a more effective voice, it’s these fundamentals that will be addressed. When these elements work together, they create a balanced vocal quality, one that’s generally perceived as confident and professional – think <a href="https://www.youtube.com/watch?v=RIiNuLgUInk">Morgan Freeman</a>. </p>
<h2>Developing a character</h2>
<p>But there’s a special niche for voices that are unusual.</p>
<p>The very skills that an actor learns to create a melodious voice can also be manipulated for a character voice – which is exactly what Gottfried was able to do, along with other actors who developed memorable characters, such as Jim Carrey in “<a href="https://www.imdb.com/title/tt0110475/?ref_=vp_close">The Mask</a>” and Eartha Kitt as Yzma in “<a href="https://www.youtube.com/watch?v=IHA2rNGUusU">The Emperor’s New Groove</a>.” Meryl Streep has been especially adept at creating unique voices for a number of roles, but one that stands out to me is her portrayal of Margaret Thatcher in “<a href="https://www.youtube.com/watch?v=dnwG9lTd4-M">The Iron Lady</a>.”</p>
<p>Understanding what you can change – and how to change it – is the key. </p>
<p>In my voice-over class, for example, I introduce a range of vocal qualities that can be mined to develop new voices. Five of the most common are a hoarse voice, a breathy one, a creaky one – also known as <a href="https://www.youtube.com/watch?v=4L7-9N1xQZA">vocal fry</a> – a voice that incorporates hypernasality and one that accentuates hyponasality, which refers to how most people sound when they have a cold.</p>
<p>One of the best and most immediate ways to change your voice is by placing it in a specific resonating area of the body – such as the sinuses or throat – or by changing how the vocal folds vibrate. </p>
<p>In a class on character voice, I coach students to direct the sound of their voice into their nasal cavity for a hypernasal sound, and into the back of their throat, the pharyngeal cavity, for a hyponasal sound. </p>
<p>To trigger a hypernasal sound, you could quack like a duck – “Aflac!” – or mimic <a href="https://www.imdb.com/name/nm0002121/">Margaret Hamilton’s</a> Wicked Witch of the West from “The Wizard of Oz” with the phrase “I’ll get you, my pretty!”</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/OQ_g6NOo7yo?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The Wicked Witch of the West in ‘The Wizard of Oz’ possesses the hallmarks of the hypernasal sound.</span></figcaption>
</figure>
<p>For a hyponasal sound, pinch your nostrils together so no sound comes through the nasal passage, and you’ll sound like you have a stuffy nose. Widening the back of your throat while you speak will create a sound similar to that of Lenny from “Looney Tunes.” </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/bs-Q0JmWjj0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Sounding droopy and dopey like Lenny can involve accentuating a hyponasal sound.</span></figcaption>
</figure>
<p>Want to sound like <a href="https://www.imdb.com/name/nm0001413/">Julie Kavner’s</a> rendition of Marge Simpson, who speaks with a creaky voice? Relax your throat and say “uhhh” in a very low pitch. The vocal folds are short and thick and create a slow vibration. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/RUPi9e_LWM4?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Marge Simpson pushes back against suggestions that she sounds like Vice President Kamala Harris.</span></figcaption>
</figure>
<p>To achieve a breathy quality, sigh out an easy “hahhh” with half voice and half breath. Marilyn Monroe singing “<a href="https://www.youtube.com/watch?v=iH3oOVKt0WI">Happy Birthday</a>” to President John F. Kennedy captures this vocal quality perfectly. </p>
<p>If Gilbert Gottfried were to walk into my classroom and ask me to analyze his character voice, I would describe it as a combination of hypernasality and raspiness, with a bit of stridency thrown in. He speaks in a relatively high pitch with little modulation and stays at a consistently high volume. </p>
<p>Of course, Gottfried perfected this sound, and it worked in tandem with his brand of humor. If you were to develop something similar, just make sure you could figure out when to hit the “off” switch.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/R-SxktKa7Fc?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Gilbert Gottfried as Mr. Peabody in ‘Problem Child.’</span></figcaption>
</figure>
<p>[<em>Over 150,000 readers rely on The Conversation’s newsletters to understand the world.</em> <a href="https://memberservices.theconversation.com/newsletters/?source=inline-150ksignup">Sign up today</a>.]</p><img src="https://counter.theconversation.com/content/181703/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Erica Tobolski does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Though it was exceedingly grating, the late comedian was able to perfect a sound that worked in tandem with his brand of humor.Erica Tobolski, Professor of Theatre and Dance, University of South CarolinaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1803852022-03-31T12:42:50Z2022-03-31T12:42:50ZWhat is aphasia? An expert explains the condition forcing Bruce Willis to retire from acting<figure><img src="https://images.theconversation.com/files/455332/original/file-20220330-23801-c9ph71.jpg?ixlib=rb-1.1.0&rect=0%2C17%2C5991%2C3961&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Bruce Willis has announced he is stepping away from acting.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/bruce-willis-attends-the-17th-annual-a-great-night-in-news-photo/1140471275?adppopup=true">Theo Wargo/Getty Images</a></span></figcaption></figure><p><em>Actor Bruce Willis, 67, is “<a href="https://www.nytimes.com/2022/03/30/movies/bruce-willis-aphasia.html">stepping away” from his career in film and TV</a> after being diagnosed with aphasia, his family announced on March 30, 2022.</em></p>
<p><em>In a message <a href="https://www.instagram.com/p/Cbu-CyELWio/">posted on Instagram</a>, his daughter, Rumer Willis, said that the condition was “impacting his cognitive abilities.”</em></p>
<p><em><a href="https://www.bu.edu/sargent/profile/swathi-kiran-ph-d-ccc-slp/">Swathi Kiran</a>, director of the <a href="https://www.bu.edu/aphasiaresearch/">Aphasia Research Laboratory</a> at Boston University, explains what aphasia is and how it impairs the communication of those with the condition.</em></p>
<h2>What is aphasia?</h2>
<p><a href="https://www.mayoclinic.org/diseases-conditions/aphasia/symptoms-causes/syc-20369518">Aphasia</a> is a communication disorder that affects someone’s ability to speak or understand speech. It also impacts how they understand written words and their ability to read and to write.</p>
<p>It is important to note that aphasia can take different forms. Some people with aphasia only have difficulty understanding language – a result of damage to the <a href="https://www.medicalnewstoday.com/articles/temporal-lobe#:%7E:text=The%20temporal%20lobe%20is%20one,conscious%20and%20long%2Dterm%20memory.">temporal lobe</a>, which governs how sound and language are processed in the brain. Others only have difficulty with speaking – indicating damage to the <a href="https://www.medicalnewstoday.com/articles/318139">frontal lobe</a>. A loss of both speaking and comprehension of language would suggest damage to both the temporal and frontal lobes.</p>
<p>Almost everyone with aphasia struggles to come up with the names of things they know, and because of that, they have trouble using words in sentences.</p>
<h2>What causes aphasia?</h2>
<p>In most cases, <a href="https://www.stroke.org/en/about-stroke/effects-of-stroke/cognitive-and-communication-effects-of-stroke/stroke-and-aphasia#:%7E:text=It's%20a%20language%20disorder%20that,home%2C%20socially%20or%20at%20work.">aphasia results from a stroke or hemorrhage</a> in the brain. It can also be caused by damage to the brain from an impact injury, such as from a car accident. Brain tumors can also result in aphasia.</p>
<p>There is also a separate form of the condition called <a href="https://www.mayoclinic.org/diseases-conditions/primary-progressive-aphasia/symptoms-causes/syc-20350499">primary progressive aphasia</a>. This starts off with mild symptoms but gets worse over time. The medical community doesn’t know what causes primary progressive aphasia. We know that it affects the same brain regions as in cases where aphasia results from a stroke or hemorrhage, but the onset of symptoms follows a different trajectory.</p>
<h2>How many people does it affect?</h2>
<p>Aphasia is unfortunately quite common. Approximately <a href="https://www.stroke.org.uk/effects-of-stroke/communication-problems#:%7E:text=Aphasia%20affects%20your%20ability%20to,of%20stroke%20survivors%20have%20it.">one-third of all stroke survivors</a> suffer from it. In the U.S., around <a href="https://www.aphasia.org/aphasia-resources/aphasia-factsheet/">2 million people have aphasia and around 225,000 Americans</a> are diagnosed every year. Right now, we don’t know what proportion of people with aphasia have the primary progressive form of the condition.</p>
<p>There is no gender difference in terms of who suffers from aphasia. But people at higher risk of stroke – so those with cardiovascular disease and diabetes – are <a href="https://www.niddk.nih.gov/health-information/diabetes/overview/preventing-problems/heart-disease-stroke">more at risk</a>. This also means that minority groups are more at risk, simply because of the <a href="https://www.cdc.gov/healthequity/racism-disparities/index.html#:%7E:text=The%20data%20show%20that%20racial,compared%20to%20their%20White%20counterparts.">existing health disparities in the U.S</a>.</p>
<p>Aphasia can occur at any age. It usually affects people over the age of 65, simply because they have a higher risk of stroke. But young people and even babies can develop the condition.</p>
<h2>How is it diagnosed?</h2>
<p>When people have aphasia after stroke or hemorrhage, the diagnosis is made by a neurologist. In these cases, patients will have displayed a sudden onset of the disorder – there will be a huge drop in their ability to speak or communicate.</p>
<p>Primary progressive aphasia is harder to diagnose. Unlike in cases of stroke, the onset will be very mild at first – people will slowly forget the names of people or of objects. Similarly, difficulty in understanding what people are saying will be gradual. But it is these changes that trigger diagnosis.</p>
<h2>What is the prognosis in both forms of aphasia?</h2>
<p>People with aphasia resulting from stroke or hemorrhage will recover over time. How fast and how much depends on the extent of damage to the brain, and what therapy they receive.</p>
<p>Primary progressive aphasia is degenerative – the patient will deteriorate over time, although the rate of deterioration can be slowed.</p>
<h2>Are there any treatments?</h2>
<p>The encouraging thing is that aphasia is treatable. In the non-progressive form, <a href="https://www.nidcd.nih.gov/health/aphasia">consistent therapy</a> will result in recovery of speech and understanding. One-on-one repetition exercises can help those with the condition regain speech. But the road can be long, and it depends on the extent of damage to the brain.</p>
<p>With primary progressive aphasia, symptoms of speech and language decline will get worse over time.</p>
<p>But the clinical evidence is unambiguous: Rehabilitation can help stroke survivors regain speech and the understanding of language and <a href="https://www.aphasia.org/aphasia-resources/primary-progressive-aphasia/">can slow symptoms</a> in cases of primary progressive aphasia.</p>
<p>Clinical trials of certain types of drugs are under way, but they are in the early stages. There do not appear to be any miracle drugs. But for now, speech rehabilitation therapy is the <a href="https://www.nidcd.nih.gov/health/aphasia">most common treatment</a>.</p><img src="https://counter.theconversation.com/content/180385/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Swathi Kiran receives funding from National Institutes of Health. She is a Board Member of the National Aphasia Association. </span></em></p>The ‘Die Hard’ actor is suffering from a communications disorder that affects 2 million Americans.Swathi Kiran, Professor of Neurorehabilitation, Boston UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1803992022-03-31T09:01:23Z2022-03-31T09:01:23ZWhat is aphasia, the condition Bruce Willis lives with?<p>After a career spanning 40 years, 67-year-old Bruce Willis has stepped away from acting due to health issues, including a diagnosis of aphasia. </p>
<p>Willis’ family released a <a href="https://www.instagram.com/p/Cbu-mD7LMPg/?utm_source=ig_web_copy_link">heartfelt statement via Instagram today</a> to let fans know.</p>
<p>Never heard of aphasia? You’re not alone. </p>
<p>Aphasia is a communication disability caused by damage or changes to the language networks of the brain. </p>
<p>Often considered a difficulty with “getting words out”, aphasia can in fact impact every aspect of a person’s life.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/in-a-chatty-world-losing-your-speech-can-be-alienating-but-theres-help-121943">In a chatty world, losing your speech can be alienating. But there's help</a>
</strong>
</em>
</p>
<hr>
<h2>How does aphasia affect people?</h2>
<p>A person with aphasia can have difficulty speaking, understanding others, reading, writing and using numbers. </p>
<p>Aphasia impacts everything from conversations, negotiating, expressing emotions, storytelling, asking questions, to writing an email. </p>
<p>When communication is affected, so is the ability to share information, engage in relationships and interact meaningfully with the world. </p>
<p>Aphasia can <a href="https://www.tandfonline.com/doi/abs/10.1080/02687038.2014.928664?journalCode=paph20#:%7E:text=In%20particular%2C%20those%20with%20aphasia,">change relationships</a> with family and friends, <a href="https://www.tandfonline.com/doi/abs/10.1080/02687030701640941">make it harder</a> to get out and do things (such as use public transport or do the shopping), affect <a href="https://www.tandfonline.com/doi/full/10.1080/02687038.2019.1594151">self-identity</a> and, as for Willis, can impact <a href="https://www.tandfonline.com/doi/abs/10.1080/02687038.2011.563861?journalCode=paph20">the ability to work</a>.</p>
<p>Depression and other negative mood changes <a href="https://www.tandfonline.com/doi/abs/10.1080/02687038.2019.1673304">are common</a> in people with aphasia, as is a reduction in their <a href="https://journals.lww.com/lww-medicalcare/Fulltext/2010/04000/The_Relationship_of_60_Disease_Diagnoses_and_15.14.aspx">self-perceived quality of life</a>. </p>
<h2>What causes aphasia and how common is it?</h2>
<p>Different types of aphasia can result from different brain conditions, most commonly stroke but also <a href="https://www.stroke.org.uk/what-is-aphasia/types-of-aphasia">brain tumour</a>, traumatic brain injury, and types of dementia, such as <a href="https://my.clevelandclinic.org/health/diseases/17387-primary-progressive-aphasia-ppa#:%7E:text=Primary%20progressive%20aphasia%20(PPA)%20can,before%20going%20to%20the%20doctor.">primary progressive aphasia</a>. </p>
<p>There is therefore wide variability in the severity and the types of communication affected.</p>
<p>Primary progressive aphasia can occur in younger people, but is <a href="https://my.clevelandclinic.org/health/diseases/17387-primary-progressive-aphasia-ppa#:%7E:text=Primary%20progressive%20aphasia%20(PPA)%20can,before%20going%20to%20the%20doctor">most commonly diagnosed</a> between age 50 and 75. </p>
<p><a href="https://linkinghub.elsevier.com/retrieve/pii/S0003999316300417">One-third of people</a> who have had a stroke will also experience aphasia. </p>
<p>While it’s most likely to affect older adults, brain injuries, strokes and tumours causing aphasia can also affect <a href="https://www.childneurologyfoundation.org/disorder/aphasia/">children</a>, <a href="https://pubmed.ncbi.nlm.nih.gov/27538892/">adolescents</a> and <a href="https://www.frontiersin.org/10.3389%2Fconf.fnhum.2019.01.00094/event_abstract">young adults</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-brain-regions-control-our-language-and-how-do-we-know-this-63318">What brain regions control our language? And how do we know this?</a>
</strong>
</em>
</p>
<hr>
<p>Based on <a href="https://www2.deloitte.com/content/dam/Deloitte/au/Documents/Economics/deloitte-au-dae-economic-impact-stroke-report-061120.pdf">current stroke statistics</a>, it is estimated that at least 140,000 Australians live with aphasia.</p>
<p>Despite the high rates and evidence of negative impacts, awareness of aphasia among the public and health-care professionals <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7731675/">is low</a>. </p>
<h2>What else plays a role?</h2>
<p>A person’s environment <a href="http://aphasiology.pitt.edu/1762/1/185.pdf">plays a major role</a> in enabling or disabling people with aphasia. </p>
<p>The <a href="https://www.who.int/health-topics/social-determinants-of-health#tab=tab_1">social determinants of health</a> influence the way someone experiences, recovers from, and lives with aphasia. </p>
<p>So people who have good access to health care, who hold high social positions, are wealthy, and have the support of an engaged family may be less impacted by the condition. Willis can be grateful in this respect.</p>
<p>The impact of aphasia is not felt only by the person who has it. The <a href="https://www.tandfonline.com/doi/abs/10.1080/02687038.2013.768330">psychological and social impact on the family, as well as the resulting disability</a>, is significant. </p>
<h2>How is it treated?</h2>
<p>There is no cure for aphasia, but <a href="https://www.tandfonline.com/doi/abs/10.1080/09638288.2021.2012843?journalCode=idre20">interventions</a> such as speech pathology can <a href="https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD000425.pub4/full">make a massive difference</a>, though there is no “one size fits all” approach. </p>
<p>Speech pathologists are <a href="https://www.speechpathologyaustralia.org.au/SPAweb/Resources_for_the_Public/What_is_a_Speech_Pathologist/SPAweb/Resources_for_the_Pubic/What_is_a_Speech_Pathologist/What_is_a_Speech_Pathologist.aspx?hkey=7e5fb9f8-c226-4db6-934c-0c3987214d7a">experts in communication disabilities</a>. They work within multidisciplinary health-care teams across a variety of hospital and community-based sites. This includes working with medical, nursing and allied health professionals such as psychologists, occupational therapists, social workers and physiotherapists. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/we-can-all-help-to-improve-communication-for-people-with-disabilities-101199">We can all help to improve communication for people with disabilities</a>
</strong>
</em>
</p>
<hr>
<p>Interventions for people with <a href="https://www.tandfonline.com/doi/full/10.1080/09638288.2022.2051080">progressive</a> and <a href="https://www.latrobe.edu.au/research/centres/health/aphasia/research/treatment-effectiveness">post-stroke</a> aphasia are tailored to the person, their family and community, with consideration of many factors including aphasia diagnosis and cause, severity and type of communication difficulties, level of participation in communication-related activities, the communication environment, their goals, mood and quality of life. </p>
<p>New and improved treatments are also <a href="https://www.latrobe.edu.au/research/centres/health/aphasia/research">being developed</a>. </p>
<h2>Do I have aphasia? What should I look out for?</h2>
<p>Sudden or gradual decline and changes in communication, personality, behaviour, memory and thinking skills should be investigated by a doctor. This could be a local GP, neurologist or geriatrician. A speech pathologist can also be a part of this process.</p>
<p>Be aware of the signs of stroke and aphasia associated with dementia. This may include difficulty finding the right word, mixing up words or sounds (for example, “cat” or “gog” for “dog”), using nonsense words, not being able to get any words out or not being able to understand others. If these changes are sudden or accompanied by a facial droop or difficulty moving your arms or legs, treat it as a medical emergency and <a href="https://strokefoundation.org.au/About-Stroke/Learn/signs-of-stroke">seek urgent medical attention</a>. </p>
<p>Willis and his family demonstrate love and strength in facing aphasia “head on”. Their embrace of social connectedness and their resolve to live by Willis’ words, “Live it up”, provide hope for others with aphasia around the world.</p>
<p>We can all play our part in being <a href="https://www.aphasia.ca/communication-tools-communicative-access-sca/">more effective communication partners</a> for people living with aphasia.</p><img src="https://counter.theconversation.com/content/180399/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Abby Foster receives funding from La Trobe University and the Centre of Research Excellence in Ear & Hearing Health of Aboriginal and Torres Strait Islander Children. She is an affiliate of the Centre of Research Excellence in Aphasia Recovery & Rehabilitation </span></em></p><p class="fine-print"><em><span>Caroline Baker receives funding from Stroke Foundation and Speech Pathology Australia. She is an affiliate of the Centre of Research Excellence in Aphasia Recovery & Rehabilitation </span></em></p>Bruce Willis’ family today revealed he has been diagnosed with aphasia. So what actually is aphasia and why haven’t we heard of it before?Abby Foster, Allied Health Research Advisor, Monash HealthCaroline Baker, Speech Pathology Research and Clinical Practice Lead, Monash HealthLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1786092022-03-05T14:53:37Z2022-03-05T14:53:37ZHow Kwame Nkrumah’s midnight speech set a tradition for marking the moment of liberation<figure><img src="https://images.theconversation.com/files/450141/original/file-20220305-19-gqchw6.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Kwame Nkrumah's vision still resonates with Ghanaians</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/jbdodane/9761663542">JB Dodane/Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure><p>As Ghana celebrates the <a href="https://nationaltoday.com/ghana-independence-day/">65th anniversary</a> of its independence from Britain, it is worth revisiting the landmark speech Kwame Nkrumah delivered at midnight to mark the event of Ghana’s birth. Nkrumah had led a mass movement demanding self-government in the anticolonial struggle and was, with independence, poised to become the first Prime Minister of independent Ghana.</p>
<p>Ghana was the first sub-Saharan African country to gain its independence from colonial rule. Accordingly, Nkrumah’s speech at the moment of liberation set a tone of pride in Ghana’s accomplishment along with hope for freedom struggles still in progress across decolonising Africa and its diaspora. </p>
<p>Today, Nkrumah’s midnight speech stands as a model of African political leadership that avoids the mimicry of Western models.</p>
<p>Addressing a large and excited crowd, Nkrumah’s first words at midnight were:</p>
<blockquote>
<p>At long last the battle has ended! And thus Ghana, your beloved country, is free for ever.</p>
</blockquote>
<p>At the climax of the speech, Nkrumah acknowledged the larger stakes of the moment, declaring: </p>
<blockquote>
<p>Our independence is meaningless unless it is linked up with total liberation of the African continent.</p>
</blockquote>
<p>My <a href="https://www.tandfonline.com/doi/full/10.1080/15358593.2022.2027996">recent analysis</a> of Nkrumah’s midnight speech reflects on how he used his performance at the moment of Ghana’s independence to outline his vision of colonial freedom. Nkrumah’s revolutionary rhetoric refused the narrow grounds on which Britain was offering Ghana independence. Instead, he sought to generate new forms of belonging outside the conditions that were the remnants of colonialism. </p>
<p>Nkrumah embraced various populations in the colony who had been devalued by the colonial administration and ignored by African leaders who were his rivals. His rhetoric worked alongside political rallies to organise a mass base that was a means of distinction for his party, the Convention People’s Party.</p>
<p>In addition, he advocated for pan-African union so that Ghana and other emergent African countries wouldn’t perpetuate the legacies of colonial rule. At the time, Nkrumah worried that the piecemeal liberation of colonised territories would limit the transformative potential of independence. Instead, he promoted African union as a way to establish new shared identities and a self-determined presence in international affairs. </p>
<p>Today, Nkrumah’s vision of a united Africa stands as a testament to the common humanity of Africans. Nkrumah’s embrace of the mass base and pan-African discourses mattered because it injected populist energies into Gold Coast politics and demonstrated a way for Africans to pursue sovereignty within conditions of their own making.</p>
<h2>Nkrumah’s vision of freedom</h2>
<p>Trinidadian journalist <a href="https://www.jstor.org/stable/274609">George Padmore</a>, one of Nkrumah’s closest advisors, singled out how Nkrumah and the Convention People’s Party offered a new form of political leadership that was centred on</p>
<blockquote>
<p>the plebeian masses, the urban workers, artisans, petty traders, market women and fishermen, the clerks, the junior teachers, and the vast farming communities of the rural areas.</p>
</blockquote>
<p>It is fitting, then, that, in the speech, Nkrumah named the people on equal terms with the chiefs when he recognised those who would “reshape the destiny of this country”. Rather than taking his cues from traditional rulers, Nkrumah used this mass base to ensure that the possibilities of postcolonial society would not be limited by precolonial traditions. </p>
<p>He also promoted the masses as representatives of the “new African” who is </p>
<blockquote>
<p>ready to fight his own battle and show that after all the black man is capable of managing his own affairs.</p>
</blockquote>
<p>This proud and defiant vision of African political achievement was in stark contrast to racist and imperial ways of knowing that degraded and doubted African and black potential. </p>
<p>A second major theme of Nkrumah’s midnight speech was his view of the role of pan-Africanism in relationship to national consolidation. He said that Ghana’s independence was</p>
<blockquote>
<p>meaningless unless it is linked up with total liberation of the African continent. </p>
</blockquote>
<p>Although this became one of the most famous statements of the speech, its novel sentiment should not be overlooked. It marked Nkrumah’s widening of freedom to include pan-African dimensions. In subsequent years, Nkrumah would coordinate efforts across the continent, including the <a href="http://hrlibrary.umn.edu/africa/OAU_Charter_1993.html">1963 ratification of the Organisation of African Unity</a>. </p>
<p>Today, one of the enduring tributes to his work encouraging political and economic cooperation among African nations is the statue of Nkrumah on the grounds of the African Union building in Addis Ababa, Ethiopia, which depicts him as he was dressed during the midnight speech.</p>
<p>One of the curious aspects of Nkrumah’s midnight speech is the fact that he asked the band to play the Ghana National Anthem twice. The first time, it was played after a moment of silence and Nkrumah’s declaration: “Ghana is free forever!”</p>
<p>Later, Nkrumah called for the anthem to be played again, saying:</p>
<blockquote>
<p>this time … it is going to be played in honour of the foreign states who are here with us today. </p>
</blockquote>
<p>This second anthem, however, has been written out of most of the widespread records of the speech (including the version that Nkrumah included in his 1961 book, <em>I Speak of Freedom</em>).</p>
<p>My archival work in both Ghana and the US has recovered a complete version of the speech that includes the second anthem and other omitted passages.</p>
<p>In my view, these dual anthems mark both the national and international audiences that Nkrumah was addressing. </p>
<p>For Nkrumah, achieving genuine freedom was not as simple as merely renaming the Gold Coast “Ghana” and replacing the colonial administrators in Accra’s Christiansborg Castle with African agents. The “hard work” that Nkrumah focused on that night included a social and ideological reorganisation to match the political changes underway within independence. In this view, the pursuit of pan-African union was central to the transfiguration of the political kingdom. </p>
<h2>Beyond Ghana</h2>
<p>Nkrumah’s midnight speech is everywhere in Ghana today. It circulates on radio and in social media posts. Key quotations from it are emblazoned on t-shirts, posters, magazine covers, billboards, and beyond. As Nkrumah has ascended to founding father status within Ghana’s current Fourth Republic, contemporary politicians from all sides of the political spectrum invoke it. This is true even when advocating for policies that are in direct tension with those of Nkrumahism.</p>
<p>What is less well known, however, is that, in part because of Nkrumah’s influence and the catalytic role of Ghana’s freedom, the midnight independence speech has become a transnational tradition tied to moments of postcolonial foundation across the globe. </p>
<p>The midnight staging of Nkrumah’s speech was, in fact, an allusion to <a href="https://www.cam.ac.uk/tryst_with_destiny">the midnight speech</a> that Jawaharlal Nehru delivered for India’s independence ten years earlier. In addition, the convention of a midnight independence ceremony became a recurring practice for other countries emerging from colonial rule. Midnight independence ceremonies in subsequent years included Nigeria (1960), Sierra Leone (1961), Tanzania (1961), Botswana (1966), Angola (1975), and Zimbabwe (1980). </p>
<p>Across the Black Atlantic, Guyana marked independence with a midnight celebration (1966) and even the 1997 handover of Hong Kong to China was celebrated with a midnight countdown.</p><img src="https://counter.theconversation.com/content/178609/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Erik Johnson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Nkrumah’s rhetorical vision used the politics of the crowd to build a postcolonial community outside of the conscripts of colonialism.Erik Johnson, Assistant Professor, Media and Communications Studies, Stetson University Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1663242021-08-27T12:31:27Z2021-08-27T12:31:27ZTikTok, #BamaRush and the irresistible allure of mocking Southern accents<figure><img src="https://images.theconversation.com/files/417869/original/file-20210825-19-1bffixs.jpg?ixlib=rb-1.1.0&rect=211%2C30%2C2548%2C1840&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The University of Alabama's Alpha Phi sorority runs out of Bryant-Denny Stadium during bid day in 2014.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/AlabamaSororities/2f63d36f739d40aea211a77cb69ac367/photo?Query=alabama%20AND%20sorority&mediaType=photo&sortBy=&dateRange=Anytime&totalCount=50&currentItemNo=26">AP Photo/Brynn Anderson</a></span></figcaption></figure><p>As college students across the country return to campuses grappling with the COVID-19 delta variant, Greek letters of a different variety have captivated social media feeds with stunning virality.</p>
<p>The #BamaRush trend on TikTok introduced followers to the annual recruitment process for <a href="http://www.npcwomen.org/">National Panhellenic Conference</a> sororities at the University of Alabama. The popular videos offer a firsthand perspective on the recruitment process, showcasing the various events and the women’s corresponding fashion choices – the “outfit of the day,” or #OOTD – for each stage.</p>
<p>When <a href="https://www.nytimes.com/2021/08/17/style/bama-rush-explained.html">this phenomenon</a> came to my attention, I noticed that TikTok’s algorithm fed me not only the posts of women participating in #BamaRush but also <a href="https://www.tiktok.com/@i_delz/video/6994805208417111302?is_copy_url=0&is_from_webapp=v1&sender_device=pc&sender_web_id=6904819716789044741">parody videos</a> made by people glued to the unfolding events.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1428841325546274816"}"></div></p>
<p>In these videos, I immediately observed a fixation on the women’s accents, which <a href="https://www.buzzfeed.com/daily/alabama-sorority-rush-tiktok-bamarush">one reporter</a> described as “thick” and “heavy.” </p>
<p>Having been born and raised in northeast Georgia and educated in North Carolina, I was quite young when I intuited that, if I were to be taken seriously as an actor, a scholar and a human, my accent would have to go. By the time I arrived in New York in 2006, I had successfully erased most markers of my Southernness from my speech. What remained I was able to surgically remove after receiving notes and feedback from directors and coaches. </p>
<p><a href="https://theatre.utk.edu/people/katie-cunningham/">Now I teach voice and speech</a> to actors in a theater program in the South, and I think a lot about how people perceive the native speech varieties of this region. What’s behind this enduring fascination with – and thinly veiled disdain for – some Southern American accents?</p>
<h2>Sorority culture rife with issues</h2>
<p>I speak from personal experience about sorority culture because, for a short time, I was a member of one at the University of North Carolina at Chapel Hill. I experienced recruitment from within and witnessed some of the problematic aspects of this system. My sophomore year, I formally withdrew – what’s called “<a href="https://www.teenvogue.com/story/10-things-to-know-about-dropping-your-sorority">de-sistering</a>.”</p>
<p>During the flood of media coverage of the #BamaRush trend, Slate’s ICYMI podcast did an <a href="https://slate.com/culture/2021/08/alabama-rush-tiktok-videos-explained.html">explainer episode</a> addressing all the “-isms” inherent to certain Greek organizations, including racism, sexism, classism and weight discrimination.</p>
<p>In fact, the University of Alabama’s own student newspaper, The Crimson White, <a href="https://cw.ua.edu/16498/news/the-final-barrier-50-years-later-segregation-still-exists/">published an article</a> in 2013 that investigated racism in sorority recruitment, spurring <a href="https://www.theguardian.com/world/2013/sep/17/sorority-segregation-ended-university-alabama">a process of integration</a>. (Yes, in 2013!) </p>
<p>To be clear: There are plenty of things to criticize about the National Panhellenic Conference’s sorority culture.</p>
<p>Accents, however, aren’t one of them.</p>
<h2>Inside the #BamaRush accents</h2>
<p>Among the #BamaRush vloggers, one who garnered intense attention during the unfolding recruitment process was Makayla Culpepper, whose <a href="https://www.tiktok.com/@whatwouldjimmybuffettdo/video/6994056773153967365?is_copy_url=0&is_from_webapp=v1&sender_device=pc&sender_web_id=6904819716789044741">pronunciation of “philanthropy”</a> in an early round was the subject of much mockery. In fact, <a href="https://www.tiktok.com/@whatwouldjimmybuffettdo/video/6996715146299067654?is_copy_url=0&is_from_webapp=v1&sender_device=pc&sender_web_id=6904819716789044741">she credits</a> this pronunciation as the genesis of her newfound internet stardom. Culpepper, who is biracial, was <a href="https://www.seventeen.com/life/school/a37291501/alabama-rush-tiktok/">subsequently dropped</a> from recruitment under dubious circumstances.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1426906951837958144"}"></div></p>
<p>Other pronunciations that have <a href="https://www.tiktok.com/@gabbyyhorne/video/6997066057445838085?traffic_type=google&referer_url=amp_bamarushtok&referer_video_id=6997066057445838085&is_copy_url=0&is_from_webapp=v1&sender_device=pc&sender_web_id=6904819716789044741">piqued the interest</a> of onlookers include words in what linguists and accent coaches call the PRICE <a href="https://en.wikipedia.org/wiki/Lexical_set">lexical set</a>, a category of words that are generally pronounced with the same vowel sound in their stressed syllable. </p>
<p>Several of the #BamaRush TikTokkers pronounce words in the PRICE set – such as “bite,” “rice,” “my” and “right” – with a single vowel that sounds something like “ah.” This differs from the way these words are pronounced in a <a href="https://vimeo.com/75961485">so-called General American</a> accent, in which a speaker glides through two different vowel sounds, resulting in something like “aight” in “right.” Some of the women’s pronunciations of “on” and “own” are nearly indistinguishable, <a href="https://www.southerncultures.org/article/on-and-on-appalachian-accent-and-academic-power/">another marker of some dialects of Southern American English</a>. </p>
<p><a href="https://bittersoutherner.com/with-drawl#.YSahZS2cbs0">The quality described as Southern drawl</a> may be related to the way some speakers vocalize words like “dress” and “hair” with a lengthened glide between vowels and syllable break: “dray-ess” and “hay-ur.”</p>
<p>It is questionable to connect the undeniably performative aspects of these videos – the fashion shows, the bid day envelope opening <a href="https://www.tiktok.com/@hannahmorris2/video/6996761679719582982?is_copy_url=0&is_from_webapp=v1&sender_device=pc&sender_web_id=6904819716789044741">videos</a>, the choreographed <a href="https://www.tiktok.com/@ebbabyyy/video/6995531572199836933?lang=en&is_copy_url=0&is_from_webapp=v1&sender_device=pc&sender_web_id=6904819716789044741">dorm room introductions</a> – with the mistaken idea that these accents are part of the performance. When investigating the #BamaRush trend, I heard these women described more than once as “<a href="https://www.tiktok.com/@torikonchel/video/6995003127602613510?lang=en&is_copy_url=0&is_from_webapp=v1&sender_device=pc&sender_web_id=6904819716789044741">characters</a>” in an unfolding “drama.” </p>
<p>But this is not a scripted series with characters or a reality show with contestants. They are not playing at sounding like this. It’s just their speech. And speech is essential to identity.</p>
<h2>The cost of satirizing Southern accents</h2>
<p>In 2019, an episode of the podcast series <a href="https://www.wnycstudios.org/podcasts/dolly-partons-america/episodes/dolly-partons-america-episode">Dolly Parton’s America</a> included interviews with students at my institution, the University of Tennessee, Knoxville. In it, they shared their encounters with the realities of linguistic bias. As one interviewee noted, her own mother cautioned her: “If you want people to take you seriously, we’re going to have to work on the way you talk.” </p>
<p>One cost of scoffing at Southern accents is the ceaseless perpetuation of negative stereotypes about Southern people. <a href="https://libres.uncg.edu/ir/uncg/f/J_DeJesus_Northern_2013.pdf">A 2013 study</a> found that by the age of 9 or 10, all children – including Southern children – identified Northern-accented speakers as sounding “smarter,” which indicates that they’re internalizing stereotypes about speech at a young age. </p>
<p>Psycholinguist <a href="https://www.google.com/books/edition/How_You_Say_It/pkSvDwAAQBAJ?hl=en&gbpv=0">Katherine Kinzler</a> has also shown that accent-based biases may be tied to the mistaken assumption that speakers should be able to adjust their speech to conform to societal norms. Kinzler argues that this “perception of controllability” is at the root of weight- and mental health-based stigmas as well. </p>
<p>Furthermore, most mockery of Southern accents underestimates the <a href="https://catalog.ldc.upenn.edu/LDC2012S03">linguistic diversity</a> of the South and creates the false perception that Southern accents are all the same. In addition, <a href="https://read.dukeupress.edu/american-speech/article-abstract/93/3-4/497/136131/Southern-Speech-With-A-Northern-AccentPerformance?redirectedFrom=fulltext">research shows</a> that most accent imitations are not especially accurate. There is a reason accent and dialect coaches are specially trained to help actors do this work respectfully and convincingly. </p>
<p>The harm of stereotypical accent imitations is one familiar to many whose speech exists outside the accepted “standard,” like speakers of <a href="https://www.britannica.com/topic/African-American-English">African American English</a> and those for whom English is a second language. The same forces that reduce Southern speech to a uniform monolith also run the risk of reducing the idea of “Southernness” to a single stereotype: white, unintelligent, bigoted. This discounts the diversity of the South, and the significant cultural and political power of Black Southerners, who make up large shares of the populations of many Southern states – <a href="https://www.census.gov/quickfacts/AL">including Alabama</a>.</p>
<p>So why should people care about Southern accents being the butt of viral jokes? </p>
<p>Ignoring the way that speech and identity are so inextricably linked erases the people behind the voices. Going after these women’s accents when there is much about the institutions themselves to legitimately critique feels like punching down. This is especially true when the accent is played only for laughs.</p><img src="https://counter.theconversation.com/content/166324/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The author was a member of an NPC sorority from 2001-2002.</span></em></p>There’s plenty to critique about sorority culture. But going after Southern accents is punching down.Kathryn Cunningham, Assistant Professor of Theatre, University of TennesseeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1662432021-08-19T18:01:21Z2021-08-19T18:01:21ZBat pups babble and bat moms use baby talk, hinting at the evolution of human language<figure><img src="https://images.theconversation.com/files/416565/original/file-20210817-27-17iivj8.jpg?ixlib=rb-1.1.0&rect=6%2C74%2C4128%2C2541&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A babbling pup produces distinct syllables, visualized in this composite image.</span> <span class="attribution"><a class="source" href="https://www.eurekalert.org/multimedia/782534">Michael Stifter and Ahana Fernandez</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>“Mamama,” “dadada,” “bababa” – parents usually welcome with enthusiasm the sounds of a baby’s babble. Babbling is the first milestone when learning to speak. <a href="https://www.wiley.com/en-us/Phonological+Development%3A+The+First+Two+Years%2C+2nd+Edition-p-9781118342831">All typically developing infants babble</a>, no matter which language they’re learning. </p>
<p>Speech, the oral output of language, requires precise control over the lips, tongue and jaw to produce one of the basic speech subunits: the syllable, like “ba,” “da,” “ma.” <a href="https://www.wiley.com/en-us/Phonological+Development%3A+The+First+Two+Years%2C+2nd+Edition-p-9781118342831">Babbling is characterized by universal features</a> – for example, repetition of syllables and use of rhythm. It lets an infant <a href="https://www.routledge.com/The-Emergence-of-the-Speech-Capacity/Oller/p/book/9780805826296">practice and playfully learn</a> how to control their vocal apparatus to correctly produce the desired syllables.</p>
<p>More than anything else, <a href="https://www.cambridge.org/core/books/evolution-of-language/2347BC6741639875250495BA3435056F">language defines human nature</a>. But its evolutionary origins have puzzled scientists for decades. Investigating the biological foundations of language across species – as I do in bats – is a promising way to gain insights into key features of human language.</p>
<p><a href="https://scholar.google.com/citations?user=FZy7JlIAAAAJ&hl=en&oi=ao">I’m a behavioral biologist</a> who has spent many months of 10-hour days sitting in front of bat colonies in Panama and Costa Rica recording the animals’ vocalizations. My colleagues and I have found striking parallels between the <a href="https://science.sciencemag.org/lookup/doi/10.1126/science.abf9279">babbling produced by these bat pups and that by human infants</a>. Identifying a mammal that shares similar brain structure with human beings and is also capable of vocal imitation may help us understand the cognitive and neuromolecular foundations of vocal learning.</p>
<h2>Vocal learning in other animals</h2>
<p>Scientists learned a great deal about vocal imitation and vocal development by studying songbirds. They are among the best-known vocal learners, and the learning process of young male songbirds <a href="https://doi.org/10.1146/annurev.neuro.22.1.567">shows interesting parallels</a> to human speech development. Young male songbirds also practice their notes in a practice phase reminiscent of human infant babbling.</p>
<p>However, songbirds and people possess different vocal apparatus – birds vocalize by using a syrinx, humans use a larynx – and their brain architecture differs. So drawing direct conclusions from songbird research for humans is limited.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/416572/original/file-20210817-17-s2xcgs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="brown bat on tree with its mouth open while vocalizing" src="https://images.theconversation.com/files/416572/original/file-20210817-17-s2xcgs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/416572/original/file-20210817-17-s2xcgs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/416572/original/file-20210817-17-s2xcgs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/416572/original/file-20210817-17-s2xcgs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/416572/original/file-20210817-17-s2xcgs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/416572/original/file-20210817-17-s2xcgs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/416572/original/file-20210817-17-s2xcgs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A greater sac-winged bat pup babbling in its day roost.</span>
<span class="attribution"><a class="source" href="https://www.eurekalert.org/multimedia/782536">Michael Stifter</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Luckily, in Central America’s tropical jungle, there’s a mammal that engages in a very conspicuous vocal practice behavior that is <a href="https://doi.org/10.1007/s00114-006-0127-9">strongly reminiscent of human infant babbling</a>: the neotropical greater sac-winged bat, <em>Saccopteryx bilineata</em>. The pups of this small bat, dark-furred with two prominent white wavy stripes on the back, engage in daily babbling behavior during large parts of their development.</p>
<p>Greater sac-winged bats possess a large vocal repertoire <a href="https://doi.org/10.1007/s00265-004-0768-7">that includes 25</a> <a href="https://doi.org/10.1016/j.anbehav.2008.05.018">distinct syllable types</a>. A syllable is the smallest acoustic unit, defined as a sound surrounded by silence. These adult bats create <a href="https://doi.org/10.1007/s00265-004-0768-7">multisyllabic vocalizations and two song types</a>. The territorial song warns potential rivals that the owner is ready to defend their home turf, while the courtship song lets female bats know about a male bat’s fitness as a potential mate.</p>
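The definition of a syllable as a sound surrounded by silence suggests a simple segmentation rule. As a minimal sketch (my own illustration, not the researchers' actual acoustic analysis), a recording's amplitude envelope can be split into syllables wherever runs of sound are bounded by runs of silence:

```python
# Toy illustration: segment an amplitude envelope into "syllables",
# i.e. contiguous stretches of sound bounded by silence. The threshold
# and the envelope values are made up for demonstration purposes.

def segment_syllables(envelope, threshold=0.1):
    """Return (start, end) index pairs of contiguous above-threshold runs."""
    syllables = []
    start = None
    for i, amp in enumerate(envelope):
        if amp > threshold and start is None:
            start = i                         # sound begins after silence
        elif amp <= threshold and start is not None:
            syllables.append((start, i))      # silence ends the syllable
            start = None
    if start is not None:                     # recording ended mid-sound
        syllables.append((start, len(envelope)))
    return syllables

# A made-up envelope: two bursts of sound separated by silence.
envelope = [0.0, 0.0, 0.5, 0.8, 0.6, 0.0, 0.0, 0.4, 0.9, 0.0]
print(segment_syllables(envelope))  # [(2, 5), (7, 9)] -> two syllables
```

Real bioacoustic pipelines work on spectrograms and use more robust energy measures, but the underlying idea is the same: silence delimits the smallest acoustic units.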
<p>Of particular interest to me and my colleagues, the greater sac-winged bat is <a href="https://doi.org/10.1098/rsbl.2009.0685">capable of vocal imitation</a> – the ability to learn a previously unknown sound from scratch by ear. It requires acoustic input, like human parents talking to their infants, or in the case of the greater sac-winged bat, adult males that sing.</p>
<p>The only other non-human mammal in which scientists have <a href="https://www.jstor.org/stable/4535550">documented babbling is the pygmy marmoset</a>, a small South American primate species that is not capable of vocal imitation. The greater sac-winged bat thus offered the first opportunity to study pup babbling in detail in a species that can imitate the vocalizations of others. But just how similar is bat babbling to human infant babbling?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/416567/original/file-20210817-15-vuosuh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="woman kneels behind video camera pointed at tree in tropical environment" src="https://images.theconversation.com/files/416567/original/file-20210817-15-vuosuh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/416567/original/file-20210817-15-vuosuh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/416567/original/file-20210817-15-vuosuh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/416567/original/file-20210817-15-vuosuh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/416567/original/file-20210817-15-vuosuh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/416567/original/file-20210817-15-vuosuh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/416567/original/file-20210817-15-vuosuh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Fernandez spent long days in the field recording the vocalizations of greater sac-winged bat pups in their day roosts.</span>
<span class="attribution"><a class="source" href="https://www.eurekalert.org/multimedia/782539">Michael Stifter</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>Hundreds of hours of bat babbling</h2>
<p>To answer that question, I monitored the vocal development of wild pups in eight colonies. During the day, <em>S. bilineata</em> find shelter and protection in tree crevices and outer walls of buildings. They’re very light-tolerant, and adults like to stay several centimeters apart from one another, making it easier for us to observe and record particular individuals.</p>
<p>To be able to recognize specific bats, I marked their forearms with colored plastic bands. I followed 20 pups from birth until weaning. Starting around 2.5 weeks of age, and continuing until weaning around 10 weeks old, pups babble away between sunrise and sunset in the day roost. It’s very loud, audible even to the human ear because some babbled syllables are within our hearing range (others are too high for us to hear). For each pup, I recorded babbling bouts – some of which lasted as long as 43 minutes – and the accompanying behaviors throughout their entire development. In contrast, adult bats produce <a href="https://doi.org/10.1007/s00265-004-0768-7">vocalizations that last no more than a few minutes</a>.</p>
<p><audio preload="metadata" controls="controls" data-duration="10" data-image="" data-title="Excerpt of a babbling bout of a Saccopteryx bilineata pup, in real-time." data-size="244600" data-source="Ahana A. Fernandez" data-source-url="https://www.eurekalert.org/multimedia/782535" data-license="CC BY-ND" data-license-url="http://creativecommons.org/licenses/by-nd/4.0/">
<source src="https://cdn.theconversation.com/audio/2247/babbling-excerpt-1-real-speed-black-background-credit-ahana-fernandez.mp3" type="audio/mpeg">
</audio>
<div class="audio-player-caption">
Excerpt of a babbling bout of a Saccopteryx bilineata pup, in real-time.
<span class="attribution"><a class="source" rel="nofollow" href="https://www.eurekalert.org/multimedia/782535">Ahana A. Fernandez</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a><span class="download"><span>239 KB</span> <a target="_blank" href="https://cdn.theconversation.com/audio/2247/babbling-excerpt-1-real-speed-black-background-credit-ahana-fernandez.mp3">(download)</a></span></span>
</div></p>
<p>Scientists have known for a while that <a href="https://doi.org/10.1007/s00114-006-0127-9">pups learn how to sing by</a> <a href="https://doi.org/10.1098/rsbl.2009.0685">vocally imitating adult tutors while babbling</a>. But our new study <a href="https://science.sciencemag.org/lookup/doi/10.1126/science.abf9279">provides the first formal analysis</a> that their babbling really does share many of the features that characterize babbling in human infants: duplication of syllables, use of rhythm and an early onset of the babbling phase during development.</p>
<p>Just as human infants produce sounds that are recognizable as what are called canonical adult syllables – those with mature features that sound like what an adult speaker produces – bat pups’ babbling consists of syllable precursors that are part of the adult vocal repertoire.</p>
<p>And just as human babbling includes what are probably playful sounds produced as the infant explores their voice, bat babbling includes so-called protosyllables that are only produced by pups.</p>
<p>Moreover, pup babbling is universal. Each pup, regardless of sex and regional origin, babbled during its development.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/DN-9a4MVA1Q?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The pup (on the right, with darker fur) sits beside the mother bat and engages in babbling behavior in the day roost. <em>Credit: Michael Stifter</em></span></figcaption>
</figure>
<h2>Baby talk, from mom to pup</h2>
<p>During my first field season, I noticed that mothers and pups interacted behaviorally and vocally during babbling sequences. Mothers produced a distinct call type directed at their pups while the pups babbled.</p>
<p>We humans alter our speech depending on whether we are addressing infants or adults. This infant-directed speech – also known as motherese – is a <a href="https://doi.org/10.1126/science.277.5326.684">special form of social feedback for the vocalizing infant</a>. It’s <a href="https://doi.org/10.1037/0012-1649.24.1.14">characterized by universal features</a>, including higher pitch, slower tempo and exaggerated intonation contours. The timbre – the voice color – <a href="https://doi.org/10.1016/j.cub.2017.08.074">also changes when people speak “motherese”</a> compared to when talking to other adults. Timbre is what makes a voice sound a bit cold and harsh or warm and cozy. Could it be that female bats also changed their timbre, depending on whom they directed their calls to?</p>
<p>The results were clear: For the first time, we’d found a non-human mammal that changes the color of its voice depending on the addressee. <a href="https://doi.org/10.3389/fevo.2020.00265">Bats also use baby talk</a>!</p>
<p>Our results introduce the greater sac-winged bat as a promising candidate for cross-species comparisons on the evolution of human language. Babbling is a behavioral readout of the vocal learning happening in the brain: when pups babble, they imitate the adult song, showing us when learning is taking place. This offers a unique opportunity to study the genes involved in vocal imitation.</p>
<p>And since bats share their basic brain architecture with people, we can translate our research findings from bats to humans. I’m fascinated that two mammal species that are so different share striking parallels in how they reach the same goal: to acquire a complex adult vocal repertoire – namely, language.</p>
<p class="fine-print"><em><span>Ahana Aurora Fernandez does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Vocal imitation is a key part of how humans learn to speak. New research shows that bats babble to learn and use baby talk to teach, just like people do.Ahana Aurora Fernandez, Postdoctoral Researcher in Behavioral Ecology and Bioacoustics, Museum für Naturkunde, BerlinLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1611432021-06-28T12:13:31Z2021-06-28T12:13:31ZDanish children struggle to learn their vowel-filled language – and this changes how adult Danes interact<figure><img src="https://images.theconversation.com/files/408207/original/file-20210624-21-169i9pi.png?ixlib=rb-1.1.0&rect=0%2C0%2C1169%2C420&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The way Danes speak makes it much harder for Danish children to learn the language. </span> <span class="attribution"><span class="source">Fabio Trecca</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>Denmark is a rich country with an extensive welfare system and strong education. Yet surprisingly, Danish children have trouble learning their mother tongue. Compared to Norwegian children, who are learning a very similar language, Danish kids on average know <a href="https://doi.org/10.1111/lang.12450">30% fewer words</a> at 15 months and take nearly <a href="https://doi.org/10.1080/01690965.2010.515107">two years longer to learn the past tense</a>. In “<a href="https://archive.org/details/hamletsh00shak">Hamlet</a>,” William Shakespeare famously wrote that “something is rotten in the state of Denmark,” but he might as well have been talking about the Danish language.</p>
<p>We are a <a href="https://scholar.google.com/citations?hl=en&user=_0jbd88AAAAJ">cognitive scientist</a> and <a href="https://pure.au.dk/portal/en/persons/fabio-trecca(76079e3a-3860-4424-8829-899ab5fa5243).html">language scientist</a> from the <a href="https://projects.au.dk/the-puzzle-of-danish/">Puzzle of Danish</a> group at Aarhus University and Cornell. Through <a href="https://doi.org/10.1111/lang.12450">our research</a>, we have found that the uniquely peculiar way that Danes speak seems to make it difficult for Danish children to learn their native language – and this challenges some central tenets of the science of language.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/408210/original/file-20210624-15-1ytzcfy.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Two spectrograms with the one for Danish a nearly continuous bar and the one for Norwegian shows sharp breaks." src="https://images.theconversation.com/files/408210/original/file-20210624-15-1ytzcfy.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/408210/original/file-20210624-15-1ytzcfy.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=305&fit=crop&dpr=1 600w, https://images.theconversation.com/files/408210/original/file-20210624-15-1ytzcfy.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=305&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/408210/original/file-20210624-15-1ytzcfy.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=305&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/408210/original/file-20210624-15-1ytzcfy.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=384&fit=crop&dpr=1 754w, https://images.theconversation.com/files/408210/original/file-20210624-15-1ytzcfy.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=384&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/408210/original/file-20210624-15-1ytzcfy.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=384&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A visual depiction of the words for ‘smoked trout’ spoken out loud in Danish (top) and Norwegian (bottom). Note how in Danish the two words completely melt into each other.</span>
<span class="attribution"><span class="source">Fabio Trecca</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>Why is Danish so hard?</h2>
<p>There are three main reasons why Danish is so complicated. First, with about 40 different vowel sounds – compared to between 13 and 15 vowels in English depending on dialect – Danish has one of the largest vowel inventories in the world. On top of that, Danes often turn consonants into vowel-like sounds when they speak. And finally, Danes also like to “swallow” the ends of words and omit, on average, about a quarter of all syllables. They do this not only in casual speech but also when reading aloud from written text.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/s-mOy8VUEBk?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The difficulty of Danish is no secret in Scandinavia, as seen in this clip from a Norwegian comedy TV show.</span></figcaption>
</figure>
<p>Other languages might incorporate one of these factors, but it seems that <a href="https://global.oup.com/academic/product/the-phonology-of-danish-9780198242680?cc=us&lang=en&#">Danish may be unique in combining all three</a>. The result is that Danish ends up with an abundance of sound sequences with few consonants. Because consonants play an important role in helping listeners figure out where words begin and end, the preponderance of vowel-like sounds in Danish <a href="https://doi.org/10.1111/lang.12325">appears to make it difficult to understand and learn</a>. It isn’t clear why or how Danish ended up with these strange quirks, but the upshot seems to be, as the German author <a href="http://www.zeno.org/Literatur/M/Tucholsky,+Kurt/Werke/1927/Eine+sch%C3%B6ne+D%C3%A4nin">Kurt Tucholsky quipped</a>, that “the Danish language is not suitable for speaking … everything sounds like a single word.”</p>
<h2>Kids learn later, adults process differently</h2>
<p>Before we could study the way Danish children learn their native language, we needed to figure out whether the peculiarities of Danish speech affected their ability to understand it. </p>
<p>To do this, our team sat Danish two-year-olds in front of a screen showing two objects, such as a car and a monkey. We then used an eye tracker to trace where the kids were <a href="https://doi.org/10.1177/0023830919893390">looking while listening to Danish sentences</a>.</p>
<p>When the children heard the consonant-rich “Find bilen!” – which sounds like “Fin beelen!” when spoken and means “Find the car!” – the toddlers would look at the car quite quickly.</p>
<p>However, when they heard the vowel-rich “Her er aben!” – which sounds like “heer-ahben!” and means “Here’s the monkey!” – it took the kids nearly half a second longer to look at the monkey. In this vowel-laden sentence, the boundaries between words become blurry and make it harder for the toddlers to understand what is being said. Half a second may not seem like much, but in the world of speech it is a very long time. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/408215/original/file-20210624-15-2t427r.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A woman speaking to her young child." src="https://images.theconversation.com/files/408215/original/file-20210624-15-2t427r.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/408215/original/file-20210624-15-2t427r.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=403&fit=crop&dpr=1 600w, https://images.theconversation.com/files/408215/original/file-20210624-15-2t427r.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=403&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/408215/original/file-20210624-15-2t427r.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=403&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/408215/original/file-20210624-15-2t427r.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=506&fit=crop&dpr=1 754w, https://images.theconversation.com/files/408215/original/file-20210624-15-2t427r.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=506&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/408215/original/file-20210624-15-2t427r.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=506&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Children learn language by listening to people speak, but the quirks of Danish make this a harder process compared to other languages.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/small-boy-talking-to-his-mother-royalty-free-image/163870038?adppopup=true">Thanasis Zovoilis/Moment via Getty Images</a></span>
</figcaption>
</figure>
<p>But does the abundance of vowels in Danish also make it more difficult for children to learn their native language? It turns out that it does. In another study, we found that <a href="https://doi.org/10.1016/j.jecp.2017.10.011">toddlers struggle to learn new words</a> when these words are sandwiched between a lot of vowels.</p>
<p>Danish children do, of course, eventually learn their native tongue. However, our group has found that the effects of the opaque Danish sound structure don’t go away when children grow up: Instead, they seem to shape the way adult Danes process their language. Denmark and Norway are closely related historically, culturally, economically and educationally. The two languages also have similar grammars, past tense systems and vocabulary. Unlike Danes, though, Norwegians actually pronounce their consonants.</p>
<p>In several experiments, we asked Danes and Norwegians to listen to sentences in which either a word was deliberately created to sound ambiguous (like a word halfway between “tent” and “dent”) or the meaning of the whole sentence was unusual (such as “The goldfish bought a boy for his sister”). We found that because Danish speech is so ambiguous, Danes <a href="https://doi.org/10.1111/lang.12450">rely much more on context</a> – including what was said in the conversation before, what people know about each other and general background knowledge – to figure out what somebody is saying compared to adult Norwegians. </p>
<p>Together, these results indicate that the way people interpret language is not static, but dynamically adapts to the challenges posed by the specific language or languages they speak.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/408219/original/file-20210624-17-w0hjl8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man motioning with his hands as he explains something to another person." src="https://images.theconversation.com/files/408219/original/file-20210624-17-w0hjl8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/408219/original/file-20210624-17-w0hjl8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/408219/original/file-20210624-17-w0hjl8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/408219/original/file-20210624-17-w0hjl8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/408219/original/file-20210624-17-w0hjl8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=565&fit=crop&dpr=1 754w, https://images.theconversation.com/files/408219/original/file-20210624-17-w0hjl8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=565&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/408219/original/file-20210624-17-w0hjl8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=565&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Adults who speak Danish rely more on contextual clues – like what they talked about earlier and what they know about the other person – than speakers of other languages.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/businessman-discussing-project-with-coworker-royalty-free-image/596367015?adppopup=true">Thomas Barwick/Stone via Getty Images</a></span>
</figcaption>
</figure>
<h2>Not all languages are the same</h2>
<p>There has been a longstanding debate within the language sciences about whether all languages are <a href="https://doi.org/10.1075/hl.39.2-3.08jos">similarly complex</a> and whether this might affect how people’s brains learn and process language. Our discovery about Danish challenges the idea that all native languages are equally easy to learn and use. Indeed, learning different languages from birth may lead to distinct and separate ways of processing those languages.</p>
<p>Our results also have important practical implications for people who are struggling with language – whether because of a single traumatic event like a stroke or due to genetic and other long-term factors. Many current interventions meant to support language recovery are based on <a href="https://doi.org/10.1080/02687030903437682">studies in one language, usually English</a>. Researchers assume that these interventions would apply in the same way to individuals speaking other languages. However, if languages vary substantially in the way they’re learned and processed, an intervention that might work for one language might not work as well for another.</p>
<p><a href="https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/myth-of-language-universals-language-diversity-and-its-importance-for-cognitive-science/25D362A6566FCA4F51054D1C41104654">Linguists have looked at differences between languages before</a>, but few have been concerned with the possible impact that such differences may have on the kind of processing machinery that develops during language learning. Instead, much of the focus has been on searching for universal linguistic patterns that hold across all or most languages. However, our research suggests that linguistic diversity may result in variation in the way we learn and process language. And if a garden-variety language like Danish has such hidden depths, who knows what we’ll find when we look more closely at the rest of the <a href="http://langscape.umd.edu/map.php">world’s approximately 7,000 languages</a>?</p>
<p class="fine-print"><em><span>The Puzzle of Danish project is supported by the Danish Council for Independent Research (FKK Grant DFF-7013-00074 to MHC).
</span></em></p><p class="fine-print"><em><span>Fabio Trecca receives funding from the TrygFonden foundation of Denmark. </span></em></p>Recent research on Danish shows that not only is it hard for Danish children to learn their mother tongue, but adult Danes use their native language differently than speakers of other languages.Morten H. Christiansen, The William R. Kenan, Jr., Professor of Psychology, Cornell UniversityFabio Trecca, Assistant Professor of Cognitive Science of Language, Aarhus UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1521592021-02-03T20:11:36Z2021-02-03T20:11:36ZWhy some words hurt some people and not others<figure><img src="https://images.theconversation.com/files/377053/original/file-20210104-17-6gxoyh.jpg?ixlib=rb-1.1.0&rect=13%2C0%2C4639%2C3153&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Communication between people would be very difficult, if not impossible, without discursive memory. Our memories allow us to understand each other or to experience irreconcilable differences.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>The <a href="https://www.theglobeandmail.com/canada/article-university-of-ottawa-professor-at-centre-of-controversy-says-she/">October 2020 controversy at the University of Ottawa</a> surrounding the use of the n-word reminded us that there are parts of our history — such as the transatlantic slave trade, the Holocaust or the repression of First Nations — that must be approached with respect and empathy, even when they are talked about in an effort to better understand them.</p>
<p>Only those who have lived through these experiences can fully feel the pain and humiliation associated with certain words such as the n-word. It must be acknowledged that certain words always carry a heavy burden with them. Their mere evocation can bring back painful memories, buried deep in what is known as discursive memory.</p>
<p>As a specialist and researcher in linguistics and discourse analysis, I am interested in communication between individuals from different cultures because the misunderstandings it provokes are often based on unconscious reflexes and reference points, which makes them all the more pernicious.</p>
<h2>The role of discursive memory</h2>
<p>Communication between humans would be very difficult, if not impossible, without discursive memory. Our memories allow us to understand each other or to experience irreconcilable differences.</p>
<p>“Every nasty word we utter joins sentences, then paragraphs, pages and manifestos and ends up killing the world,” entertainer Gregory Charles said in a <a href="https://cdn-4ncwrlkdq3uc0b1u7x.netdna-ssl.com/wp-content/uploads/2017/01/gregory-charles-facebook.png?x22205">tweet</a>, quoting his father, after the attack at the Grand Mosque in Québec City in 2017. This idea, expressed here in a concrete way, is defined by specialists in discourse analysis by the concept of <a href="https://doi.org/10.4000/linx.1158">interdiscourse</a>.</p>
<p>Words, then, are not just collections of letters, nor are they isolated from their context. Each context in which a term is used generates a particular perception in the person receiving it – hence the multiplication of references.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-memories-are-formed-and-retrieved-by-the-brain-revealed-in-a-new-study-125361">How memories are formed and retrieved by the brain revealed in a new study</a>
</strong>
</em>
</p>
<hr>
<p>In the language and reasoning courses I teach, which touch on almost every subject, I sometimes notice that some students feel embarrassed or irritated, or furrow their brows, when they hear a word that leaves other students unmoved. This prompted me to <a href="https://dallamalefofana.blogspot.com/2017/09/la-communication-une-question-de.html">look into the question</a>.</p>
<p>In linguistics, words have a broadly shared form (signifier) and meaning (signified), but they refer to deeply personal realities (referent).</p>
<p>The relationship between the signifier and the signified is <a href="https://doi.org/10.1080/00437956.1967.11435496">actually arbitrary</a> but it is stable. On the other hand, the referent is more unstable. Each listener perceives a term according to his or her experience of it. Let us take the word “love” as an example. For those who have always been happy in love, the word will have a positive connotation. But for those who have experienced disappointments in love, it will have a negative connotation.</p>
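The distinction can be sketched as a toy data structure (my own illustration, not the author's formalism, with made-up listeners and connotations): the signifier-signified pairing is arbitrary but stable and shared by all speakers, while each listener attaches a private referent shaped by experience.

```python
# The signifier -> signified mapping is shared and stable for everyone.
shared_sign = {"love": "strong affection"}

# Hypothetical listeners with different histories attach different
# connotations (referents) to the very same shared sign.
personal_referent = {
    "happy_in_love": {"love": "positive"},
    "disappointed": {"love": "negative"},
}

def perceive(word, listener):
    """Same shared meaning, different personal perception."""
    meaning = shared_sign[word]                       # stable, collective
    connotation = personal_referent[listener][word]   # unstable, personal
    return meaning, connotation

print(perceive("love", "happy_in_love"))   # ('strong affection', 'positive')
print(perceive("love", "disappointed"))    # ('strong affection', 'negative')
```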
<p>To better understand, we can also think of a hockey game. When someone unfamiliar with the mores of North American society watches a hockey game between the Montréal Canadiens and the Boston Bruins, they see warmly dressed people gliding nimbly on the ice, competing for a puck with sticks that have curved blades. So much for the meaning. This superficial gaze can be likened to understanding a text whose cultural context and references are unknown.</p>
<p>But the hockey-loving Québecer — who has already seen the Canadiens and the Bruins play, who knows the potential outcome of each game, the players’ statistics and the consequences of each gesture — lives in anticipation. An informed spectator watches the game while at the same time reviewing all the games they have already seen. This “layered” view can be likened to discourse.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/369381/original/file-20201113-15-1ac7yqi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/369381/original/file-20201113-15-1ac7yqi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=377&fit=crop&dpr=1 600w, https://images.theconversation.com/files/369381/original/file-20201113-15-1ac7yqi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=377&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/369381/original/file-20201113-15-1ac7yqi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=377&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/369381/original/file-20201113-15-1ac7yqi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=474&fit=crop&dpr=1 754w, https://images.theconversation.com/files/369381/original/file-20201113-15-1ac7yqi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=474&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/369381/original/file-20201113-15-1ac7yqi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=474&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Pauline Marois, leader of the Parti Québécois, observes Pierre Karl Péladeau at a news conference in St-Jérôme, Québec. Mr. Péladeau, who was announcing his candidacy in that riding at the time, had created controversy by shouting that he wanted to ‘make Québec a country,’ with his fist raised.</span>
<span class="attribution"><span class="source">THE CANADIAN PRESS/Graham Hughes</span></span>
</figcaption>
</figure>
<p>In 2014, when businessman and former politician Pierre Karl Péladeau raised his fist and shouted that he wanted to “<a href="https://www.cbc.ca/news/canada/montreal/parti-quebecois-pierre-karl-peladeau-announcement-1.3562533">make Québec a country</a>,” he caused an outcry. While an uninformed spectator might be surprised at the turmoil caused by this statement, others saw it as an echo of General Charles de Gaulle’s cry of “<a href="https://www.thecanadianencyclopedia.ca/en/article/de-gaulle-and-vive-le-quebec-libre-feature">Vive le Québec libre</a>,” shouted from the balcony of Montréal City Hall in 1967.</p>
<p>But these words and the gesture that accompanied them also recalled “Vive la France libre” (“long live free France”), the phrase with which de Gaulle awakened the patriotic flame of the French in 1940 and which became a slogan for the liberation of France during the Second World War. The words uttered by Péladeau are the text, while the context, and the implications, of these words are the interdiscourse.</p>
<h2>Taking advantage of the implicit</h2>
<p>Relying on the implicit, on presupposition or implication, can carry a legal or other advantage. In public communication, statements made openly against a political opponent, for example, may become the subject of defamation suits.</p>
<p>A mere allusion to a past event, on the other hand, makes it possible to convey a point of view without asserting it. It is the person targeted who puts together the pieces of the puzzle and who deduces an idea that the speaker never formally expressed.</p>
<p>It is also possible to capitalize on the symbolic weight of certain events. Think of Émile Zola’s famous “<a href="https://www.britannica.com/topic/Jaccuse">J'accuse</a>”, the title of an open letter published on Jan. 13, 1898, in a Parisian daily newspaper denouncing the antisemitism of the French government and army in the Dreyfus affair. The expression was later reused in political texts, plays, songs, posters and artworks. “J'accuse” is not just the headline of a text by Émile Zola; it carries a polemical charge that shook an entire republic.</p>
<h2>Becoming aware of the mechanism</h2>
<p><a href="https://doi.org/10.4000/aad.1200">Discursive memory</a> therefore has its advantages. However, the fact that the audience does not always have the cultural or historical references to understand a speaker’s allusion can be problematic.</p>
<p>Not being aware of this discursive mechanism can cause many misunderstandings, and understanding it certainly helps us communicate better. But a speaker in bad faith may take advantage of it. In such cases, beyond the words and their scope, there remains the speaker’s intention. And that intention, as in the case of the use of the n-word, is very difficult to assess.</p>
<p>Be that as it may, some words carry their burden, no matter how they are wrapped. Putting yourself in your audience’s shoes is the key to good communication. Understanding first and accepting that each person may perceive a word differently can help establish a dialogue.</p><img src="https://counter.theconversation.com/content/152159/count.gif" alt="La Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Dalla Malé Fofana does not work for, consult, own shares in or receive funding from any organization that would benefit from this article, and has disclosed no affiliation other than his research institution.</span></em></p>Because of context and history, some words and phrases carry a heavy burden with them. Their mere mention can bring back painful memories and problematic situations.Dalla Malé Fofana, Lecturer, Linguistics, Language Sciences and Communication, Bishop's UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1532872021-01-20T13:33:26Z2021-01-20T13:33:26ZI’m a First Amendment scholar – and I think Big Tech should be left alone<figure><img src="https://images.theconversation.com/files/378908/original/file-20210114-18-xggw2h.jpg?ixlib=rb-1.1.0&rect=14%2C9%2C3196%2C2123&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Twitter's ban of Trump has concerned free speech advocates across the political spectrum.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/presdient-donald-trump-and-the-twitter-logo-are-seen-in-news-photo/883899928?adppopup=true">Jaap Arriens/NurPhoto via Getty Images</a></span></figcaption></figure><p><a href="https://blog.twitter.com/en_us/topics/company/2020/suspension.html">Twitter’s banning of Trump</a> – an action also taken by other social media platforms, including Facebook, Instagram, YouTube and Snapchat – <a href="https://www.theguardian.com/us-news/2021/jan/17/trump-twitter-ban-five-free-speech-experts-weigh-in">has opened a fierce debate about freedom of expression</a> and who, if anyone, should control it in the United States. </p>
<p><a href="https://books.google.com/books/about/Soft_Edge_Nat_Hist_Future_Info.html?id=4laFAgAAQBAJ&source=kp_book_description">I’ve</a> <a href="https://paullevinson.blogspot.com/2007/07/flouting-of-first-amendment-transcript.html">written</a> and taught about this fundamental issue for decades. I’m a staunch proponent of the First Amendment.</p>
<p>Yet I’m perfectly OK with Trump’s ban, for reasons legal, philosophical and moral.</p>
<h2>The ‘spirit’ of the First Amendment</h2>
<p>To begin, it’s important to point out what kind of freedom of expression the First Amendment and its extension to local government via the <a href="https://www.law.cornell.edu/constitution/amendmentxiv">Fourteenth Amendment</a> protect. The Supreme Court, through various decisions, has ruled that the government cannot restrict speech, the press and other forms of communications media, whether it’s <a href="https://www.oyez.org/cases/1996/96-511">on the internet</a> or <a href="https://www.mtsu.edu/first-amendment/article/505/new-york-times-co-v-united-states">in newspapers</a>. </p>
<p>Twitter and other social media platforms are not the government. Therefore, their actions are not violations of the First Amendment. </p>
<p>But if we’re champions of freedom of expression, shouldn’t we nonetheless be distressed by any restriction on communication, be it via a government agency or a corporation? </p>
<p>I certainly am. <a href="https://human-as-media.com/2020/11/01/paul-levinson-on-the-power-of-power-and-advertisers-over-media-2/">I’ve called</a> nongovernmental suppressions of speech violations of “the spirit of the First Amendment.” </p>
<p>Every time CBS <a href="https://www.vulture.com/2010/02/eminem_lil_wayne_and_drake_get.html">bleeps a performance of a hip-hop artist on the Grammys</a>, the network is, in my view, engaging in censorship that violates the spirit of the First Amendment. The same is true <a href="https://www.thefire.org/the-10-worst-colleges-for-free-speech-2018/">whenever a private university forbids a peaceful student demonstration</a>.</p>
<p><a href="https://www.oif.ala.org/oif/?p=13411">These forms of censorship may be legal</a>, but the government often lurks behind the actions of these private entities. For example, when the Grammys are involved, the censorship is taking place out of <a href="https://www.scotusblog.com/2012/06/wardrobe-malfunction-case-finally-ends/">fear of governmental reprisal</a> via the Federal Communications Commission. </p>
<h2>When governmental suppression is sanctioned</h2>
<p>So, why, then, am I OK with the fact that Twitter and other social media platforms took down Trump’s account? And, while we’re at it, why am I fine with <a href="https://www.theverge.com/2021/1/13/22228675/amazon-parler-takedown-violent-threats-moderation-content-free-speech">Amazon Web Services removing the Trump-friendly social media outlet Parler</a>? </p>
<p>First, a violation of the spirit of the First Amendment is never as serious as a violation of the First Amendment itself. </p>
<p>When the government gets in the way of our right to freely communicate, Americans’ only recourse is the U.S. Supreme Court, which all too often has supported the government – wrongly, in my view.</p>
<p>The court’s 1919 “<a href="https://supreme.justia.com/cases/federal/us/249/47/#tab-opinion-1928047">clear and present danger</a>” and 1978 “<a href="https://www.mtsu.edu/first-amendment/article/113/federal-communications-commission-v-pacifica-foundation">seven dirty words</a>” decisions are among the most egregious examples of such flouting of the First Amendment. The 1919 decision qualified the crystal-clear language of the First Amendment – “Congress shall make no law” – with the vague exception that government could, in fact, ban speech in the face of a “clear and present danger.” The 1978 decision defined broadcast language meriting censorship with the even vaguer “indecency.” </p>
<p>And a government ban on any kind of communication, ratified by the Supreme Court, applies to any and all activity in the United States – period – until the court overturns the original decision. </p>
<p>In contrast, social media users can take their patronage elsewhere if they don’t approve of a decision made by a social media company. Amazon Web Services, though massive, is not the only app host available. <a href="https://www.newsweek.com/parler-domain-new-host-service-epik-1560880">Parler may have already found a new home</a> on the far-right hosting service Epik, though <a href="https://thehill.com/policy/technology/534782-parler-receiving-russian-firms-services-as-it-gears-up-to-relaunch">Epik disputes this</a>.</p>
<p>The point is that a corporate violation of the spirit of the First Amendment is, in principle, remediable, whereas a government violation of the First Amendment is not – at least not immediately.</p>
<p>Second, the First Amendment, let alone the spirit of the First Amendment, doesn’t protect communication that amounts to <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1285164#">a conspiracy to commit a crime</a>, and certainly not murder. </p>
<p>I would argue that it’s plainly apparent that Trump’s communication – whether it was <a href="https://www.nbcnews.com/politics/donald-trump/trump-suggests-injection-disinfectant-beat-coronavirus-clean-lungs-n1191216">suggesting the injection of disinfectant</a> to counteract COVID-19 or urging his supporters to “<a href="https://www.reuters.com/article/us-usa-election-protests/trump-summoned-supporters-to-wild-protest-and-told-them-to-fight-they-did-idUSKBN29B24S">fight</a>” to overturn the election – repeatedly endangered human life. </p>
<h2>Be careful what you wish for</h2>
<p>Given that Trump was still president – albeit with just a few weeks left in office – when Twitter banned him, that ban was, indeed, a big deal. </p>
<p>Jack Dorsey, co-founder and CEO of Twitter, appreciated both the need and perils of such a ban, <a href="https://twitter.com/jack/status/1349510769268850690">tweeting</a>, “This moment in time might call for this dynamic, but over the long term it will be destructive to the noble purpose and ideals of the open internet. A company making a business decision to moderate itself is different from a government removing access, yet can feel much the same.”</p>
<p>In other words, a company that violates the spirit of the First Amendment can “feel much the same” to the public as government actually violating the First Amendment.</p>
<p>To be sure, I think it’s concerning that a powerful cohort of social media executives can deplatform anyone they want. But the alternative could be far worse. </p>
<p>Back in 1998, many were worried about the seeming monopolistic power of Microsoft. Although the U.S. government <a href="https://en.wikipedia.org/wiki/United_States_v._Microsoft_Corp.#Appeal">won a limited antitrust suit</a>, it declined to pursue further efforts to break up Microsoft. At the time, <a href="https://www.academia.edu/35204094/Leave_Poor_Microsoft_Alone">I argued</a> that problems of corporate predominance tend to take care of themselves and are less powerful than the forces of a free marketplace. </p>
<p>Sure enough, the preeminent position of Microsoft was soon contested and replaced by the <a href="https://mashable.com/2011/05/09/apple-google-brandz-study/">resurgence of Apple</a> and <a href="https://www.cnbc.com/2019/06/11/amazon-beats-apple-and-google-to-become-the-worlds-most-valuable-brand.html">the rise of Amazon</a>.</p>
<p>Summoning the U.S. government to counter these social media behemoths is the proverbial slippery slope. Keep in mind that the U.S. government already controls a <a href="https://www.nytimes.com/2020/07/20/us/politics/trump-chicago-portland-federal-agents.html">sprawling security apparatus</a>. It’s easy to envision an administration with the power to regulate social media wielding it not to protect the freedoms of users but to insulate itself from criticism and protect its own power.</p>
<p>We may grouse about the immense power of social media companies. But keeping them free from the far more immense power of the government may be crucial to maintaining our freedom.</p><img src="https://counter.theconversation.com/content/153287/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Paul Levinson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>It’s concerning that tech executives can exercise so much power over who can use their platforms. But the alternative – government intervention – could be much worse.Paul Levinson, Professor of Communication and Media Studies, Fordham UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1499202020-11-25T19:02:15Z2020-11-25T19:02:15ZForensic linguists can make or break a court case. So who are they and what do they do?<figure><img src="https://images.theconversation.com/files/371218/original/file-20201125-19-nxbog3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">shutterstock</span> </figcaption></figure><p>If you’re an avid viewer of crime shows, you’ve probably come across cases in which an expert, often a psychologist, is called in to help solve a crime using their language analysis skills.</p>
<p>However, in real life it’s the job of <a href="https://www.iafl.org/">forensic linguists</a> like myself to provide such evidence in courts, here in Australia and around the world. </p>
<p>Forensic linguists can provide expert opinion on a variety of language-related dilemmas, including unattributed voice recordings, false confessions, trademark disputes and, of course, a fair share of <a href="https://www.buzzfeednews.com/article/davidmack/man-arrested-threatening-biden-harris-letter">threatening letters</a>. </p>
<p>But what do we look for when doing this?</p>
<h2>Reading between the lines (and everything else)</h2>
<p>Linguistics is the scientific study of language. Thus, linguists are uniquely placed to provide expert opinions on how language is used. Linguists study: </p>
<ul>
<li><p>grammatical structures, wherein changes in punctuation patterns between texts can signal different authors</p></li>
<li><p>semantics, which explores how speakers and listeners form meaning, such as when making sense of a written text</p></li>
<li><p>phonetics and phonology, which refer to the sounds of language. We can recognise subtle differences in the sound of a vowel when produced by different speakers, or by speakers of different dialects and languages.</p></li>
<li><p>sociolinguistics, which looks at how language use varies across different social groups. For example, we can identify when someone from a non-English language background might misunderstand a question. This is because the variety of English they’re familiar with would differ, in small but notable ways, from native English speakers. </p></li>
</ul>
<p>Since the first known forensic linguistic case <a href="https://books.google.com.au/books/about/The_Evans_Statements.html?id=68LdtQEACAAJ&redir_esc=y">in 1953</a>, all of the above abilities have proven <a href="https://www.jstor.org/stable/3086556?seq=1">invaluable in courts</a> time and time again. Yet the work done by forensic linguists seems to largely elude members of the public. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/371226/original/file-20201125-22-1afg38w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Illustration of confused people." src="https://images.theconversation.com/files/371226/original/file-20201125-22-1afg38w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/371226/original/file-20201125-22-1afg38w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=252&fit=crop&dpr=1 600w, https://images.theconversation.com/files/371226/original/file-20201125-22-1afg38w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=252&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/371226/original/file-20201125-22-1afg38w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=252&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/371226/original/file-20201125-22-1afg38w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=316&fit=crop&dpr=1 754w, https://images.theconversation.com/files/371226/original/file-20201125-22-1afg38w.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=316&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/371226/original/file-20201125-22-1afg38w.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=316&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Sociolinguistics is a branch of language study focused on the relationship between language and various groups in society.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>A widely misunderstood field</h2>
<p>Ironically, a big problem for forensic linguists (and linguistics in general) relates to language. It comes down to how we use the word “linguist”. </p>
<p>Some people think this refers to a person who speaks many different languages, or is particularly fluent in their speech or writing. These non-technical interpretations are easy to conflate with the academic discipline of linguistics. </p>
<p>But apart from causing linguists a headache at dinner parties, does it really matter if people misunderstand what linguists do?</p>
<p>It seems so. Widespread ignorance of the vital role of forensic linguistics has contributed to some of the most egregious miscarriages of justice in Australian history.</p>
<p>In 2018, the Western Australia Court of Appeal <a href="https://www.abc.net.au/news/2017-04-12/gene-gibson-josh-warneke-manslaughter-conviction-quashed-appeal/8436550">overturned the conviction</a> of manslaughter for Gene Gibson, an Aboriginal man with a cognitive impairment for whom English was a <a href="https://www.sbs.com.au/nitv/nitv-news/article/2017/08/09/gene-gibson-seeks-25m-compensation-unfair-conviction">third language</a>. </p>
<p>Police interviewed Gibson without an interpreter, having assumed one wasn’t needed without properly assessing his English fluency. This neglect resulted in Gibson spending nearly five years in prison for a crime he didn’t commit. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/371224/original/file-20201125-17-eg63oj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Lawyer Michael Lundberg speaks to the media outside the WA Supreme Court." src="https://images.theconversation.com/files/371224/original/file-20201125-17-eg63oj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/371224/original/file-20201125-17-eg63oj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/371224/original/file-20201125-17-eg63oj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/371224/original/file-20201125-17-eg63oj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/371224/original/file-20201125-17-eg63oj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=501&fit=crop&dpr=1 754w, https://images.theconversation.com/files/371224/original/file-20201125-17-eg63oj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=501&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/371224/original/file-20201125-17-eg63oj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=501&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">In 2018, Gene Gibson was awarded a total A$1.5 million in compensation by the West Australian government, after being jailed for a crime he didn’t commit. Gibson’s lawyer Michael Lundberg (pictured) told the ABC the payment wasn’t as large as he’d hoped.</span>
<span class="attribution"><span class="source">Rebecca Le May/AAP</span></span>
</figcaption>
</figure>
<p>People who speak English as an additional language sometimes don’t know their <a href="https://www.une.edu.au/__data/assets/pdf_file/0006/114873/Communication-of-rights.pdf">legal rights</a> in situations such as police interviews. </p>
<p>In the past, these defendants or witnesses have been treated as though they understood complex legal English simply because they could chat about the weather, or their family. Such casual conversations are not a suitable test for language fluency.</p>
<h2>The verbose wild west of the web</h2>
<p>Another area where linguistics intersects with crime is the rapid <a href="https://www.routledge.com/Digital-Criminology-Crime-and-Justice-in-Digital-Society/Powell-Stratton-Cameron/p/book/9781138636743">increase in crimes</a> involving digital communication. These online offences are made easier by the anonymity and reach afforded by social media platforms.</p>
<p>Correctly identifying individuals who post threatening, defamatory or false messages online is of chief importance for investigators as it can help protect those targeted.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/371220/original/file-20201125-22-mcu8gg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Illustration of an on-screen text." src="https://images.theconversation.com/files/371220/original/file-20201125-22-mcu8gg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/371220/original/file-20201125-22-mcu8gg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=430&fit=crop&dpr=1 600w, https://images.theconversation.com/files/371220/original/file-20201125-22-mcu8gg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=430&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/371220/original/file-20201125-22-mcu8gg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=430&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/371220/original/file-20201125-22-mcu8gg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=541&fit=crop&dpr=1 754w, https://images.theconversation.com/files/371220/original/file-20201125-22-mcu8gg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=541&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/371220/original/file-20201125-22-mcu8gg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=541&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Social media use has skyrocketed in the past decade, boosting the trend of ‘viral’ content. This has hugely shifted the defamation landscape.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>This task, carried out by forensic linguists, is known as “<a href="https://www.researchgate.net/publication/220542531_Who's_At_The_Keyboard_Authorship_Attribution_in_Digital_Evidence_Investigations">authorship attribution</a>”. It relies on correctly grouping together texts produced by the same author, by isolating textual features specific to that author. </p>
<p>These features are usually related to grammatical structure and are deeply embedded in each person’s individual authorial style, making them difficult for would-be imposters to manipulate.</p>
<p>Authorship attribution is certainly challenging, as there’s no “text fingerprint” or distinct pattern of language use that can be allocated to each of us. Still, big data analysis, <a href="https://doi.org/10.1093/llc/fql019">combined with linguistic theory</a>, is getting us closer to a reliable system.</p>
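<p>As a rough illustration of the principle behind authorship attribution, the sketch below profiles texts by the relative frequencies of common function words, which are hard for an imposter to consciously manipulate, and scores how similar two profiles are. This is a toy example, not a forensic tool: the word list, sample texts and similarity measure are all assumptions made for illustration, far simpler than the big-data methods the research above describes.</p>

```python
from collections import Counter
import math

# Function words are grammatical "glue" that writers use unconsciously,
# which is why stylometric methods often rely on them.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "was", "it"]

def profile(text):
    """Relative frequency of each function word in a text."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(p, q):
    """Score two profiles between 0 (unrelated) and 1 (identical shape)."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

known = "the cat sat on the mat and the dog lay in the sun"
disputed = "the bird sat in the tree and the fox hid in the grass"
print(round(cosine_similarity(profile(known), profile(disputed)), 3))
```

<p>A real analysis would use far longer texts, many more features, and crucially, reference data showing how common each feature is across the population of possible authors.</p>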
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/forensic-linguists-explore-how-emojis-can-be-used-as-evidence-in-court-133462">Forensic linguists explore how emojis can be used as evidence in court</a>
</strong>
</em>
</p>
<hr>
<p>A “stylistics” approach, featured in one <a href="https://tvtonight.com.au/2020/10/australian-story-oct-19-2.html">Australian Story episode</a> last month, describes patterns of language that are similar or different between two specific texts. </p>
<p>But this approach makes no attempt to calculate how common these patterns might be in any other authored text. This oversight is typical of non-linguists attempting to undertake linguistic analysis, as they often don’t know what constitutes a common feature of language.</p>
<p>For instance, if two documents feature the word “cant” (“can’t” without an apostrophe), a non-expert may see this as a strong indicator of a common author. </p>
<p>But according to the <a href="http://wse1.webcorp.org.uk/home/blogs.html">Birmingham Blog Corpus</a> — a collection of almost 630,000,000 words taken from blogs — this word is spelled without an apostrophe about 3.6% of the time. </p>
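<p>The 3.6% figure is a simple relative frequency: occurrences of “cant” divided by all occurrences of the word, with or without the apostrophe. A minimal sketch of the calculation, using invented counts chosen only to reproduce the reported rate (the Birmingham Blog Corpus itself would need to be queried for the actual numbers):</p>

```python
# Invented counts for illustration only; these are not real corpus figures.
count_no_apostrophe = 3_600     # occurrences of "cant"
count_with_apostrophe = 96_400  # occurrences of "can't"

omission_rate = count_no_apostrophe / (count_no_apostrophe + count_with_apostrophe)
print(f"{omission_rate:.1%}")  # prints 3.6%
```

<p>On its own, such a base rate tells the analyst how much, or how little, weight a shared spelling habit deserves as evidence of common authorship.</p>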
<h2>Technology-facilitated analysis</h2>
<p>More reliable methods of identifying authorship, or identifying a speaker in a voice recording, are possible with both specialised linguistic knowledge and computer processing power. </p>
<p>Advancing this field doesn’t require any fancy new technology. It requires more investment in Australia’s capacity for forensic linguistic research. In an increasingly digital world, in-depth research on text authorship and voice identification will prove crucial to future law enforcement. </p>
<p>It’s also important we increase awareness of the power (and limitations) of linguistic analysis among the general public, and especially among officers of the law and judiciary.</p>
<p>Bringing more linguistics into schools, such as with <a href="https://www.vcaa.vic.edu.au/curriculum/vce/vce-study-designs/englishlanguage/Pages/Index.aspx">Victoria’s VCE English Language subject</a>, would be a great way to equip the next generation of these experts.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/can-criminal-suspects-be-identified-just-by-the-sound-of-their-voice-114815">Can criminal suspects be identified just by the sound of their voice?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/149920/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Georgina Heydon is the immediate past president of the International Association of Forensic Linguists, a not-for-profit, academic organisation. She provides expert opinions on language matters in civil and criminal matters.</span></em></p>For decades, forensic linguists have helped crack cases involving false author attribution, masked voices, false confessions in criminal cases and copyright disputes.Georgina Heydon, Associate professor, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.