tag:theconversation.com,2011:/africa/topics/language-processing-31101/articlesLanguage processing – The Conversation2019-01-10T11:52:44Ztag:theconversation.com,2011:article/1073362019-01-10T11:52:44Z2019-01-10T11:52:44ZHearing hate speech primes your brain for hateful actions<figure><img src="https://images.theconversation.com/files/252909/original/file-20190108-32133-z0wsva.jpg?ixlib=rb-1.1.0&rect=15%2C1238%2C3183%2C2198&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Inflammatory words can prime a mind.</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/t8T_yUgCKSM">Elijah O'Donnell/Unsplash</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p><a href="https://theconversation.com/escuchar-expresiones-de-odio-predispone-nuestro-cerebro-a-cometer-actos-de-odio-110872"><em>Leer en español</em></a>.</p>
<p>A mark on a page, an online meme, a fleeting sound. How can these seemingly insignificant stimuli lead to acts as momentous as participation in a racist rally or the massacre of innocent worshippers? Psychologists, neuroscientists, linguists and philosophers are developing a new theory of language understanding that’s starting to provide answers.</p>
<p>Current research shows that humans understand language by activating sensory, motor and emotional systems in the brain. According to this new simulation theory, just reading words on a screen or listening to a podcast activates areas of the brain in ways similar to the activity generated by literally being in the situation the language describes. This process makes it all the easier to turn words into actions.</p>
<p>As a cognitive psychologist, <a href="https://scholar.google.com/citations?user=W4fT1ocAAAAJ&hl=en&oi=sra">my own research</a> has focused on <a href="https://doi.org/10.1016/j.cortex.2011.04.010">developing simulation theory</a>, <a href="https://doi.org/10.3758/BF03196313">testing it</a>, and using it to create <a href="http://dx.doi.org/10.1037/tps0000100">reading comprehension interventions</a> for young children.</p>
<h2>Simulations are step one</h2>
<p>Traditionally, linguists have analyzed language as a set of words and rules that convey ideas. But how do ideas become actions?</p>
<p><a href="https://www.ncbi.nlm.nih.gov/pubmed/11301525">Simulation theory</a> <a href="https://doi.org/10.1016/j.cortex.2011.04.010">tries to answer</a> that question. In contrast, many traditional theories about language processing <a href="https://www.ncbi.nlm.nih.gov/pubmed/3375398">give action short shrift</a>. </p>
<p>Simulation theory proposes that processing words depends on activity in people’s neural and behavioral systems of action, perception and emotion. The idea is that perceiving words drives your brain systems into states that are nearly identical to what would be evoked by directly experiencing what the words describe. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/253116/original/file-20190109-32142-7qux90.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/253116/original/file-20190109-32142-7qux90.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/253116/original/file-20190109-32142-7qux90.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/253116/original/file-20190109-32142-7qux90.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/253116/original/file-20190109-32142-7qux90.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/253116/original/file-20190109-32142-7qux90.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/253116/original/file-20190109-32142-7qux90.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/253116/original/file-20190109-32142-7qux90.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">When you read the sentence, your mind simulates what it would be like to actually live through the experience.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/young-couple-walking-on-beach-moonlight-159045794">Joyce Vincent/Shutterstock.com</a></span>
</figcaption>
</figure>
<p>Consider the sentence “The lovers held hands while they walked along the moonlit tropical beach.” According to simulation theory, when you read these words, your brain’s motor system simulates the actions of walking; that is, the neural activity elicited by comprehending the words is similar to the neural activity generated by literal walking. Similarly, your brain’s perceptual systems simulate the sight, sounds and feel of the beach. And your emotional system simulates the feelings implied by the sentence.</p>
<p>So words themselves are enough to trigger simulations in motor, perceptual and emotional neural systems. Your brain creates a sense of being there: The motor system is primed for action and the emotional system motivates those actions.</p>
<p>Then, one can act on the simulation much as they would act in the real situation. For example, language associating an ethnic group with “bad hombres” could invoke an emotional simulation upon seeing members of the group. If that emotional reaction is strong enough, it may in turn motivate action – maybe making a derogatory remark or physically lashing out.</p>
<p>Although simulation theory is still under scientific scrutiny, there have been many successful tests of its predictions. For example, using neuroimaging techniques that track blood flow in the brain, researchers found that listening to action words such as “lick,” “pick” and “kick” <a href="https://doi.org/10.1016/S0896-6273(03)00838-9">produces activity in areas of the brain’s motor cortex</a> that are used to control the mouth, the hand and the leg, respectively. Hearing a sentence such as “The ranger saw an eagle in the sky” generates a <a href="https://doi.org/10.1111/1467-9280.00430">mental image using the visual cortex</a>. And using Botox to block activity in the muscles that furrow the brow <a href="https://doi.org/10.1177/0956797610374742">affects the emotional system</a> and slows understanding of sentences conveying angry content. These examples demonstrate the connections between processing speech and motor, sensory and emotional systems.</p>
<p>Recently, my colleague psychologist <a href="https://scholar.google.com/citations?user=7Marb_sAAAAJ&hl=en&oi=ao">Michael McBeath</a>, our graduate student Christine S. P. Yu and I discovered yet another robust connection between language and the emotional system.</p>
<p>Consider pairs of single-syllable English words that differ only in whether the vowel sound is “eee” or “uh,” such as “gleam-glum” and “seek-suck.” Using all such pairs in English – there are about 90 of them – we asked people to judge which word in the pair was more positive. Participants selected the word with the “eee” sound two-thirds of the time. This is a remarkable percentage because if linguistic sounds and emotions were unrelated and people were picking at the rate of chance, only half of the “eee” words would have been judged as the more positive.</p>
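<p>For readers who want to check the arithmetic, here is a back-of-envelope sketch in Python. It treats the roughly 90 word pairs as independent trials and asks how surprising a two-thirds preference would be under pure chance; the counts are illustrative stand-ins, not the published study’s actual statistics.</p>
<pre><code>from scipy.stats import binomtest

# Hypothetical counts: about two-thirds of ~90 pairs judged "eee"
# as more positive. Under chance (p = 0.5) we'd expect about half.
n_pairs = 90
eee_preferred = 60

result = binomtest(eee_preferred, n_pairs, p=0.5)
print(f"p-value vs. chance: {result.pvalue:.4f}")  # well below 0.05
</code></pre>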
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/253117/original/file-20190109-32127-1aov6a7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/253117/original/file-20190109-32127-1aov6a7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/253117/original/file-20190109-32127-1aov6a7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/253117/original/file-20190109-32127-1aov6a7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/253117/original/file-20190109-32127-1aov6a7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/253117/original/file-20190109-32127-1aov6a7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/253117/original/file-20190109-32127-1aov6a7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/253117/original/file-20190109-32127-1aov6a7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Just activating your smile muscles tilts your emotions toward the positive.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/closeup-portrait-happy-elderly-gentleman-pink-362500571">AshTproductions/Shutterstock.com</a></span>
</figcaption>
</figure>
<p>We propose that this relation arose because saying “eee” activates the same muscles and neural systems as used when smiling – or saying “cheese!” In fact, mechanically inducing a smile – as by holding a pencil in your teeth without using your lips – <a href="http://dx.doi.org/10.1037/0022-3514.54.5.768">lightens your mood</a>. Our new research shows that saying words that use the smile muscles can have a similar effect.</p>
<p>We tested this idea by having people chew gum while judging the words. Chewing gum blocks the systematic activation of the smile muscles. Sure enough, while chewing gum, the judged difference between the “eee” and “uh” words was only half as strong. We also demonstrated the same effects in China using pairs of Mandarin words containing the “eee” and “uh” sounds.</p>
<h2>Practice through simulation makes actions easier</h2>
<p>Of course, motivating someone to commit a hate crime requires much more than uttering “glum” or “suck.”</p>
<p>But consider that simulations become <a href="http://dx.doi.org/10.1037/0033-295X.95.4.492">quicker with repetition</a>. When one first hears a new word or concept, creating its simulation can be a mentally laborious process. A good communicator can help by using hand gestures to convey the motor simulation, pointing to objects or pictures to help create the perceptual simulation and using facial expressions and voice modulation to induce the emotional simulation.</p>
<p>It makes sense that the echo chamber of social media provides the practice needed to both speed and shape the simulation. The mental simulation of “caravan” can change from an emotionally neutral string of camels to an emotionally charged horde of drug dealers and rapists. And, through the repeated simulation that comes from repeatedly reading similar posts, the message becomes all the more believable, as each repetition produces another instance of almost being there to see it with your own eyes.</p>
<p><a href="https://psychology.berkeley.edu/people/dan-i-slobin">Psycholinguist Dan Slobin</a> suggested that habitual ways of speaking lead to <a href="http://psycnet.apa.org/record/1996-98701-002">habitual ways of thinking about the world</a>. The language that you hear gives you a vocabulary for discussing the world, and that vocabulary, by producing simulations, gives you habits of mind. Just as reading a scary book can make you afraid to go in the ocean because you simulate (exceedingly rare) shark attacks, encountering language about other groups of people (and their exceedingly rare criminal behavior) can lead to a skewed view of reality.</p>
<p>Practice need not always lead down an emotional rabbit hole, though, because alternative simulations and understandings can be created. A caravan can be simulated as families in distress who have the grit, energy and skills to start a new life and enrich new communities.</p>
<p>Because simulation creates a sense of being in a situation, it motivates the same actions as the situation itself. Simulating fear and anger literally makes you fearful and angry and <a href="https://doi.org/10.1177/1368430210374483">promotes aggression</a>. Simulating compassion and empathy literally makes you act kindly. We all have the obligation to think critically and to speak words that become humane actions.</p>
<p class="fine-print"><em><span>Arthur Glenberg owns shares in Moved by Reading, LLC which intends to market software to enhance reading comprehension. He has received funding from the National Science Foundation, the Institute of Education Sciences, the National Institutes of Health and the Office of Naval Research.</span></em></p>A new theory of language suggests that people understand words by unconsciously simulating what they describe. Repeated exposure – and the simulation that comes with it – makes it easier to act.Arthur Glenberg, Professor of Psychology, Arizona State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/782362017-06-22T00:55:11Z2017-06-22T00:55:11ZTeaching machines to understand – and summarize – text<figure><img src="https://images.theconversation.com/files/174803/original/file-20170620-32381-slfxg1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Can artificial intelligence help us stop drowning in paperwork?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/businessman-under-document-vector-eps10-174262493">Jiw Ingka/shutterstock.com</a></span></figcaption></figure><p>We humans are swamped with text. It’s not just news and other timely information: Regular people are drowning in legal documents. The problem is so bad we mostly ignore it. Every time a person uses a <a href="http://time.com/money/4506297/how-retailers-track-you/">store’s loyalty rewards card</a> or connects to an online service, his or her activities are governed by the <a href="http://heinonline.org/HOL/LandingPage?handle=hein.journals/isjlpsoc4&div=27">equivalent of hundreds of pages of legalese</a>. <a href="https://www.theguardian.com/technology/2017/mar/03/terms-of-service-online-contracts-fine-print">Most people pay no attention</a> to these massive documents, often labeled “terms of service,” “user agreement” or “privacy policy.”</p>
<p>These are just part of a much wider societal problem of information overload. There is so much data stored – <a href="http://whatsabyte.com/">exabytes</a> of it, as much as has ever been spoken by people in all of human history – that it’s humanly <a href="https://kb.osu.edu/dspace/bitstream/handle/1811/72839/ISJLP_V4N3_543.pdf?sequence=1">impossible to read and interpret</a> everything. Often, we narrow down our pool of information by choosing particular topics or issues to pay attention to. But it’s important to actually know the meaning and contents of the legal documents that govern how our data is stored and who can see it.</p>
<p>As computer science researchers, we are working on ways artificial intelligence algorithms could digest these massive texts and extract their meaning, presenting it <a href="https://doi.org/10.1109/BigData.2016.7840639">in terms regular people can understand</a>.</p>
<h2>Can computers understand text?</h2>
<p>Computers store data as 0’s and 1’s – data that cannot be directly understood by humans. They interpret these data as instructions for displaying text, sound, images or videos that are meaningful to people. But can computers actually understand the language, not only presenting the words but also their meaning?</p>
<p>One way to find out is to ask computers to <a href="https://www.technologyreview.com/s/607828/an-algorithm-summarizes-lengthy-text-surprisingly-well/">summarize their knowledge</a> in ways that people can understand and find useful. It would be best if AI systems could process text quickly enough to help people make decisions as they are needed – for example, when you’re signing up for a new online service and are asked to agree with the site’s privacy policy. </p>
<p>What if a computerized assistant could digest all that legal jargon in a few seconds and highlight key points? Perhaps a user could even tell the automated assistant to <a href="https://theconversation.com/7-in-10-smartphone-apps-share-your-data-with-third-party-services-72404?sr=1">pay particular attention to certain issues</a>, like when an email address is shared, or whether search engines can index personal posts. Companies could use this capability, too, to analyze contracts or other lengthy documents.</p>
<p>To do this sort of work, we need to combine a range of AI technologies, including <a href="https://theconversation.com/us/topics/machine-learning-8332">machine learning</a> algorithms that take in large amounts of data and independently identify connections among them; <a href="https://www.aaai.org/ojs/index.php/aimagazine/article/view/1029/947">knowledge representation</a> techniques to express and interpret facts and rules about the world; <a href="https://www.wired.com/2016/04/long-form-voice-transcription/">speech recognition</a> systems to convert spoken language to text; and <a href="https://doi.org/10.2200/S00762ED1V01Y201703HLT037">human language comprehension</a> programs that process the text and its context to determine what the user is telling the system to do.</p>
<h2>Examining privacy policies</h2>
<p>A modern internet-enabled life more or less requires trusting for-profit companies with private information (like physical and email addresses, credit card numbers and bank account details) and personal data (photos and videos, email messages and location information).</p>
<p>These companies’ cloud-based systems typically keep <a href="http://www.datacenterknowledge.com/archives/2017/03/07/how-to-survive-an-aws-outage/">multiple copies</a> of users’ data as part of backup plans to prevent service outages. That means there are more potential targets – each data center must be securely protected both physically and electronically. Of course, <a href="https://www.ftc.gov/public-statements/2012/03/big-data-big-issues">internet companies recognize customers’ concerns</a> and employ security teams to <a href="https://www.ftc.gov/tips-advice/business-center/privacy-and-security/data-security">protect users’ data</a>. But the specific and detailed legal obligations they undertake to do that are found in their impenetrable privacy policies. No regular human – and perhaps even no single attorney – can truly <a href="https://www.theatlantic.com/technology/archive/2014/09/why-privacy-policies-are-so-inscrutable/379615/">understand them</a>.</p>
<p>In our study, we ask computers to summarize the terms and conditions regular users say they agree to when they click “Accept” or “Agree” buttons for online services. We downloaded the publicly available privacy policies of various internet companies, including <a href="https://aws.amazon.com/privacy/">Amazon AWS</a>, <a href="https://www.facebook.com/legal/FB_Work_Privacy">Facebook</a>, <a href="https://www.google.com/policies/privacy/">Google</a>, <a href="http://www8.hp.com/us/en/privacy/privacy.html">HP</a>, <a href="https://www.oracle.com/legal/privacy/privacy-policy.html">Oracle</a>, <a href="https://www.paypal.com/us/webapps/mpp/ua/privacy-full">PayPal</a>, <a href="https://www.salesforce.com/company/privacy/full_privacy.jsp">Salesforce</a>, <a href="https://www.snap.com/en-US/privacy/privacy-policy/">Snapchat</a>, <a href="https://twitter.com/en/privacy">Twitter</a> and <a href="https://www.whatsapp.com/legal/">WhatsApp</a>.</p>
<h2>Summarizing meaning</h2>
<p>Our software examines the text and uses <a href="https://doi.org/10.1145/2935694.2935696">information extraction techniques</a> to identify key information specifying the legal rights, obligations and prohibitions identified in the document. It also uses <a href="http://dx.doi.org/10.1162/COLI_a_00239">linguistic analysis</a> to identify whether each rule applies to the service provider, the user or a third-party entity, such as advertisers and marketing companies. Then it presents that information in clear, direct, human-readable statements.</p>
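<p>To make the idea concrete, here is a minimal sketch of this kind of rule extraction. It is a toy keyword-based approximation, not our actual system: it tags each sentence with a deontic category based on its modal verb, and guesses which party the rule applies to from simple pronoun and noun cues.</p>
<pre><code>import re

# Toy lexicons; a real system uses far richer linguistic analysis.
MODALS = {
    "may not": "prohibition",
    "must not": "prohibition",
    "must": "obligation",
    "shall": "obligation",
    "may": "right",
    "can": "right",
}

PARTIES = {
    "we": "service provider",
    "you": "user",
    "advertisers": "third party",
    "third parties": "third party",
}

def extract_rules(policy_text):
    """Yield (party, rule_type, sentence) for each deontic sentence."""
    for sentence in re.findall(r"[^.!?]+[.!?]", policy_text):
        lowered = sentence.lower()
        # Longer modals are checked first so "must not" is not read as "must".
        for modal, kind in MODALS.items():
            if re.search(rf"\b{modal}\b", lowered):
                party = next((label for cue, label in PARTIES.items()
                              if re.search(rf"\b{cue}\b", lowered)), "unknown")
                yield party, kind, sentence.strip()
                break

policy = ("We may collect technical information about your device. "
          "You must not resell the service. "
          "Advertisers may receive aggregated usage data.")
for party, kind, text in extract_rules(policy):
    print(f"{party:16s} {kind:12s} {text}")
</code></pre>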
<p>For example, our system identified one aspect of Amazon’s privacy policy as telling a user, “You can choose not to provide certain information, but then you might not be able to take advantage of many of our features.” Another aspect of that policy was described as “We may also collect technical information to help us identify your device for fraud prevention and diagnostic purposes.”</p>
<p><iframe id="iJPwR" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/iJPwR/2/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>We also found, with the help of the summarizing system, that privacy policies often include rules for third parties – companies that aren’t the service provider or the user – that people might not even know are involved in data storage and retrieval. </p>
<p>The <a href="https://doi.org/10.1109/BigData.2016.7840639">largest number of rules in privacy policies</a> – 43 percent – apply to the company providing the service. Just under a quarter of the rules – 24 percent – create obligations for users and customers. The rest of the rules govern behavior by third-party services or corporate partners, or could not be categorized by our system.</p>
<p><iframe id="NpL9o" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/NpL9o/4/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>The next time you click the “I Agree” button, be aware that you may be agreeing to share your data with other hidden companies who will be analyzing it.</p>
<p>We are continuing to improve our ability to succinctly and accurately summarize complex privacy policy documents in ways that people can understand and use to assess the risks associated with using a service.</p>
<p class="fine-print"><em><span>Karuna Pande Joshi receives funding from NSF, ONR, IBM, GE Research and Cisco. </span></em></p><p class="fine-print"><em><span>Tim Finin has received funding from the National Science Foundation, the Office of Naval Research and IBM.</span></em></p>Nobody can understand the legal language in privacy policies. Can artificial intelligence digest the text and produce a human-readable explanation?Karuna Pande Joshi, Research Associate Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore CountyTim Finin, Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore CountyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/786112017-05-31T20:59:33Z2017-05-31T20:59:33ZNatural language processing and affective computing<figure><img src="https://images.theconversation.com/files/171593/original/file-20170531-23531-aro8u2.png?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">How can a machine understand who said what? </span> </figcaption></figure><p>What are the natural extensions of machine learning (ML) and deep learning as well as natural language processing (NLP) and affective computing (AC)?</p>
<p>To many people, what distinguishes machines from humans is emotion. True, some existentialists might push the envelope and go so far as to say consciousness (which is a valid argument), but the primary existential reality is emotion. A computer is not a living entity, does not understand empathy and cannot gauge how we feel. It does not and cannot care whether its users are happy, sad, frustrated or simply regretting a heavy lunch.</p>
<p>But the subject of emotions is a pertinent one, as emotion shapes most of our decisions. Recent advances in social cognitive neuroscience and behavioural science show that rationality is only one of the governing principles in decision making. We are also affected by moods and consult our ‘gut feeling’ about an investment. If we were ruled by pure reason, why would we engage in charity and philanthropy?</p>
<p>Emotions, and the language we use to convey them, thus play an important role in the way investment decisions are made. Owing to this, considerable progress has been made in recent years in understanding the subtleties of language and emotion. These areas of study are referred to as NLP and AC respectively.</p>
<h2>Natural language processing</h2>
<p>In our previous research, we have largely focused on quantitative methods of analysis. NLP, however, is more qualitative in nature. While quantitative data, such as share prices and other time series, is easy to compartmentalize and model statistically, qualitative data is harder to define and model.</p>
<p>For example, when Facebook was about to become a publicly listed company in 2012, how was it that a company <a href="https://techcrunch.com/2012/02/01/facebook-ipo-facebook-ipo-facebook-ipo/">earning $1 billion in revenue</a> <a href="https://en.wikipedia.org/wiki/Initial_public_offering_of_Facebook">was given a market valuation of $90 billion</a>? The valuation rested on qualitative factors such as the company’s core ideas, its team and its projected potential to earn high revenue. Qualitative data thus plays an important role even in financial decisions. In fact, structured quantitative data in the form of spreadsheets and relational databases accounts for only <a href="http://www.smartdatacollective.com/michelenemschoff/206391/quick-guide-structured-and-unstructured-data">20% of all available data</a>. The remaining 80% is in the form of social media posts (especially Twitter), images, email, text messages, audio files, Word documents, PDFs and other unstructured forms.</p>
<p>To extract insights from these unstructured sources, NLP uses ML and AI to understand unstructured text and provide context to language. This allows us to parse information (summarizing, reducing or categorizing qualitative data) received via announcements, rumours, expert forecasts and the like in real time, and to hone decision-making amid the barrage of online information.</p>
<p>In terms of interpreting this qualitative information, <a href="http://www.mind.ilstu.edu/curriculum/protothinker/natural_language_processing.php">NLP can be broken down into the following sub-topics</a>, as illustrated in the sketch after the list:</p>
<ul>
<li><p>Signal processing: converting spoken words into text.</p></li>
<li><p>Syntactic analysis: determining the structure and grammar of sentences.</p></li>
<li><p>Semantic analysis: determining the meaning of words and the logic of sentences.</p></li>
<li><p>Pragmatics: determining the context of a sentence and how it relates to the sentences around it.</p></li>
</ul>
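<p>A minimal illustration of the syntactic and semantic layers, using the open-source spaCy library (any comparable NLP toolkit would serve; signal processing, the speech-to-text step, would sit upstream of this):</p>
<pre><code>import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Facebook was valued at $90 billion before its 2012 listing.")

# Syntactic analysis: part of speech and grammatical dependencies.
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)

# Semantic analysis: named entities attach meaning to spans of text.
for ent in doc.ents:
    print(ent.text, ent.label_)

# Pragmatics (context across sentences) is harder; coreference and
# discourse models are built on top of analyses like these.
</code></pre>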
<p>Most NLP is based on the last three. Recent advances in each of these sub-topics have allowed us to use NLP to gain a deeper understanding of the public perception of products, services, brands and companies.</p>
<p>Sentiment thus plays a very important role in decision making, and the ability of a machine to convert human language into machine-readable form and turn it into actionable insights is the capability NLP offers.</p>
<h2>Affective computing</h2>
<p>The topic of sentiment brings us to AC. While NLP is capable of reading words and converting them into a stream of logic that can be used as input to a computing device, there are subtler cues that humans use to communicate. Written sentiments can be deciphered from the structure of the words, the meaning of the sentence and even the speed at which it is typed (do you type faster or slower when you are angry?).</p>
<p>But language is only one way of exhibiting emotions. We also use our bodies. A head shake can signal agreement, disagreement or confusion, or simply betray a stiff neck. While it is relatively easy for us to decipher the meaning of these gestures (depending on the cultural context we are immersed in), teaching a computer to do so automatically is a true challenge for scientists trying to bridge the barrier between machine and human. This has led to the development of AC.</p>
<p>AC is the study and development of systems and devices that can recognize, interpret, process and simulate human affects. <a href="https://www.bbvaopenmind.com/en/what-is-affective-computing/">It is an interdisciplinary field spanning computer science, psychology, and cognitive science</a> that attempts to sense the emotional state of a user using sensors and deep learning.</p>
<p>Much as a deep learning algorithm fed thousands upon thousands of images of handwritten digits learns to identify the variations in how a number can be written, AC uses a similar method to learn what a smiling face looks like – instead of digits, the computer is fed masses of examples of smiles from men, women and children of every nationality and color.</p>
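<p>As a sketch of what that training setup looks like, the toy network below (written with the PyTorch library; the layer sizes and names are illustrative, not a production affective-computing model) could be trained on labelled face crops to output “smiling” or “not smiling”:</p>
<pre><code>import torch
import torch.nn as nn

class SmileNet(nn.Module):
    """A tiny convolutional network for smiling vs. not-smiling."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        x = self.features(x)       # learn visual features (edges, curves)
        x = torch.flatten(x, 1)    # flatten feature maps for the classifier
        return self.classifier(x)  # logits for the two classes

model = SmileNet()
fake_faces = torch.randn(8, 1, 64, 64)  # eight stand-in 64x64 grayscale crops
print(model(fake_faces).shape)          # torch.Size([8, 2])
</code></pre>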
<p>Affective computing is the next step in the analysis of sentiments and emotions. At a <a href="https://www.youtube.com/watch?v=WSj26ncU_po">recent conference at MIT</a>, experts forecast that it will affect all areas of industry, from advertising and market research to consumer products. As facial expressions and spoken words can now be observed and analyzed in real time, we are entering an era in which physical responses to stimuli can be identified and emotions transformed into measurable results, giving us guidance on what actions to take.</p>
<h2>Business implications</h2>
<p>Chatbots, programs that use artificial intelligence (AI) to mimic conversation with people, are already adopting AC. With recent advances in AI, chatbots have become accurate conversationalists, especially when focused on a specific domain. As an increasing number of consumers prefer to text, chatbots are becoming the de facto mode of communicating with a website. Rather than running a client or customer-service helpline, companies find that chatbots offer a cheaper and easier mode of communication.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/167856/original/file-20170504-21635-sk41lg.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/167856/original/file-20170504-21635-sk41lg.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=344&fit=crop&dpr=1 600w, https://images.theconversation.com/files/167856/original/file-20170504-21635-sk41lg.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=344&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/167856/original/file-20170504-21635-sk41lg.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=344&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/167856/original/file-20170504-21635-sk41lg.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=433&fit=crop&dpr=1 754w, https://images.theconversation.com/files/167856/original/file-20170504-21635-sk41lg.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=433&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/167856/original/file-20170504-21635-sk41lg.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=433&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Start-ups specializing in chatpots.</span>
</figcaption>
</figure>
<p>The uses of AC in the world of business are yet to be fully explored. One possibility: if you are prone to stress or work in a high-stress environment that is detrimental to your health, sensors could measure your emotional state and gauge what you are feeling. That information could then be used as input data to modify your insurance plan, since insurers would know the stress levels and potential health risks of the people they insure.</p>
<hr>
<p><em>Terence Tse and Mark Esposito are the authors of <a href="https://www.amazon.com/dp/B01N4OZQ8C">“Understanding How the Future Unfolds: Using DRIVE to Harness the Power of Today’s Megatrends”</a>. Kary Bheemaiah is the author of <a href="https://www.amazon.com/Blockchain-Rethinking-Macroeconomic-Policy-Economic/dp/1484226739/ref=sr_1_1?ie=UTF8&qid=1485508680&sr=8-1&keywords=kariappa">“The Blockchain Alternative: Rethinking Macroeconomic Policy and Economic Theory”</a>.</em></p>
<p class="fine-print"><em><span>The research on which this article is based was sponsored by KPMG/ESCP Europe Chair Governance, Strategy, Risks, and Performance. </span></em></p>Important advances have been made in the areas of automatic language processing and emotional computing, and that could have big implications for business.Mark Esposito, Professor of Business & Economics at Harvard University and Grenoble École de Management, Grenoble École de Management (GEM)Kariappa Bheemaiah, Associate research scientist Cambridge Judge Business School and lecturer GEM, Grenoble École de Management (GEM)Terence Tse, Associate Professor of Finance / Head of Competitiveness Studies at i7 Institute for Innovation and Competitiveness, ESCP Business SchoolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/637512016-10-18T01:37:53Z2016-10-18T01:37:53ZWhat the presidential candidates’ data can tell us about Trump and Clinton<p>It’s election season, and the candidates’ and campaigns’ eyes are on you, the voter. Figuring out what you think about something a candidate said last night or tweeted this morning is very big business. All this gathering of data, from <a href="http://www.realclearpolitics.com/epolls/latest_polls/">statewide and national polls</a> and social media alike, can make it seem as if everything we do – or even think – is under scrutiny. In fact, it is.</p>
<p>As a result, elections seem very one-sided: Campaigns can get detailed data allowing them to read, see, hear and analyze almost everything we do. But what we, the people, get for analysis is mostly pundit commentary, not the kind of real analysis that uses data as its source. We are, therefore, left to decipher and discern among often-conflicting perspectives amid the cacophony of online reports, newspaper articles or TV broadcasts. </p>
<p><a href="http://www.npr.org/2016/09/26/495115346/fact-check-first-presidential-debate">Fact checking</a> the candidates is also <a href="http://www.politifact.com">big business</a>, but it tells us more about what the candidates say than about the candidates themselves. If only we could get access to data about the candidates! Then we could do our own analysis, just as they do.</p>
<p>To a large degree, it turns out that we can. Thanks to the vast scope of the internet, we can now turn the tables on the candidates and their campaigns and obtain a wide variety of data, such as <a href="http://www.people-press.org/2016/07/07/2-voter-general-election-preferences/7-7-2016-2-30-10-pm-2/">voter preferences</a>, which can give us an understanding of what people actually think; <a href="https://www.opensecrets.org/pres16/candidate.php?id=N00023864">campaign profiles</a>; <a href="https://www.clintonfoundation.org/sites/default/files/clinton_foundation_annual_report_2014.pdf">corporate and foundation annual reports</a>; and corporate tax information. As I’m teaching my Data Science students, this broad range of factual data allows us to do our own analysis of the candidates, even as the campaigns analyze us.</p>
<h2>Determining what to analyze</h2>
<p>Some of the data you might like to collect for analysis about individual candidates simply are not going to be available – to you or anyone else – unless the candidates choose to make such information available. For example, health or tax records. But some data are available that are unequivocal: debate transcripts. </p>
<p>Debate transcripts are like court transcripts – they are an accurate, factual rendition of who said what. That makes them a very reliable source of information about candidates – devoid of bias or other influence that may be presented in third-party blogging or reporting about the debate. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/FRlI2SQ0Ueg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Their words can tell us quite a bit, too.</span></figcaption>
</figure>
<p>Similarly, social media postings from the candidate directly or on official campaign accounts are excellent sources of data. When we subject them to computer analysis, we can learn many things about the candidates based on how they express themselves. </p>
<h2>Initial analysis</h2>
<p>The transcript can certainly tell us who spoke most, but that’s not the whole picture. How much someone is talking isn’t enough. What are they talking about, and how are they using language to discuss their topics? And how about emotion? </p>
<p>The field of <a href="http://dx.doi.org/10.1145/1643823.1643908">natural language processing</a> offers a wide range of techniques for summarizing large blocks of text, identifying names, identifying core topics and so on. Google has recently released two programs that make this much easier for nontechnical users to explore: “<a href="https://research.googleblog.com/2016/05/announcing-syntaxnet-worlds-most.html">SyntaxNet</a>” and “Parsey McParseFace” (<a href="http://thenextweb.com/dd/2016/05/12/google-just-open-sourced-something-called-parsey-mcparseface-change-ai-forever/">its real name</a>).</p>
<p>A simple word count of the words spoken during the <a href="http://www.presidency.ucsb.edu/debates.php">16 primary debates</a> that took place up to February 2016 suggests that Hillary Clinton spoke about 20 percent more words than Donald Trump. By a simple count, she was the most prolific speaker of all of the candidates in these debates. But that’s not the whole picture. Some candidates may have fielded more questions than others, or been given more leeway to speak at length. When we account for these and other factors – such as how many debates a candidate attended and how many other participants there were – a very different picture emerges: Trump is in fact the most verbose candidate, and exceeds Clinton by around 18 percent.</p>
<p>The quantity of talking isn’t enough. We also need to look at the issues they are talking about, <a href="http://www.atlasobscura.com/articles/how-common-phrases-reveal-key-differences-between-clinton-and-trump">their vocabulary</a> and the emotions they apply. Clinton uses a wider vocabulary: Using the combined data from these primary debates, she used around 2,300 distinct word bases or stems (counting related terms such as “vote,” “voter” and “voting” as a single term). Trump used a much smaller vocabulary of only 1,750 stems. </p>
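<p>The stem counting is straightforward to reproduce in outline. Here is a rough sketch using the NLTK library; the sentence below is a stand-in for the actual transcripts, and NLTK’s Porter stemmer is only an approximation of the stemming the analyses used.</p>
<pre><code>import nltk
from nltk.stem import PorterStemmer

nltk.download("punkt", quiet=True)  # tokenizer data, first run only
stemmer = PorterStemmer()

transcript = "Every vote matters, and voting shows that your votes count."
tokens = [t.lower() for t in nltk.word_tokenize(transcript) if t.isalpha()]
stems = {stemmer.stem(t) for t in tokens}  # "vote", "votes", "voting" collapse

print(len(tokens), "words,", len(stems), "distinct stems")
</code></pre>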
<p>Clinton uses lengthier, more sophisticated sentence constructions – scoring around 12 on the <a href="http://www.readabilityformulas.com/gunning-fog-readability-formula.php">Gunning Fog Index</a>, which measures the complexity of language – while Trump uses tweet-like short phrases that score a 7. This suggests Clinton is seeking to communicate with a more educated and socially sophisticated audience, while Trump makes an effort to be readily understood at all socioeconomic levels. </p>
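<p>The Gunning Fog Index itself is a simple formula: 0.4 times the sum of the average sentence length (in words) and 100 times the share of “complex” words, those with three or more syllables. A minimal implementation, approximating syllables by counting vowel groups (crude, but serviceable for a demo):</p>
<pre><code>import re

def syllables(word):
    # Crude estimate: count runs of vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

print(round(gunning_fog("We will negotiate comprehensive international agreements."), 1))
print(round(gunning_fog("We will win. Believe me. It will be great."), 1))
</code></pre>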
<p>We can also use sentiment analysis to get a sense of the language and emotion in the debate. We can determine whether a candidate is under stress or remaining calm by looking at the tone of the words used, or whether they are imparting a positive or negative message. <a href="http://www.techrepublic.com/article/clinton-trump-debate-1-what-data-from-social-media-tells-us/">Analysis of the first presidential debate</a> shows the two candidates were close: Clinton used 53 percent negative terms while Trump used 55 percent. She is also <a href="http://www.vox.com/2016/6/2/11831014/clinton-trump-twitter">more positive when tweeting</a>.</p>
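<p>Off-the-shelf tools make a first pass at this kind of scoring easy. A sketch using NLTK’s VADER sentiment analyzer (the sentences and scores here are illustrative, not drawn from the published debate analyses):</p>
<pre><code>import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

lines = [
    "Our economy is a disaster and our leaders are weak.",
    "I have a serious plan to create good jobs and raise incomes.",
]
for line in lines:
    scores = sia.polarity_scores(line)  # neg/neu/pos proportions + compound
    print(f"neg={scores['neg']:.2f} pos={scores['pos']:.2f} {line}")
</code></pre>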
<h2>Turning to social media</h2>
<p>We could also delve deeper into the debate transcripts to look at things like the frequency with which specific topics are addressed, or how the candidates’ debate styles, messages and sentiments change over time. But let’s take a quick look at another valuable source of information: social media.</p>
<p>Twitter, Windows Messenger, Instagram and other sites provide a new and exciting window onto what is being said and thought by society at large. <a href="https://dev.twitter.com/rest/reference/get/search/tweets">These platforms</a> allow us to <a href="https://msdn.microsoft.com/en-us/library/aa751014.aspx">download streams</a> of <a href="https://www.instagram.com/developer/">data for analysis</a>. With just a few lines of programming code you could, for example, get the latest tweets from either or both of the candidates – and often at no cost.</p>
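<p>As an example of how few lines it takes, here is a sketch using the current Tweepy library against the Twitter API; the bearer token and username are placeholders, and you would substitute your own API credentials and a real account:</p>
<pre><code>import tweepy

client = tweepy.Client(bearer_token="YOUR-BEARER-TOKEN")  # placeholder credential

user = client.get_user(username="SomeCandidate")          # hypothetical handle
tweets = client.get_users_tweets(id=user.data.id, max_results=10)

for tweet in tweets.data or []:
    print(tweet.text)
</code></pre>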
<p>A sentiment analysis of their tweets could reveal how the candidates use social media, and what they’re saying to their audiences on those services. As was found in an <a href="http://varianceexplained.org/r/trump-tweets/">analysis of which device Trump’s account tweeted from</a>, they can even reveal whether a candidate is <a href="https://twitter.com/tvaziri/status/762005541388378112/photo/1">tweeting personally</a>, or whether it’s a campaign staffer standing in.</p>
<p>The internet and social media give us access to a wide variety of data, offering the public insight into the facts and tendencies behind candidates’ statements and claims. Even as the candidates and campaigns scrutinize our every click and post, we can keep our own eyes on them too.</p>
<p class="fine-print"><em><span>Rick Hutley does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Candidates and campaigns are analyzing voters endlessly this election season. But the internet allows us to turn the tables and obtain a wide variety of data about them, too.Rick Hutley, Clinical Professor of Analytics, University of the PacificLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/633182016-09-25T19:32:58Z2016-09-25T19:32:58ZWhat brain regions control our language? And how do we know this?<figure><img src="https://images.theconversation.com/files/137024/original/image-20160908-25244-1zf7n8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Our language abilities are enabled by a co-ordinated network of brain regions that have evolved to give humans a sophisticated ability to communicate.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/homeofbastian/2830133659/in/photolist-5j6b8V-2YLMgN-fdTet2-7E3qmp-dPDTKu-FvFi2-3Z7wR-e77LXU-ezqVA5-dQJ17i-ae4PEz-efSzmw-3eVbfC-ksi7-7Yvcat-8uRQV3-4QjLhj-3gSMF-7kA9JT-8VBQJX-6XfwDL-aaUhVY-6us67C-nqLMx7-95FwVE-dW52ER-ekqqxZ-f5EWZ7-bD3QnG-azHACF-2d1ebs-4pnRe7-a8x7N1-aKvFiX-c8SU6A-nAhRbL-bxzm7F-4wr7ZJ-hrX29-9U6cxS-5gknjQ-9u5Wvw-6YHDmi-kkyR4p-8jnvRE-7P2tfd-rFK9Nx-cvHKWA-bD3PH9-BPupf">[bastian.]/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p><em>The brain is key to our existence, but there’s a long way to go before neuroscience can truly capture its staggering capacity. For now, though, our <a href="https://theconversation.com/au/topics/brain-control-series-31489">Brain Control series</a> explores what we do know about the brain’s command of six central functions: language, mood, memory, vision, personality and motor skills – and what happens when things go wrong.</em></p>
<hr>
<p>When you read something, you first need to detect the words and then to interpret them by determining context and meaning. This complex process involves many brain regions. </p>
<p>Detecting text usually involves the <a href="http://neuroscience.uth.tmc.edu/s2/chapter15.html">optic nerve and other nerve bundles</a> delivering signals from the eyes to the visual cortex at the back of the brain. If you are reading in Braille, you use the <a href="http://neuroscience.uth.tmc.edu/s2/chapter04.html">sensory cortex</a> towards the top of the brain. If you listen to someone else reading, then you use the <a href="http://neuroscience.uth.tmc.edu/s2/chapter13.html">auditory cortex</a> not far from your ears. </p>
<p>A <a href="http://neuroscience.uth.tmc.edu/s4/chapter08.html">system of regions</a> towards the back and middle of your brain help you interpret the text. These include the <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4107834/">angular gyrus</a> in the parietal lobe, <a href="http://onlinelibrary.wiley.com/doi/10.1111/j.1749-6632.1976.tb25546.x/abstract">Wernicke’s area</a> (comprising mainly the top rear portion of the temporal lobe), <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4885738/">insular cortex</a>, <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2424405/">basal ganglia and cerebellum</a>.</p>
<hr>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/138932/original/image-20160923-25499-1v86vev.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/138932/original/image-20160923-25499-1v86vev.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=529&fit=crop&dpr=1 600w, https://images.theconversation.com/files/138932/original/image-20160923-25499-1v86vev.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=529&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/138932/original/image-20160923-25499-1v86vev.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=529&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/138932/original/image-20160923-25499-1v86vev.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=665&fit=crop&dpr=1 754w, https://images.theconversation.com/files/138932/original/image-20160923-25499-1v86vev.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=665&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/138932/original/image-20160923-25499-1v86vev.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=665&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">The Conversation</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<hr>
<p>These regions work together as a network to process words and word sequences to determine context and meaning. This enables our receptive language abilities, which means the ability to understand language. Complementary to this is expressive language, which is the ability to produce language. </p>
<p>To speak sensibly, you must think of words to convey an idea or message, formulate them into a sentence according to grammatical rules and then use your lungs, vocal cords and mouth to create sounds. Regions in your frontal, temporal and parietal lobes formulate what you want to say and the <a href="http://neuroscience.uth.tmc.edu/s3/chapter03.html">motor cortex</a>, in your frontal lobe, enables you to speak the words.</p>
<p>Most of this language-related brain activity is likely occurring in the left side of your brain. But some people use an even mix of both sides and, rarely, some have right dominance for language. There is an evolutionary view that <a href="http://www.jneurosci.org/content/25/45/10351.long">specialisation of certain functions to one side or the other</a> may be an advantage, as many animals, especially vertebrates, exhibit brain function with prominence on one side.</p>
<p>Why the left side is favoured for language isn’t known. But we do know that injury or conditions such as <a href="http://www.neurology.org/content/67/10/1813">epilepsy, if it affects the left side of the brain</a> early in a child’s development, can increase the chances language will develop on the right side. The chance of the person being left-handed is also increased. This makes sense, because the left side of the body is controlled by the motor cortex on the right side of the brain.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/138756/original/image-20160922-22540-xt2j8i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/138756/original/image-20160922-22540-xt2j8i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=438&fit=crop&dpr=1 600w, https://images.theconversation.com/files/138756/original/image-20160922-22540-xt2j8i.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=438&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/138756/original/image-20160922-22540-xt2j8i.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=438&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/138756/original/image-20160922-22540-xt2j8i.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=551&fit=crop&dpr=1 754w, https://images.theconversation.com/files/138756/original/image-20160922-22540-xt2j8i.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=551&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/138756/original/image-20160922-22540-xt2j8i.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=551&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">To speak sensibly, you must think of words to convey an idea or message, formulate them into a sentence according to grammatical rules and then use your lungs, vocal cords and mouth to create sounds.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/paulpod/478995331/in/photolist-JjYCc-yKtpE-6GWm9M-8Bo6Kj-eQM8tp-dB1fWn-53382S-5NyF79-abEHgE-qtbHu-8s9MD-iLV85-4vmaxd-4Q14um-eMgfdz-8kqWBn-aJifav-qvck3-8ku5Rw-o4rcHW-ac55u-7piWQa-dDNWKm-8kqUgt-ekqH7B-6Zo69a-86iZdJ-gCnhb3-m9KL4-5cDdfX-6DqwK3-6MBJJ-qpjzf-3cyUdJ-afF9dL-nr3wd-9t4HD2-6AQmA1-ojh79-2hshBG-3d7qaZ-agQ1vn-4VBLqh-4ebAFg-8pjxvJ-pkr9jn-cgc9fw-4rAjC4-Yfc67-aieCb3">paul pod/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<h2>Selective problems</h2>
<p>In 1861, French neurologist Pierre Paul Broca described a patient unable to speak who had no motor impairments to account for the inability. A postmortem examination showed a lesion in a large <a href="http://psychclassics.yorku.ca/Broca/perte-e.htm">area towards the lower middle of his left frontal lobe</a> particularly important in language formulation. This is now known as <a href="http://neuroscience.uth.tmc.edu/s4/chapter08.html">Broca’s area</a>. </p>
<p>The clinical symptom of being unable to speak despite having the motor skills is known as expressive aphasia, or Broca’s aphasia.</p>
<p>In 1874, German neurologist Carl Wernicke observed an opposite phenomenon. A patient was able to speak but not understand language. This is known as receptive aphasia, or Wernicke’s aphasia. The damaged region, as you might correctly guess, is Wernicke’s area, mentioned above.</p>
<p>Scientists have also observed injured patients with <a href="http://link.springer.com/article/10.1007/BF01067101">other selective problems</a>, such as an inability to understand most words except nouns; or words with unusual spelling, such as those with silent consonants, like reign. </p>
<p>These difficulties are thought to arise from damage to selective areas or connections between regions in the brain’s language network. However, precise localisation can often be difficult given the complexity of individuals’ symptoms and the uncontrolled nature of their brain injury.</p>
<p>We also know the brain’s language regions work together as a <a href="http://www.sciencedirect.com/science/article/pii/S1053811912004703">co-ordinated network</a>, with some parts involved in multiple functions and a level of redundancy in some processing pathways. So it’s not simply a matter of one brain region doing one thing in isolation. </p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/138758/original/image-20160922-22530-1mflhwm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/138758/original/image-20160922-22530-1mflhwm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=778&fit=crop&dpr=1 600w, https://images.theconversation.com/files/138758/original/image-20160922-22530-1mflhwm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=778&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/138758/original/image-20160922-22530-1mflhwm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=778&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/138758/original/image-20160922-22530-1mflhwm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=977&fit=crop&dpr=1 754w, https://images.theconversation.com/files/138758/original/image-20160922-22530-1mflhwm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=977&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/138758/original/image-20160922-22530-1mflhwm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=977&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Broca’s area is named after French neurologist Pierre Paul Broca.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Paul_Broca_2.jpg">Wikimedia Commons</a></span>
</figcaption>
</figure>
<h2>How do we know all this?</h2>
<p>Before advanced medical imaging, most of our knowledge came from observing unfortunate patients with injuries to particular brain parts. One could relate the approximate region of damage to their specific symptoms. Broca’s and Wernicke’s observations are well-known examples.</p>
<p>Other knowledge was inferred from brain-stimulation studies. Weak electrical stimulation of the brain while a patient is awake is sometimes performed in patients undergoing surgery to remove a lesion such as a tumour. The stimulation causes that part of the brain to stop working for a few seconds, which can enable the surgeon to identify areas of critically important function to avoid damaging during surgery. </p>
<p>In the mid-20th century, this helped neurosurgeons discover more about the <a href="http://press.princeton.edu/titles/855.html">localisation of language function in the brain</a>. It was clearly demonstrated that while most people have language originating on the left side of their brain, some could have language originating on the right.</p>
<p>Towards the later part of the 20th century, if a surgeon needed to find out which side of your brain was responsible for language – so as not to do any damage – the surgeon would put one side of your brain to sleep with an anaesthetic. The doctor would then ask you a series of questions, determining your language side from your ability or inability to answer them. This invasive test (less often used today due to the availability of functional brain imaging) is <a href="http://www.tandfonline.com/doi/abs/10.1076/jhin.8.3.286.1819">known as the Wada test</a>, named after Juhn Wada, who first described it just after the second world war.</p>
<h2>Brain imaging</h2>
<p>Today, we can get a much better view of brain function by using imaging techniques, especially magnetic resonance imaging (MRI), a safe procedure that uses magnetic fields to take pictures of your brain. </p>
<figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/138760/original/image-20160922-22540-8fqobz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/138760/original/image-20160922-22540-8fqobz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=800&fit=crop&dpr=1 600w, https://images.theconversation.com/files/138760/original/image-20160922-22540-8fqobz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=800&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/138760/original/image-20160922-22540-8fqobz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=800&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/138760/original/image-20160922-22540-8fqobz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1005&fit=crop&dpr=1 754w, https://images.theconversation.com/files/138760/original/image-20160922-22540-8fqobz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1005&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/138760/original/image-20160922-22540-8fqobz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1005&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">When we see activity in a region of the brain, that’s when there is an increase in freshly oxygenated blood flow.</span>
<span class="attribution"><span class="source">from shutterstock.com</span></span>
</figcaption>
</figure>
<p>Using MRI to measure brain function is called functional MRI (fMRI), which detects signals from magnetic properties of blood in vessels supplying oxygen to brain cells. The fMRI signal changes depending on <a href="http://www.pnas.org/content/89/13/5951">whether the blood is carrying oxygen</a>, which means it slightly reduces the magnetic field, or has delivered up its oxygen, which slightly increases the magnetic field. </p>
<p>A few seconds after brain neurons become active in a brain region, there is an increase in freshly oxygenated blood flow to that brain part, much more than required to satisfy the oxygen demand of the neurons. This is what we see when we say a brain region is activated during certain functions.</p>
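<p>This lag can be modelled. Here is a sketch of the standard approach, convolving a task time course with a canonical “double-gamma” haemodynamic response function; the parameters below are a textbook form for illustration, not a clinical model:</p>
<pre><code>import numpy as np
from scipy.stats import gamma

t = np.arange(0, 30, 0.1)                        # seconds, 10 samples/s
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)  # peak ~5 s, later undershoot

stimulus = np.zeros(600)                         # one minute at 10 Hz
stimulus[100:150] = 1.0                          # task block from 10 s to 15 s

bold = np.convolve(stimulus, hrf)[:600]          # predicted fMRI signal
print("task onset: 10.0 s; predicted BOLD peak:",
      round(np.argmax(bold) / 10, 1), "s")
</code></pre>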
<p>Brain-imaging methods have revealed that much more of our brain is involved in language processing than previously thought. We now know that <a href="http://www.sciencedirect.com/science/article/pii/S1053811912004703">numerous regions in every major lobe</a> (frontal, parietal, occipital and temporal lobes; and the cerebellum, an area at the bottom of the brain) are involved in our ability to produce and comprehend language. </p>
<p>Functional MRI is also becoming a useful clinical tool. In some centres it has replaced the Wada test to <a href="http://www.ncbi.nlm.nih.gov/pubmed/20097290">determine where language is in the brain</a>. </p>
<p>Scientists are also using fMRI to build up a finer picture of how the brain processes language by designing experiments that compare which areas are active during various tasks. For instance, researchers have observed <a href="http://www.sciencedirect.com/science/article/pii/S0006322302013653">differences in brain language regions</a> of dyslexic children compared to those without dyslexia. </p>
<p>Researchers compared fMRI images of groups of children with and without dyslexia while they performed language-related tasks. They found that dyslexic children had, on average, less activity in Broca’s area mainly on the left during this task. They also had less activity in or near Wernicke’s area on the left and right, and a portion of the front of the temporal lobe on the right. </p>
<p>Could this type of brain imaging provide a diagnostic signature of dyslexia? This is a work-in-progress, but we hope further study will one day lead to a robust, objective and early brain-imaging test for dyslexia and other disorders.</p>
<hr>
<p><em>Want to know how the brain controls your mood? Read today’s accompanying piece <a href="http://theconversation.com/the-emotion-centre-is-the-oldest-part-of-the-human-brain-why-is-mood-so-important-63324">here</a>.</em></p>
<p class="fine-print"><em><span>David Abbott receives fellowship funding from the Australian National Imaging Facility. He has received grants from the National Health and Medical Research Council (Australia), the Australian Research Council, and the National Institutes of Health (USA). David works at the Florey Institute of Neuroscience and Mental Health and has honorary affiliations with The University of Melbourne. The Florey acknowledges support from the Victorian Government and in particular the funding from the Operational Infrastructure Support Grant.</span></em></p>When you read this text, certain regions in your brain begin working more than others. Advanced imaging allows scientists to map the brain networks responsible for understanding language.David Abbott, Senior Research Fellow and Head of the Epilepsy Neuroinformatics Laboratory, Florey Institute of Neuroscience and Mental HealthLicensed as Creative Commons – attribution, no derivatives.