Cognitive bias – The Conversation

<h1>Exploring the roots of stupidity: first understand the psychology of what lies behind irrational opinions</h1>
<p>Most people, at one time or another, act foolishly. However, truly ignorant individuals exhibit a lack of introspection and stubbornly cling to their opinions, regardless of how irrational they may be. These people demonstrate unwavering self-assurance and are often oblivious to their own inadequacies. They craft retrospective justifications to validate their beliefs and hold onto them.</p>
<p>Even when presented with opportunities for personal growth and change, they seem incapable of breaking free from their entrenched habits. Reasoning with stubborn individuals can be as perplexing as it is frustrating. Many have written it off as a hopeless task. </p>
<p>As American writer Mark Twain is <a href="https://marktwainstudies.com/the-apocryphal-twain-never-argue-with-stupid-people-they-will-drag-you-down-to-their-level-and-beat-you-with-experience/">often (if apocryphally) said to have cautioned</a>: </p>
<blockquote>
<p>Never argue with stupid people, they will drag you down to their level and then beat you with experience.</p>
</blockquote>
<p>To argue against stupidity only seems to reinforce it. These individuals thrive on power and control, defending their position and denying their foolishness, regardless of counterarguments.</p>
<p>Despite these challenges, it is still possible to sway such people towards more sensible behaviour. It all starts with understanding the roots of stupidity. From a psychological perspective, stupidity is often considered an outcome of cognitive biases or errors in judgment.</p>
<h2>Why biases persist</h2>
<p>Many prominent psychologists attribute irrational beliefs and foolish actions to our cognitive limitations. Research into human cognition and decision-making has shed light on why <a href="https://www2.psych.ubc.ca/%7Eschaller/Psyc590Readings/TverskyKahneman1974.pdf">these biases persist</a>. It reveals that humans are not purely rational beings. <a href="https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555">They switch between fast, intuitive thinking and slow, rational thinking</a>, depending on the situation. </p>
<p><a href="https://pubmed.ncbi.nlm.nih.gov/20920513/">Neuroscientists</a> have also weighed in, noting that the brain’s frontal lobes, responsible for rational thinking, can be overridden by the amygdala, a more primitive system for processing threats. In emergency situations requiring quick decisions, the slower, deliberate information processing is often set aside. </p>
<p>Numerous cognitive biases can help explain some of the nonsensical decisions people make. For instance, individuals can be susceptible to confirmation bias, where they favour information that aligns with their preexisting beliefs. They may also succumb to “anchoring”, becoming overly influenced by the first piece of information they receive (the anchor), even when this information turns out to be irrelevant or arbitrary. </p>
<p>The overconfidence effect is another potential factor at play, causing people to overrate their abilities and knowledge and the accuracy of their beliefs. There is also the phenomenon of <a href="https://www.psychologytoday.com/za/basics/groupthink">groupthink</a>, where groups prioritise consensus and conformity over critical evaluation. </p>
<p>Flawed decisions could also be the result of fundamental <a href="https://online.hbs.edu/blog/post/the-fundamental-attribution-error">attribution error</a>. This involves incorrectly attributing others’ behaviour to internal factors, such as personality, rather than to external factors, like situational influences. </p>
<p>Also, the <a href="https://thedecisionlab.com/biases/availability-heuristic">availability heuristic</a> explains the tendency to rely on information that comes to mind quickly and easily when making decisions. </p>
<p>While these cognitive biases don’t inherently imply stupidity, when left unaddressed, they can pose significant risks.</p>
<h2>Managing the misguided</h2>
<p>When individuals recognise their cognitive biases, they become more willing to participate in productive discussions and gain deeper insights into their own behaviour. Rather than trying to persuade them through rational discourse, one can encourage them to examine these biases. </p>
<p><strong>Promote reflective thinking:</strong> People can be taught how to properly decode the information they encounter. They can learn to discern whether their own observations and beliefs are grounded in accurate evidence. </p>
<p><strong>Advocate greater self-awareness:</strong> When people acquire self-awareness, they are able to reflect on their behaviour more objectively. </p>
<p><strong>Keep people grounded:</strong> Self-absorbed people often lack interest in the opinions of others. They need to attain a more grounded perspective on life and cultivate their capacity for self-evaluation. Empathy is another great remedy for foolishness. </p>
<p><strong>Satire as a tool:</strong> Satire has the potential to stimulate reflection and critical thinking. It gets people to question their assumptions without attacking individuals personally. </p>
<p><strong>Let them learn the hard way:</strong> Instead of instructing individuals to avoid specific foolish activities, one may encourage them to go ahead. It can be risky, but the hope is that when their actions lead to disastrous outcomes, they will learn from the experience. </p>
<p><strong>Lead by example:</strong> An effective leader, whether in government, business or any other sector, requires a combination of intelligence, knowledge, wisdom, empathy and compassion. Additional qualities are critical thinking, problem-solving skills, proficiency in handling complex issues, and the ability to collaborate with others and distinguish between the wise and the foolish. </p>
<p>A leader like this can set an example that contrasts with the conduct of foolish leaders. </p>
<h2>Stupidity in a ‘post-truth’ era</h2>
<p>In today’s “post-truth” era we find ourselves grappling with a daily barrage of public discourse that blurs the line between fact and fantasy. We are fooled by errors and lies, and social media appears to be amplifying such stupidity. The rise of social media has made human follies more visible than ever. We tend to underestimate the number of ignorant individuals in our midst, and the influence such people can exert over large groups.</p>
<p>The dangerous combination of power and stupidity can disrupt the lives of countless people. Unfortunately, as long as there are foolish supporters enabling such leaders, people will be trapped in their own collective foolishness. A significant counterforce against collective stupidity is the presence of institutional safeguards. </p>
<p>Citizens must cultivate a robust civic culture, fostering a society where they can exert influence on their government. There need to be laws that discourage misinformation and legal avenues to counter fake news, especially when it causes personal harm. </p>
<p>Education can lead people to discover and acknowledge their own ignorance, nurturing a more thoughtful and informed society that is better equipped to confront the pitfalls of stupidity.</p>
<p><em>This is an edited version of <a href="https://knowledge.insead.edu/leadership-organisations/how-handle-foolish-people"> an article published</a> by Insead Knowledge.</em></p>
<p class="fine-print"><em><span>Manfred Kets de Vries does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em>Manfred Kets de Vries, Distinguished Clinical Professor of Leadership Development and Organisational Change, INSEAD. Licensed as Creative Commons – attribution, no derivatives.</em></p>

<h1>Training to reduce cognitive bias may improve decision making after all</h1>
<p>Ever since Daniel Kahneman and Amos Tversky formalised the concept of <a href="https://www.sciencedirect.com/topics/neuroscience/cognitive-bias">cognitive bias in 1972</a>, most empirical evidence has supported the claim that we are incapable of training ourselves to make better decisions. Cognitive bias has practical ramifications beyond private life, extending to professional domains including business, military operations, political policy and medicine.</p>
<p>Some of the clearest examples of the effects of bias on consequential decisions have occurred in military operations. Confirmation bias – that is, the tendency to conduct a biased search for and interpretation of evidence in support of our hypotheses and beliefs – contributed to the downing of Iran Air Flight 655 in 1988 and, more recently, to the decision to invade Iraq in 2003. It has also been identified as one of the most deleterious biases on social media, actively contributing to the <a href="https://link.springer.com/article/10.1007/s10796-021-10222-9">development of polarisation and echo chambers in exchanges</a>.</p>
<h2>Can one bend one’s intuition?</h2>
<p>Despite all the attention in recent years on reducing cognitive bias, most evidence suggests that there’s little we can do to improve our professional and personal decision making. But a recent experiment suggests that it may be possible for training to improve decision making in the field.</p>
<p>We are regularly reminded of the many ways that cognitive biases interfere with our decision making. However, beyond teaching a specific skill or rule – for example, how to calculate expected values – reading articles and books or even completing courses and business cases <a href="https://journals.sagepub.com/doi/full/10.1111/j.1745-6924.2009.01142.x">has proven of little help</a> to people in the throes of making a decision. That conclusion was succinctly summarised by Daniel Kahneman, a Nobel laureate and a founder of the field, who said in <a href="https://www.theatlantic.com/magazine/archive/2018/09/cognitive-bias/565775/">a 2018 interview</a>:</p>
<blockquote>
<p>“You can’t improve intuition. Perhaps, with very long-term training, lots of talk, and exposure to behavioural economics, what you can do is cue reasoning… Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument, the rules go out the window.”</p>
</blockquote>
<p>That view is backed up by a trail of frustrating findings from the <a href="https://apps.dtic.mil/docs/citations/ADA099435">1980s</a> on, where even trained experts such as <a href="https://www.ncbi.nlm.nih.gov/pubmed/7070445">doctors</a>, <a href="https://www.sciencedirect.com/science/article/abs/pii/074959788790046X">realtors</a> and <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/j.1468-0017.2012.01438.x">philosophers</a> did not show improved decision making when faced with novel contexts and problems in the field.</p>
<p>In an article published in <a href="https://journals.sagepub.com/doi/abs/10.1177/0956797619861429"><em>Psychological Science</em></a>, we report promising results that suggest this post-mortem may be premature. In an experiment involving graduate business students, we found that bias-reduction training can improve decision making in field settings even though reminders of bias are absent.</p>
<h2>Training sessions and computer games</h2>
<p>The experiment was designed to surreptitiously measure the influence of a single <a href="https://journals.sagepub.com/doi/abs/10.1037/1089-2680.2.2.175">de-bias training intervention</a> targeting confirmation bias – the tendency to search for evidence confirming hypotheses and ideas we already suspect or believe to be true, to overweight facts and ideas that support that belief, and to discount or ignore evidence that supports alternative hypotheses.</p>
<p>A little more than half of the participants in the experiment (62%) were given the training and then asked to complete a business case designed to test confirmation bias, but they were unaware of the connection between the training and the case. The rest of the participants first completed the case and then received training. Even though the time lag between training and the case averaged 18 days, and the structure of the problems used in the training differed from that of the case, a comparison of the trained and untrained students revealed that training reduced choice of the inferior, hypothesis-confirming case solution by 29%.</p>
<p>To disguise the relationship between training and the case, all graduate business students in three programs were invited to play a serious computer game in a set of sessions that took place over a 20-day window. This particular training intervention has produced large and long-lasting reductions of confirmation bias, correspondence bias, and the bias blind spot, in laboratory contexts. Originally created for the Office of the Director of National Intelligence, it has been used to reduce bias in US government intelligence analysts.</p>
<h2>Imagining you’re leading an automotive racing team</h2>
<p>All graduate students in the three programs also completed, in one of their courses, an unannounced business case known as “Carter Racing”, a case modelled on the fatal decision to launch the <em>Challenger</em> space shuttle in 1986. Here, each student acted as the lead of an automotive racing team making a high-stakes, go/no-go decision: remain in a race or withdraw from it. We then used natural variance in the training schedule to test whether the effects of debias training would transfer to improved decision making in the case, when trainees were not aware that their decision making would be examined for bias.</p>
<p>At first sight, the case narrative and payoff structure favour the hypothesis-confirming choice: remaining in the race. A careful examination of the data provided in the case, however, reveals that withdrawing from the race is an objectively superior option, but it requires the compilation of two charts. The first chart tracks frequencies of engine failures in relation to temperature at the time of the race. The other chart tracks frequencies of races without engine failures by temperatures at the time of the race. Casual inspection of either chart would not reveal the clear relationship between failures and temperature, but when both charts are considered together, the relationship is strikingly clear. A catastrophic engine failure is nearly certain at the low temperature recorded just before the race is to begin.</p>
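The statistical trap in the case can be sketched in a few lines. The temperatures below are hypothetical stand-ins (the actual Carter Racing figures differ), but they show why each chart in isolation is inconclusive while the combined data are damning:

```python
# Hypothetical race-day temperatures (°F); NOT the actual Carter Racing data.
failure_temps    = [53, 56, 57, 63, 66, 67, 70]              # races with an engine failure
no_failure_temps = [66, 67, 69, 70, 72, 75, 76, 79, 80, 81]  # clean races

def failure_rate_at_or_below(threshold):
    """Failure rate among races run at or below `threshold` degrees."""
    fails = sum(t <= threshold for t in failure_temps)
    clean = sum(t <= threshold for t in no_failure_temps)
    return fails / (fails + clean) if fails + clean else None

# Chart 1 alone (failures) shows failures across many temperatures;
# chart 2 alone (clean races) shows successes across many temperatures.
# Only the combination exposes the cold-weather risk:
cold_rate = failure_rate_at_or_below(65)   # every race below 66°F failed
overall   = failure_rate_at_or_below(100)  # overall base rate is far lower
```

Under these illustrative numbers, every race at or below 65°F ended in failure, while the overall failure rate sits around 40% — the pattern that casual inspection of either chart misses.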
<p>Participants trained before completing the case were 29% less likely to choose the inferior hypothesis-confirming solution than participants trained after completing the case. To address possible selection biases, such as better students signing up for earlier training sessions, we tested and found that the effect held even if we only compared participants who completed the training one day before or after the case. Further, when controlling for factors including students’ work experience, age, grade point averages, GMAT scores, and propensity to engage in cognitive reflection, we found that the training intervention still significantly improved decision making.</p>
<p>Our analyses of participants’ written justifications for their decisions suggest that their improved decisions were driven by a reduction in confirmatory hypothesis testing. Trained participants spontaneously generated fewer arguments in support of going ahead with the race – the inferior case solution – than did untrained participants.</p>
<h2>Improvement is possible</h2>
<p>These results provide encouraging evidence that training can improve decision making in the field and consequential decisions in professional and personal life. They also address the concern that debiasing training may lead people to overcorrect, or to abandon <a href="https://psycnet.apa.org/record/2008-01984-002">heuristics</a> – the simple rules people rely on to reduce complexity and effort when making decisions, and which sometimes <a href="https://www.sciencedirect.com/science/article/abs/pii/S1364661310001713">produce these biases</a> – in situations where they are useful. Trained participants were more likely to choose the optimal case solution, so training benefited rather than impaired decision making.</p>
<p>Of course, these findings are limited to a single field experiment. More research is needed to replicate the effect in other domains and to explain why this game-based training intervention transferred more effectively than other forms of training tested by past research. One possibility is that games are more engaging training interventions than lectures or written summaries of research findings. Another is that the game provided intensive practice and personalised feedback. A third is the way the intervention taught players about biases. Training may be more effective when it describes cognitive biases and how to mitigate them at an abstract level, and then gives trainees immediate practice testing their new knowledge on different problems and contexts.</p>
<p class="fine-print"><em><span>Anne-Laure Sellier has received funding from the Fondation HEC.</span></em></p><p class="fine-print"><em><span>Carey K. Morewedge previously received funding for other debiasing research from the Intelligence Advanced Research Projects Activity of the United States Government.</span></em></p><p class="fine-print"><em><span>Irene Scopelliti does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than her research institution.</span></em></p>
<p class="fine-print"><em>Anne-Laure Sellier, Professor of Behavioural Sciences at HEC Paris and member of the CNRS-GREGHEC research group, HEC Paris Business School; Carey K. Morewedge, Professor of Marketing, Boston University; Irene Scopelliti, Professor of Marketing and Behavioural Science, City, University of London. Licensed as Creative Commons – attribution, no derivatives.</em></p>

<h1>Social media drains our brains and impacts our decision making – podcast</h1>
<figure><img src="https://images.theconversation.com/files/543273/original/file-20230817-40322-o38kim.jpg?ixlib=rb-1.1.0&rect=798%2C167%2C7788%2C5214&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Social media can make us buy products we don't want, new research shows. </span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/young-asian-business-woman-connecting-to-social-royalty-free-image/1470073460?phrase=social+media&adppopup=true">Oscar Wong/Moment via Getty Images</a></span></figcaption></figure><p>Ever found yourself scrolling through social media late at night and accidentally buying something you regretted? 
In this episode of <a href="https://theconversation.com/uk/topics/the-conversation-weekly-98901">The Conversation Weekly</a> podcast, we talk to an advertising expert about recent research into how social media <a href="https://www.tandfonline.com/doi/full/10.1080/15252019.2022.2144780">can overload our brains</a> and make us buy products we don’t need or want.</p>
<iframe src="https://embed.acast.com/60087127b9687759d637bade/6581c0553f03c00017d0f360" frameborder="0" width="100%" height="190px"></iframe>
<p><iframe id="tc-infographic-561" class="tc-infographic" height="100" src="https://cdn.theconversation.com/infographics/561/4fbbd099d631750693d02bac632430b71b37cd5f/site/index.html" width="100%" style="border: none" frameborder="0"></iframe></p>
<p><a href="https://scholar.google.com/citations?user=cXqXHpsAAAAJ&hl=en">Matthew Pittman</a> is a professor of advertising at the University of Tennessee in the US. In 2022, Pittman and his colleague <a href="https://scholar.google.com/citations?user=GqkucpQAAAAJ&hl=en&oi=ao">Eric Haley</a> conducted three online studies of Americans aged 18-65 to examine how people under various mental loads respond differently to advertisements.</p>
<p>“Our brain has limited resources and it can also be taxed if we try to do too many things at once and once those resources are depleted, there are usually negative consequences,” says Pittman. </p>
<blockquote>
<p>If you’re on the fence about a purchase and you’re under cognitive load and you see a lot of likes or a lot of comments, or maybe it’s very attractive people in the ad that look happy … click, I’m gonna purchase it.</p>
</blockquote>
<p>Pittman found that those who weren’t under <a href="https://doi.org/10.1007/s11251-009-9110-0">cognitive load</a> made more balanced purchasing decisions. But the group told to scroll through their Instagram feed for 30 seconds before looking at an advert was more susceptible to cues such as the comments and likes associated with it. </p>
<p>When participants were asked to describe their rationale for buying a product, Pittman was surprised to find that those under a high mental load showed diminished sentence and language capabilities. He found that Instagram put subjects in a mentally exhausted state because they were consuming different types of text, photos and posts.</p>
<blockquote>
<p>People that were not under cognitive load gave grammatically normal sentences that flowed logically, such as this ice cream looked tasty, or I liked the colors, but when people were under cognitive load, even their sentences were more fractured. Which explains why I can’t explain to my wife why I consistently make stupid purchases.</p>
</blockquote>
<p>Listen to the full episode of <a href="https://podfollow.com/the-conversation-weekly/view">The Conversation Weekly</a> to hear the different ways social media impacts our processing abilities and decision-making. </p>
<p>A <a href="https://cdn.theconversation.com/static_files/files/3015/Social_Media_and_Cognitive_Load_Transcript.docx.pdf?1706201893">transcript of this episode</a> is now available.</p>
<hr>
<p><em>Matthew Pittman does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</em></p>
<p><em>This episode was written and produced by Mend Mariwany with production assistance from Katie Flood and our intern Jusneel Mahal. Eloise Stevens does our sound design, and our theme music is by Neeta Sarl. The executive producer is Gemma Ware.</em> </p>
<p><em>You can find us on Twitter <a href="https://twitter.com/TC_Audio">@TC_Audio</a>, on Instagram at <a href="https://www.instagram.com/theconversationdotcom/">theconversationdotcom</a> or <a href="mailto:podcast@theconversation.com">via email</a>. You can also sign up to The Conversation’s <a href="https://theconversation.com/newsletter">free daily email here</a>.</em> </p>
<p><em>Listen to “The Conversation Weekly” via any of the apps listed above, download it directly via our <a href="https://feeds.acast.com/public/shows/60087127b9687759d637bade">RSS feed</a> or find out <a href="https://theconversation.com/how-to-listen-to-the-conversations-podcasts-154131">how else to listen here</a>.</em></p>
<p class="fine-print"><em>Mend Mariwany, Producer, The Conversation Weekly Podcast, The Conversation. Licensed as Creative Commons – attribution, no derivatives.</em></p>

<h1>What is the ‘sunk cost fallacy’? Is it ever a good thing?</h1>
<figure><img src="https://images.theconversation.com/files/560987/original/file-20231122-24-d8qfce.jpg?ixlib=rb-1.1.0&rect=98%2C151%2C4746%2C3280&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.pexels.com/photo/person-holding-burning-paper-in-dark-room-33930/">Eugene Shelestov/Pexels</a></span></figcaption></figure><p>Have you ever encountered a subpar hotel breakfast while on holiday? You don’t really like the food choices on offer, but since you already paid for the meal as part of your booking, you force yourself to eat something anyway rather than go down the road to a cafe.</p>
<p><a href="https://www.sciencedirect.com/science/article/pii/0167268180900517">Economists</a> and <a href="https://www.sciencedirect.com/science/article/pii/0749597885900494">social scientists</a> argue that such behaviour can happen due to the “sunk cost fallacy” – an inability to ignore costs that have already been spent and can’t be recovered. In the hotel breakfast example, the sunk cost is the price you paid for the hotel package: at the time of deciding where to eat breakfast, such costs are unrecoverable and should therefore be ignored.</p>
<p>Similar examples range from justifying finishing a banal, half-read book (or half-watched TV series) based on prior time already “invested” in the activity, to being less likely to quit exclusive groups such as sororities and sporting clubs the more <a href="https://psycnet.apa.org/record/1960-02853-001">effort it took to complete the initiation ritual</a>.</p>
<p>While these behaviours are not rational, they’re all too common, so it helps to be aware of this tendency. In some circumstances, you might even use it for your benefit.</p>
<h2>Sunk costs can affect high-stakes decisions</h2>
<p>While the examples above may seem relatively trivial, they show how common the sunk cost fallacy is. And it can affect decisions with much higher stakes in our lives. </p>
<p>Imagine that Bob previously bought a house for $1 million. Subsequently, there’s a nationwide housing market crash. All houses are now cheaper by 20% and Bob can only sell his house for $800,000. Bob’s been thinking of upgrading to a bigger house (and they are now cheaper!), but will need to sell his existing house to have funds for a downpayment.</p>
<p>However, he refuses to upgrade because he perceives a loss of $200,000 relative to the original price he paid of $1 million. Bob is committing the sunk cost fallacy by letting the original price influence his decision making – only the house’s current and projected price should matter.</p>
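Bob's arithmetic can be made concrete. The price of the bigger house below is a hypothetical figure added for illustration (the article gives no number); the point is that only current prices should enter the decision:

```python
CRASH = 0.20  # the nationwide 20% price drop from the example

old_paid = 1_000_000               # sunk cost: what Bob originally paid
old_now  = old_paid * (1 - CRASH)  # 800,000: what he can sell for today

big_before = 1_250_000                 # hypothetical pre-crash price of the bigger house
big_now    = big_before * (1 - CRASH)  # 1,000,000: its price today

# Rational comparison: the net cost of upgrading, at today's prices only.
upgrade_cost_before = big_before - old_paid  # 250,000 before the crash
upgrade_cost_now    = big_now - old_now      # 200,000 now: upgrading got cheaper

# What the sunk cost fallacy fixates on instead:
perceived_loss = old_paid - old_now  # the 200,000 "loss" vs the original price
```

The crash has made the upgrade 50,000 cheaper in net terms, yet Bob refuses because of a perceived loss that is irrelevant to the choice in front of him.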
<p>Bob might be acting irrationally, but he’s only human. Part of the reason we may find it difficult to ignore such losses is because losses are psychologically more salient relative to gains – this is known as <a href="https://psycnet.apa.org/record/1985-05780-001">loss aversion</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/560984/original/file-20231122-25-fx4vfh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Person attempting to build a crooked bird house with tools strewn across a table" src="https://images.theconversation.com/files/560984/original/file-20231122-25-fx4vfh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/560984/original/file-20231122-25-fx4vfh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/560984/original/file-20231122-25-fx4vfh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/560984/original/file-20231122-25-fx4vfh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/560984/original/file-20231122-25-fx4vfh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/560984/original/file-20231122-25-fx4vfh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/560984/original/file-20231122-25-fx4vfh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">It’s okay to quit a crafting project if it’s not looking salvageable any more.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/man-frustrated-angry-building-bad-birdhouse-151033373">Tim Masters/Shutterstock</a></span>
</figcaption>
</figure>
<p>While most of the evidence for the sunk cost fallacy comes from <a href="https://link.springer.com/article/10.1007/s40685-014-0014-8">individual decisions</a>, it may also influence the decisions of groups. In fact, it is sometimes referred to as the <a href="https://www.nature.com/articles/262131a0">Concorde fallacy</a>, because the French and British governments continued funding the doomed supersonic airliner long after it had become clear it was unlikely to be commercially viable.</p>
<p>Another example is drawn-out armed conflict that involves a large loss of lives for the losing side. Some may think it impossible to capitulate because the casualties will have “died in vain”.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/supersonic-flights-are-set-to-return-heres-how-they-can-succeed-where-concorde-failed-162268">Supersonic flights are set to return – here's how they can succeed where Concorde failed</a>
</strong>
</em>
</p>
<hr>
<h2>Knowing about sunk costs can help you</h2>
<p>If you find yourself justifying behaviour due to costs you’ve paid in the past rather than circumstances of the present, or predictions of the future, it’s worth checking yourself.</p>
<p>Identifying sunk costs allows you to cut your losses early and move on, rather than perpetuating larger losses. This is apparent in the housing example: the larger the crash, the cheaper the bigger house; and yet the larger the crash, the greater the perceived loss from selling the existing house. Hence, the greater the loss in opportunity inflicted by the sunk cost fallacy.</p>
<p>If you find it difficult to overcome the sunk cost fallacy, it may help to delegate such decisions to others. This may include the decision of whether to <a href="https://direct.mit.edu/rest/article-abstract/93/1/193/57894/The-Flat-Rate-Pricing-Paradox-Conflicting-Effects">go to a buffet</a> or subscribe to Netflix, with the latter potentially being a double whammy: one may feel compelled to binge-watch due to the flat fee structure and, as mentioned earlier, to finish mediocre series once halfway through.</p>
<h2>Use sunk costs to your advantage</h2>
<p>A second, less obvious benefit is actively using the fallacy to your advantage. For example, many gym memberships require upfront payments regardless of how much you use the facilities. If you find it hard to ignore sunk costs, choosing gym memberships that have large upfront fees and minimal pay-per-usage fees may be a way to <a href="https://pubsonline.informs.org/doi/abs/10.1287/mnsc.2018.3032">commit yourself</a> to a regular gym habit.</p>
<p>This can also apply to other activities that involve short-term pain for long-term gain – for example, paying for an online course will make you more likely to stick with it than if you found a free course.</p>
<p>But be warned, this doesn’t work for everything: it seems that spending wildly on a <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/ecin.12206">wedding ceremony or engagement ring</a> doesn’t have a “sunk cost” effect – it fails to increase the likelihood of staying married.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/gym-membership-how-to-get-the-most-out-of-it-according-to-a-sports-scientist-107551">Gym membership: how to get the most out of it, according to a sports scientist</a>
</strong>
</em>
</p>
<hr>
<p class="fine-print"><em><span>Aaron Nicholas does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>When we invest money, time or another resource we can’t get back, factoring that sunk cost into our future decisions can be a trap.Aaron Nicholas, Senior Lecturer in Economics, Deakin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2040882023-05-03T20:36:17Z2023-05-03T20:36:17ZAggression in kids is related to how they read others’ emotions<figure><img src="https://images.theconversation.com/files/522843/original/file-20230425-2136-t5th2l.png?ixlib=rb-1.1.0&rect=27%2C22%2C3016%2C2089&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Overtly hostile behavior tends to diminish with age except for a minority of children who are at risk of later criminality. This makes childhood a critical time for steering those most in need away from difficult life paths.</span> <span class="attribution"><span class="source">(Erinn Acland)</span>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span></figcaption></figure>
<p>It may be surprising to hear that <a href="https://doi.org/10.1007/s10802-005-9001-z">toddlers and preschoolers are the most physically aggressive age demographic</a>. Luckily, they lack coordination and strength, making their attacks less dangerous than those of adults. </p>
<p>Overtly hostile behaviour tends to <a href="https://doi.org/10.1111/1469-7610.00744">diminish with age</a> — except for a minority of children who are at risk of <a href="https://doi.org/10.1371/journal.pone.0062594">later criminality</a>. This makes childhood a critical time for steering those most in need of support away from difficult life paths.</p>
<p>Being blind to others’ <a href="https://doi.org/10.1016/j.neubiorev.2012.08.006">negative emotions</a> (anger, fear, sadness) is linked to callous-unemotional traits in childhood. These traits include a <a href="https://doi.org/10.1007/s10802-022-00909-1">lack of guilt</a> for harming others, <a href="https://doi.org/10.1016/j.cpr.2019.101809">a lack of empathy</a> and generally being <a href="https://doi.org/10.1177/1073191106287354">unemotional</a>. A poor ability to detect others’ negative emotions is also uniquely tied to <a href="https://doi.org/10.1002/ab.21989">aggression</a>.</p>
<p>If a child hurts someone, but can’t tell they’ve upset them, it means they won’t see the emotional consequences of their actions. The theory is this could make it easier for them to <a href="http://dx.doi.org/10.1136/jnnp.71.6.727">continue harming others</a>.</p>
<p>But the caveat here is that not all aggression is equal. </p>
<h2>Types of aggression</h2>
<p>There are two types of aggression that represent differing emotional temperatures: cold-calculated and hot-reactive. </p>
<p>Cold-calculated aggression is when force is used to get a desired outcome. For example, a child hitting a peer to steal their candy without provocation. This type of “cold-hearted” aggression is tied to <a href="https://doi.org/10.1017/S0954579423000317">callous-unemotional traits</a>. </p>
<p>Hot-reactive aggression involves harming others in response to provocation. Children who engage in reactive aggression tend to be more “hot-headed.” They have higher <a href="https://doi.org/10.1007/s10802-019-00533-6">emotionality</a>, <a href="https://doi.org/10.1007/s10802-018-0498-3">unregulated anger</a> and tend to assume <a href="https://doi.org/10.1016/j.avb.2018.01.005">hostile intent from others</a>. If a reactive aggressor is bumped by a passerby, for example, they are more likely to assume it was on purpose and hit them in retaliation. </p>
<p>Although these types of aggression seem opposite, someone who is a cold-calculated aggressor in one situation can also be a hot-reactive aggressor in another. The <a href="https://doi.org/10.1007/s10802-021-00813-0">type of aggression</a> a child uses the most results in them being categorized as one or the other.</p>
<p>Until now, it was unclear how children’s abilities to read facial expressions might differ between these “hot” and “cold” types of aggression.</p>
<h2>Difficulty recognizing emotions</h2>
<p>Our <a href="https://doi.org/10.1017/S0954579423000342">recently published paper</a> assessed two diverse samples of children — one of 300 children, the other of 374.</p>
<p>Children were shown pictures of faces that expressed differing intensities of sadness, anger, fear and happiness in a random order. They were asked to identify which emotion was expressed or whether no emotion was present. We considered caregivers’ education level, child age and child gender in our analyses.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/522838/original/file-20230425-2111-xw5dl3.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Two rows of five faces of a woman's face gradually changing expression from mildly sad to very sad. In the first row, the first three faces have 'Neutral' written across them and the word 'Insensitivity' written across the top. The next two faces have 'Sad' written across them. In the second row, the first two faces have 'Anger' written across them and 'Bias' written across the top. The last three faces have 'Sad' written across them." src="https://images.theconversation.com/files/522838/original/file-20230425-2111-xw5dl3.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/522838/original/file-20230425-2111-xw5dl3.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=322&fit=crop&dpr=1 600w, https://images.theconversation.com/files/522838/original/file-20230425-2111-xw5dl3.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=322&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/522838/original/file-20230425-2111-xw5dl3.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=322&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/522838/original/file-20230425-2111-xw5dl3.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=405&fit=crop&dpr=1 754w, https://images.theconversation.com/files/522838/original/file-20230425-2111-xw5dl3.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=405&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/522838/original/file-20230425-2111-xw5dl3.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=405&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Children were shown pictures of faces that expressed differing emotions in a random order. Their ability to recognize a particular emotion was determined by the number of faces they identified correctly.</span>
<span class="attribution"><span class="source">(Cambridge University Press)</span></span>
</figcaption>
</figure>
<p>We found that blindness to others’ anger, fear and sadness was consistently related to using cold-calculated aggression. In other words, children who have difficulty understanding that they upset someone are more likely to harm others to get what they want.</p>
<p>Interestingly, we found that the way children misrecognized angry expressions mattered. Cold-calculated aggression was tied to anger insensitivity. In other words, thinking angry expressions looked emotionless rather than another emotion. </p>
<p>This implies that children who harm others to get what they want aren’t as sensitive to social threats in their environment. This would allow them to remain calm in potentially dangerous situations. </p>
<p>Children who show more callous-unemotional traits and behavioural problems tend to be <a href="https://doi.org/10.1111/j.1469-7610.2011.02397.x">more fearless and less deterred by punishment</a>, perhaps as a consequence of being more blind to threats.</p>
<p>We predicted that hot-reactive aggression would link to seeing anger in faces, regardless of whether the faces were actually angry. But surprisingly, that isn’t what we found.</p>
<p>Instead, thinking negative expressions looked happy was consistently linked to more hot-reactive aggression, but only in early childhood. </p>
<p>Youth who engage in more hot-reactive aggression have been reported to experience <a href="https://doi.org/10.1007/s10802-019-00533-6">lower happiness on a daily basis</a>, but are happier than their peers in response to positive events. So, perhaps young reactive aggressors are particularly sensitive to rewarding emotions. This may lead them to see happiness when it isn’t there. </p>
<p>Trouble figuring out the valence of an emotion (mistaking negative for positive emotions) could also be causing social blunders that result in conflict. Think about it: if you believe your friend is feeling happy, you have the green light to keep teasing or joking with them. But, if they are actually upset, this could stir up some serious friction. </p>
<p>This novel, unexpected link still needs to be teased apart in further research for us to understand what exactly is happening here.</p>
<h2>What causes aggression in children?</h2>
<p>Our study was correlational, meaning we can’t say for sure whether reduced emotion recognition causes aggression in children — only that these two things seem to be related.</p>
<p>However, a <a href="https://doi.org/10.1016/j.psychres.2012.04.033">2012 study does provide some support</a> for a causal link. Researchers found that improving emotion recognition in callous-unemotional youth through training reduced behavioural problems and increased empathy for others’ feelings, when compared to treatment-as-usual. This means that when callous youth were helped to identify how others feel, some of their behavioural issues resolved.</p>
<p>In our study, children’s ability to recognize emotions explained five per cent or less of their aggression, depending on their age. So, targeting this social skill alone is likely not sufficient to resolve serious aggression. </p>
<p>Addressing systemic causes of violence (e.g., <a href="https://doi.org/10.1016/j.jcrimjus.2016.02.011">poverty</a>) and investing in <a href="https://doi.org/10.1007/s10567-014-0167-1">tailored</a> <a href="https://doi.org/10.1257/pol.6.4.135">early interventions</a> that target multiple areas of child development and family well-being are necessary for promoting meaningful changes in children’s aggression.</p>
<p class="fine-print"><em><span>Erinn Acland has previously received funding from the Quebec Network on Suicide, Mood Disorders and Related Disorders, NSERC, and the University of Toronto Centre for the Study of Pain.</span></em></p><p class="fine-print"><em><span>Joanna Peplak has previously received funding from the Social Sciences and Humanities Research Council of Canada.</span></em></p>Faces hold crucial clues on what others are thinking and feeling. So, does missing that key social information impact children’s unkind behaviour?Erinn Acland, Postdoctoral Fellow, Developmental Psychology, Université de MontréalJoanna Peplak, Postdoctoral Scholar, University of California, IrvineLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2046072023-05-02T12:13:12Z2023-05-02T12:13:12ZThe thinking error that makes people susceptible to climate change denial<figure><img src="https://images.theconversation.com/files/523279/original/file-20230427-24-j9qvgl.jpg?ixlib=rb-1.1.0&rect=422%2C15%2C4418%2C2783&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Expecting black-and-white answers can make it hard to see the truth.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/gears-in-the-mind-royalty-free-illustration/892833704">bubaone via Getty Images</a></span></figcaption></figure><p>Cold spells often bring climate change deniers out in force on social media, with hashtags like <a href="https://news.yahoo.com/nasa-yes-its-freezing-cold-no-that-doesnt-mean-climate-change-is-a-hoax-182933369.html">#ClimateHoax and #ClimateScam</a>. Former President Donald Trump often chimes in, <a href="https://www.washingtonpost.com/climate-environment/2019/01/29/trump-always-dismisses-climate-change-when-its-cold-not-so-fast-experts-say/">repeatedly claiming</a> that each cold snap disproves the existence of global warming.</p>
<p>From a scientific standpoint, these claims of disproof are absurd. Fluctuations in the weather don’t refute clear <a href="https://www.ncei.noaa.gov/access/monitoring/monthly-report/global/202301">long-term trends in the climate</a>. </p>
<p>Yet many people believe these claims, and the political result has been reduced willingness to take action to mitigate climate change.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/3E0a_60PMR8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Sen. James Inhofe brought a snowball to the Senate floor in February 2015 to argue that because it was cold enough to snow in Washington, D.C., climate change wasn’t real. That year became the hottest on record and has since been surpassed.</span></figcaption>
</figure>
<p>Why are so many people susceptible to this type of disinformation? <a href="https://psychsciences.case.edu/people/other-faculty/">My field</a>, psychology, can help explain – and help people avoid being misled.</p>
<h2>The allure of black-and-white thinking</h2>
<p>Close examination of the arguments made by climate change deniers reveals the same mistake made over and over again. That mistake is the cognitive error known as black-and-white thinking, also called dichotomous and all-or-none thinking. As I explain in my book “<a href="https://www.amazon.com/Finding-Goldilocks-Creating-Personal-Relationships/dp/B08M8DS76S">Finding Goldilocks</a>,” black-and-white thinking is a source of dysfunction in mental health, relationships – and politics.</p>
<p>People are often susceptible to it because in many areas of life, dichotomous thinking does something helpful: It simplifies the world.</p>
<p>Binaries are easy to handle because there are only two possibilities to consider. When people face a spectrum of possibilities and nuance, they have to exert more mental effort. But when that spectrum is polarized into pairs of opposites, choices are clear and dramatic.</p>
<figure class="align-center ">
<img alt="Image of a person showing arrows pointing in opposite directions the person might take." src="https://images.theconversation.com/files/523335/original/file-20230427-30-ykiwpt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/523335/original/file-20230427-30-ykiwpt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/523335/original/file-20230427-30-ykiwpt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/523335/original/file-20230427-30-ykiwpt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/523335/original/file-20230427-30-ykiwpt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/523335/original/file-20230427-30-ykiwpt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/523335/original/file-20230427-30-ykiwpt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Most things don’t fall neatly into only two choices.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/standing-man-with-two-choices-royalty-free-image/155131774">eyetoeyePIX via Getty Images</a></span>
</figcaption>
</figure>
<p>This mental labor-saving device is practical in many everyday situations, but it is a poor tool for understanding complicated realities – and the climate is complicated.</p>
<p>Sometimes, people divide the spectrum in asymmetric ways, with one side much larger than the other. For example, perfectionists often categorize their work as either perfect or unsatisfactory, so even good and very good outcomes are <a href="https://www.guilford.com/books/Cognitive-Behavioral-Treatment-of-Perfectionism/Egan-Wade-Shafran-Antony/9781462527649/authors">lumped together with poor ones</a> in the unsatisfactory category. In dichotomous thinking like this, a single exception can tip a person’s view to one side. It’s like a pass/fail grading system in which 100% earns a pass and everything else gets an F.</p>
<p>With a grading system like this, it’s not surprising that opponents of climate action have found ways to reject global warming research, despite the overwhelming evidence.</p>
<p>Here’s how they do it:</p>
<h2>The all-or-nothing problem</h2>
<p>Climate change deniers simplify the spectrum of possible scientific consensus into two categories: 100% agreement or no consensus at all. If it’s not one, it’s the other.</p>
<p>A 2021 review of thousands of climate science papers and conference proceedings concluded that over 99% of studies have found that burning <a href="https://doi.org/10.1088/1748-9326/ac2966">fossil fuels warms the planet</a>. That’s not good enough for some skeptics. If they find one contrarian scientist somewhere, they categorize the idea of human-caused global warming as controversial and <a href="https://e360.yale.edu/features/freeman_dyson_takes_on_the_climate_establishment">conclude that there is no basis for action</a>.</p>
<p>Powerful economic interests are at work here: The fossil fuel industry has funded disinformation campaigns for years to <a href="https://news.harvard.edu/gazette/story/2021/09/oil-companies-discourage-climate-action-study-says">create this kind of doubt about climate change</a>, despite <a href="https://theconversation.com/what-big-oil-knew-about-climate-change-in-its-own-words-170642">knowing that their products cause it and the consequences</a>. Members of Congress have <a href="https://www.eenews.net/articles/trumps-climate-denial-shapes-house-gop-backbench/">used that disinformation</a> to block or weaken federal policies that could slow climate change.</p>
<h2>Expecting a straight line in a variable world</h2>
<p>In another example of black-and-white thinking, deniers argue that if global temperatures are not increasing at a perfectly consistent rate, there is no such thing as global warming. </p>
<p>However, complex variables never change in a uniform way; they wiggle up and down in the short term even when exhibiting long-term trends. Most business data, such as revenues, profits and stock prices, do this too, with short-term fluctuations contained in long-term trends.</p>
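<p>A toy series makes the point concrete (the numbers are made up purely for illustration): a short window can slope downward even when the whole record slopes clearly upward.</p>

```python
# A deterministic "trend plus wiggle" series: value = t + alternating +/-10.
series = [t + 10 * ((-1) ** t) for t in range(40)]

short_change = series[5] - series[4]   # one step of the record: negative
long_change = series[-1] - series[0]   # the whole record: positive

print(short_change, long_change)
```

<p>Judging this series by the one negative step is the cold-snap mistake; the long-run slope is what matters.</p>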
<figure class="align-center ">
<img alt="Charts showing Apple's changing stock price and global temperatures over time. Both have a saw-tooth pattern." src="https://images.theconversation.com/files/523304/original/file-20230427-18-w7d3zk.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/523304/original/file-20230427-18-w7d3zk.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=437&fit=crop&dpr=1 600w, https://images.theconversation.com/files/523304/original/file-20230427-18-w7d3zk.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=437&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/523304/original/file-20230427-18-w7d3zk.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=437&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/523304/original/file-20230427-18-w7d3zk.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=550&fit=crop&dpr=1 754w, https://images.theconversation.com/files/523304/original/file-20230427-18-w7d3zk.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=550&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/523304/original/file-20230427-18-w7d3zk.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=550&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">These two graphs have the same form: a long-term trend of major increase within which there are short-term fluctuations.</span>
<span class="attribution"><a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Mistaking a cold snap for disproof of climate change is like mistaking <a href="https://www.macrotrends.net/stocks/charts/AAPL/apple/market-value">a bad month for Apple stock</a> for proof that Apple isn’t a good long-term investment. This error results from homing in on a tiny slice of the graph and ignoring the rest.</p>
<h2>Failing to examine the gray area</h2>
<p>Climate change deniers also mistakenly cite correlations below 100% as evidence against human-caused global warming. They triumphantly point out that sunspots and volcanic eruptions also affect the climate, even though evidence shows both have <a href="https://nca2018.globalchange.gov/chapter/2/">very little influence on long-term temperature rise</a> in comparison to greenhouse gas emissions.</p>
<p>In essence, deniers argue that if fossil fuel burning is not all-important, it’s unimportant. They miss the gray area in between: Greenhouse gases are indeed just one factor warming the planet, but they’re the most important one and the factor humans can influence.</p>
<figure class="align-center ">
<img alt="Charts showing impact of different forces on temperature. Natural sources have little variation, but the upward swing of temperatures corresponds closely with rising greenhouse gas emissions." src="https://images.theconversation.com/files/523071/original/file-20230426-1510-itdelq.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/523071/original/file-20230426-1510-itdelq.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=738&fit=crop&dpr=1 600w, https://images.theconversation.com/files/523071/original/file-20230426-1510-itdelq.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=738&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/523071/original/file-20230426-1510-itdelq.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=738&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/523071/original/file-20230426-1510-itdelq.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=927&fit=crop&dpr=1 754w, https://images.theconversation.com/files/523071/original/file-20230426-1510-itdelq.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=927&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/523071/original/file-20230426-1510-itdelq.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=927&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Influences on global temperature over time.</span>
<span class="attribution"><a class="source" href="https://nca2018.globalchange.gov/chapter/2/">4th National Climate Assessment</a></span>
</figcaption>
</figure>
<h2>‘The climate has always been changing’ – but not like this</h2>
<p>As increases in global temperatures have become obvious, some climate change skeptics have switched from denying them to reframing them.</p>
<p>Their oft-repeated line, “The climate has always been changing,” typically delivered with an air of patient wisdom, is based on a striking lack of knowledge about the <a href="https://climate.nasa.gov/evidence/">evidence from climate research</a>.</p>
<p>Their reasoning is based on an invalid binary: Either the climate is changing or it’s not, and since it’s always been changing, there is nothing new here and no cause for concern.</p>
<p>However, the current warming is unlike <a href="https://www.science.org/doi/abs/10.1126/science.1228026">anything humans have ever seen</a>, and intense warming events in the distant past were planetwide <a href="https://www.washington.edu/news/2018/12/06/biggest-extinction-in-earths-history-caused-by-global-warming-leaving-ocean-animals-gasping-for-breath/">disasters that caused massive extinctions</a> – something we do not want to repeat.</p>
<p>As humanity faces the challenge of global warming, we need to use all our cognitive resources. Recognizing the thinking error at the root of climate change denial could disarm objections to climate research and make science the basis of our efforts to preserve a hospitable environment for our future.</p>
<p class="fine-print"><em><span>Jeremy P. Shapiro does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A psychologist explains how opponents of climate policies use a common thinking error to manipulate the public – and why people are so susceptible.Jeremy P. Shapiro, Adjunct Assistant Professor of Psychological Sciences, Case Western Reserve UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2029602023-04-24T20:05:40Z2023-04-24T20:05:40Z3 sales tactics rife in the real estate industry, and why they work<figure><img src="https://images.theconversation.com/files/520129/original/file-20230411-14-cp2zm8.jpg?ixlib=rb-1.1.0&rect=0%2C907%2C5615%2C2824&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Buying a home is likely to be the biggest financial transaction you will ever make, and you’re at a distinct disadvantage. You’re an amateur up against professionals – real estate agents – versed in psychological tricks to get you excited about owning a property and paying more than you planned.</p>
<p>These tricks start with comparatively simple things, such as making rooms look bigger in adverts by using wide-angle photography. They extend all the way to the point of sale. </p>
<p>None of these tactics necessarily involve outright lying – there are laws against false and misleading conduct. But they are manipulative, exploiting the fact that humans are emotional beings with many “cognitive biases” – perceptions of reality that are more emotional than rational.</p>
<p>The three most common tactics come down to manipulating your confidence in your own decisions. Close to <a href="https://www.worldscientific.com/doi/abs/10.1142/S0217590816500156">80 studies</a> suggest overconfidence is one of the most significant cognitive biases influencing behaviour in the real estate market.</p>
<h2>1. Underquote, entice the bargain hunters</h2>
<p>You see a property in your price range that’s everything you want. You call the agent, inspect the property, then prepare for the auction. It sells for $200,000 more. </p>
<p>Underquoting involves deliberately advertising a property significantly lower than its likely sales price. While the prevalence of the practice is disputed, with industry representatives saying most agents do the right thing, <a href="https://www.theage.com.au/property/news/new-3-8-million-crackdown-on-underquoting-by-victorian-real-estate-agents-20220914-p5bhzq.html">anecdotal evidence</a> points to underquoting being very common. </p>
<p>Underquoting is effective because it attracts more interested buyers and increases the number and intensity of bidding. It exploits two of the most ubiquitous cognitive biases – herd behaviour and irrational exuberance. </p>
<p>More interest doesn’t just increase competition. A real estate agent will communicate that interest to us, confirming our desire in the property is justified. </p>
<p>This tendency to “follow the herd” and imitate others, as US economist Robert Shiller noted in an influential <a href="https://www.jstor.org/stable/2117915">1995 paper</a>, is built on the assumption others have information that justifies their actions. </p>
<p>This helps explain pretty much every stockmarket bubble since <a href="https://theconversation.com/tulip-mania-the-classic-story-of-a-dutch-financial-bubble-is-mostly-wrong-91413">tulipmania in the 17th century</a>, including the <a href="https://lsecentralbanking.medium.com/how-did-herd-behaviour-contribute-to-the-global-financial-crisis-3b0024a4755e">Global Financial Crisis of 2007-8</a> and <a href="https://www.sciencedirect.com/science/article/abs/pii/S1544612318303647">speculation on cryptocurrency</a>. We are emotionally swayed by the decisions of others, assuming their decisions are rational, even when they are not. This is fertile ground for our own decisions to be manipulated.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/from-tulips-and-scrips-to-bitcoin-and-meme-stocks-how-the-act-of-speculating-became-a-financial-mania-158406">From tulips and scrips to bitcoin and meme stocks – how the act of speculating became a financial mania</a>
</strong>
</em>
</p>
<hr>
<h2>2. Hide reality, inflate expectations</h2>
<p>Real estate agents will generally favour auctions to extract the <a href="https://www.domain.com.au/news/selling-at-auction-in-melbourne-earns-vendors-tens-of-thousands-in-extra-cash-1072565/">maximum sales price</a>, for the reasons outlined above and the prospect of <a href="https://www.researchgate.net/publication/220505543_Understanding_auction_fever_A_framework_for_emotional_bidding">auction fever</a> – when carefully decided limits are forgotten in the thrill of the moment. </p>
<p>But that’s not always the case. In a soft market with few buyers, agents may instead opt for a private sale, sometimes called a “<a href="https://attwoodmarshall.com.au/the-silent-auction/">silent auction</a>”. The goal here is to cause you to overestimate the degree of competition and thus make a bigger offer. </p>
<p>An agent might encourage this perception by instead supplying you with information from previous public auctions of similar properties that favours their preferred narrative.</p>
<p>The value of hiding information also explains why you may come across so many sold listings with <a href="https://www.smh.com.au/property/news/should-you-be-able-to-know-how-much-your-neighbours-sold-their-house-for-20220223-p59z2t.html">labels</a> such as “price not disclosed” or “price withheld.” The reason for this may well be that the property sold for less than hoped.</p>
<p>Hiding information the agent doesn’t want you to think about depends principally on exploiting our cognitive bias towards <a href="https://www.sciencedirect.com/topics/psychology/overconfidence">overconfidence</a> – assuming we are smarter, more knowledgeable or better skilled than we actually are.</p>
<p>In lieu of that negative information, you are more likely to focus on the available information – particularly if it suits what you want to believe. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/when-your-house-has-a-disturbing-history-what-should-buyers-be-told-about-its-past-132766">When your house has a (disturbing) history, what should buyers be told about its ‘past’?</a>
</strong>
</em>
</p>
<hr>
<h2>3. Talk up nominal gains</h2>
<p>You may have heard the <a href="https://www.smh.com.au/property/news/do-house-prices-really-double-every-10-years-20211203-p59eif.html">old saying</a> that property values double every 10 years. Stressing what a property is likely to be worth in a decade <a href="https://www.realestate.com.au/news/suburbs-you-shouldve-bought-a-home-in-10-years-ago-and-how-much-your-area-has-grown/">based on what it was worth a decade ago</a> can be a powerful motivator to bid more.</p>
<p>As Robert Shiller noted in his 2013 book <a href="https://press.princeton.edu/books/paperback/9780691156323/the-subprime-solution">The Subprime Solution</a> (about the property-buying mania that led to the Global Financial Crisis), homes are such significant investments that we tend to recall their prices from the distant past (unlike, say, a loaf of bread or a bottle of milk).</p>
<p>This tendency results in an unconscious focus on nominal values rather than <a href="https://www.fool.com/investing/general/2012/04/12/the-illusion-of-housing-as-a-great-investment.aspx">real (inflation-adjusted) values</a>. This cognitive bias is known as the <a href="https://www.emerald.com/insight/content/doi/10.1108/14635789810212931/full/html">money illusion</a>, a mental miscalculation that may increase your willingness to pay more for the property. </p>
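The gap between nominal and real growth can be made concrete with a quick calculation. The following sketch is not from the article; the 3% inflation rate is a hypothetical assumption, chosen only to illustrate how the money illusion works when a property "doubles in 10 years" in nominal terms.

```python
# Illustration of the money illusion (hypothetical figures).
# If a property's nominal price doubles over 10 years, the implied
# annual nominal growth rate follows from 2 = (1 + g)^10.
nominal_growth = 2.0 ** (1 / 10) - 1   # roughly 7.2% per year

# Assume 3% annual inflation (a made-up figure for this example).
inflation = 0.03

# Real (inflation-adjusted) annual growth via the Fisher relation:
# (1 + real) = (1 + nominal) / (1 + inflation)
real_growth = (1 + nominal_growth) / (1 + inflation) - 1

# Over the same 10 years, the property's real value multiplies by:
real_multiple = (1 + real_growth) ** 10

print(f"Nominal annual growth: {nominal_growth:.2%}")
print(f"Real annual growth:    {real_growth:.2%}")
print(f"Real 10-year multiple: {real_multiple:.2f}x")  # about 1.49x, not 2x
```

Under these assumed numbers, the "doubling" shrinks to roughly a 1.5x gain in purchasing power, which is the gap the money illusion hides.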
<h2>In conclusion…</h2>
<p>There’s a case for laws to <a href="https://www.realestate.com.au/news/push-to-end-home-sale-price-confusion-in-victorian-property-industry-review/">increase transparency</a> and the accuracy of information available in the real estate market. </p>
<p>But in the meantime, if you’re buying a home, it’s wise to acknowledge your limitations. Do your homework, seek out independent advice and even consider hiring a professional advocate with the knowledge and experience to balance emotional and rational thoughts.</p><img src="https://counter.theconversation.com/content/202960/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Peyman Khezr does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Overconfidence and other cognitive biases help to drive up real estate prices. Here are three techniques used by real estate agents to exploit those biases.Peyman Khezr, Senior Lecturer in Economics and Director of Behavioural Business Lab, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2034362023-04-20T15:08:33Z2023-04-20T15:08:33ZHow the brain stops us learning from our mistakes – and what to do about it<figure><img src="https://images.theconversation.com/files/521261/original/file-20230417-24-ozwhcp.jpg?ixlib=rb-1.1.0&rect=46%2C65%2C6165%2C4035&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/stressed-businessman-worked-laptop-computer-having-1663124812">Thinnapob Proongsak/Shutterstock</a></span></figcaption></figure><p>You learn from your mistakes. At least, most of us have been told so. But science shows that we often fail to learn from past errors. Instead, we are likely to keep repeating the same mistakes. </p>
<p>What do I mean by mistakes here? I think we would all agree that we quickly learn that if we put our hand on a hot stove, for instance, we get burned, and so are unlikely to make this mistake again. That’s because our brains create a threat response to physically painful stimuli based on past experiences. But when it comes to thinking, behavioural patterns and decision making, we often repeat mistakes – such as being late for appointments, leaving tasks until the last moment or judging people based on first impressions. </p>
<p>The reason can be found in the way our brain processes information and creates templates that we refer to again and again. These templates are essentially shortcuts, which help us make decisions in the real world. But these shortcuts, known as heuristics, can also make us repeat our errors. </p>
<p>As I discuss in my book <a href="https://www.drpragyaagarwal.co.uk/sway-press">Sway: Unravelling Unconscious Bias</a>, humans are not naturally rational, even though we would like to believe that we are. Information overload is exhausting and confusing, so we filter out the noise. </p>
<hr>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/522168/original/file-20230420-28-pu1618.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/522168/original/file-20230420-28-pu1618.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=601&fit=crop&dpr=1 600w, https://images.theconversation.com/files/522168/original/file-20230420-28-pu1618.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=601&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/522168/original/file-20230420-28-pu1618.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=601&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/522168/original/file-20230420-28-pu1618.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=755&fit=crop&dpr=1 754w, https://images.theconversation.com/files/522168/original/file-20230420-28-pu1618.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=755&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/522168/original/file-20230420-28-pu1618.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=755&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><br><em>This article is run in partnership with <a href="https://howthelightgetsin.org/festivals/hay?utm_source=Media+Partner&utm_medium=Article+preview&utm_campaign=Hay+2023&utm_id=Conversation">HowTheLightGetsIn</a>, the world’s largest philosophy and music festival, Hay-on-Wye 26-29 May. Pragya Agarwal and Anders Sandberg will be talking to editors Miriam Frankel and Matt Warren about how our understanding of cognitive biases can help us correct some of our mistakes. Tickets <a href="https://howthelightgetsin.org/festivals/hay/festival-passes?utm_source=Media+Partner&utm_medium=Article+preview&utm_campaign=Hay+2023&utm_id=Conversation">here</a>: 20% off with code CONVERSATION23</em></p>
<hr>
<p>We only see parts of the world. We tend to notice things that are repeating, whether there are any patterns or not, and we tend to preserve memory by generalising and resorting to type. We also draw conclusions from sparse data and use cognitive shortcuts to create a version of reality that we implicitly want to believe in. This creates a reduced stream of incoming information, which helps us connect dots and fill in gaps with stuff we already know. </p>
<p>Ultimately, our brains are lazy and it takes a lot of cognitive effort to change the script and these shortcuts that we have already built up. And so we are more likely to fall back on the same patterns of behaviours and actions, even when we are conscious of repeating our mistakes. This is called confirmation bias – our tendency to confirm what we already believe in, rather than shift our mindset to incorporate new information and ideas. </p>
<p>We also often deploy “<a href="https://theconversation.com/is-it-rational-to-trust-your-gut-feelings-a-neuroscientist-explains-95086">gut instinct</a>” – an automatic, subconscious type of thinking that draws on our accumulated past experiences when making judgements and decisions in new situations. </p>
<p>Sometimes we stick with certain behaviour patterns, and repeat our mistakes because of an “<a href="https://www.sciencedirect.com/science/article/pii/S0014292121002786">ego effect</a>” that compels us to stick with our existing beliefs. We are likely to selectively choose the information structures and feedback that help us protect our egos. </p>
<p>One experiment found that when people were reminded of their successes of the past, they were more likely to <a href="https://www.sciencedirect.com/science/article/abs/pii/S1057740815000728">repeat those successful behaviours</a>. But when they were conscious of or actively made aware of their failures from the past, they were less likely to overturn the pattern of behaviour that led to failure. So people were in fact still likely to repeat that behaviour.</p>
<p>That’s because, when we think of our past failures, we are likely to feel down. And in those moments, we are more likely to indulge in behaviour that makes us feel comfortable and familiar. Even when we think carefully and slowly, our brains have a bias towards the information and templates we had used in the past, regardless of whether these resulted in errors. This is called the <a href="https://www.psychologytoday.com/gb/blog/mind-my-money/200807/familiarity-bias-part-i-what-is-it">familiarity bias</a>.</p>
<p>We can learn from mistakes though. In one experiment, monkeys and humans had to watch noisy, moving dots on a screen and judge their net direction of movement. The researchers found that both slowed down after an error. The larger the error, the longer the post-error slowing, showing more information was being accumulated. However, the quality of this information <a href="https://www.eurekalert.org/news-releases/809286">was low</a>. Our cognitive shortcuts can force us to override any new information that could help prevent repeating mistakes. </p>
<p>In fact, if we make mistakes while performing a certain task, “frequency bias” makes us likely to repeat them whenever we do the task again. Simplistically speaking, our brains start assuming that the errors we’ve previously made are the correct way to perform a task – creating a habitual <a href="https://link.springer.com/article/10.3758/pbr.15.1.156">“mistake pathway”</a>. So the more we repeat the same tasks, the more likely we are to traverse the mistake pathway, until it becomes so deeply embedded that it becomes a set of permanent cognitive shortcuts in our brains. </p>
<h2>Cognitive control</h2>
<p>It sounds bleak, so what can be done?</p>
<p>We do have a mental ability that can override heuristic shortcuts, known as “cognitive control”. And there are some <a href="https://www.cell.com/neuron/fulltext/S0896-6273(21)00075-1">recent studies in neuroscience with mice</a> that are giving us a better idea of what parts of our brains are involved in that.</p>
<figure class="align-center ">
<img alt="Fail falling from a skateboard." src="https://images.theconversation.com/files/521277/original/file-20230417-26-umtuf6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/521277/original/file-20230417-26-umtuf6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/521277/original/file-20230417-26-umtuf6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/521277/original/file-20230417-26-umtuf6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/521277/original/file-20230417-26-umtuf6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/521277/original/file-20230417-26-umtuf6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/521277/original/file-20230417-26-umtuf6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Embrace mistakes.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/fail-falling-skateboard-street-style-arab-1377958739">AS photo family/Shutterstock</a></span>
</figcaption>
</figure>
<p>Researchers have also <a href="https://www.cell.com/neuron/fulltext/S0896-6273(18)31007-9">identified two brain regions</a> with “self-error monitoring neurons” – brain cells which monitor errors. These areas are in the frontal cortex and appear to be part of a sequence of processing steps – from refocusing to learning from our mistakes.</p>
<p>Researchers are exploring whether a better understanding of this could help with development of better treatments and support for Alzheimer’s, for example, as preserved cognitive control is crucial for <a href="https://www.frontiersin.org/articles/10.3389/fnagi.2020.00198/full">wellbeing in later life</a>. </p>
<p>But even if we don’t have a perfect understanding of the brain processes involved in cognitive control and self-correction, there are simpler things we can do. </p>
<p>One is to become more comfortable with making mistakes. We might think that this is the wrong attitude towards failures, but it is in fact a more positive way forward. Our society denigrates failures and mistakes, and consequently we are likely to feel shame for our mistakes, and try and hide them.</p>
<p>The more guilty and ashamed we feel, and the more we try to hide our mistakes from others, the more likely we are to repeat them. When we are not feeling so down about ourselves, we are better at taking in new information that can help us correct our mistakes.</p>
<p>It can also be a good idea to take a break from performing a task that we want to learn how to do better. Acknowledging our failures and pausing to consider them can help us reduce frequency bias, which will make us less likely to repeat our mistakes and reinforce the mistake pathways.</p>
<p><em>HowTheLightGetsIn follows the theme of Error and Renaissance, identifying fundamental errors that we have made in our theories, our organisation of society and in world affairs – and explores new forms of thought and action. More information <a href="https://howthelightgetsin.org/festivals/hay?utm_source=Media+Partner&utm_medium=Article+preview&utm_campaign=Hay+2023&utm_id=Conversation">here</a>. Come and see Conversation editors Miriam Frankel and Matt Warren with special guests Pragya Agarwal, professor of social inequities, Loughborough University, and Anders Sandberg, from the Future of Humanity Institute, Oxford University, talk about how we can overcome cognitive bias to think about the world differently. Hay-on-Wye 26-29 May. 20% discount <a href="https://howthelightgetsin.org/festivals/hay/festival-passes?utm_source=Media+Partner&utm_medium=Article+preview&utm_campaign=Hay+2023&utm_id=Conversation">on tickets</a> using the code CONVERSATION23.</em></p><img src="https://counter.theconversation.com/content/203436/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Pragya Agarwal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Ultimately, our brains are lazy and it takes a lot of cognitive effort to change the script and these shortcuts that we have already built up.Pragya Agarwal, Visiting Professor of Social Inequities and Injustice, Loughborough UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1992522023-02-10T13:51:17Z2023-02-10T13:51:17ZHow video evidence is presented in court can hold sway in cases like the beating death of Tyre Nichols<figure><img src="https://images.theconversation.com/files/509006/original/file-20230208-15-izfcn2.jpg?ixlib=rb-1.1.0&rect=30%2C15%2C5081%2C2858&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Video footage of the fatal beating of Tyre Nichols may be key to any criminal trial.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/CongressPoliceReform/7927280ae6504bbf8ba431ff3332576c/photo?Query=footage%20Tyre%20Nichols&mediaType=photo&sortBy=arrivaldatetime:desc&dateRange=Anytime&totalCount=6&currentItemNo=0">City of Memphis via AP</a></span></figcaption></figure><p>Body camera and surveillance footage depicting the <a href="https://www.nytimes.com/interactive/2023/02/07/us/memphis-officers-tyre-nichols.html">Jan. 7, 2023, fatal beating of Tyre Nichols</a> was key in raising national awareness and prompting protests for police reform. It may now play a crucial part in any prosecution of those accused in his death.</p>
<p>Five Memphis police officers have been <a href="https://www.actionnews5.com/2023/01/27/court-date-set-5-former-mpd-officers-charged-with-murder-tyre-nichols/">charged with murder</a> and are <a href="https://www.actionnews5.com/2023/01/27/court-date-set-5-former-mpd-officers-charged-with-murder-tyre-nichols/">set to appear in court</a> on Feb. 17. Additionally, the U.S. Justice Department has opened <a href="https://www.justice.gov/usao-wdtn/pr/statement-united-states-attorney-kevin-g-ritz">a civil rights investigation</a> into Nichols’ death. </p>
<p>For over a decade, <a href="https://mitpress.mit.edu/9780262542531/seeing-human-rights/">I have studied</a> how video evidence has helped civil rights and human rights claims get recognition and restitution in the U.S. and around the world. As a <a href="https://www.colorado.edu/cmci/people/media-studies/sandra-ristovska">media scholar</a>, I am especially interested in understanding the power and limitation of video evidence inside the courtroom, especially as video is now estimated to form a part of <a href="https://bja.ojp.gov/sites/g/files/xyckuh186/files/media/document/final-video-evidence-primer-for-prosecutors.pdf">four in every five criminal cases</a>. </p>
<p>I have found that video does not provide a unified, objective window onto the truth. Rather, <a href="https://psycnet.apa.org/record/2017-42392-001?doi=1">jurors may perceive the depicted events differently</a> – based, among other factors, on how the video is presented in court. </p>
<h2>How video’s presentation can influence perception</h2>
<p>Video can turn its viewers into witnesses, giving them the impression that they are transported directly to the event in question. Even judges may believe that the opportunity to see a video is equivalent to those in court seeing the real event. In the words of one district judge, it is as if the court had “<a href="https://casetext.com/case/mcdowell-v-sherrer">witnessed with its own eyes</a>.” Yet a growing body of <a href="https://academic.oup.com/book/2869">interdisciplinary research</a> has shown that there are many influences on how people perceive events recorded on video. </p>
<p><a href="https://www.pnas.org/doi/full/10.1073/pnas.1603865113">The speed at which video is played in court</a>, for example, can affect people’s judgments. Videos played in slow motion, compared with normal speed, lead viewers to judge the depicted person’s actions as more intentional. Sports replays are an easy way to understand this point – slowing down events can make a foul in soccer or football seem more egregious. </p>
<p>Additionally, even the type of video people see can change their perception of what it shows. Across <a href="https://doi.org/10.1073/pnas.1805928116">eight different experiments</a>, viewers of body camera footage were less likely to judge the police officer as having acted intentionally than those who watched the same incident captured on a dashboard camera.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/509033/original/file-20230208-29-j7xrit.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man's finger pressing a button on a device placed on a blue police uniform." src="https://images.theconversation.com/files/509033/original/file-20230208-29-j7xrit.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/509033/original/file-20230208-29-j7xrit.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=466&fit=crop&dpr=1 600w, https://images.theconversation.com/files/509033/original/file-20230208-29-j7xrit.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=466&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/509033/original/file-20230208-29-j7xrit.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=466&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/509033/original/file-20230208-29-j7xrit.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=585&fit=crop&dpr=1 754w, https://images.theconversation.com/files/509033/original/file-20230208-29-j7xrit.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=585&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/509033/original/file-20230208-29-j7xrit.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=585&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A police officer starting a body camera recording.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/west-valley-city-patrol-officer-gatrell-starts-a-body-news-photo/464977016?phrase=body%20camera%20police&adppopup=true">George Frey/Getty Images</a></span>
</figcaption>
</figure>
<p>The variations in the perception of intent were driven, in part, by the distinctive camera perspective. A body camera records from the police officer’s point of view, so it is unable to show the officer. On the other hand, a dashboard camera is mounted on a police car, thus it can show the officer’s actions from a wider angle and not necessarily from their viewpoint. </p>
<h2>Confirmation bias</h2>
<p>The discrepancies in perception and the judgments that ensue from the type and presentation of video are significant: They can be highly consequential in a criminal court trial where intent needs to be proved beyond reasonable doubt. </p>
<p>Furthermore, these cognitive biases may be particularly pernicious to people of color within <a href="https://doi.org/10.1111/josi.12355">a legal system that already discriminates against them</a>. The perspective of body cameras, for example, may worsen racial biases in viewers of videos depicting police use of force. <a href="https://doi.org/10.1093/joc/jqab002">A study</a> shows that white viewers perceived dark-skinned civilians more negatively than light-skinned individuals when the body camera made them the subject of primary focus. </p>
<p>A common assumption is that repeated viewing helps people focus on information they may have missed the first time, seemingly helping them better evaluate the depicted event. During a trial, jurors indeed have multiple opportunities to see the same video. </p>
<p>However, <a href="https://doi.org/10.1080/23743603.2022.2026214">an eye-tracking study</a> demonstrates how people engage in visual confirmation bias: Their eyes follow a very similar pattern of visual attention, making them overconfident about their initial perception of the video in question. In other words, multiple viewing opportunities are ultimately unlikely to reduce biases that may already exist. </p>
<p>The proliferation of video is therefore challenging the existing legal practices regarding its presentation and use in court. </p>
<h2>Equal and fair justice in an age of video</h2>
<p><a href="https://bja.ojp.gov/sites/g/files/xyckuh186/files/media/document/final-video-evidence-primer-for-prosecutors.pdf">The Bureau of Justice Assistance</a> at the U.S. Department of Justice estimates that video now appears in about 80% of criminal cases. Yet U.S. courts, from state and federal all the way to the Supreme Court, lack clear guidelines on how video can be used and presented as evidence. </p>
<p>As a result, the U.S. legal system provides substantial discretion in evaluating video evidence by ignoring <a href="https://psycnet.apa.org/record/2014-38574-001">a range of biases</a> that may shape visual perception and judgment in court. </p>
<p>The footage of Tyre Nichols is yet another reminder that video can help people bear witness to traumatic events. However, the way video is presented in court can greatly influence jurors’ perceptions. </p>
<p>As more and more deadly encounters with police officers make their way into criminal and civil courts, I believe the legal system needs mechanisms that can ensure consistency and fairness in the presentation and evaluation of video as evidence.</p><img src="https://counter.theconversation.com/content/199252/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sandra Ristovska is the recipient of a Mellon/ACLS Scholars & Society Fellowship (2021-2023). For her work on video evidence, she also received a Research and Innovation Office (RIO) Seed Grant from the University of Colorado Boulder in 2020-2021. </span></em></p>Jurors can perceive events in a video in different ways – one of which depends on how the evidence is presented in court, a media scholar explains.Sandra Ristovska, Assistant Professor in Media Studies, University of Colorado BoulderLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1972542023-01-06T13:12:34Z2023-01-06T13:12:34ZNew year resolutions: why your brain isn’t wired to stick to them – and what to do instead<figure><img src="https://images.theconversation.com/files/503312/original/file-20230105-14-apdole.jpg?ixlib=rb-1.1.0&rect=100%2C181%2C6559%2C4275&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">You still going?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/lazy-young-woman-sport-equipment-junk-1680287398">New Africa/Shutterstock</a></span></figcaption></figure><p>New year, new resolutions. It is that time once again. A recent survey shows that <a href="https://www.finder.com/uk/new-years-resolution-statistics">almost 58% of the UK population</a> intended to make a new year’s resolution in 2023, which is approximately 30 million adults. More than a quarter of these resolutions <a href="https://discoverhappyhabits.com/new-years-resolution-statistics/">will be about</a> making more money, personal improvement and losing weight. </p>
<p>But will we succeed? Sadly, a survey of over 800 million activities by the app Strava, which tracks people’s physical exercise, predicts most of these resolutions <a href="https://www.inc.com/jeff-haden/a-study-of-800-million-activities-predicts-most-new-years-resolutions-will-be-abandoned-on-january-19-how-you-cancreate-new-habits-that-actually-stick.html">will be abandoned</a> by January 19. </p>
<p>One of the main reasons why promises fail before the end of January is because they are vague. They focus on immeasurable qualities such as being healthier, happier (without defining what that means) or earning more money (without coming up with an amount or plan).</p>
<p>Vague goals do not provide us with sufficient direction. If we do not know exactly where we are going, it is difficult to know which path to take. It is impossible to know how far we have to go to reach our destination, what barriers we will have to overcome and how to prepare for them. </p>
<p>We also often set ourselves unattainable goals because we want to challenge ourselves. There is an <a href="https://www.sciencedirect.com/science/article/abs/pii/S1364661318300202">inherent paradox</a> – dubbed the “effort paradox” – in how much our brains love the idea of effort while in reality finding it uncomfortable. We want to think that we will feel more fulfilled if we challenge ourselves to achieve a difficult goal. </p>
<p>Another reason for this is that we experience a <a href="https://theconversation.com/procrastination-the-cognitive-biases-that-enable-it-and-why-its-sometimes-useful-195845">disconnect from our future selves</a> – we are biased towards the present. That means we find it difficult to imagine the kind of difficulties our future selves will face in trying to achieve these resolutions. </p>
<p>We think of the end point that we want now, in the present, but not the process or journey to get there. With such a narrow focus, it is easy to visualise this end point as closer than it actually is once we start working towards it. </p>
<h2>The lazy brain</h2>
<p>To navigate the world, we form mental shortcuts – creating habits. When these cognitive shortcuts have been hardwired in place, our brains find it easier to act without much conscious effort or control. The longer we have had these habits, the more deeply entrenched the cognitive shortcuts behind them are. </p>
<p>For example, we may unthinkingly reach for the jar of biscuits when we park ourselves in front of the telly at night – it becomes a routine. Or we hit the snooze button when the alarm goes off in the morning. </p>
<p>Our brains are lazy and want to minimise cognitive load – meaning we repeat what we find pleasurable rather than consider many different and new options, which may be more or less pleasurable. It is simply easier to take these shortcuts that don’t offer much resistance or discomfort. That said, <a href="https://www.researchgate.net/publication/316344832_Creature_of_Habit_A_self-report_measure_of_habitual_routines_and_automatic_tendencies_in_everyday_life">some people rely more on habits than others</a> and they may find it harder to break them.</p>
<p>To achieve our resolutions, however, we often need to change these deep-seated habits and alter the neural pathways responsible. But as our brains resist this discomfort, we are tempted to go back to a more comfortable place. That’s one reason why we give up on our resolutions. An aspect of this is known as the status quo bias: we are more likely to stay with the status quo – our existing mindsets – than to persist with changing habits, which takes time and effort. </p>
<p>The more we focus on the goal rather than the incremental steps needed to achieve that goal, the more likely we are to find it difficult to change our mindsets and create the habits needed to achieve it. It becomes a vicious circle because the more we get stressed about something, the more likely we are to fall back into <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5146206/">a place of comfort</a>, with our cognitive shortcuts. </p>
<p>When we engage in habitual behaviour, <a href="https://www.frontiersin.org/articles/10.3389/fnsys.2019.00028/full#:%7E:text=repeated%20instrumental%20learning.-,The%20Dorsolateral%20Striatum%20Plays%20a%20Key%20Role%20in%20Habit%20Formation,which%20receive%20substantial%20cortical%20inputs.">areas at the back of the brain</a>, to do with automatic behaviour, are typically engaged. But to actively alter our neural pathways away from such activation, we need to engage several areas of the brain – including the prefrontal cortex, which is involved in highly complex cognitive tasks. </p>
<p>A <a href="https://www.scientificamerican.com/article/the-neuroscience-of-changing-your-mind/">study using neuro-imaging</a> revealed that altering our behaviour involves coordinated cross-talk between several brain regions, including speedy communication between two specific zones within the prefrontal cortex and another nearby structure called the frontal eye field, an area involved in controlling eye movements and visual awareness. </p>
<p>All of this is far more cognitively taxing for the brain, and so we tend to avoid it.</p>
<h2>Better approaches</h2>
<p>Changing habits requires being aware of the patterns of behaviour that we have learnt over the years and knowing how difficult it is to change them. And that’s impossible if you are blinded by visions of the new, perfect you. But to succeed at changing yourself, you need to know the real you.</p>
<figure class="align-center ">
<img alt="Comical funny unfit retro looking young men training with resistance bands." src="https://images.theconversation.com/files/503311/original/file-20230105-1865-qy0t2l.jpg?ixlib=rb-1.1.0&rect=97%2C22%2C3736%2C2132&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/503311/original/file-20230105-1865-qy0t2l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503311/original/file-20230105-1865-qy0t2l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503311/original/file-20230105-1865-qy0t2l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503311/original/file-20230105-1865-qy0t2l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503311/original/file-20230105-1865-qy0t2l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503311/original/file-20230105-1865-qy0t2l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Accept yourself.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/sports-parody-comical-funny-unfit-retro-1799407402">Fractal Pictures/Shutterstock</a></span>
</figcaption>
</figure>
<p>It is also helpful to set clear, achievable goals – such as devoting an extra hour a week to your favourite hobby, or banning biscuits in the evenings only, perhaps replacing them with a nice herbal tea.</p>
<p>What’s more, we need to appreciate and celebrate the process of achieving our goals. Many of us are more inclined to focus on the negative aspects of the experience, leading to stress and anxiety. But negative emotions demand more attention – <a href="https://www.drpragyaagarwal.co.uk/sway-press">this is called negativity bias</a>. And the more we focus on the negative things in our lives, and the negative aspects of ourselves, the more likely we are to feel down while missing the positive things.</p>
<p>The more we focus on the positive aspects of ourselves, the more likely we are to be able to change our mindsets.</p>
<p>So if you want to change, accept yourself the way you are – and understand why. Though if you do that, you may even find you’d rather stick to the motto “new year, same old me”. There’s nothing wrong with that.</p>
<p class="fine-print"><em><span>Pragya Agarwal does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>We need to understand our brains to achieve true change.Pragya Agarwal, Visiting Professor of Social Inequities and Injustice, Loughborough UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1901222022-09-21T16:24:20Z2022-09-21T16:24:20ZConspiracy theories are dangerous even if they don’t affect behaviour<figure><img src="https://images.theconversation.com/files/484462/original/file-20220914-4854-5i4coy.jpg?ixlib=rb-1.1.0&rect=0%2C104%2C4077%2C2029&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A man holds a QAnon sign outside the White House. Even if most people don't act on their conspiratorial beliefs, such theories can still pose very real dangers.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure>
<p>Much has been made in recent years of politicians like Donald Trump and their <a href="https://doi.org/10.1038/d41586-021-00257-y">use of conspiracy theories</a>. In Canada, a number of conservative politicians have <a href="https://www.thestar.com/politics/political-opinion/2022/06/04/davos-klaus-schwab-the-great-reset-why-pierre-poilievre-and-some-tory-leadership-hopefuls-are-invoking-world-economic-forum-conspiracy-theories.html">voiced support</a> for conspiracy theories. </p>
<p>However, some evidence suggests that those who are most vocal about conspiracy theories <a href="https://press.princeton.edu/books/hardcover/9780691178707/not-born-yesterday">do not necessarily take them seriously</a> — or at the very least, are unlikely to alter their behaviour to accommodate their conspiratorial beliefs. But despite this, conspiracy theories and those who endorse them can pose serious risks to public safety.</p>
<h2>Conspiratorial beliefs</h2>
<p>In <a href="https://press.princeton.edu/books/hardcover/9780691178707/not-born-yesterday">his recent book</a>, cognitive scientist Hugo Mercier suggests that we don’t endorse all of our beliefs in the same way.</p>
<p>Cognitive scientists have <a href="https://doi.org/10.1111/j.1468-0017.1997.tb00062.x">theorized that beliefs come in two forms</a>: intuitive and reflective. </p>
<p><em>Intuitive</em> beliefs are closely linked to our behaviour. <em>Reflective</em> beliefs are higher order beliefs, and are therefore further removed from action. This difference in belief type leads to an interesting consequence: it is possible for us to hold beliefs that do not rationally affect our behaviour.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/484666/original/file-20220914-4859-gb43y3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Cover of Not Born Yesterday by Hugo Mercier. Text with three eyes is red, yellow and green." src="https://images.theconversation.com/files/484666/original/file-20220914-4859-gb43y3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/484666/original/file-20220914-4859-gb43y3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=927&fit=crop&dpr=1 600w, https://images.theconversation.com/files/484666/original/file-20220914-4859-gb43y3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=927&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/484666/original/file-20220914-4859-gb43y3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=927&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/484666/original/file-20220914-4859-gb43y3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1165&fit=crop&dpr=1 754w, https://images.theconversation.com/files/484666/original/file-20220914-4859-gb43y3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1165&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/484666/original/file-20220914-4859-gb43y3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1165&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Cover of Not Born Yesterday: The Science of Who We Trust and What We Believe by Hugo Mercier.</span>
<span class="attribution"><a class="source" href="https://press.princeton.edu/books/hardcover/9780691178707/not-born-yesterday">(Princeton University Press)</a></span>
</figcaption>
</figure>
<p>Related research suggests that <a href="https://doi.org/10.1080/09515089.2017.1291929">beliefs can operate as signals</a>. We might think beliefs are primarily useful since they are action-guiding, but this is not their only function. Beliefs function as useful indicators for social interactions and status. </p>
<p>Some philosophers and psychologists suggest social influence on beliefs is, in part, <a href="https://doi.org/10.1177/1069397103037002003">an evolutionary feature of group cohesion</a>, since false beliefs are often acquired thanks to <a href="https://doi.org/10.1057/978-1-137-57895-2">social incentives, rather than individual deliberation</a>. </p>
<p>When another person tells me which political party they voted for, or their stance on a social issue, I recognize that those beliefs <a href="https://kiej.georgetown.edu/fake-news-partisan-epistemology/">also communicate a set of values</a>. In this way, people can be vocal about their beliefs to vie for social belonging. At the same time, these interactions inform how we acquire our beliefs. This means I am likely to come to endorse the beliefs espoused in my social community.</p>
<h2>Pizzagate: the impact of conspiracy</h2>
<p>Mercier explains some of these implications through the <a href="https://www.bbc.com/news/blogs-trending-38156985">“Pizzagate” conspiracy theory</a>. Pizzagate gained popularity during the 2016 U.S. presidential election after one of Hillary Clinton’s campaign chairmen was hacked in an email phishing attack. Rumours quickly gained traction on right-wing platforms claiming the leak revealed Clinton was a pedophile involved in a sex-trafficking ring run out of the basement of a Washington D.C. pizzeria. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/buying-into-conspiracy-theories-can-be-exciting-thats-what-makes-them-dangerous-184623">Buying into conspiracy theories can be exciting – that’s what makes them dangerous</a>
</strong>
</em>
</p>
<hr>
<p>After learning about the conspiracy, and coming to believe it was true, a man named <a href="https://www.rollingstone.com/feature/anatomy-of-a-fake-news-scandal-125877/">Edgar Maddison Welch made plans to free those being held in the pizzeria</a>. He drove to Washington, armed with assault weapons, and threatened staff at the restaurant to let the victims go. There were, of course, no victims to be found and Welch was later arrested. </p>
<p>Welch believed the Pizzagate allegations were true, and his behaviour was directly affected. He did everything in his power to free the victims he believed to be in the basement of the pizzeria. This is an example of intuitive beliefs that tangibly affected a person’s behaviour. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/484464/original/file-20220914-4313-jz27dp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man sits on a bench in a park holding a sign that reads: fake news decide for yourself Pizzagate." src="https://images.theconversation.com/files/484464/original/file-20220914-4313-jz27dp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/484464/original/file-20220914-4313-jz27dp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/484464/original/file-20220914-4313-jz27dp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/484464/original/file-20220914-4313-jz27dp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/484464/original/file-20220914-4313-jz27dp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/484464/original/file-20220914-4313-jz27dp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/484464/original/file-20220914-4313-jz27dp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A supporter of ‘Pizzagate’ holds up a sign supporting the conspiracy theory. Conspiratorial beliefs are often acquired thanks to social incentives rather than individual deliberation.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>What might a reflective endorsement of the Pizzagate conspiracy look like? <a href="https://www.youtube.com/watch?v=kxasjn1YFSk&t=605s">Mercier draws attention to a negative online review of the pizzeria</a>. The reviewer claimed the pizza was bad and also endorsed the conspiracy theory. They also mentioned they had brought their children to the restaurant.</p>
<p>Despite believing the conspiracy, that belief did not rationally impact their behaviour. A rational response to hearing a pizzeria is hosting a sex-trafficking ring would not be to bring your own children there. This suggests the reviewer endorsed the conspiracy theory reflectively: their professed belief did not impact their behaviour. </p>
<p>This tells us that human beings are not as gullible as some psychological studies might suggest. </p>
<h2>Conspiracy theories still dangerous</h2>
<p>While this tells us something interesting about human reasoning, I think there is a dangerous implication that looms. Because conspiracy theories do not necessarily lead to massive behavioural changes among those who endorse them, we might want to conclude that they are not as dangerous as they are made out to be. We should resist this impulse. </p>
<p>In the case of Pizzagate, <a href="https://www.publicpolicypolling.com/polls/trump-remains-unpopular-voters-prefer-obama-on-scotus-pick/">there were supposedly millions of people who endorsed the conspiracy</a>, but only one who stormed the pizzeria. Even if conspiracies are not tangibly affecting behaviour, we still have reason to combat them. </p>
<p>When conspiracy theories spread racist, sexist, and other problematic ideas, the perpetuation of the theory (even if there is no behaviour attached) amounts to hate speech. Conspiracy theories can target marginalized groups, even when those who profess conspiratorial beliefs are not themselves committing hate crimes.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/satanic-worship-sodomy-and-even-murder-how-stranger-things-revived-the-american-satanic-panic-of-the-80s-186292">'Satanic worship, sodomy and even murder': how Stranger Things revived the American satanic panic of the 80s</a>
</strong>
</em>
</p>
<hr>
<p>The consistent articulation of conspiracy theories also has a <a href="https://doi.org/10.1007/s12144-021-01898-y">negative impact on the public’s level of trust in experts</a>. This can lead to safety issues, including under-vaccination and climate science denial, leaving people unable to recognize what is in their best interest. </p>
<p>Imagine if the vast majority of people passively endorse an anti-medicine conspiracy theory. They become vocal about their belief, since it acts as a signal of their social allegiances. But when they become sick or need medical attention, they ignore their conspiratorial beliefs and listen to medical professionals anyway. We might think this isn’t a big problem. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/484468/original/file-20220914-5031-b02o5b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A Trump supporter waves a MAGA flag in front of the U.S. Capitol building." src="https://images.theconversation.com/files/484468/original/file-20220914-5031-b02o5b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/484468/original/file-20220914-5031-b02o5b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/484468/original/file-20220914-5031-b02o5b.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/484468/original/file-20220914-5031-b02o5b.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/484468/original/file-20220914-5031-b02o5b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/484468/original/file-20220914-5031-b02o5b.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/484468/original/file-20220914-5031-b02o5b.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Many of those who stormed the U.S Capitol building on Jan. 6, 2021 believed in conspiracy theories endorsed by former U.S. president Donald Trump.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>But now suppose there is a subset of this population who intuitively endorse the conspiracy and remain steadfast in their anti-medicine beliefs. We can see how these people <a href="https://doi.org/10.1177/2333794X19862949">can put the rest of the population at risk</a> when real threats to public health arise. More than this, those signalling support are nonetheless still doing harm. If a conspiracy becomes widespread, people’s beliefs are consistently psychologically reinforced, leading to an <a href="https://doi.org/10.1017/epi.2018.32">inflated sense of confidence about their mistaken beliefs and behaviour</a>. </p>
<p>All this indicates that real dangers come along with endorsing conspiracy theories — even when it doesn’t involve a large number of people allowing conspiracies to guide their behaviour. Conspiracy theories and those who endorse them can pose serious risks. Signalling conspiratorial beliefs can be reckless. We should be invested in acquiring true beliefs.</p>
<p class="fine-print"><em><span>Lara Millman receives funding from the Social Sciences and Humanities Research Council of Canada.</span></em></p>Many of those who believe conspiracy theories do not necessarily act on those beliefs. Nevertheless, conspiracy theories can still spread dangerous misinformation that can cause harm.Lara Millman, PhD Student, Philosophy, Dalhousie UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1865302022-08-11T12:14:03Z2022-08-11T12:14:03ZCognitive biases and brain biology help explain why facts don’t change minds<figure><img src="https://images.theconversation.com/files/478603/original/file-20220810-15-x8t51l.jpg?ixlib=rb-1.1.0&rect=179%2C300%2C4503%2C3436&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It can feel safer to block out contradictory information that challenges a belief.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/shaven-headed-man-with-fingers-in-ears-royalty-free-image/84437014">Peter Dazeley/The Image Bank via Getty Images</a></span></figcaption></figure><p>“<a href="https://www.cnn.com/factsfirst/politics">Facts First</a>” is the tagline of a CNN branding campaign which contends that “<a href="https://www.cnncreativemarketing.com/project/cnn_factsfirst/">once facts are established, opinions can be formed</a>.” The problem is that while it sounds logical, this appealing assertion is a fallacy not supported by research.</p>
<p>Cognitive psychology and neuroscience studies have found that the <a href="https://doi.org/10.1111/pops.12394">exact opposite is often true when it comes to politics</a>: People form opinions based on emotions, such as fear, contempt and anger, rather than relying on facts. New facts often do not change people’s minds.</p>
<p><a href="https://scholar.google.com/citations?user=LB6MiT4AAAAJ&hl=en&oi=ao">I study human development, public health and behavior change</a>. In my work, I see firsthand how hard it is to change someone’s mind and behaviors when they encounter new information that runs counter to their beliefs.</p>
<p>Your worldview, including beliefs and opinions, starts to form during childhood as you’re socialized within a particular cultural context. It gets reinforced over time by the social groups you keep, the media you consume, even how your brain functions. It influences how you think of yourself and how you interact with the world.</p>
<p>For many people, a challenge to their worldview feels like an attack on their personal identity and can cause them to harden their position. Here’s some of the research that explains why it’s natural to resist changing your mind – and how you can get better at making these shifts.</p>
<h2>Rejecting what contradicts your beliefs</h2>
<p>In an ideal world, rational people who encounter new evidence that contradicts their beliefs would evaluate the facts and change their views accordingly. But that’s generally not how things go in the real world. </p>
<p>Partly to blame is a cognitive bias that can kick in when people encounter evidence that runs counter to their beliefs. Instead of reevaluating what they’ve believed up until now, people tend to <a href="https://doi.org/10.1111/j.1540-5907.2006.00214.x">reject the incompatible evidence</a>. Psychologists call this phenomenon belief perseverance. Everyone can fall prey to this ingrained way of thinking. </p>
<p>Being presented with facts – whether via the news, social media or one-on-one conversations – that suggest their current beliefs are wrong causes people to feel threatened. This reaction is particularly strong when the beliefs in question are aligned with your political and personal identities. It can feel like an attack on you if one of your strongly held beliefs is challenged.</p>
<p>Confronting facts that don’t line up with your worldview may trigger a “<a href="https://doi.org/10.1073/pnas.1804840115">backfire effect</a>,” which can end up strengthening your original position and beliefs, particularly with politically charged issues. Researchers have identified this phenomenon in a number of studies, including ones about <a href="https://doi.org/10.1177/0093650211416646">opinions toward climate change mitigation policies</a> and <a href="https://doi.org/10.1542/peds.2013-2365">attitudes toward childhood vaccinations</a>. </p>
<h2>Focusing on what confirms your beliefs</h2>
<p>There’s another cognitive bias that can get in the way of changing your mind, called confirmation bias. It’s the natural tendency to seek out information or interpret things in a way that <a href="https://doi.org/10.1111/j.1540-5907.2006.00214.x">supports your existing beliefs</a>. <a href="https://doi.org/10.1371/journal.pone.0210423">Interacting with like-minded people and media</a> reinforces confirmation bias. The problem with confirmation bias is that it <a href="https://doi.org/10.1037/0033-295X.100.2.298">can lead to errors in judgment</a> because it keeps you from looking at a situation objectively from multiple angles. </p>
<p>A 2016 Gallup poll provides a great example of this bias. In just one two-week period spanning the 2016 election, both Republicans and Democrats <a href="https://news.gallup.com/poll/197474/economic-confidence-surges-election.aspx">drastically changed their opinions</a> about the state of the economy – in opposite directions.</p>
<p><iframe id="J9K34" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/J9K34/4/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>But nothing was new with the economy. What had changed was that a new political leader from a different party had been elected. The election outcome changed survey respondents’ interpretation of how the economy was doing – confirmation bias led Republicans to rate it much higher now that their candidate would be in charge, and Democrats to do the opposite.</p>
<h2>Brain’s hard-wiring doesn’t help</h2>
<p>Cognitive biases are predictable patterns in the way people think that can keep you from objectively weighing evidence and changing your mind. Some of the basic ways your brain works can also work against you on this front.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/478604/original/file-20220810-6805-26kwy4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="woman with smirking look holding a cellphone" src="https://images.theconversation.com/files/478604/original/file-20220810-6805-26kwy4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/478604/original/file-20220810-6805-26kwy4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/478604/original/file-20220810-6805-26kwy4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/478604/original/file-20220810-6805-26kwy4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/478604/original/file-20220810-6805-26kwy4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/478604/original/file-20220810-6805-26kwy4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/478604/original/file-20220810-6805-26kwy4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">It can feel really satisfying to get the better of an opponent, even if you’re not actually right.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/studio-portrait-of-businesswoman-text-messaging-royalty-free-image/136597526">Rob Lewine/Tetra images via Getty Images</a></span>
</figcaption>
</figure>
<p>Your brain is hard-wired to protect you – which can lead to reinforcing your opinions and beliefs, even when they’re misguided. Winning a debate or an argument triggers a flood of hormones, including dopamine and adrenaline. In your brain, they contribute to the feeling of pleasure you get during sex, eating, roller-coaster rides – and yes, <a href="https://us.macmillan.com/books/9781250013644/the-winner-effect">winning an argument</a>. That rush makes you feel good, maybe even invulnerable. It’s a feeling many people want to have more often.</p>
<p>Moreover, in situations of high stress or distrust, your body releases <a href="https://www.ncbi.nlm.nih.gov/books/NBK538239/">another hormone, cortisol</a>. It can <a href="https://doi.org/10.1001/archpsyc.64.7.810">hijack your advanced thought processes, reason and logic</a> – what psychologists call the executive functions of your brain. Your brain’s amygdala becomes more active, which <a href="https://doi.org/10.3390/biom11060823">controls your innate fight-or-flight reaction</a> when you feel under threat.</p>
<p>In the context of communication, people tend to raise their voice, push back and stop listening when these chemicals are coursing through their bodies. Once you’re in that mindset, it’s hard to hear another viewpoint. The desire to be right combined with the brain’s protective mechanisms make it that much harder to change opinions and beliefs, even in the presence of new information.</p>
<h2>You can train yourself to keep an open mind</h2>
<p>In spite of the cognitive biases and brain biology that make it hard to change minds, there are ways to short-circuit these natural habits. </p>
<p>Work to keep an open mind. Allow yourself to learn new things. Search out perspectives from multiple sides of an issue. Try to form, and modify, your opinions based on evidence that is accurate, objective and verified.</p>
<p>Don’t let yourself be swayed by outliers. For example, give more weight to the numerous doctors and public health officials who describe the preponderance of evidence that vaccines are safe and effective than what you give to one fringe doctor on a podcast who suggests the opposite.</p>
<p>Be wary of repetition, as repeated statements are often <a href="https://doi.org/10.1186/s41235-021-00301-5">perceived as more truthful</a> than new information, no matter how false the claim may be. Social media manipulators and politicians know this all too well. </p>
<p>Presenting things in a nonconfrontational way allows people to evaluate new information without feeling attacked. Insulting others and suggesting someone is ignorant or misinformed, no matter how misguided their beliefs may be, will cause the people you are trying to influence to reject your argument. Instead, try asking questions that lead the person to question what they believe. While opinions may not ultimately change, the <a href="https://doi.org/10.1177/1529100612451018">chance of success is greater</a>.</p>
<p>Recognize we all have these tendencies and respectfully listen to other opinions. Take a deep breath and pause when you feel your body ramping up for a fight. Remember, it’s OK to be wrong at times. Life can be a process of growth.</p>
<p class="fine-print"><em><span>Keith M. Bellizzi receives funding from the National Institutes of Health. </span></em></p>Here are some reasons for the natural human tendency to avoid or reject new information that runs counter to what you already know – and some tips on how to do better.Keith M. Bellizzi, Professor of Human Development and Family Sciences, University of ConnecticutLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1850992022-06-24T11:53:00Z2022-06-24T11:53:00ZGoogle’s powerful AI spotlights a human cognitive glitch: Mistaking fluent speech for fluent thought<figure><img src="https://images.theconversation.com/files/470388/original/file-20220622-7895-m4o7lp.jpg?ixlib=rb-1.1.0&rect=0%2C7%2C4928%2C3245&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Words can have a powerful effect on people, even when they're generated by an unthinking machine.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/words-this-is-my-story-typed-on-paper-with-a-royalty-free-image/1359861887">iStock via Getty Images</a></span></figcaption></figure><p>When you read a sentence like this one, your past experience tells you that it’s written by a thinking, feeling human. And, in this case, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that appear remarkably humanlike are actually generated by artificial intelligence systems trained on massive amounts of human text. </p>
<p>People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to wrap your head around. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to think that if an AI model can express itself fluently, that means it thinks and feels just like humans do. </p>
<p>Thus, it is perhaps unsurprising that a former Google engineer recently claimed that Google’s AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. This event and <a href="https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/">the subsequent media coverage</a> led to a <a href="https://www.washingtonpost.com/opinions/2022/06/17/google-ai-ethics-sentient-lemoine-warning/">number</a> of rightly skeptical <a href="https://www.theguardian.com/commentisfree/2022/jun/14/human-like-programs-abuse-our-empathy-even-google-engineers-arent-immune">articles</a> and <a href="https://garymarcus.substack.com/p/nonsense-on-stilts?s=r">posts</a> about the claim that computational models of human language are sentient, meaning capable of thinking and feeling and experiencing. </p>
<p>The question of what it would mean for an AI model to be sentient is complicated (<a href="https://threadreaderapp.com/thread/1536829311562354688.html">see, for instance, our colleague’s take</a>), and our goal here is not to settle it. But as <a href="https://scholar.google.com/citations?user=XUmFLVUAAAAJ&hl=en">language</a> <a href="https://scholar.google.com/citations?user=hBUjCB0AAAAJ&hl=en">researchers</a>, we can use our work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of thinking that an entity that can use language fluently is sentient, conscious or intelligent.</p>
<h2>Using AI to generate humanlike language</h2>
<p>Text generated by models like Google’s LaMDA can be hard to distinguish from text written by humans. This impressive achievement is a result of a decadeslong program to build models that generate grammatical, meaningful language. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/470359/original/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a screenshot showing a text dialog" src="https://images.theconversation.com/files/470359/original/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/470359/original/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=328&fit=crop&dpr=1 600w, https://images.theconversation.com/files/470359/original/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=328&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/470359/original/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=328&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/470359/original/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=413&fit=crop&dpr=1 754w, https://images.theconversation.com/files/470359/original/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=413&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/470359/original/file-20220622-12-qbrh9n.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=413&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The first computer system to engage people in dialogue was psychotherapy software called Eliza, built more than half a century ago.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/rosenfeldmedia/49467507798">Rosenfeld Media/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Early versions dating back to at least the 1950s, known as n-gram models, simply counted up occurrences of specific phrases and used them to guess what words were likely to occur in particular contexts. For instance, it’s easy to know that “peanut butter and jelly” is a more likely phrase than “peanut butter and pineapples.” If you have enough English text, you will see the phrase “peanut butter and jelly” again and again but might never see the phrase “peanut butter and pineapples.”</p>
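<p>The counting idea behind those early n-gram models can be sketched in a few lines of Python. This is a toy illustration, not any historical system: the tiny corpus and function name are invented, and a bigram (two-word) model is used for simplicity.</p>

```python
from collections import Counter, defaultdict

# Toy bigram model: count which words follow which in a tiny corpus,
# then rank candidate next words by observed frequency.
corpus = (
    "peanut butter and jelly . i like peanut butter and jelly . "
    "peanut butter and jelly sandwiches . fruit salad and pineapples ."
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(most_likely_next("and"))  # prints "jelly": seen 3 times after "and", vs. "pineapples" once
```

<p>With enough text, the same frequency logic makes “peanut butter and jelly” the runaway favorite continuation, while “peanut butter and pineapples” may never be seen at all.</p>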
<p>Today’s models, sets of data and rules that approximate human language, differ from these early attempts in several important ways. First, they are trained on essentially the entire internet. Second, they can learn relationships between words that are far apart, not just words that are neighbors. Third, they are tuned by a huge number of internal “knobs” – so many that it is hard for even the engineers who design them to understand why they generate one sequence of words rather than another.</p>
<p>The models’ task, however, remains the same as in the 1950s: determine which word is likely to come next. Today, they are so good at this task that almost all sentences they generate seem fluid and grammatical.</p>
<h2>Peanut butter and pineapples?</h2>
<p>We asked a large language model, <a href="https://theconversation.com/a-language-generation-programs-ability-to-write-articles-produce-code-and-compose-poetry-has-wowed-scientists-145591">GPT-3</a>, to complete the sentence “Peanut butter and pineapples___”. It said: “Peanut butter and pineapples are a great combination. The sweet and savory flavors of peanut butter and pineapple complement each other perfectly.” If a person said this, one might infer that they had tried peanut butter and pineapple together, formed an opinion and shared it with the reader.</p>
<p>But how did GPT-3 come up with this paragraph? By generating a word that fit the context we provided. And then another one. And then another one. The model never saw, touched or tasted pineapples – it just processed all the texts on the internet that mention them. And yet reading this paragraph can lead the human mind – even that of a Google engineer – to imagine GPT-3 as an intelligent being that can reason about peanut butter and pineapple dishes.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/a6jt3Vufa9U?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Large AI language models can engage in fluent conversation. However, they have no overall message to communicate, so their phrases often follow common literary tropes, extracted from the texts they were trained on. For instance, if prompted with the topic “the nature of love,” the model might generate sentences about believing that love conquers all. The human brain primes the viewer to interpret these words as the model’s opinion on the topic, but they are simply a plausible sequence of words.</span></figcaption>
</figure>
<p>The human brain is hardwired to infer intentions behind words. Every time you engage in conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings and beliefs.</p>
<p>The process of jumping from words to the mental model is seamless, getting triggered every time you receive a fully fledged sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions. </p>
<p>However, in the case of AI systems, it misfires – building a mental model out of thin air.</p>
<p>A little more probing can reveal the severity of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also smooth and creamy, which helps to offset the feather’s texture.”</p>
<p>The text in this case is as fluent as our example with pineapples, but this time the model is saying something decidedly less sensible. One begins to suspect that GPT-3 has never actually tried peanut butter and feathers.</p>
<h2>Ascribing intelligence to machines, denying it to humans</h2>
<p>A sad irony is that the same cognitive bias that makes people ascribe humanity to GPT-3 can cause them to treat actual humans in inhumane ways. Sociocultural linguistics – the study of language in its social and cultural context – shows that assuming an overly tight link between fluent expression and fluent thinking can lead to bias against people who speak differently. </p>
<p>For instance, people with a foreign accent are often <a href="https://theconversation.com/heres-why-people-might-discriminate-against-foreign-accents-new-research-172539">perceived as less intelligent</a> and are less likely to get the jobs they are qualified for. Similar biases exist against <a href="https://theconversation.com/british-people-still-think-some-accents-are-smarter-than-others-what-that-means-in-the-workplace-126964">speakers of dialects</a> that are not considered prestigious, <a href="https://doi.org/10.1080%2F17470218.2012.731695">such as Southern English</a> in the U.S., against <a href="https://doi.org/10.1177%2F0160597613481731">deaf people using sign languages</a> and against people with speech impediments <a href="https://doi.org/10.1016/j.jfludis.2004.08.001">such as stuttering</a>. </p>
<p>These biases are deeply harmful, often lead to racist and sexist assumptions, and have been shown again and again to be unfounded.</p>
<h2>Fluent language alone does not imply humanity</h2>
<p>Will AI ever become sentient? This question requires deep consideration, and indeed philosophers have <a href="https://news.northeastern.edu/2022/06/16/google-sentient-ai-concerns/">pondered</a> it <a href="https://link.springer.com/article/10.1007/BF00360578">for decades</a>. What researchers have determined, however, is that you cannot simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought.</p>
<p class="fine-print"><em><span>Kyle Mahowald receives funding from NSF.</span></em></p><p class="fine-print"><em><span>Evelina Fedorenko receives funding from NIH.</span></em></p><p class="fine-print"><em><span>Joshua B. Tenenbaum receives relevant funding from NSF, the US Department of Defense, IBM, Google, and Microsoft. </span></em></p><p class="fine-print"><em><span>Nancy Kanwisher receives funding from NIH and NSF.</span></em></p><p class="fine-print"><em><span>Anna A. Ivanova and Idan Asher Blank do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Fluent expression is not always evidence of a mind at work, but the human brain is primed to believe so. A pair of cognitive linguistics experts explain why language is not a good test of sentience.Kyle Mahowald, Assistant Professor of Linguistics, The University of Texas at AustinAnna A. 
Ivanova, PhD Candidate in Brain and Cognitive Sciences, Massachusetts Institute of Technology (MIT)Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1843572022-06-09T12:41:56Z2022-06-09T12:41:56ZPeople overestimate groups they find threatening – when ‘sizing up’ others, bias sneaks in<figure><img src="https://images.theconversation.com/files/467887/original/file-20220609-24-pcgf27.jpg?ixlib=rb-1.1.0&rect=11%2C1142%2C6690%2C4154&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">You might make a quick and exaggerated judgment about what kind of neighborhood you’re in based on the people or flags you see.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/cityscape-with-a-residential-building-decorated-for-royalty-free-image/1251614727?adppopup=true">David Levingstone/DigitalVision via Getty Images</a></span></figcaption></figure><p>Places are not just physical, but also social.</p>
<p>For instance, around the North Carolina campus where we met, we knew certain bars based on the students who frequented them – the “Duke bars” versus the “UNC bars.” Or, when traveling, we may try to guess whether most of the patrons at a restaurant are tourists – and if so, go elsewhere.</p>
<p>This common way of thinking about our environments seemed fairly reasonable to us until a few years ago, when we noticed something that gave us pause.</p>
<p>We’ve overheard one of our alma maters, the University of Pennsylvania, pejoratively referred to as “Jew-niversity of Pennsylvania,” and one of our hometowns, Decatur, Georgia, disparagingly called “Dyke-atur.” These labels are not only deeply offensive … they are also wrong. Neither of these places is actually majority Jewish or gay. And yet, some people seem to hold the belief that these groups dominate these spaces.</p>
<p>Where do these beliefs come from, and why do people make these inaccurate judgments? Perhaps more importantly, why might this matter?</p>
<p>As social psychologists who explore how intergroup dynamics affect <a href="https://www.rebeccaponcedeleon.com/">organizational</a> and <a href="https://bloch.umkc.edu/profiles/faculty-directory/jacqueline-rifkin.html">consumer</a> phenomena, we were fascinated by these questions. Four years ago, we set out to answer them.</p>
<p>Across six studies, we found that people commonly exaggerate the presence of certain groups – including ethnic and sexual minorities – simply <a href="https://doi.org/10.1177/09567976211060009">because they are perceived as ideologically threatening</a>. Psychologists call this feeling – that groups hold different values and worldviews from the mainstream, thereby jeopardizing the status quo – “<a href="https://psycnet.apa.org/record/2000-03917-001">symbolic threat</a>.”</p>
<h2>Symbolic threats loom large</h2>
<p>We began by looking at survey data from the year 2000 that examined 987 non-Black Americans’ beliefs about Black people. We found that the more a survey respondent believed that Black people had different values or a separate lifestyle from their own, the more they believed the population of Black people would increase over time.</p>
<p>We followed this up with several experiments, looking not only at beliefs about Black people, but also other minority groups, including gay people and immigrants. We asked participants to imagine everyday social spaces, including patrons at a bar or residents in a neighborhood. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/467591/original/file-20220607-20-l4ozmz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="blurred Black office workers" src="https://images.theconversation.com/files/467591/original/file-20220607-20-l4ozmz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/467591/original/file-20220607-20-l4ozmz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/467591/original/file-20220607-20-l4ozmz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/467591/original/file-20220607-20-l4ozmz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/467591/original/file-20220607-20-l4ozmz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/467591/original/file-20220607-20-l4ozmz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/467591/original/file-20220607-20-l4ozmz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Imagining a company with some number of Black workers, non-Black people seemed to jump easily from feeling threatened to feeling outnumbered.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/busy-african-office-with-people-walking-around-royalty-free-image/471871485">AfricaImages/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>In some studies, we showed participants demographic information about a small portion of employees at a company and asked them to guess the demographics of the entire business. In other studies, we described a group of people congregating in a place and asked participants whether they believed the place was somehow linked with those people – for example, a “Duke bar” or “UNC bar.”</p>
<p>Our volunteers were much more likely to overestimate the groups they found symbolically threatening, such as gay people or immigrants, compared to groups that did not seem so threatening, like those with green eyes.</p>
<p>Specifically, triggering a sense of value conflict made our study subjects both more likely to perceive those groups as more populous in a place, and to believe that the group and place are somehow linked.</p>
<p>This pattern emerged regardless of participants’ own demographic characteristics or political stances and even when we used completely fictitious groups, like a made-up organization called “PDL” with a fake logo. Our findings suggest that these kinds of judgments are universal and may be hard-wired into how people process their environments.</p>
<h2>Better safe than sorry mindset</h2>
<p>Humans have evolved a variety of strategies to protect themselves from harm. One involves being hypervigilant to potential threats. According to what psychologists call “<a href="https://doi.org/10.1037/0022-3514.78.1.81">error management theory</a>,” people tend to err on the side of caution by exaggerating potential threats in their surroundings. When camping in the woods, for instance, it is safer to incorrectly assume a shadow is a big bear than it is to incorrectly assume the shadow is harmless.</p>
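<p>The asymmetry the theory describes can be framed as an expected-cost comparison: when a miss (ignoring a real bear) costs far more than a false alarm (fleeing a harmless shadow), the cost-minimizing choice is caution even when the threat is unlikely. A minimal sketch, with all probabilities and costs invented purely for illustration:</p>

```python
# Expected-cost comparison under asymmetric error costs (hypothetical numbers).
P_BEAR = 0.05           # assumed chance the shadow is actually a bear
COST_MISS = 1000.0      # cost of ignoring a real bear
COST_FALSE_ALARM = 1.0  # cost of needlessly retreating from a harmless shadow

expected_cost_ignore = P_BEAR * COST_MISS             # 0.05 * 1000 = 50.0
expected_cost_flee = (1 - P_BEAR) * COST_FALSE_ALARM  # 0.95 * 1    = 0.95

best = "flee" if expected_cost_flee < expected_cost_ignore else "ignore"
print(best)  # prints "flee": caution wins despite only a 5% threat
```

<p>Even at 5% odds, the lopsided costs make erring toward the threat the rational policy – which is why the bias persists.</p>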
<p>While prior work has explored these kinds of snap judgments in potentially dangerous environments, our research uncovers that people give in to these same biases in everyday social spaces. </p>
<p>The tendency to exaggerate potential threats has helped our species navigate new environments and stay safe. But it may be cause for concern when people make these same judgments about others simply because they appear to think and live differently from them. Groups that differ from the mainstream are likely viewed as more pervasive than they actually are or as growing in number. This yields a sad irony: Although these groups are often subjugated and disempowered, they may be perceived as just the opposite — an ever-encroaching threat that must be suppressed. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/466810/original/file-20220602-14-duw7cr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Screenshot of Tucker Carlson on Fox News" src="https://images.theconversation.com/files/466810/original/file-20220602-14-duw7cr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/466810/original/file-20220602-14-duw7cr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/466810/original/file-20220602-14-duw7cr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/466810/original/file-20220602-14-duw7cr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/466810/original/file-20220602-14-duw7cr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/466810/original/file-20220602-14-duw7cr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/466810/original/file-20220602-14-duw7cr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Tucker Carlson has been a proponent of the ‘Great Replacement’ conspiracy theory.</span>
<span class="attribution"><a class="source" href="https://dynaimage.cdn.cnn.com/cnn/c_fill,g_auto,w_1200,h_675,ar_16:9/https%3A%2F%2Fcdn.cnn.com%2Fcnnnext%2Fdam%2Fassets%2F210412234601-tucker-carlson-0412.jpg">Fox News/'Tucker Carlson Tonight'</a></span>
</figcaption>
</figure>
<p>This kind of rhetoric has unfortunately been in the spotlight of late. For instance, conservative figures like Fox News host <a href="https://www.nytimes.com/interactive/2022/04/30/us/tucker-carlson-tonight.html?chapter=3">Tucker Carlson</a> and Rep. <a href="https://www.newsweek.com/marjorie-taylor-greene-mtg-straight-heterosexual-sexuality-trans-georgia-mtg-live-1711528">Marjorie Taylor Greene</a> have recently lent credibility to <a href="https://www.usatoday.com/story/news/nation/2022/06/01/great-replacement-theory-poll-republicans-democrats/7461913001/?gnt-cfr=1">bigoted conspiracies like</a> the <a href="https://theconversation.com/replacement-theory-isnt-new-3-things-to-know-about-how-this-once-fringe-conspiracy-has-become-more-mainstream-183492">“great replacement” theory</a>, which posits that minority groups are intentionally increasing in order to replace and outvote “mainstream” Americans. <a href="https://apnews.com/article/great-white-replacement-theory-explainer-c86f309f02cd14062f301ce6b9228e33">This rhetoric apparently motivated the white gunman</a> accused of killing 10 Black Americans in Buffalo in May 2022.</p>
<h2>Breaking free of the bias</h2>
<p><a href="https://doi.org/10.1002/9780470752937.ch16">Prior work in psychology</a> suggests that merely being aware of your own biases is the first step toward reducing their influence. Since starting this project, we have even noticed our own tendency to jump to conclusions about the groups in our surroundings and their pervasiveness.</p>
<p>If you notice yourself doing the same thing, it doesn’t make you a bad person. But we encourage you to use these moments to slow down and reconsider your gut instincts. While this way of thinking might help you figure out the best sports bar for cheering on your team, categorizing places based on the people within them can have serious ramifications if left unchecked.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Social psychology researchers found that people commonly exaggerate the presence of certain groups – including ethnic and sexual minorities – because they perceive them as ideologically threatening.Jacqueline Rifkin, Assistant Professor of Marketing, University of Missouri-Kansas CityRebecca Ponce de Leon, Assistant Professor of Management, Columbia UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1769682022-04-19T12:19:46Z2022-04-19T12:19:46ZPandemic decision-making is difficult and exhausting – here’s the psychology that explains why<figure><img src="https://images.theconversation.com/files/458383/original/file-20220418-22-mu1qko.jpg?ixlib=rb-1.1.0&rect=367%2C62%2C4848%2C3409&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">So much uncertainty around risk can make it extra hard to decide what to do.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/young-hipster-woman-using-a-smart-phone-in-her-royalty-free-image/990991128">Richard Drury/DigitalVision via Getty Images</a></span></figcaption></figure><p>You want to sit down for an indoor dinner with friends. A couple of years ago, this was a simple enough activity that required minimal planning. However, that is not the case in today’s world. Many people now face a stream of further considerations about benefits and risks.</p>
<p>Will I enjoy the experience? What are the potential downsides? Am I comfortable with the restaurant’s pandemic-related policies? What’s the ventilation like? Is it very busy there at this time of day? Am I planning to see lots of people, or people with compromised immune systems, in the near future? </p>
<p>This is exhausting! <a href="https://scholar.google.com/citations?user=gFXRTf4AAAAJ&hl=en&oi=ao">As scientists</a> <a href="https://tricomilab.wixsite.com/ldmlab/people">at the</a> <a href="https://tricomilab.wixsite.com/ldmlab">Learning and Decision-Making Lab</a> at Rutgers University-Newark, we’ve noticed how many decision-making processes are affected by the pandemic. The accumulation of choices people are making throughout the day leads to what psychologists call <a href="https://www.researchgate.net/profile/Jean-Twenge/publication/237738528_Decision_Fatigue_Exhausts_Self-Regulatory_Resources_-_But_So_Does_Accommodating_to_Unchosen_Alternatives/links/554b9ee40cf21ed21359ccbd/Decision-Fatigue-Exhausts-Self-Regulatory-Resources-But-So-Does-Accommodating-to-Unchosen-Alternatives.pdf">decision fatigue</a> – you can end up feeling overwhelmed and make bad decisions. The current pandemic can make this situation more pronounced, as even the choices and activities that should be the most simple can now feel tinged with risk and uncertainty. </p>
<p>Risk involves known probabilities – for example, the likelihood of losing a certain hand in poker. But <a href="https://www.penguinrandomhouse.com/books/305826/the-signal-and-the-noise-by-nate-silver/">uncertainty is an unknown probability</a> – you can never really know the exact chance of catching COVID-19 by engaging in certain activities. Human beings tend to be both risk-averse and uncertainty-averse, meaning that you likely avoid both when you can. And when you can’t – as during a confusing phase of a pandemic – it can be draining to try to decide what to do.</p>
<h2>Rules are easy, decisions are hard</h2>
<p>Before the COVID-19 pandemic, most people didn’t think through some basic decisions in the same way they might now. In fact, even early in the pandemic you didn’t really need to. There were rules to follow whether you liked them or not. Capacity was limited, hours were restricted, or shops were closed. People were strongly urged to opt out of activities they’d normally engage in.</p>
<p>This is evident in data we collected from university students in fall 2020 and spring 2021. One question we asked was, “What has been the hardest part of the pandemic for you?” Responses included “Not being able to see my friends and family,” “Having to take classes online,” “Being forced to stay home” and many other similar frustrations. </p>
<p>Many of our survey respondents were either not able to do things they wanted to do or were forced to do things they didn’t want to do. In either case, the guidelines were clear-cut and the decisions were less of a struggle.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/458384/original/file-20220418-76603-u14lyb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="masked cafe worker puts out an 'open' sign" src="https://images.theconversation.com/files/458384/original/file-20220418-76603-u14lyb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/458384/original/file-20220418-76603-u14lyb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/458384/original/file-20220418-76603-u14lyb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/458384/original/file-20220418-76603-u14lyb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/458384/original/file-20220418-76603-u14lyb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/458384/original/file-20220418-76603-u14lyb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/458384/original/file-20220418-76603-u14lyb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A pandemic world that is open for business sets the scene for a lot more daily decisions.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/all-set-to-restart-business-royalty-free-image/1272761167">pixdeluxe/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>As restrictions ease and people think about “living with” the coronavirus, the current phase of the pandemic brings with it a new need to make cost-benefit calculations.</p>
<p>It’s important to remember that not everyone has experienced these kinds of decisions in the same way. Throughout the course of the pandemic there have been people who did not have the luxury of choice and needed to go to work regardless of the risk. There have also been those who have taken risks all along. On the other end of the spectrum, some people continue to stay isolated and avoid almost every situation with the potential for contracting COVID-19.</p>
<p>Those who experience the most decision fatigue are those who are in the middle – they want to avoid COVID-19 but also want to get back to the activities they enjoyed before the pandemic.</p>
<h2>Shortcuts can short-circuit decision-making</h2>
<p>Psychologist Daniel Kahneman wrote in his book “<a href="https://us.macmillan.com/books/9780374533557/thinking-fast-and-slow">Thinking, Fast and Slow</a>” that “when faced with a difficult question, we often answer an easier one instead.”</p>
<p>Making decisions about risk and uncertainty is hard. For instance, trying to think through the probability of catching a potentially deadly virus while going to an indoor movie theater is difficult. So people tend to think in terms of binaries – “this is safe” or “this is unsafe” – because it’s easier.</p>
<p>The problem is that answering easier questions instead of trickier ones leaves you vulnerable to cognitive biases, or <a href="https://doi.org/10.1017/CBO9780511808098.002">errors in thought that affect your decision-making</a>.</p>
<p>One of the most prevalent of these biases is the <a href="https://doi.org/10.1016/0010-0285(73)90033-9">availability heuristic</a>. That’s what psychologists call the tendency to judge the likelihood of an event based on how easily it comes to mind. How much a certain event is covered in the media, or whether you’ve seen instances of it recently in your life, can sway your estimate. For example, if you’ve seen stories of a plane crash in the news recently, you may believe the probability of being in a plane crash to be higher than it actually is.</p>
<p>[<em>The Conversation’s science, health and technology editors pick their favorite stories.</em> <a href="https://memberservices.theconversation.com/newsletters/?nl=science&source=inline-science-favorite">Weekly on Wednesdays</a>.]</p>
<p>The effect of the availability heuristic on pandemic-era decision-making often manifests as making choices based on individual cases rather than on overall trends. On one hand, people may feel fine going to a crowded indoor concert because they know others in their lives who have done this and have been fine – so they judge the likelihood of catching the coronavirus to be lower as a result. On the other hand, someone who knows a friend whose child caught COVID-19 at school may now think the risks of transmission in schools are much higher than they really are.</p>
<p>Furthermore, the availability heuristic means these days you think much more about the risks of catching COVID-19 than about other risks life entails that receive less media attention. While you’re worrying about the adequacy of a restaurant’s ventilation system, you overlook the danger of getting into a car accident on your way there.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/458385/original/file-20220418-87032-gbe2cc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="woman seated in restaurant booth looks out the window pensively" src="https://images.theconversation.com/files/458385/original/file-20220418-87032-gbe2cc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/458385/original/file-20220418-87032-gbe2cc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/458385/original/file-20220418-87032-gbe2cc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/458385/original/file-20220418-87032-gbe2cc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/458385/original/file-20220418-87032-gbe2cc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/458385/original/file-20220418-87032-gbe2cc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/458385/original/file-20220418-87032-gbe2cc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">You can’t know for sure whether you’ll get infected after meeting a friend.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/pensive-woman-sitting-by-herself-in-a-restaurant-at-royalty-free-image/1138424247">LeoPatrizi/E+ via Getty Images</a></span>
</figcaption>
</figure>
<h2>A constant process</h2>
<p>Decisions in general, and during a pandemic in particular, are about weighing risks and benefits and dealing with risk and uncertainty.</p>
<p>Because of the nature of probability, you can’t be sure in advance whether you’ll catch COVID-19 after agreeing to dine at a friend’s house. Furthermore, the outcome does not make your decision right or wrong. If you weigh the risks and benefits and accept that dinner invitation, only to end up contracting COVID-19 at the meal, it doesn’t mean you made the wrong decision – it just means you rolled the dice and came up short.</p>
<p>On the flip side, if you accept the dinner invitation and don’t end up with COVID-19, don’t get too smug; another time, the outcome might be different. All you can do is try to weigh what you know of the costs and benefits and make the best decisions you can.</p>
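The weighing of risks and benefits described above can be sketched as a simple expected-value comparison. All the numbers below are invented for the example; the takeaway is that the decision is evaluated before the dice are rolled, so a good decision can still be followed by a bad outcome.

```python
# Illustrative expected-value sketch of weighing a dinner invitation.
# All figures are hypothetical, not real COVID-19 statistics.

p_infection = 0.03        # assumed chance of catching COVID-19 at the dinner
cost_infection = -100.0   # subjective cost of getting sick
benefit_dinner = 5.0      # subjective value of seeing friends

expected_value = benefit_dinner + p_infection * cost_infection
decision = "accept" if expected_value > 0 else "decline"
print(expected_value, decision)  # → 2.0 accept
```

Under these assumed numbers the invitation is worth accepting, and that remains the right call even if, on this particular occasion, the 3% chance of infection happens to come up.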
<p>During this next phase of the pandemic, we recommend remembering that uncertainty is a part of life. Be kind to yourself and others as we all try to make our best choices.</p><img src="https://counter.theconversation.com/content/176968/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>People tend to dislike uncertainty and risk – two things that are hard to avoid completely during a pandemic. That’s part of why it can feel especially draining to make even small decisions these days.Elizabeth Tricomi, Associate Professor of Psychology, Rutgers University - NewarkWesley Ameden, Ph.D. Student in Psychology, Rutgers University - NewarkLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1767212022-02-28T13:58:03Z2022-02-28T13:58:03ZJuries are subject to all kinds of biases when it comes to deciding on a trial<figure><img src="https://images.theconversation.com/files/448402/original/file-20220224-52384-10gzp8f.jpg?ixlib=rb-1.1.0&rect=0%2C14%2C3325%2C1979&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/twelve-jurors-sit-jury-box-court-1475824919">Varlamova Lydmila/Shutterstock</a></span></figcaption></figure><p>From <a href="https://www.imdb.com/title/tt0247082/">CSI</a> to <a href="https://www.nbc.com/law-order/about">Law and Order</a>, <a href="https://www.imdb.com/title/tt2303687/">Line of Duty</a> and <a href="https://www.imdb.com/title/tt0118401/">Midsomer Murders</a>, there is huge public fascination with crime and the criminal justice system. Especially when things come to a climactic ending and jurors decide on a defendant’s fate. But how often do jurors get it wrong? Will the jury convict an innocent person, or might they free a guilty person? </p>
<p>Ultimately, who committed the crime is often not easy to know, and jurors have to subjectively evaluate the evidence. But finding out what goes on inside the jury room and the <a href="https://theconversation.com/scotlands-not-proven-verdict-helps-juries-communicate-their-belief-of-guilt-when-lack-of-evidence-fails-to-convict-108286">biases</a> that might influence jurors themselves is of huge interest and importance. </p>
<p>As psychologists, we can delve into jury decision making, as it requires several different areas of psychological research (cognitive psychology, social psychology, and individual differences) to unlock the processes behind the decisions jurors reach. The aim of our <a href="https://www.researchgate.net/publication/358001463_Cognitive_and_human_factors_in_legal_layperson_decision_making_Sources_of_bias_in_juror_decision_making">recent review</a> was to bring together different areas of psychology to identify potential sources of bias that may influence how jurors make decisions.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/uPUMd89LqOA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>The big three biases</h2>
<p>We identified three main sources of bias: <a href="https://psycnet.apa.org/record/2008-10519-003">pre-trial bias</a>; <a href="https://oro.open.ac.uk/66827/1/faith-in-thy-threshold.pdf">cognitive bias</a> and <a href="https://pubs.acs.org/doi/10.1021/acs.analchem.0c00704">bias originating from expert witnesses</a>. </p>
<p>A significant part of the research literature has highlighted that pre-trial biases can influence the judgments of jurors. In 2008 researchers developed the <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1559-1816.2008.00378.x">pre-trial juror attitude questionnaire</a> (PJAQ).</p>
<p>The scale measures biases that might influence juror decision making. For example, it measures racial biases and system confidence – how much faith (or not) the juror has in the criminal justice system. Through measuring these biases, we can get an indication of how strong a bias a person may have towards either the prosecution or defence. Interestingly, the PJAQ has often been shown to predict the verdict reached by jurors, with those who have a pro-prosecution bias reaching more guilty verdicts.</p>
<p>Due to pre-trial bias, some jurors are unable to take part in a criminal trial with an “innocent until proven guilty” <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1559-1816.2008.00378.x">mindset</a>, even if they try. Jurors, like most humans, are not always rational, and may <a href="https://www.jstor.org/stable/1738360">struggle to process</a> and utilise all the available information in a reasoned manner.</p>
<p>This tendency often leads to biased decision making that can lead to errors. For example, <a href="https://www.jstor.org/stable/1738360">research</a> from 2001 found that jurors may favour particular verdicts as a trial progresses, despite being warned against doing this by a judge.</p>
<p>These preferences can lead to those jurors distorting the evidence against their preferred verdict or giving more weight to the evidence that favours their preference, a phenomenon known as <a href="https://journals.sagepub.com/doi/abs/10.1177/0025802418791062">confirmation bias</a>.</p>
<p>Jurors who enter the courtroom with a bias towards the prosecution are more likely to see the evidence from the prosecution’s perspective, and dismiss the evidence presented from the defence (and vice versa when jurors have a defence bias). So initial pre-trial biases interact with cognitive mechanisms (for example, thinking, perception, memory) to <a href="https://www.researchgate.net/publication/358001463_Cognitive_and_human_factors_in_legal_layperson_decision_making_Sources_of_bias_in_juror_decision_making">cause the effects of bias to snowball</a>. </p>
<p>Another origin of bias in jurors may come from “objective” and <a href="https://www.science.org/doi/epdf/10.1126/science.aat8443">scientific expert witnesses</a>. Researchers such as co-author Itiel Dror have <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/1556-4029.14697">shown</a> that expert witnesses are far from objective decision makers and that irrelevant contextual information (provided, potentially, through the police) can bias their judgments and cause errors.</p>
<p>The diagram below shows the <a href="https://pubs.acs.org/doi/10.1021/acs.analchem.0c00704">factors that might influence</a> a forensic expert’s analysis. Through presenting expert testimony, biased conclusions could end up influencing the jury. </p>
<figure class="align-center ">
<img alt="A graphic showing a pyramid shape representing the sources of juror bias in criminal trials." src="https://images.theconversation.com/files/448397/original/file-20220224-21-95m6ha.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/448397/original/file-20220224-21-95m6ha.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=426&fit=crop&dpr=1 600w, https://images.theconversation.com/files/448397/original/file-20220224-21-95m6ha.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=426&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/448397/original/file-20220224-21-95m6ha.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=426&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/448397/original/file-20220224-21-95m6ha.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=535&fit=crop&dpr=1 754w, https://images.theconversation.com/files/448397/original/file-20220224-21-95m6ha.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=535&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/448397/original/file-20220224-21-95m6ha.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=535&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The biases that expert decision makers are subject to, from case-specific information to the effects of human and cognitive factors on their choices.</span>
<span class="attribution"><span class="source">Itiel Dror</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<h2>Balancing the bias</h2>
<p>We have made several recommendations in our review. First, we suggest a jury selection procedure, using measures like the PJAQ, where jurors with prejudicial biases are weeded out from the jury pool.</p>
<p>Second, such procedures could also be used to create a jury with a representative pool of biases. As fallible beings, humans are likely to always have some form of bias. If the most negative of biases, such as racial biases, are removed from the jury pool, other biases could be counteracted through a mix of jurors with different beliefs and biases – for example people with confidence in the criminal justice system vs. people with little faith in the system <a href="https://www.tandfonline.com/doi/abs/10.1080/1068316031000116283">deliberating with one another</a>. More research is needed though, as very little has been conducted on <a href="https://psycnet.apa.org/record/2017-09577-009">jury deliberations</a>.</p>
<p>A third suggestion is for the criminal justice system to tackle bias by protecting forensic experts from undue influences, so that powerful but biased expert evidence does not influence the jury. For example, an expert’s testimony may be biased if they knew about <a href="https://www.researchgate.net/publication/339135339_An_inconvenient_truth_More_rigorous_and_ecologically_valid_research_is_needed_to_properly_understand_cognitive_bias_in_forensic_decisions">another piece of unrelated evidence</a>, such as a confession, during analysis.</p>
<p>Methods of counteracting bias in forensic examiners include using expert witnesses not associated with either side of the <a href="https://www.science.org/doi/full/10.1126/science.aat8443">adversarial system</a>, and for labs to use techniques such as <a href="https://www.sciencedirect.com/science/article/pii/S2589871X21000310?via%3Dihub">Linear Sequential Unmasking</a> (LSU). </p>
<p>LSU is a technique where forensic experts analyse the information in a specific sequence in isolation from any other reference material. So, for example, first they would analyse the evidence at the crime scene such as fingerprints. But they would not have access at this point to any material pertaining to the “target” suspect, such as their fingerprints. The reference material would then be analysed and later compared to the evidence gathered. LSU ensures sequencing of the relevant contextual information so that the more objective and less biasing information is prioritised.</p>
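The sequencing described above can be sketched as a short workflow. This is a loose illustration of the ordering constraint only, with invented function and variable names; real LSU protocols involve documented feature markings, not string comparisons.

```python
# Sketch of the Linear Sequential Unmasking ordering described above:
# the examiner records conclusions about the crime-scene evidence
# *before* ever seeing the suspect's reference material.

def analyse(material: str) -> dict:
    """Stand-in for the examiner's analysis; records the features found."""
    return {"source": material, "features": f"features-of-{material}"}

def lsu_workflow(crime_scene_evidence: str, reference_material: str) -> bool:
    # Step 1: analyse the evidence in isolation and commit to the findings.
    evidence_record = analyse(crime_scene_evidence)

    # Step 2: only then analyse the suspect's reference material.
    reference_record = analyse(reference_material)

    # Step 3: compare the two committed records.
    return evidence_record["features"] == reference_record["features"]

print(lsu_workflow("latent-print", "suspect-print"))
```

The point of the ordering is that by the time the reference material is opened, the evidence findings are already committed, so knowledge of the suspect cannot leak backwards into the analysis.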
<p>Bias is a significant issue in the criminal justice system and can lead to miscarriages of justice. Through researching the sources and effects, psychologists can aid the criminal justice system by helping those involved establish procedures that avoid the potential for bias to influence the process.</p><img src="https://counter.theconversation.com/content/176721/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Finding out what goes on behind jury decisions and the biases that influence them is hugely important if the criminal justice system is to work properly.Lee John Curley, Lecturer in Psychology, The Open UniversityItiel Dror, Senior Cognitive Neuroscience Researcher, UCLJames Munro, Psychology Technical Lead (Teaching & Research) School of Psychology & Counselling Psychology, The Open UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1750152022-01-31T14:44:26Z2022-01-31T14:44:26ZThe cognitive bias that tripped us up during the pandemic<p>The human brain is a marvellous machine, capable of handling complex information. To help us make sense of information quickly and make rapid decisions, it has learned to use shortcuts, called “heuristics”. Most of the time, these shortcuts help us to make good decisions. But sometimes they lead to cognitive biases. </p>
<p>Answer this question as quickly as you can without reading on: which European country was hit the hardest by the pandemic? </p>
<p>If you answered “Italy”, you’re wrong. But you’re not alone. Italy is not even in the top ten European countries by the number of <a href="https://www.statista.com/statistics/1110187/coronavirus-incidence-europe-by-country/">confirmed COVID cases</a> or <a href="https://www.statista.com/statistics/1111779/coronavirus-death-rate-europe-by-country/">deaths</a>.</p>
<p>It is easy to understand why people might give a wrong answer to this question – as happened when I played this game with friends. Italy was the first European country to be hit by the pandemic, or at least this is what <a href="https://www.thejournal.ie/italy-coronavirus-5038359-Mar2020/">we were told</a> at the beginning. And our perception of the situation formed early on with a focus on Italy. Later, of course, other countries were hit worse than Italy, but Italy is the name that got stuck in our heads. </p>
<p>The trick of this game is to ask people to answer quickly. When I gave friends time to think or look for evidence, they often came up with a different answer – some of them quite accurate. Cognitive biases are shortcuts and shortcuts are often used when there are limited resources – in this case, the resource is time.</p>
<p>This particular bias is called “<a href="https://link.springer.com/article/10.1007/s42001-021-00158-0">anchoring bias</a>”. It occurs when we rely too heavily on the first piece of information we receive about a topic and fail to update our perception when we receive new information.</p>
<p>As we show in <a href="https://link.springer.com/article/10.1007/s42001-021-00158-0">a recent work</a>, anchoring bias can take more complex forms, but in all of them, one feature of our brain is essential: it is easier to stick to the information we have stored first and try to work out our decisions and perceptions starting from that reference point – and often not going too far.</p>
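One common way to model this "anchor and insufficient adjustment" behaviour is to assume each new piece of evidence moves the belief only a fraction of the way toward it. The adjustment factor below is hypothetical (real values vary by person and task); the sketch simply shows that repeated corrections still leave the belief well short of the evidence.

```python
# A minimal "anchor and (insufficient) adjustment" sketch.
# The 0.3 adjustment factor is a made-up illustrative value.

def update_belief(anchor: float, new_evidence: float,
                  adjustment: float = 0.3) -> float:
    """Move only a fraction of the way from the anchor toward the evidence."""
    return anchor + adjustment * (new_evidence - anchor)

belief = 100.0  # the first figure heard (the anchor)
for evidence in [400.0, 400.0, 400.0]:  # the repeated, much larger true figure
    belief = update_belief(belief, evidence)
    print(round(belief, 1))  # → 190.0, then 253.0, then 297.1
```

Even after three full exposures to the true figure, the belief is still roughly a quarter below it: the first number heard keeps dragging the estimate down.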
<h2>Data deluge</h2>
<p>The COVID pandemic is remarkable for many things, but, as a data scientist, the one that stands out for me is the amount of data, facts, stats and figures that are available to pore over. </p>
<p>It was rather exciting to be able to regularly check the numbers online on portals such as <a href="https://coronavirus.jhu.edu/map.html">Johns Hopkins Coronavirus Resource Center</a> and <a href="https://ourworldindata.org/">Our World in Data</a>, or just tune in to almost any radio or TV station or news website to see the latest COVID statistics. Many TV channels introduced programme segments specifically to report those numbers daily.</p>
<p><strong>Johns Hopkins data portal</strong></p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/443452/original/file-20220131-19-1lszs0c.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Screenshot of Johns Hopkins COVID tracker, with lots of charts, maps and numbers." src="https://images.theconversation.com/files/443452/original/file-20220131-19-1lszs0c.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/443452/original/file-20220131-19-1lszs0c.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=284&fit=crop&dpr=1 600w, https://images.theconversation.com/files/443452/original/file-20220131-19-1lszs0c.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=284&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/443452/original/file-20220131-19-1lszs0c.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=284&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/443452/original/file-20220131-19-1lszs0c.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=357&fit=crop&dpr=1 754w, https://images.theconversation.com/files/443452/original/file-20220131-19-1lszs0c.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=357&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/443452/original/file-20220131-19-1lszs0c.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=357&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A wealth of COVID data.</span>
<span class="attribution"><a class="source" href="https://coronavirus.jhu.edu/map.html">Johns Hopkins</a></span>
</figcaption>
</figure>
<p>However, the firehose of COVID data that came at us is not compatible with the rate at which we can meaningfully use and handle that data. Our brain takes in the anchors, the first wave of numbers or other information, and sticks to them. </p>
<p>Later, when it is challenged by new numbers, it takes some time to switch to the new anchor and update. This eventually leads to data fatigue, when we stop paying attention to any new input and we forget the initial information, too. After all, what was the safe length for social distancing in the UK: <a href="https://www.reuters.com/article/us-health-coronavirus-distance-explainer-idUSKBN23U22W">one or two metres</a>? Oh no, <a href="https://icad.ie/1-5-meter-of-fear/">1.5 metres</a>, or <a href="https://www.bbc.com/news/uk-51506729">6 feet</a>. But six feet is 1.8 metres, no? Never mind. </p>
<p>The issues with COVID communication are not limited to the statistics describing the spread and prevalence of the pandemic or the safe distance we should keep from others. Initially, we were told that “herd immunity” appears once <a href="https://www.nature.com/articles/d41586-021-00728-2">60%-70% of the population</a> has gained immunity either through infection or vaccination. </p>
<p>Later, with more studies and analysis this number was more accurately predicted to be <a href="https://www.theguardian.com/commentisfree/2022/jan/10/herd-immunity-threshold-covid-new-variants">around 90%-95%</a>, which is meaningfully larger than the initial number. However, as shown in our study, the role of that initial number can be profound and a simple update wasn’t enough to remove it from people’s minds. This could to some extent explain the vaccine hesitancy that has been observed in many countries; after all, if enough other people are vaccinated, why should we be bothered to risk the vaccine’s side-effects? Never mind that the “enough” might not be enough.</p>
<p>The point here is not that we should stop the flow of information or ignore statistics and numbers. Instead, when we deal with information, we should learn to account for our cognitive limitations. If we were going through the pandemic all over again, I would be more careful with how much data exposure I got in order to avoid data fatigue. And when it comes to decisions, I would take time not to force my brain into shortcuts – I would check the latest data rather than relying on what I thought I knew. This way, my risk of cognitive bias would be minimised.</p><img src="https://counter.theconversation.com/content/175015/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Taha Yasseri receives funding from EPSRC. </span></em></p>Anchoring bias meant we found it hard to get rid of the first bit of information we heard.Taha Yasseri, Associate Professor, School of Sociology; Geary Fellow, Geary Institute for Public Policy, University College DublinLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1729682022-01-30T19:12:26Z2022-01-30T19:12:26ZHere’s why misinformation is a smaller problem than you think<figure><img src="https://images.theconversation.com/files/443122/original/file-20220128-19-na0h1j.jpg?ixlib=rb-1.1.0&rect=0%2C17%2C5882%2C3891&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>It’s widely believed that this is the age of misinformation, of alternative facts, and of conspiracy theories gone mainstream, from QAnon to anti-vaccine and anti-lockdown movements. </p>
<p>In this telling, claims spread by internet crackpots are amplified by partisan news networks and social media to the point that wild myths can now influence or even change governments. </p>
<p>But is this really the case, or are we inflating the problem of misinformation? Ironically, many of our common beliefs about the issue are, well, myths. </p>
<h2>Conspiratorial beliefs are held by a small minority</h2>
<p>How many people actually believe misinformation-fed conspiracy theories? It turns out, not many. Wild conspiracy theories <a href="https://theconversation.com/qanon-hasnt-gone-away-its-alive-and-kicking-in-states-across-the-country-154788">like QAnon</a> draw headlines, especially given their believers were amongst the rioters who stormed the US Capitol a year ago. But these beliefs are still rare. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/misinformation-disinformation-and-hoaxes-whats-the-difference-158491">Misinformation, disinformation and hoaxes: What’s the difference?</a>
</strong>
</em>
</p>
<hr>
<p>While surveys estimate the number of QAnon believers in the US to be as high as <a href="https://abcnews.go.com/Politics/majority-americans-trump-convicted-barred-holding-federal-office/story?id=75729878">15%</a>, this is likely due to “acquiescence bias”. This is the tendency for people to agree with whatever they’re asked in a survey, even statements like “the government, media, and financial worlds in the US are controlled by a group of Satan-worshipping pedophiles who run a global child sex trafficking operation.” As political scientists Seth Hill and Molly Roberts <a href="https://www.sethjhill.com/HillRoberts_AcqBiasPoliticalBeliefs.pdf">have demonstrated</a>, phrasing survey questions differently can slash the numbers who agree by half. </p>
<figure class="align-center ">
<img alt="the word " src="https://images.theconversation.com/files/443125/original/file-20220128-21-1sc1hiv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/443125/original/file-20220128-21-1sc1hiv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/443125/original/file-20220128-21-1sc1hiv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/443125/original/file-20220128-21-1sc1hiv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/443125/original/file-20220128-21-1sc1hiv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/443125/original/file-20220128-21-1sc1hiv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/443125/original/file-20220128-21-1sc1hiv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">How potent are lies and misinformation?</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>Of course, even if only a small percentage of us believe false or deliberately misleading information, there may be real consequences. In America, around 15% of adults <a href="https://www.nytimes.com/2021/12/25/world/the-unvaccinated-in-the-us-remain-defiant.html">refuse to get</a> a COVID vaccine. That, in turn, is leading to what’s been dubbed the <a href="https://time.com/6138566/pandemic-of-unvaccinated/">pandemic of the unvaccinated</a>.</p>
<p>Why do people fall for false information, even when it’s against their own direct interest, such as keeping themselves and their families alive? </p>
<h2>Are we really too gullible?</h2>
<p>A common answer is that people are <a href="https://politicalwire.com/2016/04/01/what-your-brain-really-thinks-about-donald-trump/">easily duped</a>. The ability of populists like Donald Trump to ride to power on the back of a series of <a href="https://www.independent.co.uk/news/world/americas/donald-trump-president-lies-and-mistruths-during-us-election-campaign-a7406821.html">false or misleading claims</a> would seem to be compelling evidence of such widespread credulity. Trump drove the “Birther myth” that Barack Obama was not born in the United States and peddled wildly inaccurate statistics on <a href="https://www.vox.com/2016/10/12/13255466/trump-murder-rate">crime rates</a> and <a href="https://www.politifact.com/factchecks/2015/sep/30/donald-trump/donald-trump-says-unemployment-rate-may-be-42-perc/">unemployment</a> throughout his campaign. </p>
<p>But the idea that only a few of us can resist the deluge of falsehoods is another myth. If people were so easily gulled, we’d all be the willing slaves of a manipulative elite! Rather, as French social and cognitive psychologist Hugo Mercier <a href="https://www.newscientist.com/article/mg24532700-300-why-the-human-race-may-be-less-gullible-than-you-think/">has argued</a>, people have “open vigilance” cognitive mechanisms that prevent this from happening. While we are open to letting in new information, our standard response is to treat that information sceptically.</p>
<h2>Are we just irrational?</h2>
<p>So how does misinformation slip through? First, our ability to critically evaluate information is far from perfect. While it was once common belief humans would always rationally act in our own best interests, research by Nobel Prize-winning economist <a href="https://www.scientificamerican.com/article/kahneman-excerpt-thinking-fast-and-slow/">Daniel Kahneman</a> and many others has shown we all have systematic cognitive errors such as the “availability heuristic” and the “omission bias”.</p>
<p><a href="https://www.forbes.com/sites/sarahwatts/2019/02/21/5-cognitive-biases-that-explain-why-people-still-dont-vaccinate/?sh=14b1a1b84414">Both errors</a> are involved in vaccine hesitancy. If rare vaccine side effects draw media attention, many people will fixate on this risk, despite how low it is. That’s the availability heuristic at work. </p>
<p>At the same time, people discount the risks associated with not taking an action (being unvaccinated), while overestimating the risks of taking an action (getting vaccinated). That’s the omission bias. </p>
<p>There is a link between susceptibility to misinformation and lower levels of <a href="https://royalsocietypublishing.org/doi/10.1098/rsos.201199">cognitive reasoning</a>. But irrationality is not the whole story. When it comes to explaining support for conspiracy theories like QAnon, we need to look beyond people’s numeracy skills. </p>
<h2>We’re team players</h2>
<p>As Mercier has pointed out, we’re more likely to believe a lie if it comes from a source we already trust. Ours is a deeply social species. We evolved to use culture – shared beliefs and practices – as a kind of <a href="https://theconversation.com/conspiracy-theories-how-belief-is-rooted-in-evolution-not-ignorance-128803">societal glue</a>. In practice, this means we sometimes suspend our disbelief just to get along. </p>
<p>Take, for example, the well-studied effect of political partisanship on American acceptance of the <a href="https://www.theatlantic.com/ideas/archive/2020/05/birtherism-and-trump/610978/">Birther myth</a>: by 2016, while 80% of Democrats believed that Barack Obama was born in the United States, only 25% of Republicans did. People accept misinformation like Birtherism and QAnon to fit in with their group.</p>
<h2>How can we help people taken in by misinformation?</h2>
<p>For some of us, the pandemic has brought with it an unwelcome challenge: trying to change the mind of a loved one swayed by misinformation about vaccination. </p>
<figure class="align-center ">
<img alt="Two female friends talking" src="https://images.theconversation.com/files/443126/original/file-20220128-15-azic6r.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/443126/original/file-20220128-15-azic6r.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/443126/original/file-20220128-15-azic6r.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/443126/original/file-20220128-15-azic6r.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/443126/original/file-20220128-15-azic6r.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/443126/original/file-20220128-15-azic6r.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/443126/original/file-20220128-15-azic6r.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Creating a common understanding is vital to give persuasion a chance.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>According to an influential theory known as the “<a href="https://daily.jstor.org/the-backfire-effect/">backfire effect</a>”, not only do people resist information running contrary to their prior beliefs, but confronting them with this information only increases their commitment to their prior belief.</p>
<p>If this theory was true, there would be no point in arguing. Luckily, the backfire or backlash effect is yet another <a href="https://link.springer.com/article/10.1007/s11109-018-9443-y">popular myth</a>. “Out of the hundreds of opportunities to document backlash in my own experimental work on persuasion, I’ve never seen it.” That’s Yale persuasion expert <a href="https://alexandercoppock.com/">Alexander Coppock</a>, who I corresponded with by email. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/radicalization-pipelines-how-targeted-advertising-on-social-media-drives-people-to-extremes-173568">Radicalization pipelines: How targeted advertising on social media drives people to extremes</a>
</strong>
</em>
</p>
<hr>
<p>Why does the myth persist? Coppock believes it’s because disagreement is unpleasant on a personal level. “When we try to persuade others, they don’t like it and they like us less for having tried,” Coppock said. What happens next? After we seemingly fail in our efforts at persuasion, we reassure ourselves the person holding the belief is simply wrong, if not stupid. </p>
<p>Our failed efforts at persuasion shouldn’t stop us trying. The <a href="https://link.springer.com/article/10.1007/s11109-018-9443-y">experimental evidence</a> clearly shows us that everyone, even strongly partisan people, can update their views when given accurate information. While some of us have further to go before we are fully convinced, clear, accurate information usually moves us in the <a href="https://www.cambridge.org/core/elements/abs/false-alarm/0A941DA922E709B1405D4640058E3442">right direction</a>.</p>
<p>The key is to avoid making it a partisan right/wrong issue. The more you can make someone else feel included and on the same team, the more empathy and trust you generate. </p>
<p>The more the other person feels understood, the better your chances are of bringing them back in from the wilds of misinformation.</p>
<p class="fine-print"><em><span>Paul Kenny does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>It can be easy to despair about the problem of misinformation. But the problem is smaller than you might think.Paul Kenny, Professor of Political Science, Australian Catholic UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1723272021-11-24T13:07:39Z2021-11-24T13:07:39ZDrivers and hand-held mobile phones: extending the ban won’t solve the problem – here’s why<figure><img src="https://images.theconversation.com/files/433456/original/file-20211123-15-14hsu90.jpg?ixlib=rb-1.1.0&rect=8%2C8%2C5742%2C3819&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/beautiful-woman-using-mobile-phone-while-689940928">wavebreakmedia/Shutterstock</a></span></figcaption></figure><p>The laws around mobile phone use while driving are to be tightened under new <a href="https://www.gov.uk/government/news/any-use-of-hand-held-mobile-phone-while-driving-to-become-illegal">UK government plans</a> to make any use of a hand-held phone illegal. From 2022, mobile phone law will be extended to cover taking photos or videos, scrolling through playlists or playing games while driving or stationary, say, at a traffic light. Use of a mobile phone ‘hands-free’, however, will still be allowed – even though research shows it is <a href="https://theconversation.com/car-firms-are-still-pushing-hands-free-phone-tech-despite-how-dangerous-it-is-75419">equally distracting</a>.</p>
<p>Currently, UK drivers using a hand-held mobile phone can only be prosecuted if it can be proven that they were using it for an “<a href="https://www.gov.uk/government/consultations/expanding-the-offence-of-using-a-hand-held-mobile-phone-while-driving-to-include-non-connected-mobile-application-actions/outcome/using-a-mobile-phone-while-driving-consultation-outcome">interactive communicative function</a>” such as calling or texting. The change in the law closes this loophole, and makes it easier for distracted drivers to be prosecuted, fined £200, and given six points on their licence.</p>
<p>According to <a href="https://www.gov.uk/government/consultations/expanding-the-offence-of-using-a-hand-held-mobile-phone-while-driving-to-include-non-connected-mobile-application-actions/outcome/using-a-mobile-phone-while-driving-consultation-outcome">the UK government</a>, 81% of people who responded to its consultation supported the move. This aligns with findings from roadside breakdown group RAC, whose <a href="https://www.rac.co.uk/drive/features/rac-report-on-motoring-2021/">annual report</a> on motoring regularly shows that mobile phone use by other drivers is a top concern for motorists.</p>
<p>But <a href="https://www.brake.org.uk/files/downloads/Reports/Direct-Line-Safe-Driving/In-vehicle-distraction-Direct-Line-Safe-Driving-Report-2019.pdf">data also shows</a> that many drivers who claim to support the law nevertheless continue to use their phones while behind the wheel. <a href="https://www.rac.co.uk/drive/features/rac-report-on-motoring-2021/">One survey</a> found that more than a quarter of drivers admitted to hand-held mobile phone use, at least occasionally. </p>
<p>So why do drivers who support the law, and acknowledge the dangers of distracted driving, still use their phones? The answer partly lies in driver attitudes and biases. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/car-firms-are-still-pushing-hands-free-phone-tech-despite-how-dangerous-it-is-75419">Car firms are still pushing hands-free phone tech – despite how dangerous it is</a>
</strong>
</em>
</p>
<hr>
<h2>What the evidence tells us</h2>
<p>Research <a href="https://psycnet.apa.org/record/2007-15150-012">consistently shows</a> that most drivers consider themselves to be above average at driving. Statistically speaking, of course, this is highly unlikely. But this “self-enhancement bias” gives drivers a rationale for believing <em>their</em> mobile phone use is safe, while condemning others for doing the same thing.</p>
<p>Phone-using drivers <a href="https://sites.tufts.edu/appliedcognition/files/2015/10/Why-drivers-use-cell-phones-and-support-legislation-to-restrict-this-practice.pdf">justify their behaviour</a> by claiming they are able <a href="https://pubmed.ncbi.nlm.nih.gov/28189943/">to modify</a> their mobile phone use depending on the driving situation, such as limiting use on busy roads. They believe they are able to multitask and <a href="https://pubmed.ncbi.nlm.nih.gov/25133486/">mitigate the risk</a> in a way that other drivers cannot. </p>
<p>Drivers with self-enhancement bias also often demonstrate “<a href="https://psycnet.apa.org/record/2007-15150-012">crash risk optimism</a>” – judging themselves to be at lower risk of a crash compared to other drivers.</p>
<p>In a sense, every journey a self-perceived above-average driver successfully completes while using a mobile phone appears to confirm to them that their behaviour is appropriate, and that the law is aimed at other drivers. This helps to explain why strong support for a tightened law in this area can coexist with high rates of offending.</p>
<figure class="align-center ">
<img alt="A man speaking on the phone while driving." src="https://images.theconversation.com/files/433459/original/file-20211123-13-1bdt0z0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/433459/original/file-20211123-13-1bdt0z0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/433459/original/file-20211123-13-1bdt0z0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/433459/original/file-20211123-13-1bdt0z0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/433459/original/file-20211123-13-1bdt0z0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/433459/original/file-20211123-13-1bdt0z0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/433459/original/file-20211123-13-1bdt0z0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Research tells us many drivers consider themselves to be above average at driving.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/men-cell-phone-use-while-driving-432261919">APM STOCK/Shutterstock</a></span>
</figcaption>
</figure>
<p>Education campaigns that, for example, feature fatal or serious collisions caused by a distracted driver can actually play into these biases. Such campaigns appear to confirm drivers’ belief that they could handle it, while these other “inferior” drivers could not.</p>
<p>For these over-confident drivers, perhaps the only deterrent would be the threat of enforcement. But in recent years, numbers of dedicated roads-policing officers in the UK <a href="https://www.pacts.org.uk/wp-content/uploads/Roads-Policing-Report-FinalV1-merged-1.pdf">have declined</a>, and the public has, apparently, noticed. In <a href="https://www.theaa.com/about-us/newsroom/driving-offence-enforcement">one survey</a>, 54% of respondents felt they were unlikely to be caught or punished for using a hand-held mobile phone while driving. </p>
<p>This combination of circumstances makes it very difficult to persuade drivers that they shouldn’t use their mobile phones behind the wheel. If a driver thinks they can safely multitask while also avoiding prosecution, what’s stopping them? </p>
<h2>We need to change attitudes</h2>
<p>The tightening of the law may encourage some drivers to think about their phone use, but it seems unlikely to solve the problem of mobile phone use among drivers, or to eliminate the harm it causes.</p>
<p>In a broader sense, changes to the law will never be able to keep pace with new technologies. <a href="https://theconversation.com/in-car-technology-are-we-being-sold-a-false-sense-of-security-117473">In-vehicle distractions</a>, such as interactive screens on the dashboard and digital assistants like Alexa, are emerging faster than the law can adapt. </p>
<p>If we want to reduce the <a href="https://www.gov.uk/government/statistical-data-sets/reported-road-accidents-vehicles-and-casualties-tables-for-great-britain">number of people killed</a> and seriously injured each year by drivers using their mobile phones, we have to <a href="https://viewer.joomag.com/mobileengaged-compendium-2021/0552788001608635854?short&">persuade drivers</a> not to do it regardless of whether or not they’ll get caught.</p>
<p>We need to challenge the narratives that drivers regularly deploy to justify their behaviour, and address driver biases head-on by providing education, based on psychological evidence, that’s harder for drivers to resist or deny. <a href="https://www.open.edu/openlearn/health-sports-psychology/psychology/are-you-focused-driver">Interactive education</a>, which allows drivers to experience their own distraction, rather than hearing about the failures of others, would be a good place to start. </p>
<p>If we don’t address driver attitudes, we won’t meaningfully address driver distraction, regardless of what the law says.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/five-vital-things-you-cant-do-properly-when-youre-on-your-phone-85308">Five vital things you can't do properly when you're on your phone</a>
</strong>
</em>
</p>
<hr>
<p class="fine-print"><em><span>Gemma Briggs has received funding from UKROEd. </span></em></p><p class="fine-print"><em><span>Helen Wells has received funding from UKROEd and The Road Safety Trust.</span></em></p>Many drivers still use mobile phones despite the fact that it’s illegal.Gemma Briggs, Senior Lecturer in Psychology, The Open UniversityHelen Wells, Senior Lecturer in Criminology, Keele UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1709112021-11-11T15:41:51Z2021-11-11T15:41:51ZHow cognitive biases and adverse events influence vaccine decisions (maybe even your own)<figure><img src="https://images.theconversation.com/files/430453/original/file-20211105-25-wktftj.jpg?ixlib=rb-1.1.0&rect=1041%2C431%2C3862%2C2110&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Vaccine hesitancy has been a growing challenge for more than a decade. Concerns about vaccine safety and adverse events are the most commonly cited reasons.</span> <span class="attribution"><span class="source">(AP Photo/Rogelio V. Solis) </span></span></figcaption></figure><iframe style="width: 100%; height: 175px; border: none; position: relative; z-index: 1;" allowtransparency="" src="https://narrations.ad-auris.com/widget/the-conversation-canada/how-cognitive-biases-and-adverse-events-influence-vaccine-decisions--maybe-even-your-own-" width="100%" height="400"></iframe>
<p>The <a href="https://www.who.int/immunization/research/forums_and_initiatives/1_RButler_VH_Threat_Child_Health_gvirf16.pdf">World Health Organization</a> recognized vaccine hesitancy as a growing challenge in 2011, and identified it as <a href="https://www.who.int/wer/2011/wer8621.pdf">a new priority topic</a>. This was mostly because of the return of vaccine-preventable diseases like <a href="https://doi.org/10.1038/s41390-019-0354-3">measles in Europe and the United States</a>. </p>
<p>Ten years later, in 2021, vaccine hesitancy has become an even more significant challenge, despite all these efforts. The COVID-19 pandemic has brought it to a peak: every effort to manage the pandemic depends on people’s willingness to be vaccinated. However, <a href="https://doi.org/10.3390/vaccines9020160">the numbers are not very promising, as a share of the population in every country is reluctant to vaccinate</a>.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="https://theconversation.com/ca/topics/vaccine-confidence-in-canada-107061">Click here for more articles in our series about vaccine confidence.</a></span>
</figcaption>
</figure>
<p>Vaccine hesitancy means “<a href="https://doi.org/10.1016/S2352-4642(19)30092-6">delay in acceptance or refusal of vaccines despite availability of vaccination services</a>.” Vaccine-hesitant people cite <a href="https://doi.org/10.1016/j.vaccine.2015.01.068">distrust in vaccine safety and concerns over vaccine adverse events</a> as the most common reasons for reluctance to get vaccinated. </p>
<p>Vaccines are given to healthy people to prevent a disease that might harm them in the future. But precisely because recipients are healthy at the time of vaccination, they may worry about the vaccine’s safety.</p>
<p>Our team of business analytics and artificial intelligence researchers at Concordia University, along with a professor of epidemiology at McGill University, has published a paper in the <a href="https://doi.org/10.1186/s12889-021-11745-1"><em>BMC Public Health</em></a> journal that investigated this critical concern from two perspectives. </p>
<p>First, we addressed vaccine safety concerns by analyzing data from vaccine adverse events systems. These are vaccine surveillance systems where adverse events following immunization are reported, monitored and stored in a database. Canada’s system is called the <a href="https://www.canada.ca/en/public-health/services/immunization/canadian-adverse-events-following-immunization-surveillance-system-caefiss.html#_About_the_system">Canadian Adverse Events Following Immunization Surveillance System (CAEFISS)</a>.</p>
<p>Second, we focused on cognitive science and highlighted the critical role of cognitive biases in people’s vaccination decision-making that might lead to vaccine hesitancy.</p>
<h2>Data-driven evidence to address vaccine safety</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/430455/original/file-20211105-19-6xhu5j.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A vaccination centre with Ontario Premier Doug Ford in the background touring the facility and a line of people waiting to greet him in the foreground" src="https://images.theconversation.com/files/430455/original/file-20211105-19-6xhu5j.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/430455/original/file-20211105-19-6xhu5j.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/430455/original/file-20211105-19-6xhu5j.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/430455/original/file-20211105-19-6xhu5j.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/430455/original/file-20211105-19-6xhu5j.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/430455/original/file-20211105-19-6xhu5j.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/430455/original/file-20211105-19-6xhu5j.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Ontario Premier Doug Ford tours a vaccine centre in Windsor, Ont. Distrust in vaccine safety and concerns over vaccine adverse events are the most cited reasons for vaccine hesitancy.</span>
<span class="attribution"><span class="source">THE CANADIAN PRESS/ Geoff Robins</span></span>
</figcaption>
</figure>
<p>A solution to mitigate distrust in vaccine safety is to <a href="https://doi.org/10.1177/0272989X15607855">provide evidence-based, meaningful information about vaccine safety and adverse events</a>. We followed this path and analyzed all the adverse events reported to the <a href="https://vaers.hhs.gov/">U.S. Vaccine Adverse Event Reporting System (VAERS)</a>.</p>
<p>We analyzed almost 294,000 reports over the eight years from 2011 to 2018. That works out to roughly 115 reports per million people per year, covering 87 vaccine types. The most frequently reported vaccines were those for chickenpox, influenza, pneumococcal bacteria and human papillomavirus (HPV).</p>
<p>Each VAERS report (representing one incident) involved an average of three adverse events, the most common being rashes, fever, swelling, pain and headaches. Only 5.5 per cent of the reports were marked as serious, resulting in hospitalization, disability, threats to life or death. The top adverse events in this group also include fever, pain, vomiting, headaches and shortness of breath. </p>
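The headline rate quoted above can be sanity-checked with a quick back-of-the-envelope calculation. The population figure below is an assumption (roughly 320 million US residents averaged over the period), not a number from the study:

```python
# Back-of-the-envelope check of the VAERS figures quoted above.
# Assumption (not from the paper): a US population of ~320 million.
reports = 294_000          # total VAERS reports, 2011-2018
years = 8
population_millions = 320  # assumed average US population over the period

reports_per_million_per_year = reports / years / population_millions
print(f"{reports_per_million_per_year:.0f} reports per million people per year")

serious_share = 0.055      # 5.5% of reports were marked as serious
serious_per_year = reports / years * serious_share
print(f"about {serious_per_year:.0f} serious reports per year")
```

The first figure lands close to the roughly 115 reports per million people quoted above, which is consistent with reading that rate as an annual one.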
<p>We also analyzed the vaccine adverse events reported to <a href="https://www.canada.ca/en/health-canada/services/drugs-health-products/medeffect-canada/canada-vigilance-program.html">Canada Vigilance</a>. Our findings were consistent with those from the VAERS.</p>
<p>We have provided our results in an <a href="https://public.tableau.com/app/profile/aefi/viz/VAERSAdverseEventFollowingImmuinzationAEFIReports2011-2018/Dashboard1">interactive dashboard</a>. Health-care professionals and others involved in vaccine communication can use this dashboard to provide evidence-based information to the public. Research suggests that <a href="https://doi.org/10.1016/j.vaccine.2016.03.087">summarized data is the best format for communicating vaccine safety information</a>, so using this dashboard in vaccination communication can help mitigate vaccine hesitancy and safety concerns, and increase trust in vaccines.</p>
<h2>The role of cognitive biases in vaccine hesitancy</h2>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/430454/original/file-20211105-17-1g2k9jz.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man in camouflage T-shirt and hat holding an anti-vaccine sign in the foreground, with a group of people in the background" src="https://images.theconversation.com/files/430454/original/file-20211105-17-1g2k9jz.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/430454/original/file-20211105-17-1g2k9jz.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=425&fit=crop&dpr=1 600w, https://images.theconversation.com/files/430454/original/file-20211105-17-1g2k9jz.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=425&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/430454/original/file-20211105-17-1g2k9jz.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=425&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/430454/original/file-20211105-17-1g2k9jz.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=535&fit=crop&dpr=1 754w, https://images.theconversation.com/files/430454/original/file-20211105-17-1g2k9jz.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=535&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/430454/original/file-20211105-17-1g2k9jz.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=535&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An anti-vaccine demonstrator in front of a hospital in Montréal in September 2021.</span>
<span class="attribution"><span class="source">THE CANADIAN PRESS/Paul Chiasson</span></span>
</figcaption>
</figure>
<p>In the second part of our study, after addressing concerns about vaccine adverse events, we examined the role of cognitive biases in vaccine hesitancy. We identified cognitive biases that might affect vaccine communication and decision-making. </p>
<p>As mentioned earlier, vaccines are administered to healthy people. When people are making decisions about vaccination, they might feel a degree of risk, ambiguity and uncertainty about the results, which can instigate cognitive biases in the decision-making process. Such cognitive biases might <a href="https://doi.org/10.1016/j.vaccine.2015.03.048">nudge people toward vaccine hesitancy</a>.</p>
<p>For example, while providing people with summarized vaccine safety information increases trust in vaccines, detailed vaccine adverse event reports can decrease it, because of two cognitive biases. </p>
<p>First, when vaccine-hesitant people read a detailed report about a vaccine adverse event, it gives them the chance to see what they want to see. This is an example of confirmation bias, the <a href="https://doi.org/10.1037/1089-2680.2.2.175">tendency to recall and interpret information that confirms our existing beliefs</a>. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/430452/original/file-20211105-23-1m8sflu.JPG?ixlib=rb-1.1.0&rect=89%2C40%2C2748%2C1859&q=45&auto=format&w=1000&fit=clip"><img alt="An upper arm bearing a heart tattoo and a small round bandage over an injection site" src="https://images.theconversation.com/files/430452/original/file-20211105-23-1m8sflu.JPG?ixlib=rb-1.1.0&rect=89%2C40%2C2748%2C1859&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/430452/original/file-20211105-23-1m8sflu.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/430452/original/file-20211105-23-1m8sflu.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/430452/original/file-20211105-23-1m8sflu.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/430452/original/file-20211105-23-1m8sflu.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/430452/original/file-20211105-23-1m8sflu.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/430452/original/file-20211105-23-1m8sflu.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Research suggests that summarized vaccine safety information is the best format for increasing trust in vaccines.</span>
<span class="attribution"><span class="source">THE CANADIAN PRESS/Kayle Neis</span></span>
</figcaption>
</figure>
<p>Second, a detailed adverse event report will also increase the event’s vividness, making it easier to recall the next time there is a decision to be made about taking a vaccine. That is the effect of availability bias, <a href="https://doi.org/10.1016/0010-0285(73)90033-9">the tendency to attribute more weight to factors that are easier to recall</a>.</p>
<p>We identified 15 cognitive biases in the vaccine decision-making process and categorized them into three groups:</p>
<ul>
<li><p><strong>Cognitive biases triggered by processing vaccine-related information</strong> include availability bias, as in the above example, as well as the framing effect, base rate neglect, the anchoring effect and authority bias.</p></li>
<li><p><strong>Cognitive biases triggered in vaccination decision-making</strong> include omission bias, which is when the results of not taking an action are viewed as less damaging than the results of taking action, even when this is not the case. Others include ambiguity aversion, optimism bias, present bias and protected values. </p></li>
<li><p><strong>Cognitive biases triggered by prior beliefs regarding vaccination</strong> include confirmation bias, as in the example above, as well as belief bias, shared information bias and the false consensus effect.</p></li>
</ul>
<p>The <a href="https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-021-11745-1/tables/1">full list of cognitive biases affecting vaccination decision-making and their examples is available here</a>. Public health officials and practitioners can use this list and customize their plans, interventions and other forms of vaccine communication to decrease vaccine hesitancy. </p>
<p>You can also check the list and see whether these biases have influenced your own vaccination decisions.</p>
<p><em>Do you have a question about COVID-19 vaccines? Email us at <a href="mailto:ca-vaccination@theconversation.com">ca-vaccination@theconversation.com</a> and vaccine experts will answer questions in upcoming articles.</em></p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>To help increase trust in vaccines, researchers analyzed data on adverse events to address safety concerns, and then used cognitive science to show how cognitive biases feed vaccine hesitancy.Hossein Azarpanah, PhD Candidate in Business Technology Management, Concordia UniversityLouise Pilote, Professor of Medicine, James McGill Chair, McGill UniversityMohsen Farhadloo, Assistant professor, John Molson School of Business, Concordia UniversityRustam Vahidov, Professor, Dept. of Supply Chain & Business Technology Management, Concordia UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1687112021-10-14T18:57:31Z2021-10-14T18:57:31ZPeople use mental shortcuts to make difficult decisions – even highly trained doctors delivering babies<figure><img src="https://images.theconversation.com/files/426007/original/file-20211012-23-181b1i7.jpg?ixlib=rb-1.1.0&rect=528%2C0%2C3665%2C2552&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The situation in the delivery room can change suddenly, and doctors need to react fast.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/obstetrician-at-work-royalty-free-image/154891439">naphtalina/E+ via Getty Images</a></span></figcaption></figure><p>Being a physician is a difficult job. Physicians must make complex, high-stakes decisions under severe pressure, with limited information about the patient, the disease and the treatment, while juggling personal and hospital priorities under the ever-present threat of lawsuits.</p>
<p>So what do physicians do in such highly uncertain situations?</p>
<p>Like all human beings, they unconsciously rely on quick rules that simplify complex decisions. Psychologists and economists call these mental shortcuts “<a href="https://thedecisionlab.com/biases/heuristics/">heuristics</a>.”</p>
<p>For example, if your sandwich falls on the floor, you might employ the <a href="https://theconversation.com/explainer-is-it-really-ok-to-eat-food-thats-fallen-on-the-floor-45541">five-second rule</a> to decide whether to pick it up and eat it or simply throw it away. That’s a heuristic – it allows you to approximate the correct decision quickly and easily, without getting mired in a lengthy mental debate about the pros and cons of each possible course of action.</p>
<p>While the average person’s reliance on heuristics is usually of little concern to society, the use of heuristics by physicians can have serious consequences.</p>
<h2>Heuristics in the delivery room</h2>
<p><a href="https://scholar.google.com/citations?user=_SUPGvQAAAAJ&hl=en&oi=ao">I’m a health economist</a> interested in the intersection of applied decision theory and health care.</p>
<p>There are all kinds of decisions a doctor must make while attending a birth: Should a woman continue to labor if the baby shows signs of distress? What interventions are warranted? Is it time for an emergency cesarean? The physician is responsible for life-and-death choices in a fraught, emotional environment.</p>
<p><a href="https://doi.org/10.1126/science.abc9818">In my recent research</a> published in the journal Science, I found that physicians use heuristics in the delivery room in ways that could potentially harm the mother and baby.</p>
<p>Looking at two academic hospitals’ data from more than 86,000 deliveries over 21 years, I saw that physicians who experienced complications during one patient’s delivery were more likely to switch to the other mode of delivery for their next patient, regardless of what the situation calls for. For example, if the physician’s last patient hemorrhaged during her vaginal delivery, the physician is more likely to perform a cesarean delivery for their next patient, even if a C-section is not indicated for that patient.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/426008/original/file-20211012-21-1f3axac.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="preparing for operation in darkened surgical suite" src="https://images.theconversation.com/files/426008/original/file-20211012-21-1f3axac.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/426008/original/file-20211012-21-1f3axac.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/426008/original/file-20211012-21-1f3axac.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/426008/original/file-20211012-21-1f3axac.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/426008/original/file-20211012-21-1f3axac.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/426008/original/file-20211012-21-1f3axac.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/426008/original/file-20211012-21-1f3axac.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A doctor may lean toward a C-section because the last vaginal delivery had complications.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/two-male-surgeons-performing-an-operation-royalty-free-image/1227588366">SDI Productions/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>It appears physicians may overcorrect after a bad outcome, tending to shy away from the decision they believe caused it – even when faced with a new patient with her own unique circumstances.</p>
<p>Complications during a vaginal delivery increased the likelihood of a subsequent C-section by up to 3.6%. That’s about 23 potentially inappropriate C-sections per year per hospital. Complications during a cesarean increased the likelihood of a subsequent vaginal delivery by up to 3.4%. That’s about 50 potentially inappropriate vaginal deliveries per year per hospital.</p>
<p>It’s a sizable effect, considering the baseline effect should be zero. And patients at poorly resourced hospitals that have higher numbers of labor-and-delivery complications are more likely to be affected – as physicians experience more difficulties, this heuristic means they’ll be swayed toward more potentially inappropriate delivery choices.</p>
<p>There is evidence that this switching heuristic is harmful to the affected patient. For instance, if the physician switches delivery modes after the prior delivery had complications, my analysis found that the second patient and/or her baby are more likely to die than if the physician had switched delivery modes after no prior complications.</p>
<h2>What’s behind the overcorrection</h2>
<p>Since psychologists <a href="https://news.stanford.edu/pr/96/960605tversky.html">Amos Tversky</a> and Nobel laureate <a href="https://scholar.google.com/citations?user=ImhakoAAAAAJ&hl=en&oi=sra">Daniel Kahneman</a> <a href="https://doi.org/10.1126/science.185.4157.1124">introduced the idea of heuristics and biases</a> into the mainstream a few decades ago, researchers have conducted hundreds of studies establishing the various types of heuristics people rely on in various contexts. While these mental shortcuts are often useful for making immediate judgments with limited information, they can lead people to make very predictable mistakes.</p>
<p>There are several heuristics that could explain the switching behavior I identified in the delivery room data.</p>
<p>Take, for instance, the “win-stay/lose-shift” heuristic, which has been seen in <a href="https://doi.org/10.1007/s00442-010-1679-0">birds</a>, <a href="https://doi.org/10.3389/fnbeh.2020.00137">bees</a>, <a href="https://doi.org/10.1126/science.185.4153.796">rats</a>, <a href="https://doi.org/10.1037/h0023269">monkeys</a>, <a href="https://doi.org/10.1037/h0028753">children</a> and <a href="https://www.pnas.org/content/101/52/18053.short">adults</a>. According to this heuristic, individuals stick with a strategy until they experience a “loss,” such as a labor-and-delivery complication. At that point, they switch strategies – like trying a different delivery mode.</p>
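<p>The rule itself is simple enough to sketch in a few lines of Python. This is a toy illustration of the heuristic, not the statistical model used in the study; the mode names and the complication signal are stand-ins:</p>

```python
def win_stay_lose_shift(outcomes, start="vaginal", other="cesarean"):
    """Simulate the win-stay/lose-shift heuristic.

    `outcomes` is a sequence of booleans: True means the delivery
    had a complication (a "loss"). After each loss the decision-maker
    switches modes; after a "win" they stay with the current mode.
    Returns the mode chosen for each successive patient.
    """
    mode = start
    choices = []
    for complication in outcomes:
        choices.append(mode)
        if complication:  # "lose" -> shift to the other mode
            mode = other if mode == start else start
        # otherwise: "win" -> stay with the current mode
    return choices

# A complication with patient 2 flips the mode for patient 3,
# even though patient 3's own circumstances are never consulted.
print(win_stay_lose_shift([False, True, False, False]))
# ['vaginal', 'vaginal', 'cesarean', 'cesarean']
```

<p>Note that the next patient’s clinical picture appears nowhere in the rule: only the previous outcome drives the switch, which is exactly what makes the shortcut fast and, in the delivery room, potentially harmful.</p>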
<p>Researchers have been especially interested in <a href="https://doi.org/10.1037/a0016755">how experts use heuristics</a>, since it is not immediately clear whether people with enhanced knowledge of their specialized fields fall prey to the same decision-making flaws that afflict the lay individual. There is growing evidence that experts in a variety of fields – such as <a href="https://doi.org/10.1016/j.jarmac.2017.09.001">forensic scientists</a>, <a href="https://doi.org/10.1016/0749-5978(87)90046-X">real estate agents</a>, <a href="https://doi.org/10.1257/aer.101.1.129">elite athletes</a>, <a href="https://doi.org/10.1073/pnas.1018033108">judges</a>, <a href="https://doi.org/10.1006/ijhc.2000.0393">academics</a> and <a href="https://pubmed.ncbi.nlm.nih.gov/12414468/">physicians</a> – do, in fact, rely on heuristics. Whether the use of such heuristics leads to poor outcomes – whether it can be called a “bias” – is still a matter of debate. </p>
<h2>Useful time-saver or dangerous bias?</h2>
<p>A bias arising from a heuristic implies a deviation from an “optimal” decision. However, identifying the optimal decision in real life is difficult because you usually don’t know what could have been: the counterfactual. This is especially relevant in medicine.</p>
<p>Take the win-stay/lose-shift strategy, for example. There are other studies that show that after “bad” events, physicians switch strategies. Missing an important diagnosis makes physicians test more on <a href="https://doi.org/10.5811/cpcem.2019.9.43975">subsequent patients</a>. Experiencing complications with a drug makes the physician <a href="https://doi.org/10.1136/bmj.38698.709572.55">less likely to prescribe it again</a>.</p>
<p>But from a learning perspective, it’s difficult to say that ordering a test after missing a diagnosis is a flawed heuristic. Ordering a test always increases the chance that the physician catches an important diagnosis. So it’s a useful heuristic in some instances – if, for example, the physician had been underordering tests before, or if the patient or insurer prefers shelling out the extra money for the chance to detect a cancer early.</p>
<p>In my study, though, switching delivery modes after complications offers no documented guarantees of avoiding future complications. And there is the added consideration of the short- and long-term <a href="https://doi.org/10.1016/S0140-6736(18)31930-5">health consequences of delivery-mode choice</a> for mother and baby. Further, people are generally less tolerant of having inappropriate medical procedures performed on them than they are of being the recipients of unnecessary tests.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/426009/original/file-20211012-19-jbfnr1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="doctor in scrubs using tablet" src="https://images.theconversation.com/files/426009/original/file-20211012-19-jbfnr1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/426009/original/file-20211012-19-jbfnr1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=379&fit=crop&dpr=1 600w, https://images.theconversation.com/files/426009/original/file-20211012-19-jbfnr1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=379&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/426009/original/file-20211012-19-jbfnr1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=379&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/426009/original/file-20211012-19-jbfnr1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=476&fit=crop&dpr=1 754w, https://images.theconversation.com/files/426009/original/file-20211012-19-jbfnr1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=476&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/426009/original/file-20211012-19-jbfnr1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=476&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Decision support can be built into the systems doctors use as they make choices about care.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/integrating-the-modern-and-medical-worlds-royalty-free-image/1162117375">shapecharge/E+ via Getty Images</a></span>
</figcaption>
</figure>
<h2>Tweaking the heuristic</h2>
<p>Can physicians’ reliance on heuristics be lessened? Possibly.</p>
<p><a href="https://hbr.org/2018/03/how-mayo-clinic-is-combating-information-overload-in-critical-care-units">Decision support systems</a> that assist physicians with important clinical decisions are gathering momentum in medicine, and could help doctors course-correct after emotional events such as delivery complications. </p>
<p>For example, such algorithms can be built into electronic health records and perform a variety of tasks: flag physician decisions that appear nonstandard, identify patients who could benefit from a particular decision, summarize clinical information in ways that make it easier for physicians to digest and so on. As long as physicians retain at least <a href="https://doi.org/10.1287/mnsc.2016.2643">some autonomy</a>, decision support systems can do just that – support doctors in making clinical decisions.</p>
<p><a href="http://dx.doi.org/10.2139/ssrn.2499658">Nudges</a> that unobtrusively encourage physicians to make certain decisions can be accomplished by tinkering with the way options are presented – what’s called “choice architecture.” They already work for <a href="https://doi.org/10.1007/s11606-017-4286-5">other clinical decisions</a>.</p>
<p>Imagine a policy objective is to reduce prescription of drug X. The medical record system could present drug X as the last option in the physician’s drop-down menu, or auto-populate a default drug Y that the physician could choose to override. The physician would still be able to prescribe drug X, but it would require a little more mental involvement on their part to do so.</p>
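<p>In code, the presentation change described above might look like the following. This is a hypothetical sketch of the choice-architecture idea; real electronic health record systems expose nothing this simple, and the drug names are placeholders:</p>

```python
def build_drug_menu(options, discourage, default):
    """Toy choice-architecture sketch: same options, different presentation.

    The discouraged drug stays fully available but is listed last,
    and the preferred drug is pre-selected, so prescribing anything
    else takes one extra deliberate action from the physician.
    """
    ordered = [d for d in options if d != discourage] + [discourage]
    return {"options": ordered, "preselected": default}

menu = build_drug_menu(["drug X", "drug Y", "drug Z"],
                       discourage="drug X", default="drug Y")
print(menu)
# {'options': ['drug Y', 'drug Z', 'drug X'], 'preselected': 'drug Y'}
```

<p>The key design property is that no option is removed: the nudge changes only ordering and defaults, preserving the physician’s autonomy while raising the cost of the discouraged choice slightly.</p>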
<p>However, it is critical to understand that physicians frequently make highly consequential decisions under immense pressure. Any administrative barriers that hinder their ability to respond to clinical information in real time might harm patients even more. Designing and implementing interventions aimed at improving physician decision-making will be a challenge.</p>
<p class="fine-print"><em><span>Manasvini Singh does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>It’s human nature to unconsciously rely on quick rules to help make spur-of-the-moment decisions. New research finds physicians use these shortcuts, too, which can be bad news for some patients.Manasvini Singh, Assistant Professor of Health Economics, UMass AmherstLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1694202021-10-07T12:23:43Z2021-10-07T12:23:43ZFacebook whistleblower Frances Haugen testified that the company’s algorithms are dangerous – here’s how they can manipulate you<figure><img src="https://images.theconversation.com/files/425065/original/file-20211006-19-17853wo.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4200%2C2797&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Whistleblower Frances Haugen called Facebook's algorithm dangerous.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/CongressFacebookWhistleblower/94e0a7294c644f869f79e7b96f7844b2/photo">Matt McClain/The Washington Post via AP</a></span></figcaption></figure><p>Former Facebook product manager Frances Haugen testified before the U.S. Senate on Oct. 5, 2021, that the company’s social media platforms “<a href="https://www.youtube.com/watch?v=rd2yC63DMBE">harm children, stoke division and weaken our democracy</a>.” </p>
<p>Haugen was the primary source for a <a href="https://www.wsj.com/articles/the-facebook-files-11631713039">Wall Street Journal exposé</a> on the company. She called Facebook’s algorithms dangerous, said Facebook executives were aware of the threat but put profits before people, and called on Congress to regulate the company.</p>
<p>Social media platforms rely heavily on people’s behavior to decide on the content that you see. In particular, they watch for content that people respond to or “engage” with by liking, commenting and sharing. <a href="https://www.axios.com/trolls-misinformation-facebook-twitter-iran-dd1a13b4-de1f-48cd-91a6-cac66202344b.html">Troll farms</a>, organizations that spread provocative content, exploit this by copying high-engagement content and <a href="https://www.technologyreview.com/2021/09/16/1035851/facebook-troll-farms-report-us-2020-election/">posting it as their own</a>, which helps them reach a wide audience.</p>
<p>As a <a href="https://scholar.google.com/citations?user=f_kGJwkAAAAJ&hl=en">computer scientist</a> who studies the ways large numbers of people interact using technology, I understand the logic of using the <a href="https://www.penguinrandomhouse.com/books/175380/the-wisdom-of-crowds-by-james-surowiecki/">wisdom of the crowds</a> in these algorithms. I also see substantial pitfalls in how the social media companies do so in practice.</p>
<h2>From lions on the savanna to likes on Facebook</h2>
<p>The concept of the wisdom of crowds assumes that using signals from others’ actions, opinions and preferences as a guide will lead to sound decisions. For example, <a href="https://www.investopedia.com/terms/p/prediction-market.asp">collective predictions</a> are normally more accurate than individual ones. Collective intelligence is used to predict <a href="https://augur.net/">financial markets, sports</a>, <a href="https://iemweb.biz.uiowa.edu/">elections</a> and even <a href="https://www.centerforhealthsecurity.org/our-work/Center-projects/disease-prediction-project.html">disease outbreaks</a>. </p>
<p>Throughout millions of years of evolution, these principles have been coded into the human brain in the form of cognitive biases that come with names like <a href="https://doi.org/10.1086/208859">familiarity</a>, <a href="https://dictionary.apa.org/mere-exposure-effect">mere exposure</a> and <a href="https://www.psychologytoday.com/us/blog/stronger-the-broken-places/201708/the-bandwagon-effect">bandwagon effect</a>. If everyone starts running, you should also start running; maybe someone saw a lion coming and running could save your life. You may not know why, but it’s wiser to ask questions later. </p>
<p>Your brain picks up clues from the environment – including your peers – and uses <a href="https://global.oup.com/academic/product/simple-heuristics-that-make-us-smart-9780195143812">simple rules</a> to quickly translate those signals into decisions: Go with the winner, follow the majority, copy your neighbor. These rules work remarkably well in typical situations because they are based on sound assumptions. For example, they assume that people often act rationally, it is unlikely that many are wrong, the past predicts the future, and so on.</p>
<p>Technology allows people to access signals from much larger numbers of other people, most of whom they do not know. Artificial intelligence applications make heavy use of these popularity or “engagement” signals, from selecting search engine results to recommending music and videos, and from suggesting friends to ranking posts on news feeds. </p>
<h2>Not everything viral deserves to be</h2>
<p>Our research shows that virtually all web technology platforms, such as social media and news recommendation systems, have a strong <a href="http://doi.org/10.1002/asi.24121">popularity bias</a>. When applications are driven by cues like engagement rather than explicit search engine queries, popularity bias can lead to harmful unintended consequences. </p>
<p>Social media like Facebook, Instagram, Twitter, YouTube and TikTok rely heavily on AI algorithms to rank and recommend content. These algorithms take as input what you like, comment on and share – in other words, content you engage with. The goal of the algorithms is to maximize engagement by finding out what people like and ranking it at the top of their feeds. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/doWZHFnVPQ8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A primer on the Facebook algorithm.</span></figcaption>
</figure>
<p>On the surface this seems reasonable. If people like credible news, expert opinions and fun videos, these algorithms should identify such high-quality content. But the wisdom of the crowds makes a key assumption here: that recommending what is popular will help high-quality content “bubble up.” </p>
<p>We <a href="http://doi.org/10.1038/s41598-018-34203-2">tested this assumption</a> by studying an algorithm that ranks items using a mix of quality and popularity. We found that in general, popularity bias is more likely to lower the overall quality of content. The reason is that engagement is not a reliable indicator of quality when few people have been exposed to an item. In these cases, engagement generates a noisy signal, and the algorithm is likely to amplify this initial noise. Once the popularity of a low-quality item is large enough, it will keep getting amplified. </p>
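<p>The amplification dynamic can be seen in a deliberately tiny simulation. This is a toy illustration of the feedback loop, not the model from the paper; the weights and round counts are arbitrary:</p>

```python
def run_feed(quality, initial_pop, weight, rounds):
    """Toy model of a popularity-biased ranking loop.

    Each round, the item with the highest score
    (quality + weight * popularity) is shown first and receives
    one unit of engagement, which raises its future score --
    so early noise gets locked in and amplified.
    """
    pop = list(initial_pop)
    for _ in range(rounds):
        scores = [q + weight * p for q, p in zip(quality, pop)]
        top = scores.index(max(scores))
        pop[top] += 1
    return pop

quality = [0.9, 0.2]   # item 0 is high quality, item 1 is low
noisy_start = [0, 3]   # but item 1 got a lucky early burst

# With heavy popularity weighting, the low-quality item keeps winning:
print(run_feed(quality, noisy_start, weight=1.0, rounds=10))  # [0, 13]

# With popularity ignored, quality decides:
print(run_feed(quality, noisy_start, weight=0.0, rounds=10))  # [10, 3]
```

<p>The low-quality item’s lucky first few engagements dominate its score whenever popularity carries enough weight, which is the “noisy signal gets amplified” failure mode in miniature.</p>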
<p>Algorithms aren’t the only thing affected by engagement bias – it can <a href="https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/">affect people</a> too. Evidence shows that information is transmitted via “<a href="https://doi.org/10.1371/journal.pone.0184148">complex contagion</a>,” meaning the more times people are exposed to an idea online, the more likely they are to adopt and reshare it. When social media tells people an item is going viral, their cognitive biases kick in and translate into the irresistible urge to pay attention to it and share it.</p>
<h2>Not-so-wise crowds</h2>
<p>We recently ran an experiment using <a href="https://fakey.iuni.iu.edu/">a news literacy app called Fakey</a>. It is a game developed by our lab that simulates a news feed like those of Facebook and Twitter. Players see a mix of current articles from fake news, junk science, hyperpartisan and conspiratorial sources, as well as mainstream sources. They get points for sharing or liking news from reliable sources and for flagging low-credibility articles for fact-checking. </p>
<p>We found that players are <a href="https://doi.org/10.37016/mr-2020-033">more likely to like or share and less likely to flag</a> articles from low-credibility sources when players can see that many other users have engaged with those articles. Exposure to the engagement metrics thus creates a vulnerability.</p>
<p><iframe id="HoqGE" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/HoqGE/5/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>The wisdom of the crowds fails because it is built on the false assumption that the crowd is made up of diverse, independent sources. There may be several reasons this is not the case. </p>
<p>First, because of people’s tendency to associate with similar people, their online neighborhoods are not very diverse. The ease with which social media users can unfriend those with whom they disagree pushes people into homogeneous communities, often referred to as <a href="https://doi.org/10.1007/s42001-020-00084-7">echo chambers</a>. </p>
<p>Second, because many people’s friends are friends of one another, they influence one another. A <a href="https://doi.org/10.1126/science.1121066">famous experiment</a> demonstrated that knowing what music your friends like affects your own stated preferences. Your social desire to conform distorts your independent judgment. </p>
<p>Third, popularity signals can be gamed. Over the years, search engines have developed sophisticated techniques to counter so-called “<a href="https://www.webopedia.com/TERM/L/link_farming.html">link farms</a>” and other schemes to manipulate search algorithms. Social media platforms, on the other hand, are just beginning to learn about their own <a href="https://theconversation.com/misinformation-on-social-media-can-technology-save-us-69264">vulnerabilities</a>. </p>
<p>People aiming to manipulate the information market have created <a href="https://www.washingtonpost.com/technology/2020/10/13/black-fake-twitter-accounts-for-trump/">fake accounts</a>, like trolls and <a href="https://cacm.acm.org/magazines/2016/7/204021-the-rise-of-social-bots/fulltext">social bots</a>, and <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/18075">organized</a> <a href="https://www.washingtonpost.com/politics/turning-point-teens-disinformation-trump/2020/09/15/c84091ae-f20a-11ea-b796-2dd09962649c_story.html">fake networks</a>. They have <a href="http://doi.org/10.1038/s41467-018-06930-7">flooded the network</a> to create the appearance that a <a href="https://www.newsweek.com/2020/10/23/qanon-conspiracy-theories-draw-new-believers-scientists-take-aim-misinformation-pandemic-1538901.html">conspiracy theory</a> or a <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14127">political candidate</a> is popular, tricking both platform algorithms and people’s cognitive biases at once. They have even <a href="https://doi.org/10.1038/s41586-019-1507-6">altered the structure of social networks</a> to create <a href="https://doi.org/10.1371/journal.pone.0147617">illusions about majority opinions</a>. </p>
<h2>Dialing down engagement</h2>
<p>What to do? Technology platforms are currently on the defensive. They are becoming more <a href="https://www.nytimes.com/2020/10/09/technology/twitter-election-ban-features.html">aggressive</a> during elections in <a href="https://www.socialmediatoday.com/news/facebook-outlines-its-evolving-efforts-to-combat-misinformation-ahead-of-ne/597129/">taking down fake accounts and harmful misinformation</a>. But these efforts can be akin to a game of <a href="https://www.marketplace.org/shows/marketplace-tech/facebook-plays-whack-a-mole-with-foreign-election-interference/">whack-a-mole</a>. </p>
<p>A different, preventive approach would be to add <a href="https://www.theguardian.com/commentisfree/2020/jun/29/social-distancing-social-media-facebook-misinformation">friction</a>. In other words, to slow down the process of spreading information. High-frequency behaviors such as automated liking and sharing could be inhibited by <a href="https://www.cloudflare.com/learning/bots/how-captchas-work/">CAPTCHA</a> tests, which require a human to respond, or fees. Not only would this decrease opportunities for manipulation, but with less information people would be able to pay more attention to what they see. It would leave less room for engagement bias to affect people’s decisions.</p>
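<p>A friction rule of this kind could be as simple as a sliding-window check. The thresholds below are invented for illustration; this is a sketch of the idea, not any platform’s actual policy:</p>

```python
def needs_challenge(share_times, now, window=60.0, limit=5):
    """Toy friction rule: if a user has shared `limit` or more
    times within the past `window` seconds, require a CAPTCHA
    (or a fee) before the next share goes through.
    `share_times` are timestamps in seconds."""
    recent = [t for t in share_times if now - t <= window]
    return len(recent) >= limit

# Five shares within 25 seconds trips the challenge;
# a human-paced pattern does not.
print(needs_challenge([0, 5, 10, 15, 20], now=25))  # True
print(needs_challenge([0, 30], now=60))             # False
```

<p>Automated accounts hit the threshold almost immediately, while ordinary users rarely notice it – which is what makes friction a preventive measure rather than a punitive one.</p>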
<p>It would also help if social media companies adjusted their algorithms to rely less on engagement signals and more on quality signals to determine the content they serve you. Perhaps the whistleblower revelations will provide the necessary impetus.</p>
<p><em>This is an updated version of an <a href="https://theconversation.com/facebooks-algorithms-fueled-massive-foreign-propaganda-campaigns-during-the-2020-election-heres-how-algorithms-can-manipulate-you-168229">article originally published on Sept. 20, 2021</a>.</em></p>
<p class="fine-print"><em><span>Filippo Menczer receives funding from Knight Foundation, Craig Newmark Philanthropies, DARPA and AFOSR.</span></em></p>You have evolved to tap into the wisdom of the crowds. But on social media, your cognitive biases can lead you astray, something organized disinformation campaigns count on.Filippo Menczer, Luddy Distinguished Professor of Informatics and Computer Science, Indiana UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1682292021-09-20T12:31:29Z2021-09-20T12:31:29ZFacebook’s algorithms fueled massive foreign propaganda campaigns during the 2020 election – here’s how algorithms can manipulate you<figure><img src="https://images.theconversation.com/files/421912/original/file-20210917-27-1onwiap.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C3530%2C2373&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Facebook has known that its algorithms enable trolls to spread propaganda.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/facebook-logo-and-a-laptop-are-pictured-in-this-news-photo/1234311508">STR/NurPhoto via Getty Images</a></span></figcaption></figure><p>An internal Facebook report found that the social media platform’s algorithms – the rules its computers follow in deciding the content that you see – enabled disinformation campaigns based in Eastern Europe to reach nearly half of all Americans in the run-up to the 2020 presidential election, according to a <a href="https://www.technologyreview.com/2021/09/16/1035851/facebook-troll-farms-report-us-2020-election/">report in Technology Review</a>.</p>
<p>The campaigns produced the most popular pages for Christian and Black American content, and overall reached 140 million U.S. users per month. Seventy-five percent of the people exposed to the content hadn’t followed any of the pages. People saw the content because Facebook’s content-recommendation system put it into their news feeds. </p>
<p>Social media platforms rely heavily on people’s behavior to decide on the content that you see. In particular, they watch for content that people respond to or “engage” with by liking, commenting and sharing. <a href="https://www.axios.com/trolls-misinformation-facebook-twitter-iran-dd1a13b4-de1f-48cd-91a6-cac66202344b.html">Troll farms</a>, organizations that spread provocative content, exploit this by copying high-engagement content and <a href="https://www.technologyreview.com/2021/09/16/1035851/facebook-troll-farms-report-us-2020-election/">posting it as their own</a>. </p>
<p>As a <a href="https://scholar.google.com/citations?user=f_kGJwkAAAAJ&hl=en">computer scientist</a> who studies the ways large numbers of people interact using technology, I understand the logic of using the <a href="https://www.penguinrandomhouse.com/books/175380/the-wisdom-of-crowds-by-james-surowiecki/">wisdom of the crowds</a> in these algorithms. I also see substantial pitfalls in how the social media companies do so in practice.</p>
<h2>From lions on the savanna to likes on Facebook</h2>
<p>The concept of the wisdom of crowds assumes that using signals from others’ actions, opinions and preferences as a guide will lead to sound decisions. For example, <a href="https://www.investopedia.com/terms/p/prediction-market.asp">collective predictions</a> are normally more accurate than individual ones. Collective intelligence is used to predict <a href="https://augur.net/">financial markets, sports</a>, <a href="https://iemweb.biz.uiowa.edu/">elections</a> and even <a href="https://www.centerforhealthsecurity.org/our-work/Center-projects/disease-prediction-project.html">disease outbreaks</a>. </p>
<p>Throughout millions of years of evolution, these principles have been coded into the human brain in the form of cognitive biases that come with names like <a href="https://doi.org/10.1086/208859">familiarity</a>, <a href="http://socialpsychonline.com/2016/03/the-mere-exposure-effect/">mere exposure</a> and <a href="https://www.psychologytoday.com/us/blog/stronger-the-broken-places/201708/the-bandwagon-effect">bandwagon effect</a>. If everyone starts running, you should also start running; maybe someone saw a lion coming and running could save your life. You may not know why, but it’s wiser to ask questions later. </p>
<p>Your brain picks up clues from the environment – including your peers – and uses <a href="https://global.oup.com/academic/product/simple-heuristics-that-make-us-smart-9780195143812">simple rules</a> to quickly translate those signals into decisions: Go with the winner, follow the majority, copy your neighbor. These rules work remarkably well in typical situations because they are based on sound assumptions. For example, they assume that people often act rationally, it is unlikely that many are wrong, the past predicts the future, and so on.</p>
<p>Technology allows people to access signals from much larger numbers of other people, most of whom they do not know. Artificial intelligence applications make heavy use of these popularity or “engagement” signals, from selecting search engine results to recommending music and videos, and from suggesting friends to ranking posts on news feeds. </p>
<h2>Not everything viral deserves to be</h2>
<p>Our research shows that virtually all web technology platforms, such as social media and news recommendation systems, have a strong <a href="http://doi.org/10.1002/asi.24121">popularity bias</a>. When applications are driven by cues like engagement rather than explicit search engine queries, popularity bias can lead to harmful unintended consequences. </p>
<p>Social media like Facebook, Instagram, Twitter, YouTube and TikTok rely heavily on AI algorithms to rank and recommend content. These algorithms take as input what you like, comment on and share – in other words, content you engage with. The goal of the algorithms is to maximize engagement by finding out what people like and ranking it at the top of their feeds. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/doWZHFnVPQ8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A primer on the Facebook algorithm.</span></figcaption>
</figure>
<p>On the surface this seems reasonable. If people like credible news, expert opinions and fun videos, these algorithms should identify such high-quality content. But the wisdom of the crowds makes a key assumption here: that recommending what is popular will help high-quality content “bubble up.” </p>
<p>We <a href="http://doi.org/10.1038/s41598-018-34203-2">tested this assumption</a> by studying an algorithm that ranks items using a mix of quality and popularity. We found that in general, popularity bias is more likely to lower the overall quality of content. The reason is that engagement is not a reliable indicator of quality when few people have been exposed to an item. In these cases, engagement generates a noisy signal, and the algorithm is likely to amplify this initial noise. Once the popularity of a low-quality item is large enough, it will keep getting amplified. </p>
<p>Algorithms aren’t the only thing affected by engagement bias – it can <a href="https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/">affect people</a>, too. Evidence shows that information is transmitted via “<a href="https://doi.org/10.1371/journal.pone.0184148">complex contagion</a>,” meaning the more times people are exposed to an idea online, the more likely they are to adopt and reshare it. When social media tells people an item is going viral, their cognitive biases kick in and translate into the irresistible urge to pay attention to it and share it.</p>
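<p>Complex contagion can be captured in a one-line model: each additional exposure raises the chance of adoption. The base rate, per-exposure boost and cap below are assumed numbers for illustration, not measured values:</p>

```python
# Illustrative complex-contagion model: repeated exposure drives adoption.
def adoption_probability(exposures, base=0.05, boost=0.15, cap=0.95):
    # Each exposure adds to the chance of adopting and resharing, up to a cap.
    return min(cap, base + boost * exposures)

# One exposure rarely suffices; repeated exposure makes sharing likely.
assert adoption_probability(1) < 0.5 < adoption_probability(5)
```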
<h2>Not-so-wise crowds</h2>
<p>We recently ran an experiment using <a href="https://fakey.iuni.iu.edu/">a news literacy app called Fakey</a>, a game developed by our lab that simulates a news feed like those of Facebook and Twitter. Players see a mix of current articles from fake news, junk science, hyperpartisan and conspiratorial sources, as well as mainstream sources. They get points for sharing or liking news from reliable sources and for flagging low-credibility articles for fact-checking. </p>
<p>We found that players are <a href="https://doi.org/10.37016/mr-2020-033">more likely to like or share and less likely to flag</a> articles from low-credibility sources when they can see that many other users have engaged with those articles. Exposure to the engagement metrics thus creates a vulnerability.</p>
<p><iframe id="HoqGE" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/HoqGE/3/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>The wisdom of the crowds fails because it is built on the false assumption that the crowd is made up of diverse, independent sources. There may be several reasons this is not the case. </p>
<p>First, because of people’s tendency to associate with similar people, their online neighborhoods are not very diverse. The ease with which social media users can unfriend those with whom they disagree pushes people into homogeneous communities, often referred to as <a href="https://doi.org/10.1007/s42001-020-00084-7">echo chambers</a>. </p>
<p>Second, because many people’s friends are friends of one another, they influence one another. A <a href="https://doi.org/10.1126/science.1121066">famous experiment</a> demonstrated that knowing what music your friends like affects your own stated preferences. Your social desire to conform distorts your independent judgment. </p>
<p>Third, popularity signals can be gamed. Over the years, search engines have developed sophisticated techniques to counter so-called “<a href="https://www.webopedia.com/TERM/L/link_farming.html">link farms</a>” and other schemes to manipulate search algorithms. Social media platforms, on the other hand, are just beginning to learn about their own <a href="https://theconversation.com/misinformation-on-social-media-can-technology-save-us-69264">vulnerabilities</a>. </p>
<p>People aiming to manipulate the information market have created <a href="https://www.washingtonpost.com/technology/2020/10/13/black-fake-twitter-accounts-for-trump/">fake accounts</a>, like trolls and <a href="https://cacm.acm.org/magazines/2016/7/204021-the-rise-of-social-bots/fulltext">social bots</a>, and <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/18075">organized</a> <a href="https://www.washingtonpost.com/politics/turning-point-teens-disinformation-trump/2020/09/15/c84091ae-f20a-11ea-b796-2dd09962649c_story.html">fake networks</a>. They have <a href="http://doi.org/10.1038/s41467-018-06930-7">flooded the network</a> to create the appearance that a <a href="https://www.newsweek.com/2020/10/23/qanon-conspiracy-theories-draw-new-believers-scientists-take-aim-misinformation-pandemic-1538901.html">conspiracy theory</a> or a <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14127">political candidate</a> is popular, tricking both platform algorithms and people’s cognitive biases at once. They have even <a href="https://doi.org/10.1038/s41586-019-1507-6">altered the structure of social networks</a> to create <a href="https://doi.org/10.1371/journal.pone.0147617">illusions about majority opinions</a>. </p>
<h2>Dialing down engagement</h2>
<p>What to do? Technology platforms are currently on the defensive. They are becoming more <a href="https://www.nytimes.com/2020/10/09/technology/twitter-election-ban-features.html">aggressive</a> during elections in <a href="https://www.socialmediatoday.com/news/facebook-outlines-its-evolving-efforts-to-combat-misinformation-ahead-of-ne/597129/">taking down fake accounts and harmful misinformation</a>. But these efforts can be akin to a game of <a href="https://www.marketplace.org/shows/marketplace-tech/facebook-plays-whack-a-mole-with-foreign-election-interference/">whack-a-mole</a>. </p>
<p>A different, preventive approach would be to add <a href="https://www.theguardian.com/commentisfree/2020/jun/29/social-distancing-social-media-facebook-misinformation">friction</a>. In other words, to slow down the process of spreading information. High-frequency behaviors such as automated liking and sharing could be inhibited by <a href="https://www.cloudflare.com/learning/bots/how-captchas-work/">CAPTCHA</a> tests or fees. Not only would this decrease opportunities for manipulation, but with less information people would be able to pay more attention to what they see. It would leave less room for engagement bias to affect people’s decisions.</p>
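<p>One way to add that friction is a sliding-window rate limiter on sharing. This is a minimal sketch; the limits are illustrative assumptions, and a platform could respond with a CAPTCHA challenge rather than a hard block:</p>

```python
# Sketch of "adding friction": a sliding-window limiter on high-frequency sharing.
class ShareLimiter:
    def __init__(self, max_shares, window_seconds):
        self.max_shares = max_shares
        self.window_seconds = window_seconds
        self.timestamps = []

    def allow(self, now):
        # Drop shares that have aged out of the window, then check the quota.
        self.timestamps = [t for t in self.timestamps
                           if now - t < self.window_seconds]
        if len(self.timestamps) < self.max_shares:
            self.timestamps.append(now)
            return True
        return False

limiter = ShareLimiter(max_shares=2, window_seconds=60)
print([limiter.allow(t) for t in (0, 1, 2, 61)])  # [True, True, False, True]
```

<p>The third share within the window is blocked; once a minute has passed, sharing resumes, slowing cascades without silencing anyone.</p>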
<p>It would also help if social media companies adjusted their algorithms to rely less on engagement to determine the content they serve you. Perhaps the revelations of Facebook’s knowledge of troll farms exploiting engagement will provide the necessary impetus.</p>
<p><em>This is an updated version of an <a href="https://theconversation.com/how-engagement-makes-you-vulnerable-to-manipulation-and-misinformation-on-social-media-145375">article originally published on Sept. 10, 2021</a>.</em></p><img src="https://counter.theconversation.com/content/168229/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Filippo Menczer receives funding from Knight Foundation, Craig Newmark Philanthropies, DARPA and AFOSR. </span></em></p>You have evolved to tap into the wisdom of the crowds. But on social media your cognitive biases can lead you astray, something organized disinformation campaigns count on.Filippo Menczer, Luddy Distinguished Professor of Informatics and Computer Science, Indiana UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1453752021-09-10T12:47:06Z2021-09-10T12:47:06ZHow ‘engagement’ makes you vulnerable to manipulation and misinformation on social media<figure><img src="https://images.theconversation.com/files/420340/original/file-20210909-13-bbylok.jpg?ixlib=rb-1.1.0&rect=0%2C7%2C4802%2C3184&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">People tend to view social media posts more favorably when more people have liked, commented on or shared them, regardless of the quality of the posts.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/this-picture-taken-on-december-18-2018-shows-myanmar-youths-news-photo/1074380170">Sai Aung Main/AFP via Getty Images</a></span></figcaption></figure><p><em>An updated version of this article was published on Sept. 20, 2021. <a href="https://theconversation.com/facebooks-algorithms-fueled-massive-foreign-propaganda-campaigns-during-the-2020-election-heres-how-algorithms-can-manipulate-you-168229">Read it here</a>.</em></p>
<p>Facebook has been <a href="https://about.fb.com/news/2021/02/reducing-political-content-in-news-feed/">quietly experimenting</a> with reducing the amount of political content it puts in users’ news feeds. The move is a tacit acknowledgment that the way the company’s algorithms work <a href="https://www.wired.com/story/facebook-quietly-makes-big-admission-political-content/">can be a problem</a>.</p>
<p>The heart of the matter is the distinction between provoking a response and providing content people want. Social media algorithms – the rules their computers follow in deciding the content that you see – rely heavily on people’s behavior to make these decisions. In particular, they watch for content that people respond to or “engage” with by liking, commenting and sharing.</p>
<p>As a <a href="https://scholar.google.com/citations?user=f_kGJwkAAAAJ&hl=en">computer scientist</a> who studies the ways large numbers of people interact using technology, I understand the logic of using the <a href="https://www.penguinrandomhouse.com/books/175380/the-wisdom-of-crowds-by-james-surowiecki/">wisdom of the crowds</a> in these algorithms. I also see substantial pitfalls in how the social media companies do so in practice.</p>
<h2>From lions on the savanna to likes on Facebook</h2>
<p>The concept of the wisdom of crowds assumes that using signals from others’ actions, opinions and preferences as a guide will lead to sound decisions. For example, <a href="https://www.investopedia.com/terms/p/prediction-market.asp">collective predictions</a> are normally more accurate than individual ones. Collective intelligence is used to predict <a href="https://augur.net/">financial markets, sports</a>, <a href="https://iemweb.biz.uiowa.edu/">elections</a> and even <a href="https://www.centerforhealthsecurity.org/our-work/Center-projects/disease-prediction-project.html">disease outbreaks</a>. </p>
<p>Throughout millions of years of evolution, these principles have been coded into the human brain in the form of cognitive biases that come with names like <a href="https://doi.org/10.1086/208859">familiarity</a>, <a href="http://socialpsychonline.com/2016/03/the-mere-exposure-effect/">mere-exposure</a> and <a href="https://www.psychologytoday.com/us/blog/stronger-the-broken-places/201708/the-bandwagon-effect">bandwagon effect</a>. If everyone starts running, you should also start running; maybe someone saw a lion coming and running could save your life. You may not know why, but it’s wiser to ask questions later. </p>
<p>Your brain picks up clues from the environment – including your peers – and uses <a href="https://global.oup.com/academic/product/simple-heuristics-that-make-us-smart-9780195143812">simple rules</a> to quickly translate those signals into decisions: Go with the winner, follow the majority, copy your neighbor. These rules work remarkably well in typical situations because they are based on sound assumptions. For example, they assume that people often act rationally, it is unlikely that many are wrong, the past predicts the future, and so on.</p>
<p>Technology allows people to access signals from much larger numbers of other people, most of whom they do not know. Artificial intelligence applications make heavy use of these popularity or “engagement” signals, from selecting search engine results to recommending music and videos, and from suggesting friends to ranking posts on news feeds. </p>
<h2>Not everything viral deserves to be</h2>
<p>Our research shows that virtually all web technology platforms, such as social media and news recommendation systems, have a strong <a href="http://doi.org/10.1002/asi.24121">popularity bias</a>. When applications are driven by cues like engagement rather than explicit search engine queries, popularity bias can lead to harmful unintended consequences. </p>
<p>Social media like Facebook, Instagram, Twitter, YouTube and TikTok rely heavily on AI algorithms to rank and recommend content. These algorithms take as input what you “like,” comment on and share – in other words, content you engage with. The goal of the algorithms is to maximize engagement by finding out what people like and ranking it at the top of their feeds. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/doWZHFnVPQ8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A primer on the Facebook algorithm.</span></figcaption>
</figure>
<p>On the surface this seems reasonable. If people like credible news, expert opinions and fun videos, these algorithms should identify such high-quality content. But the wisdom of the crowds makes a key assumption here: that recommending what is popular will help high-quality content “bubble up.” </p>
<p>We <a href="http://doi.org/10.1038/s41598-018-34203-2">tested this assumption</a> by studying an algorithm that ranks items using a mix of quality and popularity. We found that in general, popularity bias is more likely to lower the overall quality of content. The reason is that engagement is not a reliable indicator of quality when few people have been exposed to an item. In these cases, engagement generates a noisy signal, and the algorithm is likely to amplify this initial noise. Once the popularity of a low-quality item is large enough, it will keep getting amplified. </p>
<p>Algorithms aren’t the only thing affected by engagement bias – it can <a href="https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/">affect people</a>, too. Evidence shows that information is transmitted via “<a href="https://doi.org/10.1371/journal.pone.0184148">complex contagion</a>,” meaning the more times someone is exposed to an idea online, the more likely they are to adopt and reshare it. When social media tells people an item is going viral, their cognitive biases kick in and translate into the irresistible urge to pay attention to it and share it.</p>
<h2>Not-so-wise crowds</h2>
<p>We recently ran an experiment using <a href="https://fakey.iuni.iu.edu/">a news literacy app called Fakey</a>, a game developed by our lab that simulates a news feed like those of Facebook and Twitter. Players see a mix of current articles from fake news, junk science, hyper-partisan and conspiratorial sources, as well as mainstream sources. They get points for sharing or liking news from reliable sources and for flagging low-credibility articles for fact-checking. </p>
<p>We found that players are <a href="https://doi.org/10.37016/mr-2020-033">more likely to like or share and less likely to flag</a> articles from low-credibility sources when they can see that many other users have engaged with those articles. Exposure to the engagement metrics thus creates a vulnerability.</p>
<p><iframe id="HoqGE" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/HoqGE/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>The wisdom of the crowds fails because it is built on the false assumption that the crowd is made up of diverse, independent sources. There may be several reasons this is not the case. </p>
<p>First, because of people’s tendency to associate with similar people, their online neighborhoods are not very diverse. The ease with which a social media user can unfriend those with whom they disagree pushes people into homogeneous communities, often referred to as <a href="https://doi.org/10.1007/s42001-020-00084-7">echo chambers</a>. </p>
<p>Second, because many people’s friends are friends of each other, they influence each other. A <a href="https://doi.org/10.1126/science.1121066">famous experiment</a> demonstrated that knowing what music your friends like affects your own stated preferences. Your social desire to conform distorts your independent judgment. </p>
<p>Third, popularity signals can be gamed. Over the years, search engines have developed sophisticated techniques to counter so-called “<a href="https://www.webopedia.com/TERM/L/link_farming.html">link farms</a>” and other schemes to manipulate search algorithms. Social media platforms, on the other hand, are just beginning to learn about their own <a href="https://theconversation.com/misinformation-on-social-media-can-technology-save-us-69264">vulnerabilities</a>. </p>
<p>People aiming to manipulate the information market have created <a href="https://www.washingtonpost.com/technology/2020/10/13/black-fake-twitter-accounts-for-trump/">fake accounts</a>, like trolls and <a href="https://cacm.acm.org/magazines/2016/7/204021-the-rise-of-social-bots/fulltext">social bots</a>, and <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/18075">organized</a> <a href="https://www.washingtonpost.com/politics/turning-point-teens-disinformation-trump/2020/09/15/c84091ae-f20a-11ea-b796-2dd09962649c_story.html">fake networks</a>. They have <a href="http://doi.org/10.1038/s41467-018-06930-7">flooded the network</a> to create the appearance that a <a href="https://www.newsweek.com/2020/10/23/qanon-conspiracy-theories-draw-new-believers-scientists-take-aim-misinformation-pandemic-1538901.html">conspiracy theory</a> or a <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14127">political candidate</a> is popular, tricking both platform algorithms and people’s cognitive biases at once. They have even <a href="https://doi.org/10.1038/s41586-019-1507-6">altered the structure of social networks</a> to create <a href="https://doi.org/10.1371/journal.pone.0147617">illusions about majority opinions</a>. </p>
<h2>Dialing down engagement</h2>
<p>What to do? Technology platforms are currently on the defensive. They are becoming more <a href="https://www.nytimes.com/2020/10/09/technology/twitter-election-ban-features.html">aggressive</a> during elections in <a href="https://www.socialmediatoday.com/news/facebook-outlines-its-evolving-efforts-to-combat-misinformation-ahead-of-ne/597129/">taking down fake accounts and harmful misinformation</a>. But these efforts can be akin to a game of <a href="https://www.marketplace.org/shows/marketplace-tech/facebook-plays-whack-a-mole-with-foreign-election-interference/">whack-a-mole</a>. </p>
<p>A different, preventive approach would be to add <a href="https://www.theguardian.com/commentisfree/2020/jun/29/social-distancing-social-media-facebook-misinformation">friction</a>. In other words, to slow down the process of spreading information. High-frequency behaviors such as automated liking and sharing could be inhibited by <a href="https://www.cloudflare.com/learning/bots/how-captchas-work/">CAPTCHA</a> tests or fees. This would not only decrease opportunities for manipulation, but with less information people would be able to pay more attention to what they see. It would leave less room for engagement bias to affect people’s decisions.</p>
<p>It would also help if social media companies adjusted their algorithms to rely less on engagement to determine the content they serve you.</p><img src="https://counter.theconversation.com/content/145375/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Filippo Menczer receives funding from Knight Foundation, Craig Newmark Philanthropies, DARPA and AFOSR. </span></em></p>You have evolved to tap into the wisdom of the crowds. But on social media your cognitive biases can lead you astray.Filippo Menczer, Luddy Distinguished Professor of Informatics and Computer Science, Indiana UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1636662021-07-01T09:23:31Z2021-07-01T09:23:31ZHuman behaviour: what scientists have learned about it from the pandemic<figure><img src="https://images.theconversation.com/files/409092/original/file-20210630-17-oba0o.jpg?ixlib=rb-1.1.0&rect=99%2C90%2C5942%2C3931&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">People haven't been as irrational during the pandemic as some initially thought.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/new-york-april-01-2020-long-1690663636">Jennifer M. Mason/Shutterstock</a></span></figcaption></figure><p>During the pandemic, a lot of assumptions were made about how people behave. Many of those assumptions were wrong, and they led to disastrous policies. </p>
<p>Several governments worried that their pandemic restrictions would quickly lead to “behavioural fatigue” so that people would stop adhering to restrictions. In the UK, the prime minister’s former chief adviser Dominic Cummings recently admitted that <a href="https://committees.parliament.uk/oralevidence/2249/pdf/">this was the reason</a> for not locking down the country sooner.</p>
<p>Meanwhile, former health secretary Matt Hancock revealed that the government’s failure to provide financial and other forms of support for people to self-isolate was down to their fear that <a href="https://committees.parliament.uk/oralevidence/2318/pdf/">the system “might be gamed”</a>. He warned that people who tested positive may then falsely claim that they had been in contact with all their friends, so they could all get a payment.</p>
<p>These examples show just how deeply some governments distrust their citizens. As if the virus was not enough, the public was portrayed as an additional part of the problem. But is this an accurate view of human behaviour?</p>
<p>The distrust is based on two forms of reductionism – describing something complex in terms of its fundamental constituents. The first is limiting psychology to the characteristics – and more specifically the limitations – of individual minds. In this view the human psyche is inherently flawed, beset by biases that distort information. It is seen as incapable of dealing with complexity, probability and uncertainty – and tending to panic in a crisis.</p>
<p>This view is attractive to those in power. By emphasising the inability of people to govern themselves, it justifies the need for a government to look after them. Many governments subscribe to this view, having established <a href="https://www.instituteforgovernment.org.uk/explainers/nudge-unit">so-called nudge units</a> – behavioural science teams tasked with subtly manipulating people to make the “right” decisions, without them realising why, from eating less sugar to filing their taxes on time. But it is becoming increasingly clear that this approach is limited. As the pandemic has shown, it is particularly flawed when it comes to behaviour in a crisis. </p>
<p>In recent years, <a href="https://www.tandfonline.com/doi/abs/10.1080/10463283.2018.1471948">research has shown</a> that the notion of people panicking in a crisis is something of a myth. People generally respond to crises in a measured and orderly way – they look after each other. </p>
<p>The key factor behind this behaviour is <a href="https://www.researchgate.net/publication/228642933_The_Nature_of_Collective_Resilience_Survivor_Reactions_to_the_2005_London_Bombings">the emergence of a sense of shared identity</a>. This extension of the self to include others helps us care for those around us and <a href="https://espace.library.uq.edu.au/view/UQ:ada9d87">expect support from them</a>. Resilience cannot be reduced to the qualities of individual people. It <a href="https://psycnet.apa.org/record/2014-35722-001">tends to be something</a> that emerges in groups.</p>
<h2>The problem with ‘psychologism’</h2>
<p>Another type of reductionism that governments adopt is “psychologism” – when you <a href="https://implementationscience.biomedcentral.com/articles/10.1186/1748-5908-6-42">reduce the explanation of people’s behaviour to just psychology</a>. But there are many other factors that shape what we do. In particular, we rely on information and practical means (not least money!) to decide what needs to be done – and to be able to do it.</p>
<p>If you reduce people to just psychology, it makes their actions entirely a consequence of individual choice. If we get infected, it is because we chose to act in ways that led to infection: we decided to go out and socialise, we ignored advice on physical distancing.</p>
<p>This mantra of individual responsibility and blame has certainly been at the core of the UK government’s response throughout the pandemic. When cases started rising in the autumn, the government blamed it on students having parties. Hancock even warned young people “<a href="https://www.bbc.co.uk/news/newsbeat-54056771">don’t kill your gran</a>”. And as the government envisages the total removal of restrictions, the focus on what people must do has become even stronger. As the prime minister <a href="https://www.gov.uk/government/speeches/pm-statement-at-coronavirus-press-conference-14-may-2021">recently put it</a>: “I want us to trust people to be responsible and to do the right thing.”</p>
<p>Such narratives ignore the fact that, at various critical points in the pandemic, infections rose not because people were breaking rules, but rather because they were <a href="https://www.theguardian.com/commentisfree/2021/jan/28/public-uk-covid-rules-ministers">heeding advice</a>, such as “<a href="https://www.theguardian.com/world/2021/mar/27/boris-johnson-branded-irresponsible-over-back-to-the-office-call">go to work</a>” and “<a href="https://commonslibrary.parliament.uk/research-briefings/cbp-8978/">eat out to help out</a>”. And if people did break the rules, it was often because they had no choice. In many deprived areas, people were unable to work from home and <a href="https://www.medrxiv.org/content/10.1101/2020.04.01.20050039v1">needed to go to work</a> to put food on the table.</p>
<p>Instead of addressing these issues and helping people to avoid exposing themselves and others, the individualistic narrative of personal responsibility blames the victim and, indeed, further victimises vulnerable groups. As the delta variant took hold in UK towns, Hancock took the opportunity to stand in parliament and repeatedly <a href="https://hansard.parliament.uk/commons/2021-05-17/debates/BEC589F3-7FE2-424E-A1ED-4BE5019D4F31/Covid-19Update">blame people</a> who had “chosen” not to have the vaccine. </p>
<p>This brings us to a critical point. The fundamental issue with the government’s distrust and its individualistic psychology is that it creates huge problems.</p>
<h2>Creating a crisis</h2>
<p>The UK government assumed that people’s cognitive fragility would lead to – and explain – low adherence with the measures necessary to combat COVID-19. But the evidence showed <a href="https://www.bmj.com/content/372/bmj.n137.long">that adherence was high</a> due to a sense of community among the public – except in areas where it is hard to adhere without adequate means. Instead of emphasising individual responsibility and blame, then, a successful response to the pandemic depends on fostering community and providing support.</p>
<figure class="align-center ">
<img alt="Image of a woman handing a bag of shopping to an older woman." src="https://images.theconversation.com/files/409094/original/file-20210630-8655-1w1r3p8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/409094/original/file-20210630-8655-1w1r3p8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/409094/original/file-20210630-8655-1w1r3p8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/409094/original/file-20210630-8655-1w1r3p8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/409094/original/file-20210630-8655-1w1r3p8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/409094/original/file-20210630-8655-1w1r3p8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/409094/original/file-20210630-8655-1w1r3p8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" />
<figcaption>
<span class="caption">People help each other in a crisis.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/helping-hand-shopping-food-during-pandemic-1904362390">encierro/Shutterstock</a></span>
</figcaption>
</figure>
<p>But here’s the rub. If a government constantly tells you that the problem lies in those around you, it corrodes trust in and solidarity with your fellow community members – which explains why most people (92%) <a href="https://www.ucl.ac.uk/news/2020/dec/majority-feel-they-comply-covid-19-rules-better-others">state that they are complying</a> with the rules while others are not doing so. </p>
<p>Ultimately, the greatest threat to controlling the pandemic is the failure of people to get tested as soon as they have symptoms, and to provide their contacts and self-isolate. Providing adequate support for isolation <a href="https://blogs.bmj.com/bmj/2021/04/05/why-contrasting-figures-on-adherence-to-self-isolation-show-that-support-to-self-isolate-is-even-more-important-than-we-previously-realised/">is critical to all of these</a>. And so, by deprioritising the case for support, blaming the public fuels the pandemic. The government’s psychological assumptions have, in fact, squandered the greatest asset we have for dealing with a crisis: a community that is <a href="https://osf.io/preprints/socarxiv/p5sfd/">mobilised and unified</a> in mutual aid. </p>
<p>When an inquiry is eventually held about the UK’s response to COVID-19, it is essential that we give full attention to the psychological and behavioural dimensions of failure as much as the decisions and policies implemented. Only by exposing the way in which the government came to accept and rely upon the wrong model of human behaviour can we begin to build policies that work.</p><img src="https://counter.theconversation.com/content/163666/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Stephen Reicher receives funding from the Economic and Social Research Council, UK.
He is a participant in the behavioural science advisory group to the UK government (SPI-B), the advisory group to the Scottish government, and a member of Independent SAGE in the UK, convening its behavioural sub-group.</span></em></p>A society in crisis is more than the sum of its flawed parts.Stephen Reicher, Bishop Wardlaw Professor in the School of Psychology & Neuroscience, University of St AndrewsLicensed as Creative Commons – attribution, no derivatives.