The Online Harms Act doesn’t go far enough to protect democracy in Canada<figure><img src="https://images.theconversation.com/files/582655/original/file-20240318-20-5g48qj.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5648%2C3762&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The Online Harms Act aims to protect Canadians from harmful content.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>The Liberal government’s recent proposal for regulating social media platforms, <a href="https://www.canada.ca/en/canadian-heritage/services/online-harms.html">the Online Harms Act (Bill C-63)</a>, comes as the final act in a <a href="https://nationalpost.com/news/politics/the-first-100-days-major-battle-over-free-speech-internet-regulation-looms-when-parliament-returns">promised trilogy of bills</a> aimed at bringing some order to the digital world. </p>
<p>After contentious <a href="https://www.canada.ca/en/canadian-heritage/services/online-news.html">attempts to address the fallout from the Online News Act</a> and the <a href="https://www.canada.ca/en/radio-television-telecommunications/news/2023/09/crtc-takes-major-step-forward-to-modernize-canadas-broadcasting-framework.html">threat from online streaming platforms to Canadian content</a>, this final bill attempts to identify and regulate harmful content. The Online Harms Act follows <a href="https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en">Europe</a>, the <a href="https://www.legislation.gov.uk/ukpga/2023/50/contents/enacted">United Kingdom</a> and <a href="https://www.esafety.gov.au/newsroom/whats-on/online-safety-act">Australia</a> in setting up a new regulator in an attempt to address the spread of what is considered harmful content.</p>
<p>The idea that such efforts are necessary is not controversial — content that sexually exploits children, for instance, has already been <a href="https://calgarysun.com/news/crime/edmonton-man-who-lured-92-children-into-sending-child-porn-sentenced-to-18-years-in-prison">a target for law enforcement</a>, and hate speech has been illegal for decades in <a href="https://doi.org/10.1080/17577632.2022.2092261">most industrialized democracies</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/9HsPnK9HMT0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">CBC News looks at the Online Harms Act.</span></figcaption>
</figure>
<h2>Platform responsibility</h2>
<p>Online harms laws are based on the idea of “<a href="https://cyberlaw.stanford.edu/focus-areas/intermediary-liability">intermediary liability</a>”: making the platforms legally responsible when users use them to distribute content that breaks laws. </p>
<p>Under the Online Harms Act, platforms will be required to promptly remove two forms of content — that which “sexually victimizes a child or revictimizes a survivor” and “intimate images posted without consent” — or face large fines. </p>
<p>But it also includes less strict measures to deal with other forms of harmful content, including the promotion of terrorism or genocide, incitement to violence and hate speech. Platforms will be required to develop, and make public, plans to “mitigate the risk that users will be exposed to harmful content on the services,” and to submit digital safety plans to the Digital Safety Commission of Canada.</p>
<h2>Crime and punishment</h2>
<p>There are also <a href="https://www.cbc.ca/news/politics/liberals-table-online-harms-legislation-1.7126080">new criminal offences and penalties</a> for users who upload these forms of content. These provisions have been the subject of <a href="https://www.theglobeandmail.com/politics/article-justice-minister-defends-house-arrest-power-for-people-feared-to/">much of the debate over the bill</a>. </p>
<p>Many civil libertarians argue that they <a href="https://www.michaelgeist.ca/2024/02/why-the-criminal-code-and-human-rights-act-provisions-should-be-removed-from-the-online-harms-act/">go too far</a>, while advocates for marginalized groups believe that they are <a href="https://www.thestar.com/opinion/contributors/finally-a-tool-to-combat-online-hate/article_41ec2db0-d664-11ee-b404-bf5272436be5.html">long overdue</a>. </p>
<p>But much of the debate over these specific details misses a deeper failing of the bill, which derives from the way the idea of “online harm” is understood.</p>
<h2>‘Lawful but awful’</h2>
<p>For much of the last decade, digital media scholars have been drawing attention to other ways in which platform communication <a href="https://doi.org/10.1386/jdmp_00061_1">ought to be considered harmful</a>. The definition of harmful content in Bill C-63 focuses on harms experienced by users when they encounter particular forms of content posted by others. </p>
<p>But platforms aren’t merely empty spaces for users to send messages to other users — <a href="https://arstechnica.com/tech-policy/2020/12/the-christchurch-shooter-and-youtubes-radicalization-trap/">they play an active role</a> in shaping the communication that takes place, determining how messages are combined and sorted, and how their distribution is prioritized and limited. </p>
<p>For this reason, <a href="https://heinonline.org/HOL/Page?handle=hein.journals/jtelhtel13&id=227&collection=journals&index=">algorithms that amplify or suppress particular kinds of messages should also be seen as a source of harm</a>.</p>
<p>This is often understood as the reason why fake news or hyper-partisan political commentary is so problematic on platforms. Even perfectly legal communication — what is called “<a href="https://www.cbc.ca/radio/thecurrent/online-harms-act-arif-virani-1.7127037">lawful but awful</a>” content — can contribute to a pattern of serious harm. </p>
<p>One person <a href="https://theconversation.com/close-to-home-the-canadian-far-right-covid-19-and-social-media-178714">denying the scientific consensus on vaccines</a>, promoting <a href="https://theconversation.com/qanon-is-spreading-outside-the-us-a-conspiracy-theory-expert-explains-what-that-could-mean-198272">entirely baseless conspiracy theories about political figures</a> or <a href="https://www.npr.org/2020/10/24/927300432/robocalls-rumors-and-emails-last-minute-election-disinformation-floods-voters">discouraging people from voting</a> might not be “harmful” in the sense that Bill C-63 defines the concept. </p>
<p>But when social media algorithms ensure that many users don’t see counter-evidence from outside their “<a href="https://www.penguinrandomhouse.com/books/309214/the-filter-bubble-by-eli-pariser/">filter bubble</a>,” the dangers are real. This is also true of any number of other kinds of <a href="https://academic.oup.com/book/26406?login=false">platformed deception</a>, such as <a href="https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/">AI-generated deep fake videos</a> of political candidates.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a face with a long nose, at the end is a mask of a face with a regular nose" src="https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=504&fit=crop&dpr=1 754w, https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=504&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=504&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Misinformation, such as deepfakes of politicians, can spread unregulated on online platforms.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Democracy at risk</h2>
<p>Democracy relies on open and rational deliberation. The conditions for that kind of communication can be degraded by the way that algorithms operate. That algorithms are operated by private, for-profit corporations that seek to maximize “engagement” makes the problem even worse; this creates an <a href="https://www.theguardian.com/technology/2021/oct/22/facebook-whistleblower-hate-speech-illegal-report">incentive for content that provokes outrage</a> and further <a href="https://doi.org/10.1177/14614448231161880">polarizes political opinion</a>.</p>
<p>Exactly how algorithms should be regulated is not a simple question. Some of the provisions in Bill C-63 might be a step in the right direction: requirements for risk mitigation plans, an ombudsperson who can help the public submit complaints about platforms to a regulator and obligations to provide information about content. And importantly, all of this can be done without unnecessarily violating users’ freedom of expression.</p>
<p>But a more specific legal obligation on platforms to deprioritize clearly false content, such as misinformation about public health or elections, would be necessary to stop platforms from deepening online polarization and promoting <a href="https://www.uwestminsterpress.co.uk/site/books/e/10.16997/book30/">anti-democratic populism</a>. </p>
<p>While the Online Harms Act might protect individuals from being exposed to specific kinds of content, protecting the democratic nature of our society will require a more robust set of regulations than what has been proposed.</p>
<p class="fine-print"><em><span>Derek Hrynyshyn does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p><p class="fine-print"><em>Derek Hrynyshyn, Contract Faculty, Communication & Media Studies, York University, Canada. Licensed as Creative Commons – attribution, no derivatives.</em></p>Why do millions of Americans believe the 2020 presidential election was ‘stolen’ from Donald Trump?<p>Since the 1980s, Super Tuesday has been one of the most important dates in the American presidential campaign: about one third of the delegates will be awarded to the presidential candidates in each party. There is very little suspense as to who the winners will be this year: both <a href="https://projects.fivethirtyeight.com/polls/president-primary-r/2024/national/">Donald Trump</a> and <a href="https://projects.fivethirtyeight.com/polls/president-primary-d/2024/national/">Joe Biden</a> have been the frontrunners and have shown commanding leads in the polls, despite their <a href="https://news.gallup.com/poll/548138/american-presidential-candidates-2024-election-favorable-ratings.aspx">low popularity</a>.</p>
<h2>The ongoing perception of a “stolen” election</h2>
<p>Never before has a non-incumbent GOP candidate enjoyed such a lead at this point of the campaign, not even <a href="https://en.wikipedia.org/wiki/2000_Republican_Party_presidential_primaries">George W. Bush in 2000</a>. One reason may be that Donald Trump is not really a non-incumbent. More importantly, he is seen by a majority of his base as the only legitimate president. <a href="https://www.washingtonpost.com/dc-md-va/2024/01/02/jan-6-poll-post-trump/">Two thirds of Republican voters</a> (and nearly 3 in 10 Americans) continue to believe that the 2020 election was stolen from him, and that Biden was not lawfully elected. In fact, this “election denialism” is <a href="https://theconversation.com/us-election-haleys-supporters-believe-radically-different-things-to-trump-so-where-do-they-go-next-222674">one of the major differences between</a> those who support Trump and those who voted for his rival, Nikki Haley. According to them, “massive” fraud occurred in certain states (fake voters, rigged voting machines, etc.) with the blessing of election officials and unscrupulous judges, thus tipping the contest.</p>
<p>Of course, there is <a href="https://www.pnas.org/doi/10.1073/pnas.2103619118">no evidence of fraud</a> that could have changed the outcome, and <a href="https://en.wikipedia.org/wiki/Post-election_lawsuits_related_to_the_2020_U.S._presidential_election">all the lawsuits challenging the results have been lost after hearings on the merits</a> or dismissed as moot – even by judges he <a href="https://www.washingtonpost.com/politics/2020/12/14/most-remarkable-rebukes-trumps-legal-case-judges-he-hand-picked/">hand-picked</a>.</p>
<h2>A perfect martyr</h2>
<p>More than his civil liability for sexual assault – in truth a <a href="https://www.washingtonpost.com/politics/2023/07/19/trump-carroll-judge-rape/">rape</a> – and his <a href="https://www.nytimes.com/article/trump-investigations-civil-criminal.html">multiple indictments</a>, Donald Trump’s most grievous fault has been his attempt at obstructing the democratic transfer of power by <a href="https://www.washingtonpost.com/national-security/2023/12/05/trump-jan-6-violence-election-obstruction/">encouraging his supporters</a> to violently oppose the certification of the election in 2021, and his continuous false claim that he, in fact, <a href="https://www.nytimes.com/2023/08/17/us/politics/trump-election-lies-fact-check.html">won in 2020</a>.</p>
<p>Trump’s diehard supporters once again see him as the victim of a <a href="https://www.politico.com/news/2023/03/31/donald-trump-indictment-00090001">“witch hunt”</a>, just as they did during his two impeachments; in their view, he is being persecuted because he took on a “corrupt system”. Trump has used his legal troubles to <a href="https://time.com/6555904/donald-trump-gop-primary-2024/">raise millions of dollars</a>, a large part of which has gone to <a href="https://www.pbs.org/newshour/politics/trump-political-committee-has-spent-more-than-40-million-on-lawyers-fees-as-his-legal-peril-mounts">pay his defence lawyers</a> rather than fund his presidential campaign. Despite this, he has <a href="https://projects.fivethirtyeight.com/polls/president-primary-r/2024/national/">risen in the Republican primaries</a> and could well become the GOP’s candidate in the November 2024 election.</p>
<p>So how can we explain that tens of millions of Americans continue to adhere to this narrative of the stolen election, despite <a href="https://www.brennancenter.org/sites/default/files/analysis/Briefing_Memo_Debunking_Voter_Fraud_Myth.pdf">numerous studies</a> demonstrating its utter falsehood?</p>
<h2>Tracing the roots of political paranoia</h2>
<p>The myth of the stolen election is a <a href="https://www.researchgate.net/publication/355068117_The_Rise_of_Presidential_Eschatology_Conspiracy_Theories_Religion_and_the_January_6th_Insurrection">mass conspiracy belief</a>, a type of unverified counter-narrative that questions well-established facts and relies instead on the idea that powerful and malevolent actors are operating in the shadows. What characterises the United States is not necessarily that its population is more gullible than others, but rather that a large part of its political and media class is willing to accept, exploit, and organise conspiracy thinking for its benefit.</p>
<p>In a landmark 1964 essay published in <em>Harper’s Magazine</em>, <a href="https://harpers.org/archive/1964/11/the-paranoid-style-in-american-politics/">“The Paranoid Style in American Politics”</a>, historian Richard Hofstadter famously explored the American passion for conspiracy, focusing on the right’s obsession with a supposed communist conspiracy during the McCarthy era. At that time, the Christian right merged with nationalism, becoming a powerful force opposing the supposedly godless communist bloc. In the 1970s, the political narrative of a universal struggle between Good and Evil became an <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1460-2466.2006.00021.x">essential theme in presidential speeches</a>, particularly those by Ronald Reagan and George W. Bush.</p>
<h2>The “enemy within” and the “culture war”</h2>
<p>With the end of the Cold War in 1991, this binary narrative was adapted to the <a href="https://www.vox.com/policy-and-politics/2020/7/9/21291493/donald-trump-evangelical-christians-kristin-kobes-du-mez">“culture war”</a>, pitting religious fundamentalists against progressives on moral and societal issues such as abortion and sexuality. It is a narrative of decline that identifies any political opposition as an “enemy” jeopardising the moral foundations of the nation.</p>
<p>This narrative was fuelled by a sense of powerlessness and humiliation that followed the September 11, 2001, attacks. Then came the 2008 financial crisis and two decades of the “war on terror” without anything like a tangible victory. As the country’s demographic makeup evolved, <a href="https://academic.oup.com/socrel/article/81/3/272/5836966">racial resentment grew</a> and conspiracy thinking with it, as embodied by the narrative of the <a href="https://www.prri.org/spotlight/replacement-theory-is-not-a-fringe-theory/">“Great Replacement”</a>. The Covid crisis heightened the distrust of government. The <a href="https://theconversation.com/demons-of-the-deep-state-how-evangelicals-and-conspiracy-theories-combine-in-trumps-america-144898">“Deep State”</a> was born, perceived as literally demonic.</p>
<p>The politicisation of religion reached its peak with Donald Trump, who used religious language <a href="https://www.researchgate.net/publication/331071656_The_God_Card_Strategic_Employment_of_Religious_Language_in_US_Presidential_Discourse">more than any other president</a>. Unlike his predecessors, he explicitly associated <a href="https://www.researchgate.net/publication/344337560_Thou_Art_in_a_Deal_The_Evolution_of_Religious_Language_in_the_Public_Communications_of_Donald_Trump">American identity with Christianity</a>. He emphasised themes of Christian nationalism, highly popular among the white evangelicals he courted. It is within this religious group that adherence to the myth of the “stolen” election is the <a href="https://www.prri.org/spotlight/after-three-years-and-many-indictments-the-big-lie-that-led-to-the-january-6th-insurrection-is-still-believed-by-most-republicans/">strongest</a>.</p>
<h2>Donald Trump: a “saviour” who’s both godless and lawless</h2>
<p>The irony of Trump courting evangelicals is that Trump himself is <a href="https://edition.cnn.com/interactive/2017/politics/state/donald-trump-religion/">far from religious</a>. His xenophobic slurs against immigrants, <a href="https://www.theatlantic.com/politics/archive/2020/09/trump-americans-who-died-at-war-are-losers-and-suckers/615997/">contempt for veterans</a>, calls for <a href="https://www.washingtonpost.com/politics/2023/11/13/white-house-biden-trump-vermin/">violence against political opponents</a>, mockery of a <a href="https://www.bbc.com/news/world-us-canada-34930042">disabled journalist</a>, and a glaring <a href="https://edition.cnn.com/2016/10/21/politics/trump-religion-gospel/index.html">lack of religious culture</a> are fundamentally incompatible with Christian ethics. In speeches and interviews, he frequently <a href="https://www.youtube.com/watch?v=qIHhB1ZMV_o">highlights extremist groups</a>, such as the <a href="https://en.wikipedia.org/wiki/Proud_Boys">Proud Boys</a> and conspiracists such as <a href="https://www.youtube.com/watch?v=GNI553Np__k">QAnon believers</a>.</p>
<p>The link between conspiracy theories and white Christian nationalism is <a href="https://theconversation.com/evangelical-leaders-like-billy-graham-and-jerry-falwell-sr-have-long-talked-of-conspiracies-against-gods-chosen-those-ideas-are-finding-resonance-today-132241">well documented</a>, most recently regarding topics such as vaccines or climate change. Evangelicals “rationalise” the election lie by <a href="https://www.vox.com/identities/2018/3/5/16796892/trump-cyrus-christian-right-bible-cbn-evangelical-propaganda">comparing Trump to Cyrus</a>, a historical Persian king who, in the Old Testament (<a href="https://enterthebible.org/passage/isaiah-4423-458-cyrus-gods-anointed-shepherd">Isaiah)</a>, did not worship the god of Israel but is portrayed as an instrument used by God to deliver the Jewish people.</p>
<h2>How the Capitol attack reinforced evangelicals’ views</h2>
<p>These beliefs stem from a <a href="https://en.wikipedia.org/wiki/Premillennialism">“premillennialist”</a> interpretation of the Book of Revelation, adopted by a majority of evangelicals (<a href="https://www.pewresearch.org/short-reads/2022/12/08/about-four-in-ten-u-s-adults-believe-humanity-is-living-in-the-end-times/">63%</a>) who believe that humanity is currently experiencing the <a href="https://en.wikipedia.org/wiki/Eschatology">“End Times”</a>.</p>
<p>This worldview was embodied by the <a href="https://theconversation.com/christian-nationalism-is-downplayed-in-the-jan-6-report-and-collective-memory-189440">attack on the US Capitol on January 6, 2021</a>. It gave Republican leaders a unique opportunity to condemn Donald Trump in an impeachment trial that could have ended his political ambitions. Despite the stakes, neither the House Republican leader, Kevin McCarthy, nor the influential Senate Republican leader, Mitch McConnell, supported impeachment or conviction. Yet both acknowledged that Trump was <a href="https://www.politico.com/news/2021/02/13/mcconnell-condemns-trump-acquitted-469002">“morally responsible”</a> for the <a href="https://www.npr.org/sections/trump-impeachment-effort-live-updates/2021/01/13/956452691/gop-leader-mccarthy-trump-bears-responsibility-for-violence-wont-vote-to-impeach">violence</a>.</p>
<p>As the Republican Party did during Trump’s first impeachment trial and with every one of his <a href="https://en.wikipedia.org/wiki/False_or_misleading_statements_by_Donald_Trump">innumerable lies</a>, including <a href="https://www.vox.com/2020-presidential-conventions/2020/8/25/21400657/trump-rnc-2020-coronavirus-Covid-19-pandemic">during the Covid crisis</a>, it once again showed itself willing to sacrifice democracy itself on the altar of political ambition.</p>
<p>The result is that the election lie has become the norm and now a loyalty test within the party. A vast majority of <a href="https://www.nytimes.com/interactive/2022/11/09/us/politics/election-misinformation-midterms-results.html">new congressional members in 2022</a> have in turn cast doubt on the 2020 results. When Kevin McCarthy proved to be insufficiently loyal to Trump, he was replaced as Speaker of the House by Mike Johnson, a <a href="https://slate.com/news-and-politics/2024/01/january-6-insurrection-mike-johnson-evangelical-christian-apostolic-reformation.html">Christian nationalist</a> and <a href="https://www.brennancenter.org/our-work/analysis-opinion/mike-johnson-now-most-powerful-election-denier-washington">staunch election denier</a>.</p>
<h2>A widespread lie financed by powerful groups</h2>
<p>This lie is not the democratic and populist expression of grassroots anti-elitism. It is fuelled by national organisations that are <a href="https://www.newyorker.com/magazine/2021/08/09/the-big-money-behind-the-big-lie">funded by some of the country’s wealthiest conservatives</a>. New York University’s <a href="https://www.brennancenter.org/our-work/research-reports/big-donors-working-overturn-2020-election-are-backing-election-denial">Brennan Center for Justice</a> has identified several of these groups, including the <a href="https://www.eip-ca.com/">Election Integrity Project California</a>, <a href="https://www.freedomworks.org/issue/election-protection/">FreedomWorks</a>, or the <a href="https://www.honestelections.org/">Honest Elections Project</a>, whose names belie their intentions.</p>
<p>Among these groups, the <a href="https://fedsoc.org/commentary/publications/voter-fraud-in-our-republic">Federalist Society</a>, which promoted the appointment of the most conservative members to the Supreme Court, has led the <a href="https://www.motherjones.com/politics/2023/12/how-leonard-leos-dark-money-network-orchestrated-a-new-attack-on-the-voting-rights-act/">attack against the Voting Rights Act</a> (a 1965 law prohibiting racial discrimination in voting).</p>
<p>The role of the <a href="https://www.heritage.org/voterfraud">Heritage Foundation</a> is also notable.</p>
<p>One of the most powerful and influential conservative organisations, it has used the spectre of electoral fraud as a pretext for removing voters from voting lists. One of its founders, <a href="https://en.wikipedia.org/wiki/Paul_Weyrich">Paul Weyrich</a>, <a href="https://www.youtube.com/watch?v=8GBAsFwPglw">declared in 1980</a>:</p>
<blockquote>
<p>“I don’t want everybody to vote. Elections are not won by a majority of people, they never have been from the beginning of our country and they are not now. As a matter of fact, our leverage in the elections quite candidly goes up as the voting populace goes down.”</p>
</blockquote>
<p>Add to this an overt strategy of <a href="https://time.com/6334985/trump-fox-news-lies-brian-stelter-essay/">media disinformation</a> used by Trump and his allies, summarised by Steve Bannon, the former leader of Breitbart News and former advisor to Donald Trump: <a href="https://www.vox.com/policy-and-politics/2020/1/16/20991816/impeachment-trial-trump-bannon-misinformation">“Flood the zone with shit”</a>. The point is simply to overwhelm the press and the public with so much false information and disinformation that distinguishing truth from lies becomes too challenging, if not impossible.</p>
<p>All of this is, of course, amplified by acute <a href="https://press.uchicago.edu/ucp/books/book/chicago/U/bo27527354.html">political polarisation rooted in social identity</a>. This is <a href="https://www.theatlantic.com/ideas/archive/2018/11/why-are-americans-so-geographically-polarized/575881/">manifested geographically</a>, where partisan preferences are correlated with population density – urban versus rural, to simplify. Republicans who believe in the myth of a stolen election cannot believe that Joe Biden could have been elected by a majority because <em>no one around them voted Democrat</em>, after all.</p>
<p>This physical polarisation is reinforced by <a href="https://www.pewresearch.org/journalism/2020/01/24/u-s-media-polarization-and-the-2020-election-a-nation-divided/">media polarisation</a> that creates a true informational bubble. Thus, a majority of Republicans trust only <a href="https://www.pewresearch.org/short-reads/2020/04/08/five-facts-about-fox-news/">Fox News</a> and far-right television channels like <a href="https://edition.cnn.com/2023/09/05/media/dominion-exec-oan-lawsuit-settlement/index.html">One American News</a>, whose primetime hosts have <a href="https://www.nytimes.com/2023/02/27/business/media/fox-news-dominion-rupert-murdoch.html">endorsed lies even they themselves don’t believe</a> about electoral fraud. These were then <a href="https://www.axios.com/2022/09/19/election-misinformation-social-media-big-lie-report">amplified by social networks</a>.</p>
<h2>Will history repeat itself next November?</h2>
<p>Questioning electoral results is a constant theme for Donald Trump. In 2012, he <a href="https://abcnews.go.com/Politics/donald-trumps-2012-election-tweetstorm-resurfaces-popular-electoral/story?id=43431536">called Barack Obama’s re-election</a> a <a href="https://twitter.com/realdonaldtrump/status/266035509162303492">“total sham and a travesty”</a>, adding that “we are not a democracy” and calling for a “march on Washington” to stop it. In 2016, he contested, with no evidence whatsoever, the results of the Iowa caucus and the popular vote won by Hillary Clinton, attributing the latter to <a href="https://www.bbc.com/news/world-us-canada-38126438">“millions of illegal votes”</a>.</p>
<p>The difference between 2020 and today is that Donald Trump is no longer a political curiosity. His voice is now heard and believed by millions of citizens. Thus, almost a quarter of US citizens (<a href="https://www.prri.org/spotlight/after-three-years-and-many-indictments-the-big-lie-that-led-to-the-january-6th-insurrection-is-still-believed-by-most-republicans/">23%</a>) say that they would be willing to use violence to “save the country.” Regardless of the outcome of the 2024 election, there is cause for concern. Donald Trump <a href="https://thehill.com/homenews/campaign/3998962-trump-wont-commit-to-accepting-2024-election-results/">has refused to commit</a> to accepting the 2024 election results if they are not in his favour. And his followers are once again ready to follow his words of refusal, turning them into action.</p>
<p class="fine-print"><em><span>Jérôme Viala-Gaudefroy does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than his research organisation.</span></em></p><p class="fine-print"><em>Jérôme Viala-Gaudefroy, Assistant lecturer, CY Cergy Paris Université. Licensed as Creative Commons – attribution, no derivatives.</em></p>Algorithms are pushing AI-generated falsehoods at an alarming rate. How do we stop this?<figure><img src="https://images.theconversation.com/files/578812/original/file-20240229-22-ki29m8.jpg?ixlib=rb-1.1.0&rect=3%2C7%2C2462%2C1608&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/online-news-mobile-phone-close-smartphone-1204164946">Tero Vesalainen/Shutterstock</a></span></figcaption></figure><p>Generative artificial intelligence (AI) tools are supercharging the problem of misinformation, disinformation and fake news. OpenAI’s ChatGPT, Google’s Gemini, and various image, voice and video generators have made it easier than ever to produce content, while making it harder to tell what is factual or real.</p>
<p>Malicious actors looking to spread disinformation can use AI tools to largely automate the generation of <a href="https://cyber.fsi.stanford.edu/io/publication/generative-language-models-and-automated-influence-operations-emerging-threats-and">convincing and misleading text</a>. </p>
<p>This raises pressing questions: how much of the content we consume online is true and how can we determine its authenticity? And can anyone stop this?</p>
<p>It’s not an idle concern. Organisations seeking to covertly influence public opinion or sway elections can now <a href="https://cyber.fsi.stanford.edu/io/publication/generative-language-models-and-automated-influence-operations-emerging-threats-and">scale their operations</a> with AI to unprecedented levels. And their content is being widely disseminated by search engines and social media. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-is-sora-a-new-generative-ai-tool-could-transform-video-production-and-amplify-disinformation-risks-223850">What is Sora? A new generative AI tool could transform video production and amplify disinformation risks</a>
</strong>
</em>
</p>
<hr>
<h2>Fakes everywhere</h2>
<p>Earlier this year, <a href="https://www.techradar.com/computing/search-engines/google-search-might-be-getting-worse-and-ai-threatens-to-ruin-it-entirely">a German study</a> on search engine content quality noted “a trend toward simplified, repetitive and potentially AI-generated content” on Google, Bing and DuckDuckGo.</p>
<p>Traditionally, readers of news media could rely on editorial control to uphold journalistic standards and verify facts. But AI is rapidly changing this space.</p>
<p>In a report published this week, the internet trust organisation NewsGuard <a href="https://www.newsguardtech.com/special-reports/ai-tracking-center/">identified 725 unreliable websites</a> that publish AI-generated news and information “with little to no human oversight”.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1761047409243603406&quot;}"></div></p>
<p>Last month, Google <a href="https://www.adweek.com/media/google-paying-publishers-unreleased-gen-ai/">released an experimental AI tool</a> for a select group of independent publishers in the United States. Using generative AI, a publisher can summarise articles pulled from a list of external websites that produce news and content relevant to its audience. As a condition of the trial, participating publishers have to publish three such articles per day.</p>
<p>Platforms hosting content and developing generative AI blur the traditional lines that enable trust in online content. </p>
<h2>Can the government step in?</h2>
<p>Australia has already seen tussles between government and online platforms over the display and moderation of news and content.</p>
<p>In 2019, the Australian government <a href="https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=s1201">amended the criminal code</a> to mandate the swift removal of “abhorrent violent material” by social media platforms. </p>
<p>The Australian Competition and Consumer Commission’s (ACCC) inquiry into power imbalances between Australian news media and digital platforms led to the 2021 implementation of <a href="https://www.accc.gov.au/by-industry/digital-platforms-and-services/news-media-bargaining-code/news-media-bargaining-code">a bargaining code</a> that forced platforms to pay media for their news content.</p>
<p>While these might be considered partial successes, they also demonstrate the scale of the problem and the difficulty of taking action.</p>
<p><a href="https://journals.sagepub.com/doi/full/10.1177/02683962221114408">Our research</a> indicates these conflicts saw online platforms initially open to changes but later resisting them, while the Australian government oscillated between enforcing mandatory measures and preferring voluntary action.</p>
<p>Ultimately, the government realised that relying on platforms’ “trust us” promises wouldn’t lead to the desired outcomes. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-google-and-meta-owe-news-publishers-much-more-than-you-think-and-billions-more-than-theyd-like-to-admit-216818">Why Google and Meta owe news publishers much more than you think – and billions more than they’d like to admit</a>
</strong>
</em>
</p>
<hr>
<p>The takeaway from our study is that once digital products become integral to millions of businesses and everyday lives, that very indispensability becomes a tool that platforms, AI companies and big tech can use to anticipate and push back against government regulation.</p>
<p>With this in mind, it is right to be sceptical of early calls for regulation of generative AI by tech leaders like <a href="https://fortune.com/2023/11/02/elon-musk-ai-regulations-uk-prime-minister-sunak-ai-safety-summit/">Elon Musk</a> and Sam Altman. Such calls have faded as AI takes hold of our lives and online content.</p>
<p>A key challenge is the sheer speed of change: it is so swift that safeguards to mitigate the potential risks to society are not yet established. Accordingly, the World Economic Forum’s 2024 Global Risks Report identifies mis- and disinformation as the <a href="https://www.weforum.org/publications/global-risks-report-2024/">greatest threats</a> of the next two years.</p>
<p>The problem is made worse by generative AI’s ability to create multimedia content. Based on current trends, we can expect an increase in <a href="https://www.nbcnews.com/tech/social-media/emma-watson-deep-fake-scarlett-johansson-face-swap-app-rcna73624">deepfake incidents</a>, although social media platforms like Facebook are responding: they aim to <a href="https://about.fb.com/news/2024/02/labeling-ai-generated-images-on-facebook-instagram-and-threads/">automatically identify and tag</a> AI-generated photos, video and audio.</p>
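<p>To give a rough sense of how such tagging can work: one signal platforms reportedly rely on is provenance metadata embedded by the generating tool itself, such as the IPTC “DigitalSourceType” value “trainedAlgorithmicMedia” carried in an image’s XMP packet. The sketch below is illustrative only — real labelling pipelines combine several signals, and the byte-scanning shortcut here is an assumption, not any platform’s actual implementation.</p>

```python
# Illustrative sketch: check an image file's raw bytes for the IPTC
# "trainedAlgorithmicMedia" provenance marker that AI image tools can
# embed in XMP metadata. Real systems parse the metadata properly and
# combine this with other signals; a naive byte scan is enough to show
# the idea.

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC DigitalSourceType value for AI media

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the file carries the IPTC AI-generation marker."""
    return AI_MARKER in image_bytes

# Hypothetical fragment of an XMP packet as an image generator might write it.
xmp_fragment = (
    b"<Iptc4xmpExt:DigitalSourceType>"
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    b"</Iptc4xmpExt:DigitalSourceType>"
)

print(looks_ai_generated(xmp_fragment))           # True
print(looks_ai_generated(b"ordinary JPEG data"))  # False
```

<p>A scan like this only catches co-operative generators that write the marker; stripped or re-encoded images would evade it, which is why watermarking and detection remain open problems.</p>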
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-openai-saga-demonstrates-how-big-corporations-dominate-the-shaping-of-our-technological-future-218540">The OpenAI saga demonstrates how big corporations dominate the shaping of our technological future</a>
</strong>
</em>
</p>
<hr>
<h2>What can we do?</h2>
<p>Australia’s eSafety commissioner <a href="https://www.esafety.gov.au/industry/tech-trends-and-challenges/generative-ai">is working on ways to regulate and mitigate</a> the potential harm caused by generative AI while balancing its potential opportunities.</p>
<p>A key idea is “safety by design”, which requires tech firms to place these safety considerations at the core of their products.</p>
<p>Other countries, such as the US, are further ahead with the regulation of AI. For example, US President Joe Biden’s recent executive order <a href="https://www.theguardian.com/technology/2023/oct/30/biden-orders-tech-firms-to-share-ai-safety-test-results-with-us-government">on the safe deployment of AI</a> requires companies to share safety test results with the government, regulates <a href="https://en.wikipedia.org/wiki/Red_team">red-team testing</a> (simulated hacking attacks) and provides guidance on watermarking AI-generated content.</p>
<p>We call for three steps to help protect against the risks of generative AI in combination with disinformation.</p>
<p>1. Regulation needs <a href="https://www.linkedin.com/posts/noamsp_3-steps-to-reshaping-our-digital-landscape-activity-7152649121189797889-WEct">to set clear rules</a>, without allowing for nebulous “best effort” aims or “trust us” approaches.</p>
<p>2. To protect against large-scale disinformation operations, we need to teach media literacy in the same way we teach maths.</p>
<p>3. Safety tech or “safety by design” needs to become a non-negotiable part of every product development strategy.</p>
<p>People are aware AI-generated content is on the rise. In theory, they should adjust their information habits accordingly. However, research shows users <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8196605/">generally tend to underestimate</a> their own risk of believing fake news compared to the perceived risk for others.</p>
<p>Finding trustworthy content shouldn’t involve sifting through AI-generated content to make sense of what is factual.</p><img src="https://counter.theconversation.com/content/224626/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Stan Karanasios receives funding from Emergency Management Victoria, Asia-Pacific Telecommunity, and the International Telecommunications Union.
Stan is a Distinguished Member of the Association for Information Systems.</span></em></p><p class="fine-print"><em><span>Marten Risius is the recipient of an Australian Research Council Australian Discovery Early Career Award (project number DE220101597) funded by the Australian Government.</span></em></p>It’s increasingly hard to tell which content online is fake. As malicious actors use generative AI to fuel disinformation, governments must regulate now before it’s too late.Stan Karanasios, Associate Professor, The University of QueenslandMarten Risius, Senior Lecturer in Business Information Systems, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2233922024-02-15T15:58:50Z2024-02-15T15:58:50ZDisinformation threatens global elections – here’s how to fight back<figure><img src="https://images.theconversation.com/files/575950/original/file-20240215-22-at0x1v.jpg?ixlib=rb-1.1.0&rect=180%2C90%2C5826%2C3890&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Some Republicans still believe the 2020 election was "stolen" from Donald Trump.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/helena-montana-nov-7-2020-protesters-1849449790">Lyonstock/Shutterstock</a></span></figcaption></figure><p>With over half the world’s population heading to the polls in 2024, disinformation season is upon us — and the warnings are dire. The World Economic Forum <a href="https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf">declared</a> misinformation a top societal threat over the next two years and major news organisations <a href="https://www.nbcnews.com/tech/misinformation/disinformation-unprecedented-threat-2024-election-rcna134290">caution</a> that disinformation poses an unprecedented threat to democracies worldwide. </p>
<p>Yet, some scholars and pundits have <a href="https://theconversation.com/disinformation-is-often-blamed-for-swaying-elections-the-research-says-something-else-221579">questioned</a> whether disinformation can really sway election outcomes. Others think concern over disinformation is just a <a href="https://undark.org/2023/10/26/opinion-misinformation-moral-panic/">moral panic</a> or merely a <a href="https://iai.tv/articles/misinformation-is-the-symptom-not-the-disease-daniel-walliams-auid-2690">symptom</a> rather than the cause of our societal ills. Pollster Nate Silver even thinks that misinformation “<a href="https://twitter.com/NateSilver538/status/1745556135157899389">isn’t a coherent concept</a>”.</p>
<p>But we argue the evidence tells a different story.</p>
<p>A 2023 study showed that the vast majority of academic <a href="https://misinforeview.hks.harvard.edu/article/a-survey-of-expert-views-on-misinformation-definitions-determinants-solutions-and-future-of-the-field/">experts</a> agree on how to define misinformation (namely, as false and misleading content) and on what it looks like (for example, lies, conspiracy theories and pseudoscience). Although the study didn’t cover disinformation, such experts generally agree that it can be defined as intentional misinformation.</p>
<p>A recent paper <a href="https://www.nature.com/articles/s44271-023-00054-5">clarified</a> that misinformation can be both a symptom and the disease. In 2022, nearly 70% of Republicans still <a href="https://www.politifact.com/article/2022/jun/14/most-republicans-falsely-believe-trumps-stolen-ele/">endorsed</a> the false conspiracy theory that the 2020 US presidential election was “stolen” from Donald Trump. If Trump had never floated this theory, how would millions of people possibly have acquired these beliefs?</p>
<p>Moreover, although it is clear that people do not always act on dangerous beliefs, the January 6 US Capitol riots, incited by false claims, serve as an important reminder that a <a href="https://www.politifact.com/article/2021/jun/30/misinformation-and-jan-6-insurrection-when-patriot/">misinformed</a> crowd can disrupt and undermine democracy. </p>
<p>Given that nearly 25% of elections are decided by a margin of <a href="https://www.pnas.org/doi/full/10.1073/pnas.1419828112">under 3%</a>, mis- and disinformation can have a decisive influence. One <a href="https://www.sciencedirect.com/science/article/pii/S0261379418303019">study</a> found that among previous Barack Obama voters who did not buy into any fake news about Hillary Clinton during the 2016 presidential election, 89% voted for Clinton. By contrast, among prior Obama voters who believed at least two fake headlines about Clinton, only 17% voted for her. </p>
<p>While this doesn’t necessarily prove that the misinformation caused the voting behaviour, we do know that <a href="https://www.channel4.com/news/revealed-trump-campaign-strategy-to-deter-millions-of-black-americans-from-voting-in-2016">millions</a> of black voters were targeted with misleading ads discrediting Clinton in key swing states ahead of the election. </p>
<p>Research has shown that such micro-targeting of specific audiences based on
variables such as their personality not only influences <a href="https://www.pnas.org/doi/full/10.1073/pnas.1710966114">decision-making</a> but also impacts <a href="https://journals.sagepub.com/doi/full/10.1177/0093650220961965">voting intentions</a>. A recent <a href="https://academic.oup.com/pnasnexus/advance-article/doi/10.1093/pnasnexus/pgae035/7591134">paper</a> illustrated how large language models can be deployed to craft micro-targeted ads at scale, estimating that for every 100,000 individuals targeted, at least several thousand can be persuaded.</p>
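<p>As a back-of-the-envelope illustration of why those two figures matter together, consider the arithmetic below. The electorate size, share of voters reached and exact persuasion rate are assumptions chosen for illustration; only the “several thousand per 100,000” estimate and the 3% margin come from the research cited above.</p>

```python
# Back-of-the-envelope arithmetic, not a model: combine the cited figures
# ("at least several thousand persuaded per 100,000 targeted" and "nearly
# 25% of elections decided by under 3%") to see when micro-targeting could
# plausibly swing a close race. All concrete numbers below are assumptions.

persuaded_per_100k = 3_000                       # assumption: "several thousand"
persuasion_rate = persuaded_per_100k / 100_000   # i.e. 3% of those targeted

electorate = 500_000   # hypothetical constituency size
targeted = 300_000     # hypothetical number of voters reached by the campaign
margin = 0.03          # a 3% margin of victory, per the cited statistic

persuaded = targeted * persuasion_rate
# Each flipped voter subtracts one vote from one side and adds one to the
# other, so closing a 3% gap requires flipping only half the margin.
votes_needed_to_flip = electorate * margin / 2

print(persuaded)             # 9000.0 voters persuaded
print(votes_needed_to_flip)  # 7500.0 flipped votes needed to overturn the result
```

<p>Under these assumed numbers the persuaded group exceeds the votes needed to flip the outcome — which is the point of the paper’s estimate: modest per-person effects become consequential at scale in close races.</p>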
<p>We also know that not only are people bad at <a href="https://www.cell.com/iscience/pdf/S2589-0042(21)01335-3.pdf">discerning</a> deepfakes (AI-generated images of fake events) from genuine content, but studies also find that deepfakes can influence <a href="https://journals.sagepub.com/doi/full/10.1177/1940161220944364">political</a> attitudes among small target groups. </p>
<p>There are more indirect consequences of disinformation too, such as eroding public <a href="https://journals.sagepub.com/doi/full/10.1177/1461444820943878">trust</a> and <a href="https://www.pnas.org/doi/abs/10.1073/pnas.2115900119">participation</a> in elections.</p>
<p>Other than hiding under our beds and worrying, what can we do to protect ourselves?</p>
<h2>The power of prebunking</h2>
<p>Many efforts have focused on fact-checking and debunking false beliefs. In contrast, <a href="https://www.tandfonline.com/doi/full/10.1080/10463283.2021.1876983">“prebunking”</a> is a new way to prevent false beliefs from forming in the first place. Such “inoculation” involves warning people not to fall for a false narrative or propaganda tactic, together with an explanation as to why. </p>
<p>Misinforming rhetoric has clear <a href="https://journals.sagepub.com/doi/full/10.1177/09579265221076609">markers</a>, such as scapegoating or use of false dichotomies (there are many others), that people can learn to identify. Like a medical vaccine, the prebunk exposes the recipient to a “weakened dose” of the infectious agent (the disinformation) and refutes it in a way that confers protection. </p>
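<p>As a toy illustration only — real prebunking interventions teach far subtler cues than keyword matching, and the cue phrases below are invented for the example — a crude scanner for the two markers named above might look like this:</p>

```python
# Toy sketch: flag text containing crude lexical cues of two manipulation
# markers discussed above (false dichotomies and scapegoating). The cue
# lists are assumptions for illustration; genuine marker detection relies
# on human judgement or far richer language models.

MARKER_CUES = {
    "false dichotomy": ["either you", "you're either", "only two choices"],
    "scapegoating": ["they are to blame", "it's their fault"],
}

def flag_markers(text: str) -> list[str]:
    """Return the names of any manipulation markers whose cues appear."""
    lowered = text.lower()
    return [name for name, cues in MARKER_CUES.items()
            if any(cue in lowered for cue in cues)]

print(flag_markers("You're either with us or against us."))  # ['false dichotomy']
print(flag_markers("Turnout was high in several districts."))  # []
```

<p>The value of naming markers like these is exactly what inoculation research exploits: once a reader can label the tactic, the weakened dose has done its work.</p>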
<p>For example, we created an online <a href="https://www.vice.com/en/article/dy8vzm/homeland-security-funded-this-game-about-destabilizing-a-small-us-town">game</a> for the Department of Homeland Security to empower Americans to spot foreign influence techniques during the 2020 presidential election. The weakened dose? <a href="https://www.nbcnews.com/news/us-news/u-s-cybersecurity-agency-uses-pineapple-pizza-demonstrate-vulnerability-foreign-n1035296">Pineapple pizza</a>.</p>
<p>How could pineapple pizza possibly be the way to tackle misinformation? It shows how bad-faith actors can take an innocuous issue such as whether or not to put pineapple on pizza, and use this to try to start a culture war. They might claim it’s offensive to Italians or urge Americans not to let anybody restrict their pizza-topping freedom.</p>
<p>They can then buy bots to amplify the issue on both sides, disrupt debate – and sow chaos. Our <a href="https://misinforeview.hks.harvard.edu/article/breaking-harmony-square-a-game-that-inoculates-against-political-misinformation/">results</a> showed that people improved in their ability to recognise these tactics after playing our inoculation game. </p>
<p>In 2020, <a href="https://www.npr.org/2022/10/28/1132021770/false-information-is-everywhere-pre-bunking-tries-to-head-it-off-early">Twitter</a> identified false election tropes as potential “vectors of misinformation” and sent out prebunks to millions of US users warning them of fraudulent claims, such as that voting by mail is not safe. </p>
<p>These prebunks armed people with a fact — that experts agree that voting by mail is reliable — and it worked insofar as the prebunks inspired confidence in the election process and motivated users to seek out more factual information. Other tech companies, such as <a href="https://medium.com/jigsaw/prebunking-to-build-defenses-against-online-manipulation-tactics-in-germany-a1dbfbc67a1a">Google</a> and <a href="https://sustainability.fb.com/blog/2022/10/24/climate-science-literacy-initiative/">Meta</a>, have followed suit across a range of issues. </p>
<p>A new <a href="https://bpb-us-e1.wpmucdn.com/sites.dartmouth.edu/dist/5/2293/files/2024/02/voter-fraud-corrections-e163369556a2d7a4.pdf">paper</a> tested inoculation against false claims about the election process in the US and Brazil. Not only did it find that prebunking worked better than traditional debunking; it also found that inoculation improved discernment between true and false claims, effectively reduced election fraud beliefs and improved confidence in the integrity of the upcoming 2024 elections. </p>
<p>In short, inoculation is a <a href="https://futurefreespeech.org/background-report-empowering-audiences-against-misinformation-through-prebunking/">free speech</a>-empowering intervention that can work on a global scale. When Russia was looking for a pretext to invade Ukraine, US president Joe Biden used this approach to “<a href="https://www.deseret.com/opinion/2022/3/2/22955870/opinion-how-the-white-house-prebunked-putins-lies-disinformation-joe-biden-donald-trump-russia">inoculate</a>” the world against Putin’s plan to stage and film a fabricated Ukrainian atrocity, complete with actors, a script and a movie crew. Biden declassified the intelligence and exposed the plot.</p>
<p>In effect, he warned the world not to fall for fake videos with actors pretending to be Ukrainian soldiers on Russian soil. Forewarned, the international community was <a href="https://www.economist.com/united-states/2022/02/26/deploying-reality-against-putin">unlikely</a> to fall for it. Russia found another pretext to invade, of course, but the point remains: forewarned is forearmed.</p>
<p>But we need not rely on government or tech firms to build <a href="https://harpercollins.co.uk/products/mental-immunity-infectious-ideas-mind-parasites-and-the-search-for-a-better-way-to-think-andy-norman?variant=39295503597646">mental immunity</a>. We can all <a href="https://interventions.withgoogle.com/static/pdf/A_Practical_Guide_to_Prebunking_Misinformation.pdf">learn</a> how to spot misinformation by studying the markers accompanying misleading rhetoric.</p>
<p>Remember that polio was a highly infectious disease that was eradicated through vaccination and herd immunity. Our challenge now is to build herd immunity to the tricks of disinformers and propagandists. </p>
<p>The future of our democracy may depend on it.</p><img src="https://counter.theconversation.com/content/223392/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sander van der Linden consults for or receives funding from the UK Government's Cabinet Office, The U.S. State Department, the American Psychological Association, the US Center for Disease Control, the European Commission, the Templeton World Charity Foundation, the United Nations, the World Health Organization, Google, and Meta. </span></em></p><p class="fine-print"><em><span>Lee McIntyre advises the UK Government on how to fight disinformation.</span></em></p><p class="fine-print"><em><span>Stephan Lewandowsky receives funding from the European Research Council (ERC Advanced Grant 101020961 PRODEMINFO), the
Humboldt Foundation through a research award, the Volkswagen Foundation (grant “Reclaiming individual autonomy and democratic discourse online: How to rebalance human and algorithmic decision making”), and the European Commission (Horizon 2020 grants 964728 JITSUVAX and 101094752 SoMe4Dem). He also receives funding from Jigsaw (a technology incubator created by Google) and from UK Research and Innovation (through EU Horizon replacement funding grant number 10049415). He collaborates with the European Commission's Joint Research Centre.</span></em></p>Scientists estimate that for every 100,000 people targeted with specific political ads, several thousand can be persuaded.Sander van der Linden, Professor of Social Psychology in Society, University of CambridgeLee McIntyre, Research Fellow, Center for Philosophy and History of Science, Boston UniversityStephan Lewandowsky, Chair of Cognitive Psychology, University of BristolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2228082024-02-09T12:19:12Z2024-02-09T12:19:12ZIt may be too late to stop the great election disinformation campaigns of 2024 but we have to at least try<figure><img src="https://images.theconversation.com/files/573598/original/file-20240205-19-3y7yaz.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C953%2C494&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/human-hand-holding-megaphone-vote-words-2410794649">Shutterstock/Master1305</a></span></figcaption></figure><p>Global liberal democracy faces a near unprecedented list of digital threats in 2024 as the increasing exploitation of AI and the rampant spread of disinformation threaten the integrity of elections in more than 60 countries. And we are woefully unprepared.</p>
<p>Votes are scheduled in India, Pakistan, Mexico and South Africa, to name but a few. A hotly contested election will be held for the European parliament in June and the US presidential elections are on the horizon in November. A general election is also due in the UK at some stage in the coming year.</p>
<p>These elections are all happening at a time when global security and the very foundations of democracy are under significant strain from the rise of populism, far-right ideologies and fascist movements. Meanwhile, trust in politicians and in mainstream institutions such as the media <a href="https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2023">remains extremely low</a>.</p>
<p>What we might have once dismissed as outlandish conspiracy theories, such as that <a href="https://www.npr.org/2024/02/01/1228373511/heres-why-conspiracy-theories-about-taylor-swift-and-the-super-bowl-are-spreadin">Taylor Swift is secretly working for the Pentagon</a> and the Super Bowl is rigged, are gaining traction, and <a href="https://spssi.onlinelibrary.wiley.com/doi/full/10.1111/sipr.12091">social cohesion is fraying</a> as people segregate into isolated echo chambers online. </p>
<p>There is a real danger that unless we act now to protect the public these issues will only be exacerbated by the threats posed by AI, Russian disinformation campaigns, and the invasive use of technology to target voters in the coming months.</p>
<h2>AI, deepfakes and disinformation</h2>
<p>It’s already clear that 2024 will be known as the year of the first AI elections. AI’s ability to distil near-infinite amounts of data into actionable intelligence, and to produce personalised content to sway public opinion, will assuredly be used by mainstream political parties seeking to gain a tactical advantage in campaigning. </p>
<p>We are already seeing parties use <a href="https://www.cnbc.com/2023/12/17/how-2024-presidential-candidates-are-using-ai-in-election-campaigns.html">AI to analyse data on voting patterns</a> and to target voters in real time with <a href="https://theconversation.com/four-trends-youll-see-in-online-election-campaigns-this-year-222433">algorithmically-driven ad placements</a>.</p>
<p>There’s nothing inherently wrong or illegal about that, though it will alarm civil libertarians and does need to be regulated. The malevolent use of AI by rogue actors is far more concerning. Deepfakes – false or manipulated texts, images, video and audio – are already being spread via the gaming of algorithms with the intention of <a href="https://www.cnbc.com/2023/09/20/ai-could-harm-2024-us-election-senate-intelligence-chair-warns.html">manipulating voters</a>.</p>
<p>A deepfake <a href="https://apnews.com/article/new-hampshire-primary-biden-ai-deepfake-robocall-f3469ceb6dd613079092287994663db5">AI-manipulated voice of US president Joe Biden</a> was deployed in New Hampshire last month, urging voters not to turn out in the state’s primary contest. During Slovakia’s parliamentary elections last year, a deepfake audio recording went viral on social media, falsely depicting a party leader claiming to have <a href="https://www.cfr.org/blog/campaign-roundup-deepfake-threat-2024-election">rigged the election and planning to increase beer prices</a>. </p>
<p>There are allegations that deepfakes were used in an attempt to sway voters in <a href="https://www.context.news/ai/are-ai-deepfakes-a-threat-to-elections">Argentina</a>, <a href="https://www.instagram.com/p/Cr2C5aqJXyy/">New Zealand</a> and <a href="https://www.reuters.com/world/middle-east/erdogan-rival-accuses-russia-deep-fake-campaign-ahead-presidential-vote-2023-05-12/">Turkey</a> in the past year. It’s certain we will see highly sophisticated deepfakes circulated in many countries by rogue actors in the coming months in an attempt to influence voters, sow dissent, and put politicians on the defensive.</p>
<h2>Bad actors</h2>
<p>The potential for state orchestrated disinformation campaigns is evidently also a concern in the democracies holding elections this year. US State Department officials have claimed that <a href="https://www.state.gov/disarming-disinformation/">Russia is planning to use disinformation</a> to try to influence public opinion against Ukraine during the numerous elections scheduled across Europe this year. </p>
<p>In October last year the US sent a <a href="https://www.state.gov/russias-pillars-of-disinformation-and-propaganda-report/">declassified intelligence assessment to more than 100 governments</a> accusing Moscow of using spies, social media and sympathetic media to spread disinformation and erode public faith in the integrity of election outcomes. Just last month the German Foreign Ministry disclosed that its security agencies had <a href="https://www.theguardian.com/world/2024/jan/26/germany-unearths-pro-russia-disinformation-campaign-on-x">exposed an extensive pro-Russian disinformation operation</a>, orchestrated using thousands of fake social media accounts.</p>
<p><a href="https://www.nato.int/cps/en/natohq/115204.htm">NATO</a> and the <a href="https://www.consilium.europa.eu/en/documents-publications/library/library-blog/posts/the-fight-against-pro-kremlin-disinformation/">European Union</a> have also warned against the threats to democratic cohesion caused by Kremlin-fuelled disinformation campaigns.</p>
<p>In India, the ultra-nationalist government of Narendra Modi has been accused of <a href="https://www.washingtonpost.com/world/2023/12/10/india-the-disinfo-lab-discredit-critics/">running a covert disinformation operation,</a> circulating propaganda to discredit foreign critics, attack political opponents and target Muslims and other ethnic and religious minorities. Human Rights Watch <a href="https://www.hrw.org/news/2024/01/11/india-increased-abuses-against-minorities-critics">reports</a> increased attacks against ethnic and religious minorities including Muslims, as well as journalists and opposition leaders.</p>
<h2>Taking action</h2>
<p>Calling for action now is almost moot as it’s probably already too late. The fact that there are so many elections happening simultaneously around the world in 2024 only exacerbates the problem. However, we must at least try.</p>
<p>An urgent global effort among nations is needed to set the ground rules for how the use of AI is to be regulated, particularly around elections. The US Senate is currently considering the <a href="https://www.congress.gov/bill/118th-congress/senate-bill/2770?q=%257B%2522search%2522:%2522deceptive+AI%2522%257D&s=1&r=1">Protecting Elections from Deceptive AI</a> Act, while <a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473">the EU reached a tentative agreement</a> in December to regulate AI, becoming the first major global power to do so. </p>
<p>Laws need to force transparency in how AI models are trained and deployed, and require disclosure for when they are used in political campaigning. The worry is that the pace at which the technology is advancing is outpacing efforts to safeguard the public.</p>
<p>Social media platforms must be held accountable for disinformation spread. Companies like X, Meta and Alphabet have <a href="https://www.cnbc.com/2023/05/26/tech-companies-are-laying-off-their-ethics-and-safety-teams-.html">downsized teams dedicated to integrity</a>, hindering proactive disinformation countermeasures. Tough new laws are needed to force these tech monoliths to tackle disinformation and force transparency in algorithms and political ad targeting.</p>
<p>Proactive strategies like <a href="https://misinforeview.hks.harvard.edu/article/global-vaccination-badnews/">pre-bunking</a> (teaching people to spot fake news) and rapid response strategies are essential to combat election interference. Media outlets also need to learn from past mistakes and balance truthful reporting with free speech, avoiding the “false balance” trap of amplifying disinformation from populist politicians masquerading as legitimate discourse.</p>
<p>Finally, we must find ways to tackle the echo chambers and conspiracy theories that threaten to derail social cohesion. Gaining back public trust in institutions such as the mainstream media and government is not going to be easy. </p>
<p>There are no magic spells to fix this overnight. But we can’t just sit back and accept the status quo. Education in media literacy is also vital to defend against disinformation.</p>
<p>But while these steps may keep the mainstream parties honest, they will do nothing to stop the bad actors. Russia, China and Iran are all likely to attempt to shape geo-political outcomes in their favour in 2024 by attempting to <a href="https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2023/11/MTAC-Report-2024-Election-Threat-Assessment-11082023-2-1.pdf">interfere in elections</a>.</p>
<p>The stability of global democracy may well depend on how these emerging threats are navigated in the months to come. When Donald Trump claimed the 2020 election was stolen, thousands of his supporters stormed the US Capitol. He may well be the president of the US again in November.</p><img src="https://counter.theconversation.com/content/222808/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Tom Felle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>With an unprecedented number of votes happening around the world, the information environment will be chaotic, to say the least.Tom Felle, Associate Professor of Journalism, University of GalwayLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2215792024-01-26T17:57:55Z2024-01-26T17:57:55ZDisinformation is often blamed for swaying elections – the research says something else<figure><img src="https://images.theconversation.com/files/571138/original/file-20240124-29-k5hu7q.jpg?ixlib=rb-1.1.0&rect=50%2C175%2C5575%2C3530&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/color-image-some-people-voting-polling-435657658">Alexandru Nika/Shutterstock</a></span></figcaption></figure><p>Many countries <a href="https://en.wikipedia.org/wiki/List_of_elections_in_2024">face general elections</a> this year. Political campaigning will include misleading and even false information. Just days ago, <a href="https://www.bloomberg.com/news/articles/2024-01-23/fake-biden-robocall-message-in-new-hampshire-alarms-election-experts?leadSource=uverify%20wall">it was reported</a> that a robocall impersonating US president Joe Biden had told recipients not to vote in the presidential primary. </p>
<p>But can disinformation significantly influence voting? </p>
<p>There are two typical styles of election campaigning. One is positive, presenting favourable attributes of politicians and their policies, and the other is negative – disparaging the opposition. The latter <a href="https://link.springer.com/article/10.1057/s41253-019-00084-8;">can backfire</a>, though, or lead to <a href="https://www.journals.uchicago.edu/doi/abs/10.1111/j.1468-2508.2007.00618.x?casa_token=kG3-EyUhaHYAAAAA:UydVoChML-dFiFC370Su8gRQmPSAMV1E0cqg0cZ2owdl-NSw4uvQvHsjXIpdxpebgYZXAYb5aDWX">voters disengaging</a> with the entire democratic process. </p>
<p>Voters are already <a href="https://www.annualreviews.org/doi/abs/10.1146/annurev.polisci.10.071905.101448?casa_token=a0oggffzdCkAAAAA:61ee1-KZtnN5OvUoordIlQChJwegerDlKfg6q5bCJZXUy-ND70U_4ZcapONNd1mibsDPVD8jjSvHYw">fairly savvy</a> – they know that campaigning tactics often include distortions and untruths. Both types of tactics, positive and negative, <a href="https://www.unhcr.org/innovation/wp-content/uploads/2022/02/Factsheet-4.pdf">can feature misinformation</a>, which loosely refers to inaccurate, false and misleading information. Sometimes this even counts as disinformation, because the details are deliberately designed to be misleading. </p>
<p>Unfortunately, recent research shows that the <a href="https://theconversation.com/misinformation-why-it-may-not-necessarily-lead-to-bad-behaviour-199123">lack of clarity in defining</a> misinformation and disinformation is a problem. There is no consensus. Scientifically and practically, this is bad. It’s hard to chart the scale of a problem if your starting point includes <a href="https://journals.sagepub.com/doi/full/10.1177/17456916221141344">vague or confused</a> concepts. This is a problem for the general public, too, given it makes it harder to decipher and trust research on the topic.</p>
<p>For example, depending on how inclusive the definition is, <a href="https://books.google.com/books?hl=en&lr=&id=hB5sEAAAQBAJ&oi=fnd&pg=PA173&dq=public+perceptions+negative+election+campaigning+%22propaganda%22&ots=i47RTsBtju&sig=JYS30Bjr6Hu17xdxRn50HXlsAPY">propaganda</a>, <a href="https://ideas.repec.org/a/taf/rcybxx/v5y2020i2p199-217.html">deep fakes</a>, <a href="https://www.aeaweb.org/articles?id=10.1257/jep.31.2.211">fake news</a> and <a href="https://pubmed.ncbi.nlm.nih.gov/35039654/">conspiracy theories</a> are all examples of disinformation. But <a href="https://edisciplinas.usp.br/pluginfile.php/4948550/mod_resource/content/1/Fake%20News%20Digital%20Journalism%20-%20Tandoc.pdf">news parody or political satire</a> can be too. </p>
<p>Unfortunately, researchers <a href="https://doi.org/10.1016/j.copsyc.2020.03.014">often fail to provide clear definitions</a>, and do not carefully compare different types of disinformation, adding uncertainty to evidence examining its effect on voting behaviour. </p>
<p>Nevertheless, let’s investigate the research so far on disinformation, which is generally viewed as more serious than misinformation, to see <a href="https://misinforeview.hks.harvard.edu/article/explaining-beliefs-in-electoral-misinformation-in-the-2022-brazilian-election-the-role-of-ideology-political-trust-social-media-and-messaging-apps/">how much influence it can really have</a> on the way we vote. </p>
<h2>Unconvincing findings</h2>
<p>Consider <a href="https://www.sciencedirect.com/science/article/pii/S0048733322001494">a study published in 2023</a>, investigating the role of fake news in the Italian general elections in 2013 and 2018. It used debunking websites to help create a fake news score for articles published in the run-up to the election.</p>
<p>Then the researchers analysed populist parties’ pre-election Facebook posts containing such news content. This also generated an engagement score based on the number of likes and shares of the posts. </p>
<p>Finally, these scores were combined with actual electoral votes for populist parties to gauge the possible influence of fake news on those votes. The researchers estimated that fake news delivered a small but statistically significant electoral gain for populist parties. They cautioned, however, that fake news could not be the sole cause of the overall increase in the populist vote share: it appeared to be only a modest contributor to it.</p>
<p>Similar studies showing <a href="https://www.science.org/doi/10.1126/science.aau2706">low effects</a> of fake news on persuading voters have led some researchers <a href="https://www.nature.com/articles/s41562-020-0833-x">to argue</a> that the panic about fake news is overblown. </p>
<p>Other recent studies have looked at the potential influence of disinformation by asking people how they intended to vote and whether they believed specific pieces of disinformation. This was examined in national or presidential elections in <a href="https://www.martenscentre.eu/wp-content/uploads/2023/04/15.pdf">the Czech Republic in 2021</a>, <a href="https://www.tandfonline.com/doi/abs/10.1080/23743670.2020.1719858?casa_token=G5kslUWsQRkAAAAA:ZW_ghmhO0phxYhgElEnuToqcAK_f_3o2BLrzew-RW0tlNZBX9_UuXgricYyuzZ-qgvZVQUgfoycKXw">Kenya in 2017</a>, <a href="https://www.ajpor.org/article/12982-analysis-of-fake-news-in-the-2017-korean-presidential-election">South Korea in 2017</a>, <a href="https://www.tandfonline.com/doi/abs/10.1080/23743670.2020.1719858">Indonesia in 2019, Malaysia in 2018</a>, <a href="https://www.martenscentre.eu/wp-content/uploads/2023/04/15.pdf">the Philippines in 2022</a> and <a href="https://www.ajpor.org/article/12985-does-fake-news-matter-to-election-outcomes-the-case-study-of-taiwan-s-2018-local-elections">Taiwan in 2018</a>. </p>
<p>The general finding across all these studies was that it is hard to establish a reliable causal influence of fake news on voting. One reason is that how people say they will vote and how they actually vote can differ substantially. </p>
<p>In fact, research has gone into understanding the reasons for dramatic failures of traditional pollsters to predict elections and referendums <a href="https://journalofbigdata.springeropen.com/articles/10.1186/s40537-021-00525-8">in Argentina in 2019</a>, <a href="https://www.cambridge.org/core/journals/canadian-journal-of-political-science-revue-canadienne-de-science-politique/article/abs/quebec-2018-a-failure-of-the-polls/97380BA7567B11B95E88FAA2149BDC51">Quebec in 2018</a>, <a href="https://www.researchgate.net/publication/319982710_Collective_failure_Lessons_from_combining_forecasts_for_the_UK's_referendum_on_EU_membership">the UK in 2016</a> and <a href="https://digitalcommons.unl.edu/sociologyfacpub/543/">the US in 2016</a>. For many reasons, people did not reveal their actual voting intentions to pollsters and researchers. </p>
<h2>Who is susceptible?</h2>
<p>What about specific groups of voters, though? Might some be more influenced by disinformation than others? Political affiliation doesn’t seem to matter. People tend <a href="https://doi.org/10.1016/j.copsyc.2020.03.014">to rate fake news as accurate</a> when it’s in line with their own political beliefs. For instance, in the 2016 US presidential election, supporters of both Hillary Clinton and Donald Trump <a href="https://doi.org/10.1111/ajpy.12233">were equally likely</a> to rate fake news about the opposition as accurate. </p>
<p>How about undecided voters? Some studies show that undecided voters are more likely than decided voters to <a href="https://doi.org/10.1080/1369118X.2021.1883706">consider fake news headlines as credible</a>. But the opposite has also been shown – that they are <a href="https://www.aeaweb.org/articles?id=10.1257/jep.31.2.211">less susceptible to political fake news</a>. </p>
<p>Still, to maximise the influence of disinformation in an election, undecided voters would be the obvious target, especially in close-run elections. But accurately profiling undecided voters <a href="https://doi.org/10.1111/rssa.12414">is difficult</a> – especially since people are cautious in revealing their voting intentions and the reasons behind them.</p>
<p>And if politicians or campaign staff use <a href="https://journals.sagepub.com/doi/abs/10.1177/1369148119842038">disinformation in aggressive negative campaigning</a> to sway undecided voters, they can end up increasing disengagement in the election process – making some people even more undecided.</p>
<p>Ultimately, most research suggests that fake news <a href="https://journals.sagepub.com/doi/full/10.1177/17456916221141344">is more likely to enhance existing beliefs</a> and views rather than <a href="https://link.springer.com/article/10.1007/s00146-020-00980-6">radically change voting intentions</a> of the undecided. </p>
<p>Another issue that often gets ignored is a phenomenon known in psychology as <a href="https://psycnet.apa.org/record/2001-16230-004">the third-person effect</a> – that we think that others are more persuadable, and even gullible, than ourselves. </p>
<p>So when it comes to who is susceptible to disinformation, it is likely that those studying it, as well as those participating in the studies, <a href="https://misinforeview.hks.harvard.edu/article/the-presumed-influence-of-election-misinformation-on-others-reduces-our-own-satisfaction-with-democracy/">assume they are immune</a> but that everyone else, such as supporters of the opposing political party, is not – making the evidence harder to interpret. </p>
<p>It would be naive to say that disinformation, <a href="https://books.google.co.uk/books/about/Politics_and_Propaganda.html?id=FTrgh74moswC">such as political propaganda</a>, doesn’t have any influence on voting. But we should be careful not to assign disinformation as the sole explanation for election results that go against predictions.</p>
<p>If we assign disinformation such a high level of influence, we ultimately deny people’s agency in making free voting choices. And studies show that <a href="https://www.researchgate.net/publication/375301055_Folk_beliefs_about_where_manipulation_outside_of_awareness_occurs_and_how_much_awareness_and_free_choice_is_still_maintained">we are aware</a> that manipulative methods are used on us. Still, we all judge that we can maintain <a href="https://psycnet.apa.org/record/2023-13856-001">an ability to make our own choice</a> when voting.</p>
<p>It’s important to take this seriously. Our belief in free will is ultimately a reason so many of us back democracy in the first place. Denying it can arguably be more damaging than a few fake news posts lurking on social media.</p>
<p class="fine-print"><em><span>Magda Osman receives funding from Research England, ESRC, Wellcome Trust, and Turing Institute. </span></em></p>
<p class="fine-print"><em><span>Magda Osman, Principal Research Associate in Basic and Applied Decision Making, Cambridge Judge Business School. Licensed as Creative Commons – attribution, no derivatives.</span></em></p>
<h1>Some people who share fake news on social media actually think they’re helping the world</h1>
<p class="fine-print"><em><span>Published 2024-01-17.</span></em></p>
<figure><img src="https://images.theconversation.com/files/569339/original/file-20240115-25-gr73c2.jpg?ixlib=rb-1.1.0&rect=692%2C617%2C7020%2C4634&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">'You're welcome!'</span> <span class="attribution"><span class="source">Shutterstock/Roman Samborskyi</span></span></figcaption></figure><p><a href="https://www.weforum.org/publications/global-risks-report-2024/digest/">Misinformation</a> is the number one risk facing society over the next two years, according to the World Economic Forum. With key elections due in the US, UK and many other nations this year, an onslaught of political misinformation can be expected.</p>
<p>Some of this material is distributed through paid advertising on social media, like the AI generated “deep fake” videos of British prime minister <a href="https://www.theguardian.com/technology/2024/jan/12/deepfake-video-adverts-sunak-facebook-alarm-ai-risk-election">Rishi Sunak</a> doing the rounds. However, we know that much of <a href="https://www.science.org/doi/full/10.1126/science.aap9559">the spread of false material</a> is due to the actions of individual social media users.</p>
<p><a href="https://www.pewresearch.org/politics/wp-content/uploads/sites/4/2022/06/PDL_06.16.22_Twitter_Politics_full_report.pdf">Many people</a> share political news online. Inevitably some of that news is false. Fake political news is, after all, common. It’s not unusual to see it as you scroll through your social media feeds.</p>
<p>One of the main ways in which fake news spreads is when people share it to their own social networks. Some genuinely believe the story to be true and share it by mistake. We’ve <a href="https://www.sciencedirect.com/science/article/pii/S0191886921004487">found</a> that around 20% of people report having shared a story they later found out was untrue. </p>
<p>However, like <a href="https://misinforeview.hks.harvard.edu/wp-content/uploads/2023/08/littrell_knowingly_sharing_false_political_info_20230825.pdf">other researchers</a>, we also find that around one in 10 people admit sharing political information that they knew at the time was untrue. </p>
<p>Why would these people deliberately spread lies? Are they deliberately setting out to do harm? Or do they perhaps think it’s acceptable to spread because it supports ideas they hold strongly and <a href="https://www.sciencedirect.com/science/article/pii/S2352250X24000010">“might as well be true”</a>?</p>
<h2>Meaning well, meaning ill</h2>
<p>Only a minority of people share false information but, given the vast scale of social media platforms, even that can lead to fake stories spreading like wildfire. This makes it harder for people to get news they can trust and leads people to believe things that simply aren’t true.</p>
<p><a href="https://journals.sagepub.com/doi/10.1177/20563051231192032">Our research</a> revealed that some people shared fake stories because they found them funny (because a story was “ludicrous”, as one participant put it). Others shared the misinformation specifically to highlight that it was false. Others minimised the harm they were doing by suggesting that sharing fake news wasn’t actually that serious. </p>
<p>Our findings reveal that some people behave in an antisocial way when it comes to fake news, deliberately sharing false information to achieve some personal objective, even if it means attacking other people or trying to manipulate them. Sharing false stories in this way can be used, for example, to affect people’s political views, whether by supporting a smear campaign against a politician or by boosting a politician’s clout. </p>
<p>People driven by such reasons seem not to be bothered by whether the news they are sharing is true or false, and may even view sharing news as a means of manipulation. At the very least, these people are being uncaring about the harmful effects of their actions. </p>
<figure class="align-center ">
<img alt="A phone showing the news with the word 'fake' written across it." src="https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Fake news is everywhere, and difficult to spot.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>In sharp contrast to these, some people share political news, whether true or false, with the best intentions. They seem to see sharing fake news as a way to make the world better. </p>
<p>“Good” reasons for sharing can reflect a desire to protect others (for example, by alerting them to potential dangers), to encourage people to “do the right thing”, or even to become socially or politically engaged. Other people may use news sharing as a force for good by pointing out that a particular story is false. Ironically though, that means the false story may spread even further. </p>
<h2>Dealing with fake news</h2>
<p>People can have strong reactions when they see a friend or family member sharing material they know is untrue. This is not a big surprise because misinformation tends to rely on <a href="https://www.nature.com/articles/s41599-022-01174-9">negative sentiment and to appeal to our morals</a>. It is the stories that make us emotional (for example by scaring us) that go viral in the first place.</p>
<p>However, the next time you see someone sharing a story you know to be false, and you think about giving them a piece of your mind or blocking them, remember that they may be unaware that they were doing harm and may even have been trying to do good. It may be that they were thinking only about themselves, but it may also be they have shared that story thinking that it benefits others. </p>
<p>Sharing false stories, even when done with the best intentions, may have implications that <a href="https://theconversation.com/disinformation-campaigns-are-undermining-democracy-heres-how-we-can-fight-back-217539">go beyond</a> people’s personal goals for sharing. When people expose others to misinformation in order to debunk it, they are potentially risking unintended <a href="https://journals.sagepub.com/doi/full/10.1177/1461444820943878">political consequences</a> such as increasing cynical perceptions towards election campaigns and politicians. </p>
<p>One way to reduce this risk and support the battle against misinformation is to follow <a href="https://www.who.int/campaigns/connecting-the-world-to-combat-coronavirus/how-to-report-misinformation-online">guidance on how to report false stories</a>, for example by marking them as false on the platform.</p>
<p>And if you yourself are tempted to share material that might not be true — for whatever reason — it is best to find other ways to get your message across.</p>
<p class="fine-print"><em><span>Tom Buchanan receives funding from The Leverhulme Trust. </span></em></p><p class="fine-print"><em><span>Deborah Husbands and Rotem Perach do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em><span>Rotem Perach, Lecturer in Psychology, University of Westminster; Deborah Husbands, Reader, Social Sciences, University of Westminster; Tom Buchanan, Professor of Psychology, University of Westminster. Licensed as Creative Commons – attribution, no derivatives.</span></em></p>
<h1>Misinformation: how fact-checking journalism is evolving – and having a real impact on the world</h1>
<p class="fine-print"><em><span>Published 2024-01-08.</span></em></p>
<p>“Fake news” loves a crisis. It’s clear now that false information has played a role in recent events around the world from <a href="https://www.reuters.com/world/americas/brazil-election-marked-by-disinformation-networks-says-carter-center-2022-11-05/">divisive elections</a> to <a href="https://joint-research-centre.ec.europa.eu/jrc-news-and-updates/misinformation-covid-19-what-did-we-learn-2023-02-21_en">the COVID pandemic</a> to <a href="https://edmo.eu/2023/10/17/edmo-preliminary-analysis-of-the-israel-hamas-conflict-related-disinformation/">the conflict roiling Israel and Gaza</a>.</p>
<p>It is important to counter false claims and false narratives. And research now offers much more clarity about how to do this.</p>
<p>In <a href="https://www.nytimes.com/2023/09/29/business/media/fact-checkers-misinformation.html">a rather downbeat article</a> in September 2023, the New York Times (NYT) reported that “the momentum behind organizations that aim to combat online falsehoods has started to taper off”. It reported that the number of fact-checking operations around the world had “stagnated”, after rising <a href="https://reporterslab.org/tag/fact-checking-database/">from 11 in 2008 to 424 in 2022</a> and dropping slightly to 417 today.</p>
<p>The NYT report captured some of the well-known challenges fact-checkers face. But it offered a distressingly narrow picture of the work they actually do every day, how the fact-checking community’s approach to countering false information has evolved, and the different ways their work can make a difference in the world. </p>
<p>On numbers alone, the picture is more complex than was presented. <a href="https://africacheck.org/who-we-are/our-team">Africa Check</a>, the first fact-checking organisation in Africa, has grown from a team of two in 2012 to a staff of 40 with offices in four countries today. </p>
<p>The same is true of <a href="https://maldita.es/quienes-somos">Maldita</a> – which started as a Twitter account run by two TV journalists and <a href="https://maldita.es/quienes-somos">today has a staff of more than 50</a>. In some regions, the number of operations has fallen back. In others, Africa, the Middle East and Asia, it is still growing.</p>
<p>A second challenge is one of scale. Since fact-checkers around the world started contributing to a database of checks operated by Google, known as <a href="https://developers.google.com/search/docs/appearance/structured-data/factcheck">Claim Review</a>, they had, as of late September 2023, verified almost 300,000 true and false claims. </p>
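To make this concrete, fact-checks enter the Claim Review database when publishers embed structured-data markup on their pages. The sketch below is a hypothetical example of that markup: the claim, names, dates and URLs are invented for illustration, while the field names follow the schema.org ClaimReview vocabulary that the database reads.

```html
<!-- Hypothetical fact-check markup; all values below are invented for illustration. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://example.org/fact-checks/flying-cars",
  "claimReviewed": "Flying cars were approved for city streets in 2023",
  "datePublished": "2024-01-05",
  "author": { "@type": "Organization", "name": "Example Fact Check" },
  "itemReviewed": {
    "@type": "Claim",
    "author": { "@type": "Person", "name": "Jane Politician" },
    "datePublished": "2024-01-02"
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "1",
    "bestRating": "5",
    "worstRating": "1",
    "alternateName": "False"
  }
}
</script>
```

Because each verified claim carries this machine-readable rating, aggregators can count and search checks across hundreds of organisations, which is how a running total like the nearly 300,000 claims above can be tallied.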
<p>That is an impressive number but tiny by comparison with the scale of the problem, which may, of course, be worsened by AI. Groups such as UK fact-checking charity <a href="https://fullfact.org/">Full Fact</a> are <a href="https://fullfact.org/blog/2021/jul/how-does-automated-fact-checking-work/">developing AI</a> to help spot false claims and boost the reach of fact-checks.</p>
<h2>Does fact-checking work?</h2>
<p>A series of studies published over recent years has shown that, while fact-checks will, of course, not alter an individual’s long-held worldview, they can and do have a “<a href="https://www.tandfonline.com/doi/abs/10.1080/10584609.2019.1668894">significantly positive overall influence</a>” on readers’ factual understanding and “<a href="https://www.pnas.org/doi/10.1073/pnas.2104235118">reduce belief in misinformation, often durably so</a>”. </p>
<p>What’s more, two recent studies have shown that so-called “warning labels” attached to online content “<a href="https://www.sciencedirect.com/science/article/pii/S2352250X23001550?dgcid=author">effectively reduce</a> belief and spread of misinformation” and do so <a href="https://osf.io/preprints/psyarxiv/t2pmb/">“even for those most distrusting of fact-checkers”</a>.</p>
<p>The problem, correctly identified by the NYT, is that this success “is inconsistent and contingent on many variables”. A first challenge is that those who see and believe misinformation are, often, not the same as those who see and believe the subsequent fact-checks. The two audiences often do not cross over. </p>
<p>Fact-checkers also understand the limits of information as a tool for countering misinformation. They see daily evidence in emails and comment threads that, while some appreciate their work, others reject it. Like countless journalists, fact-checkers accept that their work doesn’t reach everyone it should. Most argue that exposing falsehoods and hoaxes is worth the effort, nevertheless.</p>
<h2>Correcting the record</h2>
<p>But informing the public is only one way fact-checking organisations make a difference. First, research <a href="https://ore.exeter.ac.uk/repository/bitstream/handle/10871/21568/Nyhan%20Reifler%20AJPS.pdf">confirms</a> what many fact-checkers see firsthand: knowing someone is checking will often push politicians to <a href="https://onlinelibrary-wiley-com.ezproxy.library.wisc.edu/doi/pdf/10.1111/1467-923X.12898">be more careful</a> with their claims. </p>
<p>Obvious <a href="https://www.washingtonpost.com/politics/2018/12/10/meet-bottomless-pinocchio-new-rating-false-claim-repeated-over-over-again/">exceptions aside</a>, many public figures will quietly drop a claim after it’s been debunked – or even issue a mea culpa. This happened this year in Kenya when police apologised “unreservedly” after fact-checkers at AFP news agency caught them using unrelated images of one protest to hunt down those involved in another.</p>
<p>Many operations take a direct approach, contacting media outlets or political campaigns to ask them to <a href="https://fullfact.org/about/interventions/">correct</a> the <a href="https://www.poynter.org/fact-checking/2018/this-fact-checker-got-several-news-outlets-to-correct-a-false-story-about-a-mini-ice-age/">record</a>. And in many countries, fact-checkers <a href="https://africacheck.org/fact-checks/blog/blog-fact-checking-doesnt-work-way-you-think-it-does">intervene at a structural level</a> to promote a culture of accuracy in key institutions. </p>
<p>British lawmakers last month voted <a href="https://www.civilserviceworld.com/professions/article/parliamentary-corrections-process-opened-up-to-mps">to change House of Commons rules</a> on correcting the official record, following a campaign by the <a href="https://twitter.com/FullFact/status/1716845679325155561">fact-checkers Full Fact</a>.</p>
<p>In some regions, fact-checkers work with statistical agencies, advocate for open government, <a href="https://factsfirst.ph/about">operate broad coalitions against misinformation</a> and run media literacy programs. In the Arabic-language world, the Jordan-based <a href="https://arabfcn.net/en/about-us/">Arab Fact-Checkers Network</a> trains media organisations in in-house fact-checking, to reduce the spread of false information prior to publication. In Europe, the <a href="https://eufactcheckingproject.com/">European Fact-Checking Standards Network</a> has a team who work on public policy – something not possible in many parts of the world.</p>
<p>This growing breadth of approaches reflects how our understanding of false information has changed. </p>
<p>As Tom Rosenstiel of the University of Maryland <a href="https://www.pewresearch.org/internet/2017/10/19/the-future-of-truth-and-misinformation-online/">noted in 2017</a>: “Misinformation is not like plumbing, a problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to.” It also reflects the different organisational cultures of operations set up by media companies, and by civil society and academic institutions.</p>
<p>The picture, in summary, is more complex than was suggested, and in many, if not all, parts of the world, more hopeful too.</p>
<p class="fine-print"><em><span>Peter Cunliffe-Jones is a member of the advisory board of the International Fact-Checking Network (IFCN), based at the Poynter Institute, founder of the fact-checking organisation Africa Check, and was senior advisor to the Arab Fact-Checkers Network (AFCN) in 2023. </span></em></p><p class="fine-print"><em><span>Lucas Graves does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em><span>Peter Cunliffe-Jones, Visiting Researcher & Co-Director Chevening African Media Freedom Fellowship, University of Westminster; Lucas Graves, Professor in the School of Journalism and Mass Communication, University of Wisconsin-Madison. Licensed as Creative Commons – attribution, no derivatives.</span></em></p>
<h1>How subtle forms of misinformation affect what we buy and how much we trust brands</h1>
<p class="fine-print"><em><span>Published 2024-01-04.</span></em></p>
<figure><img src="https://images.theconversation.com/files/566367/original/file-20231218-18-bq4prp.jpg?ixlib=rb-1.1.0&rect=42%2C0%2C4700%2C3123&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Both direct and indirect misinformation influence brand trust. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/motion-escalators-modern-shopping-mall-201174746">estherpoon/Shutterstock</a></span></figcaption></figure><p>Misinformation isn’t just blurring political lines anymore. It’s quietly infiltrating our shopping trolleys in subtle ways, shaping our decisions about what we buy and who we trust, as my research shows. </p>
<p>Spurred by political events, misinformation has garnered widespread media coverage and academic research. But most of the attention has been in the fields of <a href="https://www.aeaweb.org/articles?id=10.1257%2Fjep.31.2.211&fbclid=IwAR04My3aiycypMJKSI58e84gDvdrodsB9fqCycH9YfepWDDDwT--fZnVPvo;%20https://www.nyu.edu/about/news-publications/news/2019/january/fake-news-shared-by-very-few--but-those-over-65-more-likely-to-p.html">political science</a>, <a href="https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(21)00051-6?dgcid=raven_jbs_etoc_email">social psychology</a>, <a href="https://www.sciencedirect.com/science/article/pii/S0306457318306794">information technology</a> and <a href="https://www.tandfonline.com/doi/full/10.1080/21670811.2017.1360143">journalism studies</a>. </p>
<p>More recently though, misinformation has also gained traction among <a href="https://www.sciencedirect.com/science/article/abs/pii/S0148296320307852">marketing</a> and <a href="https://myscp.onlinelibrary.wiley.com/doi/abs/10.1002/jcpy.1288">consumer</a> experts. Much of that research has focused on the direct impacts of misinformation on brands and consumer attitudes, but a new perspective on the topic is now emerging.</p>
<p>What if the influence of misinformation extends beyond explicit attacks on brands? What if our choices as consumers are shaped not only by deliberate misinformation campaigns but also by subtle, indirect false information? </p>
<p>My own research has explored the dynamics of misinformation from a consumer standpoint. I have looked at how misinformation <a href="https://www.sciencedirect.com/science/article/abs/pii/S0148296320307852">spreads</a>, why people find it <a href="https://journals.sagepub.com/doi/full/10.1177/07439156221103860">credible</a> and what we can do to try to <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/mar.21479">mitigate its spreading</a>. </p>
<p>However, my latest <a href="https://www.sciencedirect.com/science/article/pii/S2352250X23001616">study</a> looks at direct and indirect forms of misinformation and their consequences for brands and consumers. I have found that one of the major consequences of these types of misinformation is the erosion of trust.</p>
<h2>Direct and indirect misinformation</h2>
<p>Misinformation comes in direct and indirect forms. It can be direct when it purposefully targets brands or their products. Examples of direct misinformation include fabricated customer reviews or fake news campaigns targeting brands. </p>
<p>It was fake news that led to the <a href="https://www.nytimes.com/interactive/2016/12/10/business/media/pizzagate.html">“pizzagate” scandal</a> in 2016, for example. This involved unsubstantiated accusations of child abuse against prominent individuals linked to a Washington DC pizzeria. And last year, the brand Target was <a href="https://www.reuters.com/article/idUSL1N37S2U1/">falsely accused</a> of selling “satanic” children’s clothes on social media. </p>
<p>The consequences of direct misinformation can be far reaching, leading to a breakdown in brand trust. This erosion is particularly pronounced when misinformation originates from seemingly trustworthy sources, forcing brands into crisis management mode. </p>
<p>For example, in late 2022, Eli Lilly’s stock price fell by 4.37% after a <a href="https://www.washingtonpost.com/technology/2022/11/14/twitter-fake-eli-lilly/">fake Twitter</a> account impersonating the pharmaceutical company falsely announced that insulin would be given away for free. Investors were misled and the company was forced to issue multiple statements to regain their trust. </p>
<p>But beyond the realm of blatant brand attacks lies a subtler, less understood territory I call “indirect misinformation”. This type of misinformation doesn’t zero in on specific companies, but instead cloaks itself in broader topics like politics, social affairs or health.</p>
<p>The constant exposure to misinformation around issues like COVID-19 and politics can have a ripple effect. And my research, which reviewed the academic marketing literature on direct and indirect misinformation, argues that this constant barrage has the potential to impact consumer choices. </p>
<p>Consider the two distinct levels where these effects unfold for a company. At the brand level, reputable names may unwittingly find themselves entangled in disreputable fake news sites through <a href="https://journals.sagepub.com/doi/full/10.1177/0276146718755869">programmatic advertising</a>, in which automated technology is used to buy ad space on these websites. And while the misinformation itself might not directly impact brand trust, the association with dubious websites can cast a shadow over attitudes to brands. It can also <a href="https://journals.sagepub.com/doi/abs/10.1016/j.intmar.2018.09.001">impair</a> consumers’ intentions towards the brand. </p>
<p>Simultaneously, at the consumer level, the impact of indirect misinformation is profound. It breeds confusion, doubt and a general sense of vulnerability. Continuous exposure to misinformation is linked to <a href="https://misinforeview.hks.harvard.edu/article/misinformation-in-action-fake-news-exposure-is-linked-to-lower-trust-in-media-higher-trust-in-government-when-your-side-is-in-power/">decreased trust</a> in mainstream and traditional media brands, for example. </p>
<p>Consequently, people might become wary of all information sources and even fellow consumers. Subconsciously influenced by misinformation, they may make different purchase decisions and hold <a href="https://www.journals.uchicago.edu/doi/full/10.1086/708035">altered views</a> of brands and products.</p>
<h2>What can brands do?</h2>
<p>While the negative repercussions of direct misinformation on brand trust have been well documented, shining a light on the subtler impacts of indirect misinformation marks a crucial step forward. It not only opens new avenues for researchers but also serves as a warning to brands. It urges them to be more proactive in their approach to misinformation. </p>
<p>If indirect misinformation makes consumers mistrustful and sceptical, brands could take preemptive measures. Tailoring specific marketing communications to instil trust in brands, products and offers becomes paramount in a world where trust is continually under siege. Building and maintaining a reputation for trustworthiness is essential for companies.</p>
<p>As we navigate this terrain of hidden influences, the call for a more comprehensive understanding of misinformation’s multifaceted impacts also becomes clearer. Researchers, brands and consumers alike need to decode the hidden messages of misinformation. This could help to fortify the foundations of trust in an era where it has become a precious commodity.</p><img src="https://counter.theconversation.com/content/219725/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Giandomenico Di Domenico is affiliated with the International Panel on the Information Environment. </span></em></p>Trust in brands may be eroded as awareness of misinformation increases according to new research.Giandomenico Di Domenico, Lecturer in Marketing & Strategy, Cardiff UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2163382023-11-14T21:02:24Z2023-11-14T21:02:24ZFake news didn’t play a big role in NZ’s 2023 election – but there was a rise in ‘small lies’<p>The threat of disinformation on social media in the lead-up to New Zealand’s 2023 election <a href="https://thespinoff.co.nz/politics/09-08-2023/inside-the-plan-to-stop-a-misinformation-election">loomed large</a> for the <a href="https://www.stuff.co.nz/national/politics/132664984/analogue-politics-in-a-digital-age-how-officials-are-preparing-for-the-misinformation-wave-this-election">Electoral Commission</a> and <a href="https://thespinoff.co.nz/the-bulletin/12-11-2021/nzs-disinformation-surge">academics studying fake news</a>. </p>
<p>So how bad did it really get?</p>
<p>As part of the <a href="https://www.wgtn.ac.nz/hppi/centres/isprl/new-zealand-social-media-study">New Zealand Social Media Study</a>, we analysed more than 4,000 posts on Facebook from political parties and their leaders. Our study focused on the five weeks ahead of election day. </p>
<p>What we found should give New Zealanders some comfort about the political discourse on social media. While not perfect, there was not as much <a href="https://theconversation.com/misinformation-disinformation-and-hoaxes-whats-the-difference-158491">misinformation</a> (misleading information created without the intent to manipulate people) and <a href="https://theconversation.com/misinformation-disinformation-and-hoaxes-whats-the-difference-158491">disinformation</a> (deliberate attempts to manipulate with false information) as everyone feared. </p>
<h2>Looking for fake news and half truths</h2>
<p>To identify examples of both, a team of research assistants analysed and fact-checked posts, classifying them as either “not including fake news” or “including fake news”. </p>
<p>Fake news posts were defined as completely or mostly made up, and intentionally and verifiably false. </p>
<p>An example of this type of disinformation would be the “<a href="https://thespinoff.co.nz/society/08-06-2023/no-whangarei-girls-high-school-students-are-not-identifying-as-cats">litter box hoax</a>”, alleging schools provided litter boxes for students who identified as cats or furries. </p>
<p>Originating from overseas sources, this story has been debunked multiple times. In New Zealand, this <a href="https://www.stuff.co.nz/national/133004212/party-leader-sue-grey-raises-litterboxes-in-schools-myth-at-candidate-meeting">hoax was spread by Sue Grey</a>, leader of the NZ Outdoors & Freedoms Party. </p>
<p>In cases of doubt, or when the research assistants couldn’t prove the information was false, they coded the posts as “not including fake news”. The term “fake news” was therefore reserved for very clear cases of false information. </p>
<p>If a post did not include fake news, the team checked for potential half-truths. Half-truths were defined as posts that were not entirely made up, but contained some incorrect information. </p>
<p>The National Party, for example, put up a post suggesting the Ministry of Pacific Peoples had <a href="https://www.rnz.co.nz/news/election-2023/498030/national-targets-ministry-for-pacific-peoples-50k-post-budget-breakfasts-spend">hosted breakfasts to promote Labour MPs</a>, at the cost of more than $50,000. While the ministry did host breakfasts to explain the most recent budget, and the cost was accurate, there was no indication the purpose of this event was to promote Labour MPs.</p>
<h2>How 2023’s election compared to 2020</h2>
<p>At the beginning of the campaign, the proportion of what we identified as fake news being published on Facebook by political parties and their leaders was 2.5% – similar to what we saw in 2020. </p>
<p>The proportion of fake news posts then dropped below 2% for a long period and even fell as low as 0.7% at one point in the campaign, before rising again in the final stretch. The share of fake news peaked at 3.8% at the start of the last week of the campaign. </p>
<p>Over the five weeks of the campaign, we found that, on average, 2.6% of Facebook posts by political parties and their leaders in any given week qualified as fake news. In 2020, the weekly average was 2.5%, which means the increase in fake news was minimal.</p>
<p>The sources of much of the outright fake news were parties on the fringes. According to our research, none of the major political parties were posting outright lies. </p>
<p>But there were posts from all political parties assessed as half-truths.</p>
<p>Half-truths stayed well below 10% during the five weeks we looked at, peaking at 6.5% in the final week. On average, the weekly share of half-truths was 4.8% in 2023, while in 2020 it was 2.5%. </p>
<p>So while the number of “big lies” – also known as “fake news” – did not increase in 2023 compared to 2020, the number of “small lies” in political campaigns is growing. </p>
<p>All of the political parties took more liberties with the truth in 2023 than they did in 2020.</p>
<h2>Playing on emotions and oversimplifying</h2>
<p>More than a third of all misleading posts in 2023 were emotional (37%), targeting voters’ emotions through words or pictures. Some 26% of the social media posts jumped to conclusions, while 23% oversimplified the topics being discussed. And 21% of the posts cherry-picked information, meaning the information presented was incomplete.</p>
<p>Some of the social media posts we identified as fake news or half-truths used pseudo-experts: people with some academic background, but who are not qualified to be expert witnesses on the topic under discussion (18%). </p>
<p>We also saw anecdotes of unclear origin used in place of scientific facts (15%), while 7% of posts had unrealistic expectations of science, such as expecting science to offer 100% certainty.</p>
<p>Some of the posts included the claim that the posts’ authors had a silent majority behind them (5%). Another 5% of the social media posts identified as disinformation included personal attacks, rather than debating someone’s arguments.</p>
<h2>Staying vigilant</h2>
<p>The levels of misinformation and disinformation on social media during the past two elections in New Zealand have been fairly low – and certainly no cause for panic. But that doesn’t mean it will always stay that way. </p>
<p>On the one hand, we need to keep an eye on the social media campaigns in future elections and, in particular, monitor the development and use of misinformation and disinformation by political parties on the fringe. </p>
<p>We also need to keep an eye on the major parties, as small lies might pave the way for more fake news or conspiracy theories in the future.</p>
<p>On the other hand, we need to resist overstating the use of misinformation and disinformation in New Zealand. Currently, there doesn’t appear to be the appetite to spread disinformation on social media by our major political parties or leaders. </p>
<p>This is a good thing for the health of our democracy, and we need to ensure it stays that way.</p><img src="https://counter.theconversation.com/content/216338/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Mona Krewel is affiliated with the advisory panel to the Department of the Prime Minister and Cabinet to strengthen the country’s capacity to identify and address misinformation and disinformation. This article reflects her personal opinions as a researcher.</span></em></p>We found the number of “big lies” – also known as fake news – didn’t increase in 2023 compared to 2020. But we did spot more “small lies” this time. Here’s what to look out for in coming elections.Mona Krewel, Senior lecturer in Comparative Politics, Te Herenga Waka — Victoria University of WellingtonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2156482023-10-18T16:02:38Z2023-10-18T16:02:38ZPeople experiencing news fatigue are less likely to be voters<figure><img src="https://images.theconversation.com/files/553747/original/file-20231013-27-r0j6ye.jpg?ixlib=rb-1.1.0&rect=30%2C61%2C5080%2C3241&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock/Shyntartanya</span></span></figcaption></figure><p>In a comprehensive analysis of news consumption across the globe, a recent report by Reuters concluded that “interest in news continues to decline, fuelling disengagement and selective news avoidance”. In the 46 countries surveyed in the report, public interest in news has dropped significantly in the UK, France, the US and Spain <a href="https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2023-06/Digital_News_Report_2023.pdf">over the eight-year period from 2015 to 2023</a>.</p>
<p>The study was commissioned by the <a href="https://reutersinstitute.politics.ox.ac.uk/">Reuters Institute for the Study of Journalism</a> at the University of Oxford, which has been publishing reports on citizen media use in various countries since 2012. The fieldwork for the online surveys was done by YouGov in early 2023. They show that Britain has a particular problem.</p>
<p>The percentage of survey respondents who said that they were “extremely” or “very” interested in news in Britain fell from 70% in 2015 to 43% in 2023. A similar problem has occurred in the US, although it is not as bad as Britain. In the US 67% of respondents were “extremely” or “very” interested in the news in 2015, but this had fallen to 49% by 2023. Both represent huge changes in media consumption of news over this eight-year period.</p>
<p>As a result, large numbers of people are simply disassociating themselves from news about politics and current affairs. They have become disconnected citizens. The report points out that “these declines in news interest are reflected in lower consumption of both traditional and online media sources in most cases”. Clearly, this is not just driven by people moving online from traditional media outlets, although this is of course happening.</p>
<p>In the Reuters Institute’s 2022 report, survey respondents gave a number of reasons why they have become disconnected from the news. Some 29% said they were “worn out by the quantity of news” and another 29% said they felt “news is untrustworthy and biased”. </p>
<p>Another 36% said the news brings down their mood. These feelings have given rise to a growing group of people who actively avoid the news. In Britain 24% of respondents did this in 2017 but by 2022 it was 46%. The number of people who don’t want to know has doubled in five years.</p>
<h2>Double disillusionment?</h2>
<p>The Reuters report did not investigate the political effects of this development, which was beyond the scope of their remit. But there is a lively literature in political science about the effects of the media on political participation. In an influential book, political scientists Shanto Iyengar and Stephen Ansolabehere showed that attack adverts, which are such a feature of US political campaigns, <a href="https://politicalscience.stanford.edu/publications/going-negative-how-political-advertisements-shrink-and-polarize-electorate">demobilise people from participating</a>.</p>
<p>We can gain insights on this point by looking at data from the <a href="https://www.europeansocialsurvey.org/about/national-pages/united-kingdom/english">2020 European Social Survey for Britain</a>. These are very high-quality surveys and provide accurate information on what Europeans in general think about politics and the media. One of the questions in the survey asked: “on a typical day, about how much time do you spend watching, reading or listening to news about politics and current affairs?”.</p>
<p><strong>Voting in the UK General Election Compared with Time Spent Following Politics and Current Affairs in the Media, 2020</strong></p>
<figure class="align-center ">
<img alt="A chart showing British people who engaged with the news are more often voters." src="https://images.theconversation.com/files/553743/original/file-20231013-27-i4rjiz.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/553743/original/file-20231013-27-i4rjiz.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=477&fit=crop&dpr=1 600w, https://images.theconversation.com/files/553743/original/file-20231013-27-i4rjiz.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=477&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/553743/original/file-20231013-27-i4rjiz.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=477&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/553743/original/file-20231013-27-i4rjiz.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=600&fit=crop&dpr=1 754w, https://images.theconversation.com/files/553743/original/file-20231013-27-i4rjiz.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=600&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/553743/original/file-20231013-27-i4rjiz.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=600&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">News fatigue and voter turnout.</span>
<span class="attribution"><span class="source">Reuters/ESS</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>The chart shows the relationship between time spent by respondents acquiring information about politics and current affairs and their reported turnout in the previous general election. </p>
<p>There is a strong relationship between voting turnout and media usage. Only 49% of people who spent no time at all on news gathering turned out to vote, while 33% of them did not vote. In fairness, 19% of this group were not eligible to vote, since the survey picked up people who are not on the electoral register. Even so, if we look at the group who spent one to two hours looking for news about politics, 91% of them voted and only 6% failed to do so. It is clear that media usage and participating in elections are closely related.</p>
<p>Further analysis shows that a similar pattern is evident in relation to other forms of democratic participation. It is people who are engaging with the news that are turning up to exercise their right to protest, for example.</p>
<figure class="align-center ">
<img alt="A chart showing that higher turnout leads to great vote share for The Conservatives." src="https://images.theconversation.com/files/554540/original/file-20231018-19-fxuw6x.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/554540/original/file-20231018-19-fxuw6x.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=439&fit=crop&dpr=1 600w, https://images.theconversation.com/files/554540/original/file-20231018-19-fxuw6x.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=439&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/554540/original/file-20231018-19-fxuw6x.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=439&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/554540/original/file-20231018-19-fxuw6x.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=551&fit=crop&dpr=1 754w, https://images.theconversation.com/files/554540/original/file-20231018-19-fxuw6x.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=551&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/554540/original/file-20231018-19-fxuw6x.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=551&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The Conservatives are hit by low turnout.</span>
<span class="attribution"><span class="source">P Whiteley</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Media malaise damages political participation in general, and given the massive changes highlighted in the Reuters report, it suggests that a lower turnout should be expected in the next general election. If we examine all 21 general elections in Britain since 1945, there is a strong correlation between turnout and the Conservative vote. The more people vote, the better the Conservative party does in the election.</p>
<p>There is also a positive relationship between turnout and Labour voting, but it is significantly weaker. Both parties would be damaged by lower turnout in the next election as a result of media malaise, but the Conservatives would be damaged more than Labour.</p><img src="https://counter.theconversation.com/content/215648/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Paul Whiteley has received funding from the British Academy and the ESRC. </span></em></p>More and more people are saying they don’t trust the news or can’t face engaging with it – and that appears to have political implications.Paul Whiteley, Professor, Department of Government, University of EssexLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2134352023-10-17T18:01:05Z2023-10-17T18:01:05ZWe fact-checked residential school denialists and debunked their ‘mass grave hoax’ theory
<p>Recently a politician from a village in Prince Edward Island <a href="https://www.cbc.ca/news/canada/prince-edward-island/pei-murray-harbour-sign-1.6986901">displayed an offensive sign on his property in which he proclaimed there is a “mass grave hoax”</a> regarding the former Indian Residential Schools in Canada. Although <a href="https://globalnews.ca/news/10007201/murray-harbour-councillor-calls-for-resignation-mass-graves-sign-pei/">many</a> have called for him to resign, he is just one of many people who subscribe to this false theory.</p>
<p>A hoax is an <a href="https://www.merriam-webster.com/dictionary/hoax">act intended to trick people into believing</a> something that isn’t true. Commentary that a “hoax” exists began circulating in 2021 around the time of public announcements from First Nations across the country that — <a href="https://www.cbc.ca/news/canada/north/chooutla-residential-school-gravesite-investigation-anomalies-1.6978801">through the use of ground penetrating radar and other means</a> — the remains of Indigenous children are suspected to be in unmarked graves at or near some former residential schools.</p>
<p><a href="https://www.youtube.com/watch?v=Go6Fpp03Voc">Commentators circulating allegations of a “hoax”</a> contend journalists have misrepresented news of the potential unmarked graves, circulating sensational, attention-grabbing headlines and using the term “mass grave” to do so. They also contend some First Nations, activists or politicians used this language for political gain — to shock and guilt Canadians into caring about Indigenous Peoples and reconciliation.</p>
<p>Like the councillor in P.E.I., many people — <a href="https://www.rebelnews.com/tags/buried_truth">in Canada</a> and <a href="https://www.youtube.com/watch?v=egbXE18omy0">internationally</a>, fuelled partly by <a href="https://www.youtube.com/watch?v=fKeagTWr7_M">misinformation from the far-right</a> — <a href="https://www.youtube.com/watch?v=yZ5qHwxDM50">are accepting and promoting</a> the “mass grave hoax” narrative and casting doubt on the searches for missing children and unmarked burials being undertaken by First Nations across Canada.</p>
<h2>There is no media conspiracy</h2>
<p>As two settler academic researchers, we decided to investigate the claims of a media conspiracy and fact-check them against evidence. </p>
<p>What did Canadian news outlets actually report after the Tk’emlúps te Secwépemc First Nation made <a href="https://tkemlups.ca/wp-content/uploads/05-May-27-2021-TteS-MEDIA-RELEASE.pdf">their public announcements</a> about their search for missing children? </p>
<p>To find out, we analyzed 386 news articles across five Canadian media outlets (CBC, <em>National Post</em>, the <em>Globe and Mail</em>, <em>Toronto Star</em> and <em>The Canadian Press</em>) released between May 27 and Oct. 15, 2021. </p>
<p><a href="https://chrr.info/other-resources/debunking-residential-school-denialism-in-canada">What we found, according to our evidence from 2021</a>, is that most mainstream media did not use the terminology “mass graves.” Therefore, we argue that the “mass grave hoax” needs to be understood as <a href="https://doi.org/10.1080/2201473X.2021.1935574">residential school denialism</a>. </p>
<h2>‘Preliminary findings’ of ‘unmarked burials’</h2>
<p>After some public confusion over the specific details of the May 2021 Tk’emlúps te Secwépemc First Nation announcement, which named “preliminary findings” regarding “the remains of 215 children,” the First Nation <a href="https://tkemlups.ca/t%e1%b8%b1emlups-te-secwepemc-fully-supports-the-appointment-of-the-special-interlocutor/">clarified the findings</a> as the confirmation of “the likely presence of children, L’Estcwicwéý (the Missing) on the Kamloops Indian Residential School grounds” in “unmarked burials.” </p>
<p>The National Centre for Truth and Reconciliation had already <a href="https://nctr.ca/residential-schools/british-columbia/kamloops-st-louis/">identified 51 student deaths</a> at the Kamloops school using church and state records. </p>
<p><a href="https://rsc-src.ca/en/voices/%E2%80%98every-child-matters%E2%80%99-one-year-after-unmarked-graves-215-indigenous-children-were-found-in">A National Centre for Truth and Reconciliation Memorial Register</a> has to date confirmed the <a href="https://nctr.ca/memorial/national-student-memorial/memorial-register/">deaths of more than 4,000 Indigenous children</a> associated with residential schools. </p>
<p>But the Truth and Reconciliation Commission (TRC) noted its register of missing children <a href="https://www.theglobeandmail.com/politics/article-names-of-2800-children-who-died-in-residential-schools-documented-in/">was incomplete</a>, partly due to a large volume of yet-to-be-examined and destroyed records. The TRC’s <a href="https://ehprnh2mwo3.exactdn.com/wp-content/uploads/2021/01/Executive_Summary_English_Web.pdf">Calls to Action 71-76 refer to</a> missing children and burials.</p>
<p>The Tk’emlúps te Secwépemc First Nation — responding to these calls — initiated further research to learn the full truth to facilitate community healing. </p>
<h2>Countering harmful misinformation</h2>
<p>In the two years since, a number of commentators, <a href="https://www.ctvnews.ca/archdiocese-apologizes-after-priest-accuses-residential-school-survivors-of-lying-1.5528472">priests</a> and <a href="https://www.reddit.com/r/alberta/comments/y4f731/danielle_smith_the_premier_of_alberta_claims/">politicians,</a> including the P.E.I. councillor with his sign, have downplayed the harms of residential schooling — or questioned the validity, gravity and significance of the Tk’emlúps te Secwépemc First Nation’s announcement.</p>
<p>One <em>National Post</em> commentator wrote that the account of a “mass grave” was reported “<a href="https://nationalpost.com/opinion/the-year-of-the-graves-how-the-worlds-media-got-it-wrong-on-residential-school-graves">almost universally</a>” adding that this narrative, and subsequent “discoveries” preceded a descent into “shame, guilt and rage …”</p>
<p>Despite the Tk’emlúps te Secwépemc First Nation’s announcement never mentioning a “mass grave,” and Chief Rosanne Casimir saying in a news conference, <a href="https://www.squamishchief.com/bc-news/casimir-says-tkemlups-find-is-series-of-unmarked-graves-not-a-mass-burial-3848382">“this is not a mass grave, but rather unmarked burial sites that are, to our knowledge, also undocumented,”</a> some have even wrongly suggested the First Nation “<a href="https://nypost.com/2022/05/27/kamloops-mass-grave-debunked-biggest-fake-news-in-canada">announced the discovery of a mass grave</a>” and this was a “fake news story.” </p>
<p>In response, the <a href="https://www.canada.ca/en/department-justice/news/2022/06/independent-special-interlocutor-to-work-with-indigenous-communities-on-protection-of-unmarked-graves-and-burial-sites-near-former-residential-schools.html">independent special interlocutor for missing children and unmarked graves and burial sites associated with Indian Residential Schools</a> has amplified <a href="https://osi-bis.ca/wp-content/uploads/2023/06/OSI_InterimReport_June-2023_WEB.pdf">calls for</a> Canadians to take responsibility for countering such harmful misinformation. </p>
<p>We hope that our research can contribute to this work and that <a href="https://chrr.info/other-resources/debunking-residential-school-denialism-in-canada/">our report</a> helps to debunk the “mass grave hoax” narrative specifically. </p>
<h2>Cherry-picked ‘evidence’</h2>
<p>Our report reveals that most Canadian news outlets did not use the language, “mass grave.” The idea that a “mass grave hoax” exists is a myth.</p>
<p>Myths, however, <a href="https://arsenalpulp.com/Books/N/National-Dreams">are not pure fiction</a>; they often contain a kernel of truth that is <a href="https://arpbooks.org/product/storying-violence/">exaggerated or misrepresented</a>. </p>
<p>This selective representation of evidence is commonly referred to as <a href="https://www.tandfonline.com/doi/full/10.1080/03086534.2023.2209947">cherry-picking</a>, and it’s easy to see how those spreading the “mass grave hoax” narrative rely on cherry-picked evidence.</p>
<p>Of the 386 articles reviewed in our study, the majority (65 per cent, or 251) accurately reported on stories related to the location of potential unmarked graves in Canada.</p>
<p>A minority (35 per cent, or 135 articles) contained some inaccurate or misleading reporting. However, many of the detected inaccuracies are easily understood as mistakes, and most were corrected over time, as is common practice for breaking news in the journalism industry. </p>
<p>Of the 386 total articles, only 25 — just 6.5 per cent of total articles — referred to the findings as “mass graves,” with most of the articles appearing in a short window of time and some actually using the term correctly in the hypothetical sense (that mass graves may still be found). </p>
<p>That means that 93.5 per cent of the Canadian articles released in the spring, summer and fall of 2021 that we examined did not report the findings as being “mass graves.” </p>
<p>It appears that some journalists and commentators mistook a large number of potential or likely unmarked graves for mass graves in late May and June 2021. By September, denialists were misrepresenting the extent of media errors to push the conspiratorial <a href="https://www.youtube.com/watch?v=Go6Fpp03Voc">“mass grave hoax” narrative</a> online. </p>
<p>Our research shows that the “mass grave hoax” narrative hinges on a misrepresentation of how Canadian journalists reported on the identification of potential unmarked graves at former residential school sites in 2021.
And we hope our report sparks a national conversation about how important language is when covering this issue. </p>
<p>Media needs to be precise with language and also acknowledge its errors (and avoid future ones), or clarify details in a way that feeds truth, empathy and more accurate reporting — not denialism, hate and conspiracy.</p>
<h2>Challenging Residential School denialism</h2>
<p>The “mass grave hoax” narrative cannot be reasonably seen as just skepticism. Rather, it should be understood as an expression of residential school denialism. </p>
<p>According to Daniel Heath Justice and Sean Carleton (one of the authors of this story), <a href="https://theconversation.com/truth-before-reconciliation-8-ways-to-identify-and-confront-residential-school-denialism-164692">residential school denialism</a> is not the denial of the residential school system’s existence. Nor do denialists, for the most part, deny that abuses happened. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/truth-before-reconciliation-8-ways-to-identify-and-confront-residential-school-denialism-164692">Truth before reconciliation: 8 ways to identify and confront Residential School denialism</a>
</strong>
</em>
</p>
<hr>
<p>Residential school denialism, like climate <a href="https://theconversation.com/the-thinking-error-that-makes-people-susceptible-to-climate-change-denial-204607">change denialism</a> or <a href="https://theconversation.com/science-denial-why-it-happens-and-5-things-you-can-do-about-it-161713">science denialism</a>, cherry-picks evidence to fit a conspiratorial counter-narrative. This distorts basic facts and the overall legacy of the Indian Residential School System (IRSS) to <a href="https://thewalrus.ca/residential-school-denialism/">alleviate settler guilt</a> and block important truth and reconciliation efforts.</p>
<h2>Truth before reconciliation</h2>
<p>Our research shows how detailed analysis can be an effective tool in confronting the growing threat of residential school denialism and other kinds of misinformation and disinformation, as called for recently by many <a href="https://osi-bis.ca/wp-content/uploads/2023/06/OSI_InterimReport_June-2023_WEB.pdf">Indigenous communities</a>. </p>
<p>Instead of directing ridicule and outrage at denialists — which can give them a larger platform — what is needed is deep and reasoned analysis of their discourse to show why they are wrong or misleading. </p>
<p>This is the strategy of disempowering and discrediting residential school denialism advocated by former TRC Chair <a href="https://www.aptnnews.ca/national-news/residential-school-deniers-white-supremacists-biggest-barrier-to-reconciliation-says-murray-sinclair/">Murray Sinclair</a>. </p>
<p>We hope others will join us in this type of research to help Canadians learn how to identify and confront residential school denialism and support meaningful reconciliation. </p>
<p>Our full findings can be <a href="https://chrr.info/other-resources/debunking-residential-school-denialism-in-canada/">read in our new report</a> for the Centre for Human Rights Research at the University of Manitoba. </p>
<p>As the Truth and Reconciliation Commission said in its final report, without truth there can be no genuine reconciliation. </p>
<p><em>For those who may be experiencing trauma or seeking support, here are some resources:</em></p>
<p><em>— The Indian Residential School Survivors Society’s 24/7 Crisis Support line: 1-800-721-0066</em></p>
<p><em>— The 24-hour National Indian Residential School Crisis Line: 1-866-925-4419</em> </p>
<p><em>The Conversation used the term “mass graves” in <a href="https://theconversation.com/no-longer-the-disappeared-mourning-the-215-children-found-in-graves-at-kamloops-indian-residential-school-161782">a story</a> published in the days following the announcement by the Tk’emlúps te Secwépemc First Nation. The article has since been updated to use the term “unmarked graves.”</em></p><img src="https://counter.theconversation.com/content/213435/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sean Carleton receives funding from the Social Sciences and Humanities Research Council of Canada and Centre for Human Rights Research at the University of Manitoba.</span></em></p><p class="fine-print"><em><span>Reid Gerbrandt receives funding from The Centre for Human Rights Research at the University of Manitoba. </span></em></p>Contrary to what some ‘denialists’ believe, research shows that Canadian media outlets did not help circulate a ‘mass grave hoax’ regarding unmarked graves at former Indian Residential Schools.Sean Carleton, Assistant Professor, Departments of History and Indigenous Studies, University of ManitobaReid Gerbrandt, MA Student, Department of Sociology and Criminology, University of ManitobaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2071872023-06-27T18:14:43Z2023-06-27T18:14:43ZChatbots can be used to create manipulative content — understanding how this works can help address it<figure><img src="https://images.theconversation.com/files/533248/original/file-20230621-14002-1plwfd.jpg?ixlib=rb-1.1.0&rect=0%2C17%2C5772%2C3827&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Artificial intelligence can be used to produce persuasive texts that influence behaviour.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Manipulative communication includes disinformation, propaganda, con artistry and fraud. With the increasing use of artificial intelligence (AI), manipulative communication is not only being aided by AI, automation and machine learning, but will likely be dominated by such practices in the near future.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="book cover with the title Social Engineering How Crowdmasters, Phreaks, Hackers, and Trolls Created a New Form of Manipulative Communication" src="https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">‘Social Engineering’ looks at how the internet affected manipulative communication.</span>
</figcaption>
</figure>
<p>As communication scholars, we were interested in the applications of AI in supporting manipulative communication efforts. Our recent book, <a href="https://mitpress.mit.edu/9780262543453/"><em>Social Engineering: How Crowdmasters, Phreaks, Hackers, and Trolls Created a New Form of Manipulative Communication</em></a> looks at how the internet affected manipulative communication.</p>
<h2>AI chatbots</h2>
<p>ChatGPT, one of the most popular and widely used chatbots, is an artificial intelligence tool capable of parsing text input and quickly providing text responses. </p>
<p>While it has been <a href="https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html">critiqued for being misleading</a>, when we asked it about the possibilities for manipulative communication, its response was remarkably prescient:</p>
<blockquote>
<p>AI can be used for manipulative communication in a few ways:</p>
<p>• Social media bots: AI-powered bots can be programmed to mimic human behavior and spread false or misleading information to manipulate public opinion.</p>
<p>• Deepfake technology: AI algorithms can be used to generate fake audio or video content that appears to be real, making it easier to spread false information or manipulate individuals.</p>
<p>• Personalized advertising: AI algorithms can analyze data about a person’s behavior, preferences, and relationships to create targeted advertisements that manipulate their emotions and influence their purchasing decisions.</p>
</blockquote>
<p>As we argue in <em>Social Engineering</em>, manipulative communication can be understood as a form of “masspersonal social engineering.” It’s “masspersonal” in that it can oscillate between targeted messages or societal-scale manipulation. It’s “social engineering” in that it combines the respective tools and techniques of hackers and propagandists in an attempt to shape the perceptions and actions of audiences.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chatgpts-greatest-achievement-might-just-be-its-ability-to-trick-us-into-thinking-that-its-honest-202694">ChatGPT's greatest achievement might just be its ability to trick us into thinking that it's honest</a>
</strong>
</em>
</p>
<hr>
<p>Masspersonal social engineering typically involves three stages: trashing, pretexting and bullshitting.</p>
<p>Each of these stages can be automated, with new AI tools increasing the pace and intensity.</p>
<h2>Trashing</h2>
<p>Trashing is the stage where the masspersonal social engineer gathers information on potential targets. We use the term “trashing” because it hearkens back to a mid-20th century hacker process of literally <a href="https://hackcur.io/trashing-the-phone-company-with-suzy-thunder/">going through corporate trash</a> to find passwords and restricted information.</p>
<p>While social engineers <a href="https://doi.org/10.1016/B978-1-59749-215-7.X0001-7">still go through physical trash</a>, these days trashing takes place in digital environments.</p>
<p>For example, trashing was key to the Russian hack of former White House Chief of Staff John Podesta’s emails in 2016. Podesta, who was in charge of Hillary Clinton’s 2016 presidential campaign, <a href="https://www.vice.com/en/article/mg7xjb/how-hackers-broke-into-john-podesta-and-colin-powells-gmail-accounts">fell victim to a phishing attack</a>. </p>
<p>Podesta wasn’t the first target — the <a href="https://apnews.com/dea73efc01594839957c3c9a6c962b8a">Russian hackers worked their way</a> through several email addresses used by Clinton staffers, including staffers who were no longer part of her campaign and who had abandoned their email accounts years before. </p>
<p>In other words, they had to work their way through the digital detritus of old and abandoned emails until they were able to find active ones – including Podesta’s – and then they could send a phishing email.</p>
<p>Digital trashing has already been automated. Facebook/Meta, Twitter and especially LinkedIn have been <a href="https://portal.research.lu.se/en/publications/the-weaponization-of-social-media-spear-phishing-and-cyberattacks">ripe targets for the automated gathering of data on potential targets</a>. </p>
<p>Beyond social media, websites — particularly those that have organizational structures, names of employees and email addresses — <a href="https://nostarch.com/practical-social-engineering">are targets</a>. </p>
<h2>Pretexting</h2>
<p>A pretext is the role a masspersonal social engineer plays when trying to get information or manipulate a target. For example, in a phishing email, the phisher is playing a role as a bank or government representative. The most effective pretexts are developed based on the information gathered in trashing — the more information a social engineer has on their target, the more likely the social engineer can construct a compelling role to play.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a man sits in the dark in front of a laptop and additional screen. he is wearing headphones" src="https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">When phishing for information, a social engineer may play a deceptive role.</span>
<span class="attribution"><span class="source">(Jefferson Santos/Unsplash)</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>And pretexts can be automated. We’ve already seen the effects of <a href="https://doi.org/10.1177/0894439320908190">socialbots on discourse in social media</a>. And for several years people have sounded alarms about <a href="https://doi.org/10.1109/ACCESS.2021.3131517">deepfake videos and audio</a> of political figures.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-to-combat-the-unethical-and-costly-use-of-deepfakes-184722">How to combat the unethical and costly use of deepfakes</a>
</strong>
</em>
</p>
<hr>
<p>But evidence from security professionals shows that automated imitations of everyday people are happening, too. <a href="https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402">A case of fraud</a> involving an AI-based imitation of a CEO’s voice has already occurred, and there are <a href="https://www.npr.org/2023/03/22/1165448073/voice-clones-ai-scams-ftc">reports of fraudsters using AI-generated voices</a> of relatives to scam their loved ones.</p>
<h2>Bullshitting</h2>
<p>The third and final stage, bullshitting, is the actual engagement with the target. All the trashing and development of a pretext leads to this point: trashing gives the social engineer background information, and the pretext provides a role-playing framework, but in any back-and-forth engagement with the target, the social engineer engages in improvisation.</p>
<p>As moral philosopher <a href="https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit">Harry Frankfurt famously defines it</a>, “bullshit” is not lying — it’s the indifference to truth. A bullshitter may or may not speak truth. The truth is beside the point; it’s the <em>effect</em> of the communication that matters.</p>
<p>AI could produce bullshit content — including deepfakes — that floods a media system at a much larger scale than a person, or group of people, working together. The primary concern here is the production of seemingly real content that is meant to deceive or muddy debate.</p>
<p>And we are already seeing interest among content marketers, who are <a href="https://www.entrepreneur.com/science-technology/how-can-companies-use-chatgpt-for-content-marketing/450831">using AI</a> to help them crank out more content for their blogs. </p>
<p>Even if no one piece is particularly effective, the flood of such content online will further add to the “<a href="https://doi.org/10.7249/PE198">firehose of falsehood</a>.” This could have the effect of further muddying the waters of online discourse, and eroding our sense of what is true, false and authentic online.</p>
<h2>Increased intensity</h2>
<p>Manipulative communication isn’t new. But automated manipulative communication is a new development, increasing the pace and intensity of disinformation and misinformation. </p>
<p>We hope that this framework, which breaks down the manipulative communication process into stages, helps future researchers and policymakers come to grips with this development. </p>
<p>Reducing trashing behaviours involves better privacy regulations and cybersecurity to prevent data breaches, and enhanced penalties for organizations that do leak private data. </p>
<p>Addressing pretexting can involve more transparency in the funding for advertising campaigns, particularly in the case of political advertising on social media. </p>
<p>And to combat bullshitting, we should support projects that teach digital media literacy.</p><img src="https://counter.theconversation.com/content/207187/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Robert W. Gehl received funding from the Fulbright Commission. </span></em></p><p class="fine-print"><em><span>Sean Lawson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Artificial intelligence could be used to generate content intended to manipulate people. Addressing this problem means understanding how communication works to influence people.Robert W. Gehl, Ontario Research Chair of Digital Governance for Social Justice, York University, CanadaSean Lawson, Professor, Communication, University of UtahLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2037352023-06-07T15:21:06Z2023-06-07T15:21:06ZAlgorithms can be useful in detecting fake news, stopping its spread and countering misinformation<figure><img src="https://images.theconversation.com/files/527840/original/file-20230523-16009-p03nte.jpg?ixlib=rb-1.1.0&rect=98%2C0%2C5892%2C3000&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Stopping misinformation before it spreads is important.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Fake news is a complex problem and can span text, images and video. </p>
<p>For written articles in particular, there are several ways of generating fake news. A fake news article could be produced by selectively editing facts, including people’s names, dates or statistics. An article could also be completely fabricated with made-up events or people.</p>
<p>Fake news articles can also be machine-generated as advances in artificial intelligence make it particularly easy to generate misinformation. </p>
<h2>Damaging effects</h2>
<p>Questions like “Was there voter fraud during the 2020 U.S. elections?” or “Is climate change a hoax?” can be fact-checked by analyzing available data and answered with a true or false. Yet questions like these are also fertile ground for misinformation.</p>
<p>Misinformation and disinformation — or fake news — can have <a href="https://doi.org/10.2471%2FBLT.21.287654">damaging effects on a large number of people in a short time</a>. Although the notion of <a href="https://www.cits.ucsb.edu/fake-news/brief-history">fake news has existed well before technological advances</a>, social media have exacerbated the problem. </p>
<p><a href="https://doi.org/10.1126/science.aap9559">A 2018 Twitter study showed that</a> false news stories were more commonly retweeted by humans than bots, and 70 per cent more likely to be retweeted than true stories. The same study found that it took true stories approximately six times longer to reach a group of 1,500 people and that, while true stories rarely reached more than 1,000 people, popular false news could reach up to 100,000.</p>
<p>The 2020 U.S. presidential election, COVID-19 vaccines and climate change have all been the subject of misinformation campaigns with grave consequences. It is estimated that <a href="https://www.centerforhealthsecurity.org/sites/default/files/2023-02/20211020-misinformation-disinformation-cost.pdf">misinformation surrounding COVID-19 costs between US$50 million and US$300 million daily</a>. The cost of political misinformation could be civil disorder, violence or even erosion of public trust in democratic institutions.</p>
<h2>Detecting misinformation</h2>
<p>Detecting misinformation can be done by a combination of algorithms, machine-learning models and humans. An important question is who is responsible for controlling, if not stopping, the spread of misinformation once it’s detected. Only social media companies are really in a position to exercise control over the spread of information through their networks.</p>
<p>A particularly simple but effective means of generating misinformation is to selectively edit news articles. For example, consider the headline “Ukrainian director and playwright arrested and accused of ‘justifying terrorism.’” This fake was produced by replacing “Russian” with “Ukrainian” in the original sentence of a <a href="https://www.cnn.com/2023/05/06/europe/russia-theater-terror-arrests-intl/index.html">real news article</a>. </p>
<p>A multi-faceted approach is needed to detect misinformation online in order to control its growth and spread.</p>
<p>Communications in social media can be modelled as networks, with the users forming points in the network model and the communications forming links between them; a retweet or like of a post reflects a connection between two points. In this network model, spreaders of misinformation tend to form much more densely connected core-periphery structures than users spreading truth.</p>
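The core-periphery idea above can be illustrated with a toy sketch (hypothetical data in plain Python; this is not the research group’s actual algorithm). Treating users as points and retweets as links, the fraction of possible links that actually exist — the edge density — is far higher inside a tightly knit core than across the network as a whole:

```python
from itertools import combinations

def edge_density(nodes, edges):
    """Fraction of possible links among `nodes` that actually exist."""
    nodes = set(nodes)
    possible = len(nodes) * (len(nodes) - 1) / 2
    if possible == 0:
        return 0.0
    present = sum(1 for u, v in combinations(sorted(nodes), 2)
                  if (u, v) in edges or (v, u) in edges)
    return present / possible

# Toy retweet network: users A-C all retweet one another (a suspected
# densely connected core), while D-F hang off the core as a sparse periphery.
edges = {("A", "B"), ("A", "C"), ("B", "C"),   # dense core
         ("C", "D"), ("D", "E"), ("E", "F")}   # sparse periphery

core_density = edge_density({"A", "B", "C"}, edges)       # 3 of 3 links: 1.0
overall_density = edge_density(set("ABCDEF"), edges)      # 6 of 15 links: 0.4
```

A detection algorithm would search for node sets like A–C whose internal density stands out from the rest of the network; as the article notes, content analysis is still needed to confirm that such a core is actually spreading misinformation.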
<p>My research group <a href="https://doi.org/10.14778/3342263.3342645">has developed efficient algorithms</a> <a href="https://doi.org/10.1145/3318464.3389697">for detecting dense structures</a> <a href="https://doi.org/10.1145/3483940">from communication networks</a>. This information can be analyzed further for <a href="https://doi.org/10.14778/3551793.3551826">detecting instances of misinformation campaigns</a>. </p>
<p>Since these algorithms rely on communication structure alone, content analysis conducted by algorithms and humans is needed to confirm instances of misinformation. </p>
<p>Detecting manipulated articles takes careful analysis. Our research used a <a href="https://aclanthology.org/2022.acl-short.10.pdf">neural network-based approach</a> that combines textual information with an external knowledge base to detect such tampering.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/530415/original/file-20230606-29-j7tbx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Photograph of an open outstretched hand with an illustration of figures floating above it with arrows pointing between them" src="https://images.theconversation.com/files/530415/original/file-20230606-29-j7tbx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/530415/original/file-20230606-29-j7tbx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/530415/original/file-20230606-29-j7tbx7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/530415/original/file-20230606-29-j7tbx7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/530415/original/file-20230606-29-j7tbx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/530415/original/file-20230606-29-j7tbx7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/530415/original/file-20230606-29-j7tbx7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Users form connections by interacting with each others’ posted content.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Stopping the spread</h2>
<p>Detecting misinformation is just half the battle — decisive action is required to stop its spread. Strategies for combating the spread of misinformation in social networks include both intervention by internet platforms and launching counter-campaigns to neutralize fake news campaigns. </p>
<p>Intervention can take hard forms, like suspending a user’s account, or softer measures like labelling a post as suspicious. </p>
<p>Algorithms and AI-powered networks are not 100 per cent reliable. There is a cost to intervening on a true item by mistake as well as not intervening on a fake item. </p>
<p>To that end, we designed a smart intervention policy that automatically decides whether to intervene on an item <a href="https://doi.org/10.1145/3448016.3452778">based on its predicted truthiness and predicted popularity</a>. </p>
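One way to read such a policy is as an expected-cost decision rule (a minimal sketch under assumed cost values; the constants below are illustrative, not the published model): intervene when the expected harm of letting a likely-fake item spread outweighs the expected cost of mistakenly flagging a true item.

```python
def should_intervene(p_fake, predicted_reach,
                     cost_per_exposure=1.0, cost_false_flag=500.0):
    """Flag an item when its expected harm (probability it is fake,
    times its predicted audience, times the harm per exposure) exceeds
    the expected cost of wrongly flagging a true item.
    The two cost constants are illustrative assumptions, not fitted values."""
    expected_harm = p_fake * predicted_reach * cost_per_exposure
    expected_flag_cost = (1 - p_fake) * cost_false_flag
    return expected_harm > expected_flag_cost

# A likely-fake item predicted to reach 100,000 users gets flagged;
# a probably-true niche post is left alone.
flag_viral_fake = should_intervene(0.9, 100_000)   # True
flag_niche_true = should_intervene(0.1, 1_000)     # False
```

The two inputs mirror the article’s “predicted truthiness” and “predicted popularity”: a mildly suspicious item may still warrant a soft intervention (a warning label) once its predicted reach is large enough.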
<h2>Countering fake news</h2>
<p>Launching counter-campaigns to minimize if not neutralize the effects of misinformation campaigns needs to factor in the <a href="https://doi.org/10.1126/science.aap9559">major differences between truth and fake news in terms of how quickly and extensively each of them spreads</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/as-provinces-open-up-trust-erodes-when-what-we-experience-differs-from-what-institutions-tell-us-178133">As provinces open up, trust erodes when what we experience differs from what institutions tell us</a>
</strong>
</em>
</p>
<hr>
<p>Besides these differences, reactions to stories can vary depending on the user, topic and length of the post. <a href="https://doi.org/10.14778/3547305.3547324">Our approach takes all these factors into account</a> and devises an efficient counter campaign strategy that effectively mitigates the propagation of misinformation.</p>
<p>Recent advances in generative AI, particularly those powered by large language models such as ChatGPT, make it easier than ever to create articles at great speed and significant volume, raising the challenge of detecting misinformation and countering its spread at scale and in real time. Our current research continues to address this ongoing challenge, which has enormous societal impact.</p><img src="https://counter.theconversation.com/content/203735/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Laks V.S. Lakshmanan receives funding from Natural Sciences and Engineering Research Council of Canada. </span></em></p>To restrict the spread of fake news on social media platforms, researchers designed an algorithm that can flag potential misinformation.Laks V.S. Lakshmanan, Professor of Computer Science, University of British ColumbiaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2033682023-04-12T21:18:39Z2023-04-12T21:18:39ZThe disturbing trend of state media use of deepfakes<figure><img src="https://images.theconversation.com/files/520004/original/file-20230410-14-pmb2rl.jpg?ixlib=rb-1.1.0&rect=0%2C68%2C9144%2C5981&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">An image made from video of a fake video featuring former U.S. president Barack Obama showing elements of facial mapping used in new technology that lets anyone make deepfake videos.</span> <span class="attribution"><span class="source">(AP Photo)</span></span></figcaption></figure><p>Social media has been awash with fake images of a <a href="https://www.cnn.com/style/article/pope-francis-puffer-coat-ai-fashion-lotw/index.html">stylish Pope Francis</a>, <a href="https://www.instagram.com/p/CqyGx2xoUAX/">Elon Musk protesting in New York</a> and <a href="https://nypost.com/2023/03/22/chilling-deepfakes-claiming-to-show-trumps-arrest-spread-across-twitter/">Donald Trump resisting arrest</a>. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/520362/original/file-20230411-28-74gkj7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="an older white man (the pope) wearing a long white puffer jacket and a white skullcap" src="https://images.theconversation.com/files/520362/original/file-20230411-28-74gkj7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/520362/original/file-20230411-28-74gkj7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=750&fit=crop&dpr=1 600w, https://images.theconversation.com/files/520362/original/file-20230411-28-74gkj7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=750&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/520362/original/file-20230411-28-74gkj7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=750&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/520362/original/file-20230411-28-74gkj7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=943&fit=crop&dpr=1 754w, https://images.theconversation.com/files/520362/original/file-20230411-28-74gkj7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=943&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/520362/original/file-20230411-28-74gkj7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=943&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An AI-generated image of Pope Francis wearing a white puffer jacket went viral online, with users wondering if it was real.</span>
<span class="attribution"><a class="source" href="https://www.reddit.com/r/midjourney/comments/120vhdc/the_pope_drip/">(Reddit)</a></span>
</figcaption>
</figure>
<p>Such AI-generated images and videos, or <a href="https://dl.acm.org/doi/abs/10.1145/3425780">deepfakes</a>, have become increasingly accessible due to advances in artificial intelligence. As more sophisticated fabricated images spread, it will become increasingly difficult for users to differentiate the real from the fake.</p>
<p>Deepfakes get their name from the technology used to create them: <a href="https://doi.org/10.1016/j.cviu.2022.103525">deep-learning neural networks</a>. When unleashed on a dataset, these algorithms learn patterns and can replicate them in novel — and convincing — ways.</p>
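As a rough illustration of that pattern-learning idea (not an actual deepfake pipeline), a minimal linear autoencoder can be sketched in a few lines of Python: it learns to squeeze data through a narrow bottleneck and reconstruct it, the same compress-and-reproduce principle that, at vastly larger scale, lets deepfake models regenerate faces. All data and dimensions here are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 toy samples of 8-dimensional "patterns" standing in for real data.
X = rng.normal(size=(20, 8))

# Encoder and decoder weights: compress 8 dims -> 3 -> back to 8.
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))

def loss(X, W_enc, W_dec):
    """Mean squared reconstruction error."""
    recon = X @ W_enc @ W_dec
    return float(np.mean((recon - X) ** 2))

initial = loss(X, W_enc, W_dec)

lr = 0.01
for _ in range(500):
    Z = X @ W_enc        # encode into the 3-dim bottleneck
    recon = Z @ W_dec    # decode back to 8 dims
    err = recon - X
    # Gradient descent on the reconstruction error.
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

final = loss(X, W_enc, W_dec)
print(f"reconstruction error: {initial:.3f} -> {final:.3f}")
```

Training drives the reconstruction error down: the network has internalized the dataset's patterns well enough to reproduce them, and a generative model exploits the same learned structure to produce novel, convincing outputs.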
<p>While this technology can be used for entertainment, it also has dark potential, raising <a href="https://books.google.ca/books?hl=en&lr=&id=4g1yEAAAQBAJ&oi=fnd&pg=PT9&dq=ethical+artificial+intelligence+schoenherr&ots=6fbnULsVRy&sig=nPCglnifLw5daLKARroY-DkIXKU&redir_esc=y#v=onepage&q&f=false">social and ethical concerns</a>.</p>
<p>Unlike simple stories or memes, which differ little from the propaganda techniques used by <a href="https://doi.org/10.1057/9780230511101">Nazi Germany</a> and the photo editing practised by <a href="https://shop.tate.org.uk/the-commissar-vanishes/15415.html">Communist Russia</a>, deepfakes have a high degree of realism. Their accessibility to both the public and states could erode our sense of reality.</p>
<h2>Fake news anchor</h2>
<p>Beyond the growing concern that <a href="https://www.vice.com/en/article/epzkwm/artificial-intelligence-art-creatives-ai">AI-generated art threatens human art and artists</a>, deepfakes can be used as unchecked mouthpieces for organizations and states.</p>
<p>Leading the way, <a href="https://petapixel.com/2023/03/17/chinese-ai-news-anchor-works-24-hours-a-day-365-days-a-year/">China’s state media has experimented with an AI news anchor named Ren Xiaorong</a>. Ren, although <a href="https://www.theguardian.com/world/2018/nov/09/worlds-first-ai-news-anchor-unveiled-in-china">not the first AI news anchor developed by China</a>, illustrates both the commitment to the technology and the incremental increases in realism. </p>
<p>Other countries such as <a href="https://www.bbc.com/news/world-middle-east-65238950">Kuwait</a> and <a href="https://www.prnewswire.com/news-releases/sogou-launches-worlds-first-russian-speaking-ai-news-anchor-300865159.html">Russia</a> have also launched AI-generated anchors.</p>
<p>When looking at these anchors, such as <a href="https://www.bbc.com/news/technology-47981274">Russia’s first robotic news anchor</a>, we might object that only the most naive viewer would mistake them for real humans. Yet these technologies are still in their infancy. We cannot dismiss them.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1645753239134978052&quot;}"></div></p>
<h2>Fabricated news</h2>
<p>China’s transparency in using AI-generated news anchors stands in contrast to Venezuela’s fabricated news coverage. Venezuelan state media presented favourable reports of the country’s progress, purportedly created by international English-language news outlets. However, <a href="https://www.semafor.com/article/02/21/2023/venezuela-uses-ai-avatars-to-disseminate-propaganda">the stories and anchors were fabricated</a>.</p>
<p>The use of these videos in Venezuela is particularly troubling because they serve as external validation of the government’s activities. By claiming a video comes from outside the country, the government provides a seemingly independent source to bolster its claims.</p>
<p>Venezuela is not the only country to adopt these methods. Fabricated videos of <a href="https://www.bbc.co.uk/news/technology-60780142">Ukrainian President Volodymyr Zelenskyy discussing surrender to Russia</a> were also circulated during the ongoing Russia-Ukraine conflict. </p>
<p>Fabricated images and videos are merely the tip of the deepfake iceberg. In 2021, Russia was accused of using <a href="https://www.theguardian.com/world/2021/apr/22/european-mps-targeted-by-deepfake-video-calls-imitating-russian-opposition">deepfake image filters to simulate opponents during interviews with international politicians</a>. The ability to mimic political figures and interact with others in <em>real time</em> is a truly disturbing development. </p>
<p>As these technologies become increasingly accessible to everyone, from harmless meme-makers to would-be social engineers, the boundary between the real and the imagined becomes progressively blurred.</p>
<p>The proliferation of deepfakes foreshadows a <a href="https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war">post-truth world</a>, defined by a fractured geopolitical landscape, opinion echo chambers and mutual distrust that can be exploited by governmental and non-governmental organizations.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-use-of-deepfakes-can-sow-doubt-creating-confusion-and-distrust-in-viewers-182108">The use of deepfakes can sow doubt, creating confusion and distrust in viewers</a>
</strong>
</em>
</p>
<hr>
<h2>Disinformation and believable fakes</h2>
<p>The spread of disinformation requires that we understand how ideas, innovations or behaviours spread within a social network, a process referred to as <a href="https://doi.org/10.1073/pnas.1116502109">social contagion</a>.</p>
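The contagion idea can be sketched with a toy simulation. The following is an illustrative independent-cascade model on an invented six-person friendship network, not the model used in any of the cited studies: each newly convinced person gets one chance to convince each neighbour with probability p.

```python
import random

# A made-up friendship network: person -> list of friends.
network = {
    0: [1, 2], 1: [0, 3], 2: [0, 3, 4],
    3: [1, 2, 5], 4: [2, 5], 5: [3, 4],
}

def spread(network, seeds, p, rng):
    """Independent-cascade spread: return the set of convinced people."""
    convinced = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for person in frontier:
            for friend in network[person]:
                # Each newly convinced person tries each friend once.
                if friend not in convinced and rng.random() < p:
                    convinced.add(friend)
                    nxt.append(friend)
        frontier = nxt
    return convinced

rng = random.Random(42)
# A maximally persuasive rumour (p = 1) reaches the whole connected network.
assert spread(network, [0], 1.0, rng) == {0, 1, 2, 3, 4, 5}
# A rumour nobody passes on (p = 0) never leaves the seed.
assert spread(network, [0], 0.0, rng) == {0}
```

Between those extremes, how far a rumour travels depends on both its per-contact persuasiveness and the structure of the network, which is why simple, confidence-boosting messages travel so well.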
<p>Cognitive science is concerned with “<a href="https://doi.org/10.1016/S1364-6613(02)00005-0">information</a>” — anything that reduces our uncertainty about the actual state of the world. <em>Disinformation</em> has the appearance of information, except uncertainty is reduced at the expense of accuracy.</p>
<p>Observations that <a href="https://doi.org/10.1126/science.aap9559">disinformation spreads faster than facts</a> likely stem from the fact that when a message is <a href="https://doi.org/10.1037/0278-7393.29.1.22">simple</a>, it increases our confidence.</p>
<p>Disinformation spreads for a variety of reasons. It must appear close enough to the “truth” that it is believable. If a new “fact” is incompatible with what we know, we are inclined to reject it even if it is true. <a href="https://doi.org/10.1521/soco.2012.30.6.652">People don’t like the feeling of inconsistency</a> and seek to resolve it. People will also ignore the structure and quality of an argument, and <a href="https://doi.org/10.1016/0010-0277(92)90019-E">focus on the believability of its conclusion</a>.</p>
<p>Deepfakes move us beyond text-based persuasion, because <a href="https://doi.org/10.4324/9781315785233">images make a message far more memorable</a> — and <a href="https://doi.org/10.1080/15534510.2016.1157096">persuasive</a> — than abstract concepts alone. Their use in spreading disinformation is therefore far more concerning.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/520363/original/file-20230411-917-schgcz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a book is held up with a renaissance style illustration of a woman and a snake on the cover and the words LA VERITÀ VI FARÀ LIBERI" src="https://images.theconversation.com/files/520363/original/file-20230411-917-schgcz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/520363/original/file-20230411-917-schgcz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/520363/original/file-20230411-917-schgcz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/520363/original/file-20230411-917-schgcz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/520363/original/file-20230411-917-schgcz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/520363/original/file-20230411-917-schgcz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/520363/original/file-20230411-917-schgcz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">In 2018, Pope Francis published his annual social communications message, titled ‘The Truth Will Set You Free,’ after facing unprecedented bad press during his South American tour.</span>
<span class="attribution"><span class="source">(AP Photo/Andrew Medichini)</span></span>
</figcaption>
</figure>
<p>The structure of the environment is also critical. People attend to <a href="https://doi.org/10.1037/0022-3514.61.2.195">available information</a>, focusing on information that confirms their prior beliefs. By increasing the frequency of images, ideas and other media, we increase people’s <a href="https://doi.org/10.1037/0022-3514.82.5.722">confidence in their own knowledge</a> and <a href="https://doi.org/10.1016/j.cognition.2022.105023">the illusion of consensus</a>.</p>
<h2>Social networks and contagion</h2>
<p>While we look for <a href="https://doi.org/10.1111/j.1559-1816.2004.tb02547.x">credible sources of information</a> — experts or peers — our memory stores information separately from its source. Over time, this <a href="https://doi.org/10.1037/0033-2909.114.1.3">failure of source monitoring</a> results in our retrieval of information from memory without understanding its origin. </p>
<p>Through <a href="https://doi.org/10.1207/s15506878jobem5004_1">product placement</a> and <a href="https://www.nytimes.com/2021/09/21/technology/zuckerberg-facebook-project-amplify.html">algorithms that control our exposure to media</a>, marketers and <a href="https://freedomhouse.org/report/freedom-net/2018/rise-digital-authoritarianism">governments</a> have exploited these techniques for generations. Most recently, <a href="https://www.nytimes.com/2021/07/25/world/europe/disinformation-social-media.html">social media influencers have been paid to spread disinformation</a>. </p>
<p>The introduction of AI will only accelerate this process by permitting tighter control of the information environment through dark <a href="https://doi.org/10.1145/3173574.3174108">patterns of design</a>.</p>
<h2>Legal, social and moral issues</h2>
<p>Producing, managing and disseminating information grants people authority and power. When information ecosystems become flooded with disinformation, truth is debased.</p>
<p>The accusation of “fake news” has become a tactic used to discredit any argument. Deepfakes are variations on this theme. Social media users have already falsely claimed that real videos of <a href="https://www.bbc.com/news/62338593">U.S. President Joe Biden</a> and <a href="https://www.buzzfeednews.com/article/katienotopoulos/no-that-trump-video-isnt-green-screened">former U.S. president Donald Trump</a> are fake.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/fake-news-grabs-our-attention-produces-false-memories-and-appeals-to-our-emotions-124842">Fake news grabs our attention, produces false memories and appeals to our emotions</a>
</strong>
</em>
</p>
<hr>
<p>Social movements such as <a href="https://www.brookings.edu/techstream/the-threat-posed-by-deepfakes-to-marginalized-communities/">Black Lives Matter</a> or claims about <a href="https://www.dw.com/en/xinjiang-footage-sheds-new-light-on-uyghur-detention-camps/a-59880898">the treatment of the Uyghurs in China</a> rely on the compelling qualities of videos. </p>
<p><a href="https://doi.org/10.1016/j.jesp.2007.04.010">Once we form a belief, it is difficult to counter</a>. The time required for verification — especially if left to the user — allows disinformation to propagate. <a href="https://www.snopes.com/fact-check/">Private</a> and public fact-checking websites can help. But they need legitimacy to foster trust. </p>
<p>Brazil provides a recent demonstration of such an attempt. After the government launched a verification website, <a href="https://latamjournalismreview.org/articles/brazilian-government-launches-official-fact-checking-website-and-draws-criticism-from-independent-agencies/">critics</a> accused it of pro-government bias. However, <a href="https://en.mercopress.com/2023/04/07/brazilian-gov-t-s-website-not-enough-against-fake-news-pimenta-says">government officials</a> noted that the site was not meant to replace private initiatives.</p>
<p>There is <a href="http://www.jatit.org/volumes/Vol97No22/7Vol97No22.pdf">no simple solution to unmasking deepfakes</a>. Rather than being passive consumers of media, we must actively challenge our own beliefs. </p>
<p>The only way to combat harmful forms of artificial intelligence is to cultivate human intelligence.</p><img src="https://counter.theconversation.com/content/203368/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jordan Richard Schoenherr has previously received funding from Army Research Laboratory and has served as a visiting scholar at the United States Military Academy and has worked as a consultant for the Canadian Department of National Defence.</span></em></p>The use of deepfakes and AI by groups with various interests, including governments and media, is the latest and most sophisticated tool in information and disinformation campaigns.Jordan Richard Schoenherr, Assistant Professor, Psychology, Concordia UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2028732023-03-31T11:41:33Z2023-03-31T11:41:33ZThe Pope Francis puffer coat was fake – here’s a history of real papal fashion<figure><img src="https://images.theconversation.com/files/518162/original/file-20230329-24-yl530w.png?ixlib=rb-1.1.0&rect=0%2C0%2C1748%2C1153&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The AI-generated images of Pope Francis that fooled much of the internet. </span> <span class="attribution"><a class="source" href="https://www.reddit.com/r/midjourney/comments/120vhdc/the_pope_drip/">Created by Midjourney</a></span></figcaption></figure><p>Before news of his <a href="https://www.bbc.co.uk/news/world-65125655">hospitalisation for a respiratory infection</a> this week, a fake image of Pope Francis wearing a <a href="https://time.com/6266606/how-to-spot-deepfake-pope/">Balenciaga-style</a> white puffer jacket was <a href="https://www.reddit.com/r/midjourney/comments/120vhdc/the_pope_drip/">posted to Reddit</a> and <a href="https://twitter.com/singareddynm/status/1639655045875507201?s=20">Twitter</a>. The image – created through AI programme <a href="https://www.midjourney.com/home/?callbackUrl=%2Fapp%2F">Midjourney</a> – had many viewers fooled into believing that the head of the Catholic church had dramatically updated his style.</p>
<p>As an art historian and an ecclesiastical historian, I have been fascinated by the image, not least in thinking about the rich history of papal fashion.</p>
<p>First of all, it caught my eye because it looks like shot silk (fabric made of silk woven from two or more colours producing an iridescent appearance). Intentionally or not, it’s a nice nod to the <a href="https://aleteia.org/2019/08/28/why-does-pope-francis-wear-a-sash/"><em>fascia</em></a>, a sash worn by clerics over their cassocks.</p>
<p>This detail hints at the way papal dress, and indeed the attire of many people in formal positions, works. It is not usually just about the shape and colour, but also the quality of the materials used. </p>
<p>Being the pope is a bit like dressing for a wedding every day: even as a guest you wouldn’t turn up in your denims. You honour your hosts by wearing the best you possibly can.</p>
<h2>The palette of the Pope</h2>
<p>In the 21st century, popes have increasingly worn only white, now generally identified as the papal colour. But red is also a papal hue of choice – for example, John Paul II (1920-2005) usually wore white, but he also wore <a href="https://www.reddit.com/r/Colorization/comments/nbfhyj/pope_john_paul_ii_by_yousuf_karsh_in_1979/">red capes and cloaks</a>.</p>
<p>Benedict XVI (1927-2022) brought back the <em>camauro</em> – <a href="https://www.latimes.com/opinion/la-xpm-2013-feb-17-la-oe-allen-pope-fashion-20130217-story.html">nicknamed the “Santa hat”</a> – which is a red silk and velvet cap trimmed with ermine reserved for the pope’s use. The <em>camauro</em> goes back to at least the 12th century when it was related more closely to philosophers and teachers and the hat they wore, known as a <em>pileus</em>.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/518229/original/file-20230329-1469-5kepao.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Painting of Pope Gregory the Great writing at a desk wearing a shiny red cape." src="https://images.theconversation.com/files/518229/original/file-20230329-1469-5kepao.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/518229/original/file-20230329-1469-5kepao.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=842&fit=crop&dpr=1 600w, https://images.theconversation.com/files/518229/original/file-20230329-1469-5kepao.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=842&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/518229/original/file-20230329-1469-5kepao.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=842&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/518229/original/file-20230329-1469-5kepao.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1058&fit=crop&dpr=1 754w, https://images.theconversation.com/files/518229/original/file-20230329-1469-5kepao.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1058&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/518229/original/file-20230329-1469-5kepao.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1058&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Pope Gregory the Great (c. 540–604), in a painting by Carlo Saraceni (c. 1610).</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Gregorythegreat.jpg">National Gallery of Ancient Art, Rome</a></span>
</figcaption>
</figure>
<p>Historically, portraits of senators, lawyers and academics often show them wearing red, a colour used to communicate official status. </p>
<p>Cardinals, the most senior clerics in the Roman Catholic Church next to the pope, wear red precisely because it is a papal colour and their power (or more accurately, influence) derives entirely from the pope.</p>
<p>Pope Paul II (1417-1471) tried to ensure quality over quantity when, amid shortages, <a href="https://www.academia.edu/17934885/ONCE_UPON_A_TIME_THE_KERMES">he officially reserved</a> the very best red dye for himself and his cardinals.</p>
<p>As a result of the fall of Constantinople to the Ottoman Turks in 1453, trade from the eastern Mediterranean was disrupted. This meant that the supply of the red dye kermes – which derives from the galls produced by parasitic wasps on oak trees indigenous to the Mediterranean basin and the Near East – was severely curtailed.</p>
<p>It was not until the middle of the 16th century that <a href="https://www.britannica.com/technology/cochineal">cochineal</a> – which comes from parasitic insects on prickly pear cactuses – became available in Europe because of Spanish and Portuguese expansion into South America. </p>
<p>Whatever the dye, papal quality is also communicated by fabrics which hold unparalleled depths of hue: silk, not cotton or linen, alpaca not ordinary wool.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/518230/original/file-20230329-22-h8xtel.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Portrait of John Paul II wearing a red cape and holding a wooden crucifix." src="https://images.theconversation.com/files/518230/original/file-20230329-22-h8xtel.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/518230/original/file-20230329-22-h8xtel.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=773&fit=crop&dpr=1 600w, https://images.theconversation.com/files/518230/original/file-20230329-22-h8xtel.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=773&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/518230/original/file-20230329-22-h8xtel.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=773&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/518230/original/file-20230329-22-h8xtel.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=972&fit=crop&dpr=1 754w, https://images.theconversation.com/files/518230/original/file-20230329-22-h8xtel.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=972&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/518230/original/file-20230329-22-h8xtel.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=972&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Portrait of John Paul II wearing a red cape, by Guido Greganti (1983).</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/rome-italy-august-28-2021-portrait-2060097083">Renata Sedmakova/Shutterstock</a></span>
</figcaption>
</figure>
<p>Historians know from <a href="https://archive.org/details/RationaleDivinorumOfficiorumDurandoEBeletho/page/n7/mode/2up">13th century sources</a> that popes have always worn white next to their skin (though from at least the 15th century <a href="https://www.newyorker.com/culture/on-and-off-the-avenue/where-the-pope-gets-his-socks">their socks have been red</a>). </p>
<p>White represents Christlike purity, innocence and charity, while red symbolises compassion and the pope’s willingness to sacrifice himself for his people.</p>
<p>In ancient Rome, red was the colour of imperial power whereas white was associated specifically with the city. So, the papal colours represent the pope’s universal significance as head of the Catholic church as well as his local position as Bishop of Rome.</p>
<p>Popes can also wear blue – <a href="https://archive.org/details/diuominiillu00vesp/page/30/mode/2up">Pope Nicholas V</a> (1397-1455) particularly liked this colour. John Paul II, on one of his famous hiking trips, wore his white cassock <a href="https://www.monacosporthotel.com/en/activities/itineraries/the-hiking-trails-of-pope-john-paul-ii-_63c20.html">under a padded blue jacket</a>.</p>
<p>For papal fashion purposes, blue can stand in for red. In penitential seasons (<a href="https://www.catholicnewsagency.com/news/42900/what-is-advent-anyway-a-cna-explainer">Advent</a> and <a href="https://christianity.org.uk/article/what-is-lent">Lent</a>) or during periods of mourning, bright colours are not appropriate. But dip your bright red silks in a final dye bath of indigo and you get peacock (<em><a href="https://brill.com/display/book/edcoll/9789004415447/BP000041.xml">pavonazzo</a></em>) which has the iridescence of the bird’s feathers.</p>
<p>Someone with the social conscience of Pope Francis probably doesn’t give two hoots about what he wears. But as a Jesuit – one of the most highly educated, intellectual and thoughtful of all the groups in the Roman Catholic Church – he would understand the values of continuity and devotion communicated by both what he wears and how he wears it.</p>
<p>I would like to imagine, as he recovers from his respiratory infection, that he would be cheered up by high-tech mashups, such as this image of himself in a puffer coat – so long as they play within the rules of such a dignified man in such a venerable office.</p><img src="https://counter.theconversation.com/content/202873/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Carol Richardson receives funding from British Academy/Leverhulme Trust. </span></em></p>Popes wear white to represent Christlike purity and red to symbolise compassion.Carol Richardson, Professor of Early Modern Art History, History of Art, The University of EdinburghLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2011122023-03-08T15:08:47Z2023-03-08T15:08:47ZWhy you shouldn’t be scared of spiders<figure><img src="https://images.theconversation.com/files/513371/original/file-20230303-24-sec31s.jpg?ixlib=rb-1.1.0&rect=4%2C22%2C2998%2C1972&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Specimen of the spider Thwaitesia nigronodosa.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Bling_Spider_-_Neon_Spider_-_Thwaitsia_sp._from_the_NSW_Central_Coast_(7).jpg">Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>A school in Gloucestershire, in the west of England, was closed for several days due to an “invasion of poisonous spiders”. Experts claimed they were not aggressive, but the school was closed, alarm spread and some media outlets were quick to call them “<a href="https://www.dailymotion.com/video/x16c7ax">eight-legged monsters</a>”.</p>
<p>In another case, the alleged severity of a spider bite on a woman triggered alarm in Mallorca (Spain). Social networks were flooded with messages and photographs of bites. Although the Conselleria de Salut and the leading hospital on the islands made it public that <a href="https://www.ultimahora.es/noticias/sociedad/2016/09/23/221600/hay-ninguna-alarma-activa-por-picadura-aranas-balears.html">there was no health alarm</a> and experts explained that no dangerous spiders are present in Mallorca, messages continued for days and days on social media.</p>
<p>Such news is likely to produce fear, a visceral reaction often sought by the most sensationalist tabloids. These stories do not remain in the local or regional press, but are disseminated on a global scale almost immediately.</p>
<p>Such news prompts political or social actions that can be costly, often unnecessarily so: keeping children home from school for several days, for example, or environmental pollution from unnecessary pesticide treatments. It also fuels a global sentiment based on misinformation: spider panic.</p>
<p>Contrary to the impression we get from reading these news stories, the risk of being exposed to a spider is minimal. <a href="https://www.sciencedirect.com/science/article/abs/pii/S0041010113002535">Studies in Switzerland</a> estimate that the annual probability of being bitten by a spider is between 10 and 100 cases per million inhabitants. <a href="https://academic.oup.com/qjmed/article/95/11/723/1543044">Another study in Australia</a> found that only 6% of confirmed spider bites were of medical importance.</p>
<h2>The global spread of misinformation about spiders</h2>
<p>A recent study by more than 60 researchers, published in the journal <a href="https://www.sciencedirect.com/science/article/pii/S0960982222011277">Current Biology</a>, has examined the global spread of misinformation about spiders. This collective effort has resulted in <a href="https://figshare.com/articles/dataset/Global_Spider_News_Database/14822301">the compilation</a> of more than 5,000 news items on spider-human encounters published on the internet between 2010 and 2020.</p>
<p>The news items were evaluated in terms of their quality (presence or absence of errors) and their level of sensationalism.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/485391/original/file-20220919-6421-k96p53.jpg?ixlib=rb-1.1.0&rect=20%2C324%2C4487%2C2680&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/485391/original/file-20220919-6421-k96p53.jpg?ixlib=rb-1.1.0&rect=20%2C324%2C4487%2C2680&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/485391/original/file-20220919-6421-k96p53.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/485391/original/file-20220919-6421-k96p53.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/485391/original/file-20220919-6421-k96p53.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/485391/original/file-20220919-6421-k96p53.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/485391/original/file-20220919-6421-k96p53.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/485391/original/file-20220919-6421-k96p53.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/es/image-photo/large-spider-weaving-web-garden-tiger-2201791945">Shutterstock / neroski</a></span>
</figcaption>
</figure>
<h2>Reporting errors and sensationalist language</h2>
<p>Almost half of the news items analysed contained errors or inaccurate information, such as incorrect identification of the spider involved. Some articles report species that do not even live in the area, and sometimes there is no certainty that the bite occurred.</p>
<p>In up to 43% of cases, the news stories used sensationalist language, although the language was less sensational when experts in <a href="https://en.wikipedia.org/wiki/Arachnology">arachnology</a> had been consulted.</p>
<p>Errors often started at the regional level, and the story was amplified in national and international media. According to experts, this is a defining characteristic of modern misinformation: the amplification of small errors that support a false narrative. It is as present in spider news as it is in political news.</p>
<p>The likelihood of a country being a distributor of sensational news stories about human-spider encounters was positively related to several factors. These included the proportion of sensational news stories published in the country, the presence of spiders considered deadly, and a high number of internet users.</p>
<p>There are more dangerous spiders in Australia than in almost any other country, yet news about spiders is accurate and rarely emotionally charged. According to the analysis, the UK generates the most misinformation about arachnids, despite having very few dangerous venomous spider species.</p>
<p>The implications of this misinformation are no less significant. It reinforces public animosity towards these arthropods, leading to what we mentioned at the beginning of the article: the avoidance of their presence in public or private spaces, and the use of unnecessary pesticide treatments. Moreover, false alarms may lead to school closures or damage to tourism.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/485389/original/file-20220919-22-42yat0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/485389/original/file-20220919-22-42yat0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/485389/original/file-20220919-22-42yat0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/485389/original/file-20220919-22-42yat0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/485389/original/file-20220919-22-42yat0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/485389/original/file-20220919-22-42yat0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/485389/original/file-20220919-22-42yat0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/485389/original/file-20220919-22-42yat0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/es/image-photo/jumping-spiders-soft-focus-415995106">Shutterstock / MR.AUKID PHUMSIRICHAT</a></span>
</figcaption>
</figure>
<h2>The other side of spiders</h2>
<p>The first clear spider-like representations date back <a href="http://sea-entomologia.org/aracnet/10/03mitologia/">10,000 years</a>. Because they occupy every continent and habitat, and because of their distinctive biology and ecology, spiders have been admired and feared in equal measure.</p>
<p>They have often been associated with divinities, with creative powers (due to their great fertility, their cunning and their ability to make and weave silk) and destructive powers (related to their hunting methods and their venom).</p>
<p>All spiders, with the exception of the <a href="https://www.faculty.biol.vt.edu/opell/publicaton_pdfs/2005%20SGNA%20Uloboridae.pdf">Uloboridae</a> family, produce venom but, with rare exceptions, it is harmless to humans. They use it, along with silk, to trap or immobilise their prey.</p>
<p>Only four genera of spiders have been described whose venom is of medical interest (<em>Phoneutria</em>, <em>Loxosceles</em>, <em>Latrodectus</em> and <em>Atrax</em>), and only 4% of known species may be dangerous to humans. This means that of the approximately 45,000 known species, more than 43,200 are harmless.</p>
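The figures above can be checked with simple arithmetic (a sketch in Python; the species counts are the article's approximate values, not exact taxonomy):

```python
# Back-of-the-envelope check of the species figures cited above.
# These counts are the article's approximate values.
known_species = 45_000
dangerous_percent = 4  # only ~4% of known species may harm humans

# Integer arithmetic avoids floating-point rounding in the percentage.
dangerous = known_species * dangerous_percent // 100
harmless = known_species - dangerous

print(dangerous)  # 1800 species of potential medical concern
print(harmless)   # 43200 harmless species, matching the article's figure
```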
<p>Contrary to popular belief, spiders have many beneficial aspects. Firstly, they <a href="https://www.biotaxa.org/Zootaxa/article/view/zootaxa.4979.1.14">contribute to the total biodiversity</a> of the planet, being one of the largest groups of invertebrate animals. In addition, they play an essential role in <a href="http://sedici.unlp.edu.ar/bitstream/handle/10915/93298/Efecto_de_las_ara%C3%B1as__Arachnida__Araneae__como_depredadoras_de_insectos_plaga_en_cultivos_de_alfalfa__Medicago_sativa___Fabaceae__en_Argentina.a1de8d21-8e06-4f6b-8171-401826ca5375_D.pdf-PDFA.pdf?sequence=1">controlling crop pests</a> thanks to their role as <a href="https://link.springer.com/article/10.1007/S00114-017-1440-1">insect predators</a>, and are important <a href="https://www.researchgate.net/publication/272740185_Las_aranas_en_agroecosistemas_bioindicadores_terrestres_de_calidad_ambiental">bio-indicators of environmental quality</a>.</p>
<p>Once this criminalised view of arachnids is debunked, the best thing to do when faced with a spider is to be kind to it, because it is a natural treasure.</p><img src="https://counter.theconversation.com/content/201112/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Saioa Legarrea Imizcoz carries out her research work at the University of La Rioja thanks to the European Union Next Generation EU funds articulated through the María Zambrano call (Royal Decree 289/2021 of 20 April).</span></em></p><p class="fine-print"><em><span>Tania A. García de la Parra Bañares does not receive a salary, work as a consultant, own shares in or receive funding from any company or organisation that could benefit from this article, and has declared no relevant affiliations beyond the academic position cited.</span></em></p>A study has reviewed 5,000 news stories about spiders published on the internet. Most of them contain false and sensationalist information. The spider infodemic carries its own venom.Saioa Legarrea Imizcoz, Researcher in Agricultural Entomology, Universidad de La RiojaTania A. García de la Parra Bañares, PhD Student, Universidad de La RiojaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2009492023-03-06T15:04:28Z2023-03-06T15:04:28ZReporting Ukraine 90 years ago: the Welsh journalist who helped uncover Stalin’s genocide<p>Ninety years ago, a young Welsh investigative journalist reported on the Soviet Union’s genocide in Ukraine, Stalin’s attempt to stamp out rising nationalism. <a href="https://www.history.com/news/ukrainian-famine-stalin">The Holodomor</a>, as it became known, was responsible for the deaths of some 4 million Ukrainians through deliberate starvation. </p>
<p><a href="https://www.garethjones.org">Gareth Jones</a>’ eyewitness reports, gathered at significant risk, were initially disbelieved and dismissed at a time when many in the west were supportive of Stalin as a potential ally against the growing Nazi threat in the early 1930s. It was only later, after the journalist was murdered in murky circumstances, that the full scale of what had taken place was recognised. </p>
<p>Jones, a linguist and political advisor before he turned to journalism, has become the subject of a <a href="https://www.youtube.com/watch?v=-o7VoM1jlOs">feature film,</a> several <a href="https://www.bbc.co.uk/news/uk-wales-south-east-wales-18691109">documentaries</a> and numerous <a href="https://nation.cymru/culture/review-mr-jones-the-man-who-knew-too-much-the-life-death-of-gareth-jones-by-martin-shipton/">biographies.</a> Yet his achievements, which hold lessons for today’s reporters, are still not well known.</p>
<p>Jones was born in Barry, south Wales, in 1905. His mother had worked in Ukraine as a tutor to the <a href="https://www.bbc.co.uk/news/uk-wales-40345030">Hughes family</a>, Welsh steel industrialists, who had founded what is now the city of <a href="https://www.britannica.com/place/Donetsk-Ukraine#ref197147">Donetsk</a>. </p>
<p>He had a talent for languages and graduated from Aberystwyth University with first class honours in French, and later from Cambridge with another first in French, German and Russian. In 1930, he was hired as a foreign affairs advisor to the MP and former prime minister David Lloyd George while also developing his freelance journalism.</p>
<p>In early 1933, Jones was in Germany covering <a href="https://www.garethjones.org/german_articles/german_articles.htm">Hitler’s rise to power</a>. He was there on the day Hitler was <a href="https://www.history.com/this-day-in-history/adolf-hitler-is-named-chancellor-of-germany">named chancellor</a> and flew with him and Goebbels to Frankfurt, where he reported for the <a href="https://www.walesonline.co.uk">Western Mail</a>, a Welsh daily newspaper. </p>
<p>In March 1933, he made a third and final trip to the Soviet Union. He had earlier <a href="https://www.garethjones.org/soviet_articles/gareth_jones_diary.htm">reported more explicitly</a> than most on the economic crisis and starvation that was emerging. This time, he went undercover into Ukraine and <a href="https://www.garethjones.org/soviet_articles/gareth_jones_diary.htm">kept notes</a> of all he saw:</p>
<blockquote>
<p><a href="https://www.garethjones.org/margaret_siriol_colley/The%20exhibition/press_release.htm">I walked along through villages and twelve collective farms</a>. Everywhere was the cry, “There is no bread. We are dying.” This cry came from every part of Russia, from the Volga, Siberia, White Russia, the North Caucasus, and Central Asia. I tramped through the black earth region because that was once the richest farmland in Russia and because the correspondents have been forbidden to go there to see for themselves what is happening.</p>
</blockquote>
<p>The report was denounced by the Soviets and also in the New York Times by its Moscow correspondent, Walter Duranty. It was an <a href="https://www.atlanticcouncil.org/blogs/ukrainealert/mr-jones-film-exposes-the-fake-news-campaign-behind-stalins-ukrainian-genocide/">early example of crying “fake news”</a> to undermine uncomfortable truths. </p>
<figure class="align-center ">
<img alt="People lie strewn in a black and white scene. Other people walk past looking at the bodies." src="https://images.theconversation.com/files/513619/original/file-20230306-22-it1g4d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/513619/original/file-20230306-22-it1g4d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=447&fit=crop&dpr=1 600w, https://images.theconversation.com/files/513619/original/file-20230306-22-it1g4d.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=447&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/513619/original/file-20230306-22-it1g4d.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=447&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/513619/original/file-20230306-22-it1g4d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=561&fit=crop&dpr=1 754w, https://images.theconversation.com/files/513619/original/file-20230306-22-it1g4d.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=561&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/513619/original/file-20230306-22-it1g4d.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=561&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Starved people on a street in Kharkiv, Ukraine in 1933.</span>
<span class="attribution"><a class="source" href="https://books.google.co.uk/books/about/Famine_in_the_Soviet_Ukraine_1932_1933.html?id=k0K5AAAAIAAJ&redir_esc=y">Famine in the Soviet Ukraine, 1932–1933: a memorial exhibition, Widener Library, Harvard University.</a></span>
</figcaption>
</figure>
<p>Jones <a href="https://www.garethjones.org/soviet_articles/jones_replies.htm">rebutted</a> the criticism with a detailed analysis of the famine and its causes – but the mud stuck. He was banned from the Soviet Union and returned to Wales, unable to find work with major newspapers until he <a href="https://www.garethjones.org/margaret_siriol_colley/randolph_hearst1934.htm">met the American press magnate William Randolph Hearst</a>. Hearst had bought St Donat’s castle, a few miles from Jones’ home in Barry, and supported him by publishing his articles in full.</p>
<p>The following year, he embarked on a world tour, focusing <a href="https://www.garethjones.org/articles_far_east/contents.htm">on Asia</a>.
He spent time in Japan and then went to China, moving on to Inner Mongolia with a German journalist. The pair were kidnapped by bandits and held hostage. </p>
<p>Jones’ body was found in August 1935. He had <a href="https://www.garethjones.org/articles_far_east/berliner_tageblatt.htm">apparently been shot</a> the day before his 30th birthday. Biographers have pointed to circumstantial evidence that the Soviet secret services, the NKVD, were involved in his kidnap and murder as revenge for his reporting. But there is no concrete proof of this. </p>
<p>Lloyd George paid <a href="http://news.bbc.co.uk/local/cambridgeshire/hi/people_and_places/history/newsid_8357000/8357028.stm">tribute to him</a> in the London Evening Standard newspaper following news of his death:</p>
<blockquote>
<p>That part of the world is a cauldron of conflicting intrigue and one or other interests concerned probably knew that Mr Gareth Jones knew too much of what was going on. He had a passion for finding out what was happening in foreign lands wherever there was trouble, and in pursuit of his investigations he shrank from no risk. I had always been afraid that he would take one risk too many. Nothing escaped his observation, and he allowed no obstacle to turn from his course when he thought that there was some fact, which he could obtain. He had the almost unfailing knack of getting at things that mattered.</p>
</blockquote>
<p>Today, as another generation of journalists reports on Russia’s invasion of Ukraine, Jones’ story holds a number of relevant lessons. Even as we are swamped with digital media, there is no substitute for eyewitness reporting and for reporters taking the risks to see for themselves what is happening. </p>
<p>Attempts to hold power to account will often be met with denial – including from other media – but cries of “fake news” must be countered with hard evidence. </p>
<p>Reporting can be a dangerous occupation. The press watchdog, the Committee to Protect Journalists, <a href="https://www.theguardian.com/media/2023/jan/24/more-journalists-killed-latin-america-caribbean-ukraine-2022-cpj">reported</a> that 67 journalists had been killed last year – including 15 in Ukraine after Russia’s invasion in February 2022.</p>
<p>Despite the risks, international reporting is as essential today as it was in the 1930s when Gareth Jones set out to tell the world what he had seen.</p><img src="https://counter.theconversation.com/content/200949/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Richard Sambrook does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Gareth Jones reported on Moscow’s genocide against the Ukrainian people in the 1930s. His story holds lessons and an example for those reporting on the latest conflict.Richard Sambrook, Emeritus Professor of Journalism, Cardiff UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1991232023-02-23T12:52:04Z2023-02-23T12:52:04ZMisinformation: why it may not necessarily lead to bad behaviour<figure><img src="https://images.theconversation.com/files/511120/original/file-20230220-16-i4wcof.jpg?ixlib=rb-1.1.0&rect=73%2C128%2C5978%2C4092&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Lies are nothing new.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/pop-art-style-comic-book-panel-654506083">durantelallera/Shutterstock</a></span></figcaption></figure><p>“So far as the influence of the newspaper upon the mind and morals of the people is concerned, there can be no rational doubt that the telegraph has caused vast injury.” So <a href="https://www.theatlantic.com/technology/archive/2014/07/in-1858-people-said-the-telegraph-was-too-fast-for-the-truth/375171/">said The New York Times</a> in 1858, when the transatlantic cable linking North America and Europe was completed.</p>
<p>The telegraph was assumed to be a means of spreading propaganda that would destabilise society. It was also seen as <a href="https://monoskop.org/images/b/bf/Lippman_Walter_Public_Opinion.pdf">a vehicle used to disconnect people</a> from the real world by introducing false ideas into their heads. Today, we might dismiss this as an irrational fear – a <a href="https://www.taylorfrancis.com/books/mono/10.4324/9780203828250/folk-devils-moral-panics-stanley-cohen">moral panic</a>. </p>
<p>Go back further and there are examples of questionable information recorded and disseminated via information technologies available to the ancients – <a href="https://journals.ala.org/index.php/ltr/article/view/6497">in clay, stone and papyrus</a>. Fast forward to today, and the exact same concern exists around social media. So are we overreacting? We have <a href="https://doi.org/10.1177/17456916221141344">interrogated the evidence</a> suggesting that misinformation leads to bad beliefs and behaviour and found we might be.</p>
<p>The concern about misinformation is certainly growing. If you type “misinformation” into an academic search engine, you get about 100,000 hits between 1970 and 2015. In the past seven years alone, there are over 150,000 hits. </p>
<p>In <a href="https://www.mpf.se/en/mission/">Sweden</a>, <a href="https://www.aec.gov.au/About_AEC/files/eiat/eiat-disinformation-factsheet.pdf">Australia</a>, <a href="https://www.canada.ca/en/canadian-heritage/services/online-disinformation.html">Canada</a>, the <a href="https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf">United Kingdom</a>, the <a href="https://www.state.gov/disarming-disinformation/">United States</a>, <a href="https://digital-strategy.ec.europa.eu/en/policies/online-disinformation">European Union</a>, <a href="https://www.who.int/health-topics/infodemic">World Health Organization</a> and the <a href="https://www.un.org/en/un-coronavirus-communications-team/un-tackling-infodemic-misinformation-and-cybercrime-covid-19">United Nations</a>, there is intense research on the topic. This is linked to the introduction of laws, bills, task forces and units to block the spread of the misinformation virus. It seems the consensus is that misinformation is a problem, and a big one. </p>
<p>What drives this consensus? When we reviewed the research <a href="https://doi.org/10.1177/17456916221141344">across a number of different disciplines</a> – including sociology, psychology, computer science, philosophy and media studies – we found the finger pointing at the evolution of the internet. The advent of social media has turned passive consumers of information into active producers and distributors. The result is unchecked and uncontrolled information that may boost beliefs in false claims. </p>
<p>This research suggests misinformation may lead to increased distrust in news media and governments or increased illiberal political behaviours, such as violent attacks on ethnic groups. Or that it may destabilise economic behaviours. After all, Pepsi’s stock <a href="https://www.nim.org/en/publications/gfk-marketing-intelligence-review/all-issues/brand-risk-matters/how-truthiness-fake-news-and-post-fact-endanger-brands-and-what-do-about-it">fell by about 4%</a> because a <a href="https://www.snopes.com/fact-check/pepsi-ceo-tells-trump-supporters-to-take-their-business-elsewhere/">fake story went viral</a> about their CEO, Indra Nooyi, allegedly telling Trump supporters to “take their business elsewhere”.</p>
<p>Yet, the presumed relationship between social media and such social unrest is frequently based on tacit assumptions, not direct empirical evidence. These assumptions commonly take the form of a causal chain, which goes like this: misinformation → bad beliefs → bad behaviour.</p>
<p>Such an oversimplistic causal relationship between beliefs and behaviour has been questioned in both <a href="https://mitpress.mit.edu/9780262530743/matter-and-consciousness/">philosophy</a> and <a href="https://www.penguin.co.uk/books/285465/the-mind-is-flat-by-chater-nick/9780241208779">psychology</a>. In reality, there’s a <a href="https://gcs.civilservice.gov.uk/wp-content/uploads/2022/09/Wall_of_Beliefs_-publication.pdf">dynamic relationship between belief and behaviour</a> – each can fuel the other in complex ways.</p>
<figure class="align-center ">
<img alt="Hooded hacker person using smartphone in infodemic concept with digital glitch effect." src="https://images.theconversation.com/files/511121/original/file-20230220-14-56d1sa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/511121/original/file-20230220-14-56d1sa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/511121/original/file-20230220-14-56d1sa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/511121/original/file-20230220-14-56d1sa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/511121/original/file-20230220-14-56d1sa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/511121/original/file-20230220-14-56d1sa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/511121/original/file-20230220-14-56d1sa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">We often spot untrustworthy sources.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/hooded-hacker-person-using-smartphone-infodemic-2111213687">Shutterstock</a></span>
</figcaption>
</figure>
<p>In principle, people should be capable of assessing the quality of information and its source. After all, we have been dealing with lies and inaccuracies for millennia. And although advertisers can sometimes trick us, there’s no perfect model of how a particular communication channel with particular content can establish beliefs that will spur people to action on a large scale. </p>
<h2>Blind spots in research</h2>
<p>Just because a lot of researchers agree that there is an infodemic that is causing societal ills – distrust in institutions, for example – doesn’t mean that the issue is settled or that the evidence is secure. By combining a historical and psychological perspective, we discovered blind spots in this reasoning.</p>
<p>The causal chain described requires that we all agree on what misinformation is – and that this doesn’t change over time. But what happens when over time what is initially labelled as misinformation becomes information, or information becomes misinformation? Galileo’s 1632 challenge of the geocentric astronomical model, which assumed the Earth was at the centre of the solar system, is a classic example. Despite the fact that he was right, the Catholic church did not officially <a href="https://www.nytimes.com/1992/10/31/world/after-350-years-vatican-says-galileo-was-right-it-moves.html">pardon him for heresy</a> until 1992. So, for several centuries Galileo’s truth was seen as misinformation.</p>
<p>A recent case concerns the origin of the SARS-CoV-2 virus: the possibility that it was developed in a lab was initially widely labelled a <a href="https://www.tandfonline.com/doi/full/10.1080/13669877.2020.1758193">conspiracy theory</a>, before subsequently being seen as a <a href="https://www.science.org/content/article/who-chief-sharpens-call-china-further-help-probe-origin-pandemic">viable hypothesis</a>.</p>
<p>These difficulties resonate with debates and disagreement about the definition of the term misinformation and related notions such as fake news and disinformation, with several <a href="https://journals.sagepub.com/doi/10.1177/17456916221141344">proposals for definitions</a> and characteristics in the scientific literature. </p>
<p>If there is no agreement on a definition of misinformation, it’s no surprise that there is no clear cut way to determine its role in shaping beliefs and, in turn, how those beliefs affect behaviour.</p>
<p>A second blind spot relates to the accessibility of information. Technological advances have not only given rise to new ways of accessing and sharing information. They also provide new opportunities for journalists, governments and researchers to analyse various forms of human communication at an unprecedented scale.</p>
<p>A common impression is that people on social media are going it alone in curating their own facts about the world, and that this is causing a perfect storm where there is mistrust in various institutions (news media, governments, science) and society appears fractured. But just because we have greater access to knowing the sheer volume of communication between people online doesn’t mean that it directly causes societal ills. We may merely be observing part of the fabric of human communication that has always taken place in market squares, pubs and family dinners.</p>
<p>There is still a case to be made about addressing misinformation. But it isn’t clear how regulatory measures designed to impede the spread of, say, misleading scientific claims would work. Regulatory measures are necessary to limit unethical research and practices, but if taken to extreme they can erode the foundations of democratic societies. </p>
<p>History shows us the problems with censoring ideas, which <a href="https://www.perlego.com/book/2646812/dangerous-ideas-a-brief-history-of-censorship-in-the-west-from-the-ancients-to-fake-news-pdf">often backfires</a> – leading in turn to even less trust in institutions. While there is no easy solution, the goal must be to <a href="https://royalsociety.org/topics-policy/projects/online-information-environment">adequately balance</a> freedom of expression and democratic values against interventions designed to manage the fallout from misinformation.</p><img src="https://counter.theconversation.com/content/199123/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>We often assume misinformation leads to bad beliefs which lead to antisocial behaviour. But there’s so far little evidence for this.Magda Osman, Principal Research Associate in Basic and Applied Decision Making, Cambridge Judge Business SchoolBjörn Meder, Professor of Psychology, Health and Medical UniversityChristos Bechlivanidis, Associate Professor - Experimental Psychology, UCLZoe Adams, Research associate, Cambridge Judge Business SchoolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1981442023-02-01T20:44:38Z2023-02-01T20:44:38Z5 expert tips to protect yourself from online misinformation<figure><img src="https://images.theconversation.com/files/507206/original/file-20230130-26-hvkjbd.jpg?ixlib=rb-1.1.0&rect=0%2C68%2C5119%2C3325&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Online misinformation is a serious issue. But experts have helpful tips that can help us navigate it. </span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><iframe style="width: 100%; height: 100px; border: none; position: relative; z-index: 1;" allowtransparency="" allow="clipboard-read; clipboard-write" src="https://narrations.ad-auris.com/widget/the-conversation-canada/5-expert-tips-to-protect-yourself-from-online-misinformation" width="100%" height="400"></iframe>
<p>The spread of misinformation is a major problem impacting many areas of society from <a href="https://www.cbc.ca/news/politics/cost-of-covid-19-misinformation-study-1.6726356">public health</a>, to science and <a href="https://cca-reports.ca/reports/the-socioeconomic-impacts-of-health-and-science-misinformation/">even democracy itself</a>.</p>
<p>But online misinformation is a problem that is very difficult to address. Policing social media is like playing an infinite game of whack-a-mole. Even if we could address one type of misinformation, others quickly spring up in its place. Furthermore, there are valid concerns about how governments and corporations might address this problem and the dangers of censorship.</p>
<h2>Talking to experts</h2>
<p>We wanted to determine how people could best protect themselves from misinformation online, so in a <a href="https://www.dpicollective.com/our-projects/">recent project</a>, funded by the Social Sciences and Humanities Research Council, we created <a href="https://www.dpicollective.com/the-podcast/">a podcast</a> where we interviewed a group of experts from North America and the UK about misinformation. </p>
<p>We found their answers could be grouped into five broad themes.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/507211/original/file-20230130-12383-10wqq7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A smartphone with the Facebook app open." src="https://images.theconversation.com/files/507211/original/file-20230130-12383-10wqq7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/507211/original/file-20230130-12383-10wqq7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/507211/original/file-20230130-12383-10wqq7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/507211/original/file-20230130-12383-10wqq7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/507211/original/file-20230130-12383-10wqq7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/507211/original/file-20230130-12383-10wqq7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/507211/original/file-20230130-12383-10wqq7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Social media giants have faced significant criticism over the prevalence of misinformation on their platforms.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>1) <strong>Alter your sharing behaviour</strong> and take more time to consider the source of the information, as <a href="https://socialmedialab.ca/">Philip Mai from Toronto Metropolitan University’s Social Media Lab suggests</a>:</p>
<blockquote>
<p>“Don’t be so trigger happy with that retweet button or that share, but know your source. So if something is emotionally triggering you before you share it stop and see who’s sharing…how did they get that information so it’s not just who is sharing it but <a href="https://open.spotify.com/episode/4ze8INTfRmfwF7x6Yjen1p?go=1&sp_cid=be615d786320de6de0e9cad28bfa4909&utm_source=embed_player_p&utm_medium=desktop&nd=1">how did they get that information before you share it</a>.”</p>
</blockquote>
<p>Lateral reading, which involves seeking out additional sources that speak to the trustworthiness of what you’re about to share, can also help people judge the quality of information. For example, <a href="https://research-information.bris.ac.uk/en/persons/stephan-lewandowsky">cognitive psychology professor Stephan Lewandowsky</a> says: </p>
<blockquote>
<p>“Look for other sites that can tell you something about your target. So you know Wikipedia may pop up and say that website is a front for the fossil fuel industry or…it’s funded by unknown sources or whatever. <a href="https://open.spotify.com/episode/48tCz3LhQjPRHQYNGMIsG2?go=1&sp_cid=be615d786320de6de0e9cad28bfa4909&utm_source=embed_player_p&utm_medium=desktop&nd=1">And the moment you know that, then you have the means to dismiss sources as being likely untrustworthy</a>.”</p>
</blockquote>
<p>2) <strong>Seek out a variety of different news sources</strong> and consider paying for access to reputable news sources, if you are in a position to do so, to ensure that accurate news is available when you need it. <a href="https://www.ualberta.ca/law/faculty-and-research/health-law-institute/people/timothycaulfield.html">Timothy Caulfield, Canada Research Chair in Health Law and Policy at the University of Alberta</a> suggests:</p>
<blockquote>
<p>“Read news and commentary from across the ideological spectrum and subscribe to newspapers across the ideological spectrum…so we know you’re kind of contributing to the marketplace of ideas and <a href="https://open.spotify.com/episode/3LhGwMSR8ujWTc6jlsA64U?go=1&sp_cid=be615d786320de6de0e9cad28bfa4909&utm_source=embed_player_p&utm_medium=desktop&nd=1">you’re also doing the best to get outside your echo chamber</a>.”</p>
</blockquote>
<p>It can be difficult to identify quality news sources when there are so many inaccurate ones out there, but there are tools to help. Philosophy scholar Cailin O’Connor, co-author of the book <em><a href="https://yalebooks.yale.edu/book/9780300251852/the-misinformation-age/">The Misinformation Age</a></em>, told us:</p>
<blockquote>
<p>“The website <a href="https://www.propwatch.org/">Prop Watch</a> is all about teaching people what different propaganda techniques look like, as used by politicians and members of the media online, <a href="https://open.spotify.com/episode/0Jbwdj5vFGbhzdHev0XGGn?go=1&sp_cid=be615d786320de6de0e9cad28bfa4909&utm_source=embed_player_p&utm_medium=desktop&nd=1">there are things like this that people can use to train themselves</a>.” </p>
</blockquote>
<p>Prop Watch is an educational non-profit. It provides a searchable catalogue of propaganda examples that people can use to learn what propaganda looks like and better identify it online.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/507248/original/file-20230131-17461-ngiuuv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Wooden blocks that spell the words Fake and Fact" src="https://images.theconversation.com/files/507248/original/file-20230131-17461-ngiuuv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/507248/original/file-20230131-17461-ngiuuv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=300&fit=crop&dpr=1 600w, https://images.theconversation.com/files/507248/original/file-20230131-17461-ngiuuv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=300&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/507248/original/file-20230131-17461-ngiuuv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=300&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/507248/original/file-20230131-17461-ngiuuv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=377&fit=crop&dpr=1 754w, https://images.theconversation.com/files/507248/original/file-20230131-17461-ngiuuv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=377&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/507248/original/file-20230131-17461-ngiuuv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=377&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Take more time to consider the sources of information and seek additional sources that speak to the trustworthiness of posts before you share them.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>3) <strong>Educate yourself and be skeptical</strong> of information you encounter. Arming yourself with a critical filter may help protect you against misinformation that you would otherwise accept at face value. <a href="https://hls.harvard.edu/faculty/yochai-benkler/">Yochai Benkler, faculty co-director of the Berkman Klein Center for Internet and Society at Harvard University</a>, advises:</p>
<blockquote>
<p>“You can prevent yourself from falling into a trap by having an appropriately skeptical view of most everything you hear. Whatever the outlet…The stance is one of skepticism without cynicism. <a href="https://open.spotify.com/episode/4YKKkgmY8wSpUGlP1ykX8B?go=1&sp_cid=be615d786320de6de0e9cad28bfa4909&utm_source=embed_player_p&utm_medium=desktop&nd=1">You don’t have to think everyone is lying to understand that everything is prone to error</a>.”</p>
</blockquote>
<p>One way to practice healthy skepticism is to look for power in every story you come across. <a href="https://noraloreto.ca/">Journalist and author of the book <em>Spin Doctors</em>, Nora Loreto</a>, suggests asking questions like: “Who has power? Who does not have power? Who’s challenging power? How is power being employed? And <a href="https://open.spotify.com/episode/4tu9S1zFl4sC9PvX03qTzS?go=1&sp_cid=be615d786320de6de0e9cad28bfa4909&utm_source=embed_player_p&utm_medium=desktop&nd=1">how is power being protected?</a>”</p>
<p>4) <strong>Reconnect with yourself and your communities</strong> so you can have better relationships with information and the world around you. We are constantly inundated with information and stimulation in our current <a href="https://econreview.berkeley.edu/paying-attention-the-attention-economy/">attention economy</a>. </p>
<p>As education and technology scholar <a href="https://www.royalroads.ca/people/shandell-houlden">Shandell Houlden</a> describes, “the attention economy really is a disconnection economy and it disconnects us from ourselves.” <a href="https://open.spotify.com/episode/0E2tVVUWkCjULCzcc7RBgd?go=1&sp_cid=be615d786320de6de0e9cad28bfa4909&utm_source=embed_player_p&utm_medium=desktop&nd=1">She suggests that we should pay greater attention to our senses and to how things are trying to make us feel</a>. </p>
<p>Social media platforms and online spaces can leave us disconnected. Reconnecting with our communities can help us combat misinformation by encouraging dialogue with people we disagree with. <a href="https://www.royalroads.ca/people/geo-takach">Communications scholar and artist Geo Takach recommends</a>: “Engage with people, listen even if you disagree with them and <a href="https://open.spotify.com/episode/2tVYNWbNlVGQlC7q9kCrCZ?go=1&sp_cid=be615d786320de6de0e9cad28bfa4909&utm_source=embed_player_p&utm_medium=desktop&nd=1">try to find common ground based on values</a>.”</p>
<p>5) <strong>Advocate for systemic change</strong> by, for example, electing politicians who care about misinformation, helping people feel less disenfranchised and supporting reliable sources of information. Misinformation is a symptom of much larger systemic issues, ranging from social inequalities to inadequate legal infrastructures. As O’Connor says:</p>
<blockquote>
<p>“Honestly I would say the most important thing you can do is work to elect politicians who care about it… because again <a href="https://open.spotify.com/episode/0Jbwdj5vFGbhzdHev0XGGn?go=1&sp_cid=be615d786320de6de0e9cad28bfa4909&utm_source=embed_player_p&utm_medium=desktop&nd=1">sweeping changes are going to be more important than anything an individual can do</a>.”</p>
</blockquote>
<p>By mobilizing to build systemic structures that support a healthier information environment, individuals can do more to mitigate misinformation. Overall, it will take action at individual, organizational and systemic levels, but there are meaningful steps we can all take to fight back against misinformation if we have the will to do so.</p>
<p class="fine-print"><em><span>Jaigris Hodson receives funding from SSHRC CRC and Connections grant programs.</span></em></p><p class="fine-print"><em><span>Andrea Galizia does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The amount of content available online makes policing misinformation extremely difficult. But there are steps we can all take to better ensure the credibility of what we see online.Jaigris Hodson, Associate Professor of Interdisciplinary Studies, Royal Roads UniversityAndrea Galizia, Researcher, JD Candidate, Royal Roads UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1974892023-01-15T14:36:14Z2023-01-15T14:36:14ZInformation literacy courses can help students tackle confirmation bias and misinformation<figure><img src="https://images.theconversation.com/files/504497/original/file-20230113-26-659oki.jpg?ixlib=rb-1.1.0&rect=14%2C73%2C4898%2C2987&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Understanding our confirmation biases can help us tackle fake news and misinformation.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>When it comes to the news these days, what we choose to regard as trustworthy <a href="https://theconversation.com/republicans-and-democrats-see-news-bias-only-in-stories-that-clearly-favor-the-other-party-192282">has more to do with our own world view</a> than what kinds of news practices are worthy of trust. </p>
<p>Many people are seeking out news that <a href="https://www.asc.upenn.edu/news-events/news/cable-news-networks-have-grown-more-polarized-study-finds">aligns with their politics</a>. But there’s just one problem with this: we are not always good judges of what constitutes trustworthy information and news.</p>
<p>That’s why learning about <a href="https://doi.org/10.5206/cjsotl-rcacea.2020.2.9472">news and information literacy</a> is so important. An information literacy course I teach at the University of Windsor, <a href="https://ctl2.uwindsor.ca/cuma/public/courses/pdf/71241738-66f2-4eea-9a74-e8759c306c53">Information Searching and Analysis</a>, tries to show students that the same phenomenon which makes us poor judges can also be turned around to make us better, more critical consumers of news and information. </p>
<p>The process I use in this information literacy course does not encourage “trust” in mainstream or legacy news media per se. Rather, students learn to assess news based on the characteristics of a news story: multiple, adversarial sources; the use of statistics and data in which the sources are named and can be accessed independently; and the kinds of advertising present and whether it is related to the story.</p>
<h2>First lesson: Check your confirmation bias</h2>
<p><a href="https://www.britannica.com/science/confirmation-bias">Confirmation bias</a> is our tendency to seek out and interpret information in ways that confirm what we already believe; our prior knowledge and experiences often shape our opinions. However, by becoming aware of our confirmation bias tendencies, we can begin to self-critique the way we process information and learn more about ourselves and how we interpret news and information.</p>
<p>The solution comes in the form of an experiential weekend assignment in which students confront their own confirmation bias. Students look for and report on examples of confirmation bias around them and in media reports, focusing mostly on themselves and how they often engage in it.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/504500/original/file-20230113-20-ounndj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A row of newspapers." src="https://images.theconversation.com/files/504500/original/file-20230113-20-ounndj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/504500/original/file-20230113-20-ounndj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/504500/original/file-20230113-20-ounndj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/504500/original/file-20230113-20-ounndj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/504500/original/file-20230113-20-ounndj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=501&fit=crop&dpr=1 754w, https://images.theconversation.com/files/504500/original/file-20230113-20-ounndj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=501&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/504500/original/file-20230113-20-ounndj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=501&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">By becoming aware of our confirmation biases, we can self-critique the way we process information and news.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>The assignment is an eye-opener. In their end-of-semester papers, <a href="https://doi.org/10.1177/1077695819893171">80 per cent of students in the Information Searching and Analysis class</a> noted that the assignment was an important element of the course. Here are a few examples:</p>
<p>“I knew that in some aspects of my life, I may have exhibited confirmation bias towards certain ideas. However, I did not think it was as prominent as it was after the completion of the assignment.”</p>
<p>“…relating to my personal life, this was the most important assignment.”</p>
<p>“I think it was the most impactful and (will) stick with me the longest.”</p>
<p>“It was an insanely enriching experience for me to pull my biases out of the woodwork, particularly for someone like myself who regards themselves as quite unbiased when it comes to anything.”</p>
<p>“…extremely valuable was the consciousness I developed in regard to (how) social media was exclusively forming my opinions… I believe this is perhaps the most universal function of the class.” </p>
<p>The course uses a <a href="https://teaching.berkeley.edu/flipping-your-classroom">flipped classroom approach</a>. Flipped classrooms use class time for discussion, group activities and experiential education instead of lectures and passive forms of learning.</p>
<p>The key is self-confrontation. A dry explanation of the concept cannot convey all the ways we engage in confirmation bias. The point is not to preach or lecture students about their “faults.” Rather, it is about letting them understand for themselves how confirmation bias can result in inaccurate learning that may have negative effects.</p>
<h2>Media framing</h2>
<p>Over the rest of the semester students explore a social justice issue by looking at how interest groups, journalists and academic researchers have treated the issue. This serves to give them a holistic view of the information field and leads to a better understanding of both the issue and the social dynamics that inform debate about it.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/504499/original/file-20230113-21-oevkr3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man scrolls through a webpage on a smartphone." src="https://images.theconversation.com/files/504499/original/file-20230113-21-oevkr3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/504499/original/file-20230113-21-oevkr3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/504499/original/file-20230113-21-oevkr3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/504499/original/file-20230113-21-oevkr3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/504499/original/file-20230113-21-oevkr3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/504499/original/file-20230113-21-oevkr3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/504499/original/file-20230113-21-oevkr3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Greater information literacy enables us to assess how trustworthy the news we see on social media is.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>It is also crucial that students understand the nature of <a href="https://www.indeed.com/career-advice/career-development/what-is-sponsored-content">sponsored content</a> and other <a href="https://support.google.com/admanager/answer/6366845?hl=en">native ads</a> which may look like news but embed a point of view.</p>
<p>News, information and misinformation play a significant role in both improving and undermining democratic discourse and decision-making. Educators at all levels will need to give news and information literacy greater attention to ensure students know how to critique the news they encounter.</p>
<p class="fine-print"><em><span>James Wittebols does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Teaching students about information literacy can help them determine what kinds of practices make news reports trustworthy.James Wittebols, Professor of Political Science, University of WindsorLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1961022022-12-06T20:53:47Z2022-12-06T20:53:47ZDisinformation is an epidemic. We’re the vaccine.<figure><img src="https://images.theconversation.com/files/499366/original/file-20221206-2849-rtiqpd.jpg?ixlib=rb-1.1.0&rect=0%2C8%2C5591%2C3177&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> </figcaption></figure><p>Disinformation is the common denominator to many of the problems facing our democratic society. Disinformation is the wicked sister of misinformation: the latter happens when someone spreads false information, perhaps unwittingly; the former is a deliberate attempt to deceive by propagating misleading or fabricated facts. We’ve all encountered both in our daily lives, <a href="https://theconversation.com/5-ways-to-help-stop-the-infodemic-the-increasing-misinformation-about-coronavirus-137561">especially over the last three years</a>. </p>
<p>Facts matter. Science matters. Expertise matters.</p>
<p>The mission of <em>The Conversation</em> is simple: Share knowledge in order to help people make informed decisions. Our team of editors works with hundreds of academics from universities across Canada to tap into their research and expertise to produce our unique form of journalism. We’re proud of what we do. But we’d like to do more of it.</p>
<p>For the very first time, <em>The Conversation Canada</em> is asking its readers if they can help us expand our mission. Our content will always be free. It will never be hidden behind a paywall. But like other non-profit media organizations, <a href="https://donate.theconversation.com/ca?utm_campaign=2022+yearend+fundraising&utm_medium=scottarticle&utm_source=web">we are asking readers if they can make a financial contribution to help us expand on our mission</a>.</p>
<p>I’m going to be honest: asking for money isn’t something that comes naturally to me. And full disclosure: we cannot issue tax receipts for any donations at this time. That’s because we have so far been denied charitable status by the Canada Revenue Agency. The CRA has also ruled that we are not a <a href="https://www.canada.ca/en/revenue-agency/services/tax/businesses/topics/corporations/business-tax-credits/qualified-canadian-journalism-organization.html">Qualified Canadian Journalism Organization</a>, a designation that could allow us to issue tax receipts. (We are appealing both of these CRA decisions.)</p>
<p>Any donations received will go directly to building our editorial capacity. We want to hire more editors to work with more academics to produce more stories. As traditional media in Canada shrinks, we want to grow. We hope you can help.</p>
<p>Do you have any questions about why we’re doing our first donation campaign? Feel free to <a href="mailto:scott.white@theconversation.com">drop me an email</a>. </p>
<p>If you are able to, we would greatly appreciate if you would <a href="https://donate.theconversation.com/ca?utm_campaign=2022+yearend+fundraising&utm_medium=scottarticle&utm_source=web">make a contribution by clicking here</a> or on the banner below.</p>
<p><a href="https://donate.theconversation.com/ca?utm_campaign=2022+yearend+fundraising&utm_medium=scottarticle&utm_source=web"><img src="https://images.theconversation.com/files/499359/original/file-20221206-10480-ashu5x.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=216&fit=crop&dpr=2" width="100%" alt="Back the facts. Donate to The Conversation."></a></p>
For the first time, we are asking readers if they can help support our mission to share knowledge in order to inform decisions.Scott White, CEO | Editor-in-Chief, The Conversation CanadaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1951702022-12-05T13:25:56Z2022-12-05T13:25:56ZHow fake foreign news fed political fervor and led to the American Revolution<figure><img src="https://images.theconversation.com/files/498563/original/file-20221201-20-3xspsw.jpeg?ixlib=rb-1.1.0&rect=25%2C25%2C5683%2C3714&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">An 1877 print called 'Concord - The First Blow For Liberty,' showing American patriots going off to fight the British on April 19, 1775.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/concord-the-first-blow-for-liberty-american-patriots-going-news-photo/1151166237?phrase=American%20Revolution%20Patriots&adppopup=true">Print Collector/Hulton Archive/Getty Images</a></span></figcaption></figure><p>Misinformation is often at the root of political extremism. During the <a href="https://www.nytimes.com/interactive/2022/11/09/us/politics/election-misinformation-midterms-results.html">2022 United States midterm election, some</a> of the most radical politicians in the Republican Party were fueled by the unfounded belief that the previous presidential election in 2020 was stolen.</p>
<p>Misinformation as motivation for political action is nothing new. As I explain in my new book, “<a href="https://www.press.jhu.edu/books/title/12848/misinformation-nation">Misinformation Nation</a>,” during the American Revolution, the self-declared “Patriot” faction that led the colonies through a bloody fight to independence was guided by a profoundly mistaken belief. </p>
<p>These Patriots thought that the British government aimed to control the colonies and extract their wealth, but that a supermajority of people in Britain nevertheless sympathized with colonists’ desire for autonomy. They imagined that this pro-American public was being silenced and suppressed by the leadership of the unpopular British government. </p>
<p>The notion that their protests would be supported by a receptive British public became central to American Patriots’ strategies for organizing against the British government. They boycotted, petitioned and even fought the British Empire in hopes that doing so would contribute to a turnover in government. The Patriot colonists believed that millions of disempowered Britons would soon overturn the empire’s illegitimate government.</p>
<p>Yet as many Britons understood, this notion that the Parliamentary leadership was unpopular and illegitimate was far from the truth.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/498567/original/file-20221201-20-c5uz5x.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Three men in colonial dress, one holding a newspaper." src="https://images.theconversation.com/files/498567/original/file-20221201-20-c5uz5x.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/498567/original/file-20221201-20-c5uz5x.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=469&fit=crop&dpr=1 600w, https://images.theconversation.com/files/498567/original/file-20221201-20-c5uz5x.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=469&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/498567/original/file-20221201-20-c5uz5x.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=469&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/498567/original/file-20221201-20-c5uz5x.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=590&fit=crop&dpr=1 754w, https://images.theconversation.com/files/498567/original/file-20221201-20-c5uz5x.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=590&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/498567/original/file-20221201-20-c5uz5x.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=590&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Benjamin Franklin, center, insisted in 1775 that ‘I am persuaded the body of the British people are our friends.’</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/benjamin-franklin-and-associates-at-franklins-printing-news-photo/525372985?phrase=Ben%20Franklin&adppopup=true">GraphicaArtis /Getty Images</a></span>
</figcaption>
</figure>
<h2>Bad sources</h2>
<p>In October 1774, three different reports published in Connecticut Patriot newspapers provided three different estimates of the British public’s favor for them, all of them degrees of overwhelming. While one report suggested that “two out of three” British people favored the colonists against the British government, the others indicated that “three quarters” and “ninety-nine in an hundred” supported the Patriot cause. </p>
<p>A couple of years later, a Massachusetts paper similarly insisted that seven-eighths of the “common people of England … are in favour” of the colonists’ protests against the British leadership.</p>
<p>The Patriots were taking these accounts from newspapers published by the English Whig party, which was out of power. This political opposition published newspapers and sent letters to America insisting that they were the true voice of the people. </p>
<p>Patriot newspapers recycled much of their material from the London opposition press, which provided exactly what they wanted to hear, while ignoring news sources associated with the British government.</p>
<p>Many leading American figures accepted this rumor. Benjamin Franklin <a href="https://founders.archives.gov/documents/Franklin/01-22-02-0136">insisted</a> in 1775 that “I am persuaded the body of the British people are our friends.” </p>
<h2>British support their leaders</h2>
<p><a href="https://www.historyofparliamentonline.org/volume/1754-1790/member/strahan-william-1715-85">A politically prominent London printer named William Strahan</a> often found himself perplexed by American newspapers’ reporting about British politics. </p>
<p>As he explained in letter after letter written to his friend <a href="https://www.britishmuseum.org/collection/term/BIOG218925">David Hall, who printed the Pennsylvania Gazette in Philadelphia</a>, these accounts of the unpopularity of the British leadership – and the popularity of the colonists’ cause – were figments of the opposition’s imagination.</p>
<p>The British government members, he wrote in 1772, not only “stand their Ground,” but “gather strength every day.” The next month, he wrote to Hall that the opposition had “melted away.” He insisted that if the colonists declared independence, “this Country will oppose [the colonists] to the last Extremity.” Because his newspaper was aligned with the Patriot cause, Hall did not republish the letters he received from Strahan and continued to share, instead, sources from the British opposition. </p>
<p>Strahan was correct: The people of Britain broadly supported their leaders in Parliament against the upstart colonists. When voters had a chance to register their dissent against <a href="https://www.britannica.com/biography/Frederick-North-Lord-North-of-Kirtling">Prime Minister Frederick North in an election in 1774</a>, they instead strengthened his hand with a larger majority. </p>
<figure class="align-center ">
<img alt="A man with a white wig and dressed in an 18th century light green suit, sitting with a piece of paper in his hand." src="https://images.theconversation.com/files/498569/original/file-20221201-24-pqpuge.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/498569/original/file-20221201-24-pqpuge.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=755&fit=crop&dpr=1 600w, https://images.theconversation.com/files/498569/original/file-20221201-24-pqpuge.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=755&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/498569/original/file-20221201-24-pqpuge.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=755&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/498569/original/file-20221201-24-pqpuge.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=949&fit=crop&dpr=1 754w, https://images.theconversation.com/files/498569/original/file-20221201-24-pqpuge.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=949&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/498569/original/file-20221201-24-pqpuge.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=949&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">British printer and politician William Strahan, who corresponded with Ben Franklin.</span>
<span class="attribution"><a class="source" href="https://en.wikipedia.org/wiki/William_Strahan_%28publisher%29#/media/File:A_Man_Called_William_Strahan_-_Google_Art_Project.jpg">Wikipedia</a></span>
</figcaption>
</figure>
<h2>‘Wicked factions’</h2>
<p>Yet despite this evidence, up until the moment that they declared independence, the Patriot coalition that drove the American Revolution insisted that it had earned the hearts and minds of the British people. They asserted that Prime Minister North’s unpopular leadership would soon collapse like a sandcastle in a hurricane.</p>
<p>Strahan did his best to correct his American contacts. In 1775, he <a href="https://franklinpapers.org/yale?vol=22&page=143a">explained to his friend Benjamin Franklin</a>, “your Countrymen may have in many Instances mistaken the Voice of Faction for the real Sense of the Nation at large.” </p>
<p>On another occasion, <a href="https://founders.archives.gov/documents/Franklin/01-22-02-0053">Strahan wrote to Franklin</a>, “All our Murmerings and Opposition to Government appear only in our Newspapers.” </p>
<p>Seeking political advantage, the British opposition party attempted to portray the government as illegitimate. But the American Patriots mistook these electioneering tactics for the facts. </p>
<p>This disagreement seems to have driven the two friends apart. By late 1775, with the beginnings of war underway, Strahan broached the subject a final time. With great care, <a href="https://founders.archives.gov/documents/Franklin/01-22-02-0140">he wrote to Franklin</a>, “I shall not trouble you farther upon the Subject than just to tell you once more” that “this unnatural Civil War has been chiefly, if not wholly, occasioned by our wicked Factions” in Britain spreading lies. </p>
<p>Surrounded with news and people who agreed with them, Patriots found it impossible to believe that the sentiments of the British people ran against them. Unchecked by polling or other deliberate efforts to measure public opinion – which did not exist in early America – it was impossible for the Patriots to believe that they were a minority.</p>
<h2>Denial paid dividends</h2>
<p>Today’s <a href="https://www.npr.org/2022/07/05/1109538056/election-deniers-are-spreading-misinformation-nationwide-here-are-4-things-to-kn">election deniers similarly surround themselves with like-minded people</a> and either use polling selectively or dismiss it as illegitimate. They <a href="https://www.american.edu/spa/news/matter-of-fact-measuring-how-much-people-care-about-the-truth-in-politics.cfm">ignore fact-checks from authoritative sources</a> and <a href="https://theconversation.com/coronavirus-responses-highlight-how-humans-are-hardwired-to-dismiss-facts-that-dont-fit-their-worldview-141335">trust only congenial sources</a>. Rejecting election results is undoubtedly dangerous, and the American public’s repudiation of election deniers in the midterms is a victory for American democracy. </p>
<p>But American democracy has long faced such movements. Indeed, in the late 18th century, denying the legitimacy of an elected government was how American democracy came into being.</p>
<p class="fine-print"><em><span>Jordan Taylor does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Fuel for the American Revolution came from a source familiar today: distorted news reports used to drum up enthusiasm for overthrowing an illegitimate government.Jordan Taylor, Adjunct Instructor in History, Indiana UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1928032022-11-28T13:32:28Z2022-11-28T13:32:28ZHow can you tell if something is true? Here are 3 questions to ask yourself about what you see, hear and read<figure><img src="https://images.theconversation.com/files/494543/original/file-20221109-24-smerzw.jpg?ixlib=rb-1.1.0&rect=0%2C8%2C5751%2C3819&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Emotions can get in the way of knowing what’s true.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/girl-lying-on-bed-at-night-and-using-a-mobile-phone-royalty-free-image/1213627011">Elva Etienne/Moment via Getty Images</a></span></figcaption></figure><figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=293&fit=crop&dpr=1 600w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=293&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=293&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=368&fit=crop&dpr=1 754w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=368&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/281719/original/file-20190628-76743-26slbc.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=368&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><em><a href="https://theconversation.com/us/topics/curious-kids-us-74795">Curious Kids</a> is a series for children of all ages. If you have a question you’d like an expert to answer, send it to <a href="mailto:curiouskidsus@theconversation.com">curiouskidsus@theconversation.com</a>.</em></p>
<hr>
<blockquote>
<p><strong>How can I tell if what I am hearing is true? – Adam, age 10, Maui, Hawaii</strong></p>
</blockquote>
<hr>
<p>Have you ever heard a story so exciting you wanted to share it right away? Something like a shark swimming up a flooded highway?</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1575245113387024384&quot;}"></div></p>
<p>An image that seems to show just that was shared by many people after Hurricane Ian struck Florida in 2022. It was also <a href="https://twitter.com/dalysshanson/status/901949237306515457?s=20&t=l_bsAXkKKIVmp_XwGWGkyw">widely shared after Hurricane Harvey</a> hit Houston, Texas, in 2017. It’s a fake – a flooded highway image combined with one of a great white shark. The fact-checking website <a href="https://www.snopes.com/fact-check/shark-street-hurricane/">Snopes found it circulating as far back as 2011</a> after Hurricane Irene slammed Puerto Rico.</p>
<p>Truth can be tricky to determine. Every message you read, see or hear comes from somewhere and was created by someone and for someone. </p>
<p><a href="https://scholar.google.com/citations?hl=en&user=XV5-t-YAAAAJ">I teach media literacy</a>, which is a way to think about <a href="https://mediaeducationlab.com/what-media-literacy-0">information you get in the messages you receive via media</a>. You might think media means the news, but it also includes TikTok posts, television, books, advertisements and more. </p>
<p>When deciding whether to trust a piece of information, it’s good to start with three main questions – who said it, what evidence did they give and how much do you want to believe it? The last one might seem a little strange, but you’ll see why it’s important by the end.</p>
<h2>Who said it?</h2>
<p>Let’s say you’re really excited about a game that’s coming out later this year. You want to be the first to learn about the new creatures, characters and game modes. So when a YouTube video pops up saying, “GAME COMING TWO WEEKS EARLY,” you can’t wait to watch. But when you click, it’s just a guy making predictions. Do you trust him?</p>
<p>A source is where information comes from. You get information from sources every day – from teachers, parents and friends to people you’ve never met on news sites, fan channels and social media. You probably have sources you trust and ones you don’t. But why? </p>
<p>Would you trust your history teacher to tell you something about history? Probably, because they have a college degree that says they know their stuff. But what if your history teacher told you a fact about science your science teacher said was untrue? You’d probably be better off going with the science teacher for your science facts. Just because a source is trustworthy in one subject doesn’t mean they’re trustworthy in every subject. </p>
<p>Let’s go back to the YouTuber. If you’ve watched him for a while and he’s reliably correct, that’s a good start. At the same time, make sure you don’t confuse his having an opinion with <a href="https://games.abc.net.au/education/interactive-lessons/fact-opinion-analysis/">actually having knowledge</a>. Just because you like a source doesn’t make it trustworthy.</p>
<p>This is true for websites, too. When a site grabs your attention, take a second to check the source at the top. Some fake sites use names that sound trustworthy – like “Boston Tribune” instead of “Boston Globe” or “www.cbs.com.co” instead of “www.cbs.com.” You can click the “About” page to see where they’re really coming from, use <a href="https://www.snopes.com/news/2016/01/14/fake-news-sites/">lists of known fake sites</a> and <a href="https://www.iste.org/explore/Digital-and-media-literacy/Top-10-sites-to-help-students-check-their-facts">other fact-checking resources</a> to avoid getting played.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/490701/original/file-20221019-13-hjntlz.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5991%2C3988&q=45&auto=format&w=1000&fit=clip"><img alt="Boy in baseball cap looking at his phone outside on street corner." src="https://images.theconversation.com/files/490701/original/file-20221019-13-hjntlz.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5991%2C3988&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/490701/original/file-20221019-13-hjntlz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/490701/original/file-20221019-13-hjntlz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/490701/original/file-20221019-13-hjntlz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/490701/original/file-20221019-13-hjntlz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/490701/original/file-20221019-13-hjntlz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/490701/original/file-20221019-13-hjntlz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Don’t believe everything you see.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/asian-teenage-boy-playing-with-cellphone-outdoors-royalty-free-image/1345235981">imtmphoto/iStock/Getty Images Plus</a></span>
</figcaption>
</figure>
<h2>What’s the evidence?</h2>
<p>Evidence is what you show when someone says “prove it!” It’s the details that support what a source is saying. </p>
<p>Primary sources – people or groups who are directly involved with the information – are best. If you want to learn about the release of a new game, the company’s official accounts or channels would be primary sources. </p>
<p>Secondary sources are one step removed – for example, news stories based on primary sources. They aren’t as strong as primary sources but are still useful. For example, most news on <a href="https://www.youtube.com/watch?v=1FLlHr38_bI">gaming site IGN</a> is based on information from game company sources, so it’s a good secondary source. </p>
<p>Can a blogger or YouTuber be a secondary source? If their claims start by referencing primary sources like “Electronic Arts says,” that’s good. But if they start with “I think” or “There’s a lot of buzz,” be careful.</p>
<h2>Do you want to believe it?</h2>
<p>Emotions can get in the way of knowing what’s true. Messages that make you feel strong emotions – especially ones that are funny or make you angry – are the most important ones to check, but <a href="https://www.middleweb.com/34145/how-media-appeals-to-our-emotions/">they’re also the hardest to ignore</a>.</p>
<p>Advertisers know this. Many ads try to be funny or make the things they’re selling look cool because they want you to focus on how you feel rather than what you think. And being older doesn’t mean you’re automatically better at spotting false information: 41% of 18-to-34-year-olds and 44% of adults 65 and older <a href="https://newslit.org/tips-tools/did-you-know-oldest-youngest-fake-news/">admitted to having fallen for a fake news story</a> in a 2018 study. Other research showed adults over 65 were seven times as likely to <a href="https://www.science.org/doi/10.1126/sciadv.aau4586">share articles from fake sites</a> as younger people were.</p>
<p>So if you’ve been eagerly waiting for that new game, and somebody posts a video that says it’s coming out early, your wanting it to be true can make you ignore your common sense – leaving you open to being fooled. </p>
<p>The best question you can ask yourself when you’re thinking about a message is, “Do I want to believe this?” If the answer is yes, it’s a good sign you should slow down and check the source and evidence more closely.</p>
<hr>
<p><em>Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to <a href="mailto:curiouskidsus@theconversation.com">CuriousKidsUS@theconversation.com</a>. Please tell us your name, age and the city where you live.</em></p>
<p><em>And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.</em></p>
<p class="fine-print"><em><span>Bob Britten does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>What’s true and what’s not? An expert in media literacy explains how to evaluate information.Bob Britten, Teaching Associate Professor of Media, West Virginia UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1940902022-11-09T10:38:34Z2022-11-09T10:38:34ZHow maths can help the BBC with impartial reporting<figure><img src="https://images.theconversation.com/files/493808/original/file-20221107-3517-k8cke4.jpg?ixlib=rb-1.1.0&rect=28%2C57%2C4793%2C2928&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/soest-germany-january-14-2018-man-797668045">Lutsenko_Oleksandr/Shutterstock</a></span></figcaption></figure><p>In her keynote MacTaggart lecture at the Edinburgh International Television Festival in August, former BBC presenter Emily Maitlis spoke of her misgivings about the way the UK’s public broadcaster interpreted the corporation’s core value of impartiality.</p>
<blockquote>
<p>It might take our (BBC) producers five minutes to find 60 economists who feared Brexit and five hours to find a sole voice who espoused it. But by the time we went on air, <a href="https://podfollow.com/1640878689/episode/d8256a0645e7365796ff07bbadc35909a87dfa7a/view">we simply had one of each</a>; we presented this unequal effort to our audience as balance. </p>
</blockquote>
<p>Maitlis believes this is not fair reporting. You may <a href="https://www.theguardian.com/media/2022/aug/27/david-dimbleby-defends-bbc-rebuke-of-emily-maitlis-newsnight-polemic">agree or disagree</a>, but either way, her comment raises an interesting question about what impartial reporting looks like. </p>
<p>This is where a maths concept called information theory can guide us. According to information theory, impartiality can be measured by a metric called <a href="https://link.springer.com/chapter/10.1007/978-94-017-2973-4_4">mutual information</a>. Mutual information measures the amount of knowledge about a topic of interest that you can extract from a message. </p>
<p>Suppose that you just landed on Earth from outer space and you want to know which way the sun rises. You switch on the BBC and they interview one person who says the sun rises from the west, then another interviewee says it rises from the east. </p>
<p>The BBC’s broadcast is impartial. But the amount of information contained in the programme about what you want to know (the mutual information) is zero. You are just as confused as before. Perfectly unbiased reporting will have no more effect than listening to white noise.</p>
<p>If the BBC instead reported that “99 out of 100 experts say the sun rises from the east”, the mutual information is one unit of information.</p>
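<p>This toy example can be made concrete in a few lines of code (an illustrative sketch of my own, not from the cited work; the probabilities are assumed for the sun-rise scenario). A balanced broadcast whose message is independent of the truth yields zero mutual information, while a report tracking a 99-to-1 expert consensus yields nearly a full bit:</p>

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits, from a joint distribution given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# The truth is "east" or "west" with equal prior probability.
# "Balanced" broadcast: one voice for each side, so the message is
# independent of the truth -> zero mutual information.
balanced = {(t, m): 0.5 * 0.5
            for t in ("east", "west") for m in ("east", "west")}

# Expert-weighted broadcast: the reported view matches the truth 99% of the time.
weighted = {(t, m): 0.5 * (0.99 if t == m else 0.01)
            for t in ("east", "west") for m in ("east", "west")}

print(round(mutual_information(balanced), 3))   # 0.0
print(round(mutual_information(weighted), 3))   # 0.919
```

<p>Perfect certainty (100 out of 100 experts) would carry exactly one bit; the 99-to-1 consensus falls just short, at about 0.92 bits.</p>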
<h2>A new age</h2>
<p>We live in an age of information war. <a href="https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf">State-sponsored disinformation</a> can interfere with democratic processes such as elections and referendums. People can easily <a href="https://www.theguardian.com/uk-news/2022/oct/12/alex-belfield-jeremy-vine-stalker-bbc-presenter-jail-harassment">spread disinformation with devastating impact</a> on the lives of others, such as stalker and former BBC radio presenter Alex Belfield, whose online harassment made life a misery for fellow presenters Jeremy Vine, Liz Green and many others. He was sentenced to five and a half years in prison in September 2022 for his online stalking. </p>
<p>The development of the internet over the past quarter of a century has exposed us to a volume of information our brains <a href="https://ijoc.org/index.php/ijoc/article/viewFile/1566/743">can’t handle</a>. On the internet, facts carry less significance than the way information is presented. The result is that impartial and fair reporting are no longer the same thing. </p>
<figure class="align-center ">
<img alt="Friends sitting at home watching news and relaxing on a sofa" src="https://images.theconversation.com/files/493810/original/file-20221107-25-bgfzm4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/493810/original/file-20221107-25-bgfzm4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/493810/original/file-20221107-25-bgfzm4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/493810/original/file-20221107-25-bgfzm4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/493810/original/file-20221107-25-bgfzm4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/493810/original/file-20221107-25-bgfzm4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/493810/original/file-20221107-25-bgfzm4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The internet has changed the way we consume news.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/happy-friends-sitting-home-watching-news-1519298672">Gorodenkoff/Shutterstock</a></span>
</figcaption>
</figure>
<p>Information overload has pushed us to adopt views we can believe in with certainty. It is more <a href="https://www.psychologytoday.com/gb/blog/anger-in-the-age-entitlement/202109/the-epidemic-certainty">rewarding to our brains</a> to exist in a state of certainty than uncertainty. Information that <a href="https://www.researchgate.net/publication/11597119_Judgment_Under_Emotional_Certainty_and_Uncertainty_The_Effects_of_Specific_Emotions_on_Information_Processing">plays on our emotions</a> is more likely to give us this feeling of certainty. There is so much information available that we don’t have the capacity to make a reasoned judgement about everything we read or watch.</p>
<p>Rational thinking means doing research and using the information available to make the most rational decision. While I was researching how people’s views change when we digest information, I found rational thinkers have a <a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2022.797904/full">worrying tendency</a>. When people are shown a range of explanations, where at most one is true, they will feel the strongest pull towards information that complements their current beliefs.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/DMoItVbkGfw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>The equation in my work looked further into how these pulls change over time as we digest information and identified something that had previously been overlooked. It shows that if people have strong but misguided views on a matter, then – even if they are gradually exposed to the truth – they won’t change their views, possibly for decades, unless they experience a dramatic event.</p>
<p>People have to unconvince themselves of the original view before they can be re-convinced by the true explanation. This process forces you to experience increased uncertainty before it decreases again. Mathematics shows that <a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2022.797904/full">rational thinkers</a> do not like this.</p>
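<p>That dynamic can be caricatured with a simple Bayesian odds update (a hypothetical sketch under assumed numbers, not the equation from the cited paper): a strongly held but misguided belief nudged by weak evidence takes dozens of exposures to cross the halfway mark, while a single dramatic, unambiguous event flips it at once:</p>

```python
def update(prior, likelihood_ratio):
    """One Bayesian update of P(misguided view is true), where the
    evidence favours the true explanation by the given likelihood ratio."""
    odds = prior / (1 - prior)
    odds /= likelihood_ratio
    return odds / (1 + odds)

# A strongly held but misguided belief, exposed only to gentle corrections.
p, gentle_steps = 0.999, 0
while p > 0.5:
    p = update(p, 1.1)        # weak evidence for the truth each time
    gentle_steps += 1

# The same belief hit by one dramatic, unambiguous event.
p_dramatic = update(0.999, 1000)

print(gentle_steps)           # 73 gentle exposures before the belief flips
print(p_dramatic < 0.5)       # True after a single dramatic update
```

<p>Under these assumed numbers it takes 73 gentle corrections before the belief dips below 50%, which is the “increased uncertainty” phase that rational thinkers resist; one overwhelming event does the same job in a single step.</p>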
<h2>The solution</h2>
<p>Sometimes reality catches up and bites you. This might take the form of a <a href="https://www.bbc.co.uk/news/world-us-canada-63251824">courtroom ruling</a> on the dissemination of disinformation. For example, conspiracy theorist Alex Jones was handed a US$965 million (£860 million) court judgment in October 2022. People can choose to share <a href="https://www.globalwitness.org/en/blog/what-climate-disinformation/">climate change denial</a> content. But there is no judge who can repair the damage this type of disinformation does. </p>
<p>The BBC is in a difficult position. Like the monarch, the BBC is <a href="https://www.globalwitness.org/en/blog/what-climate-disinformation/">legally bound</a> to produce neutral broadcasts.</p>
<p>Impartiality is <a href="https://dictionary.cambridge.org/dictionary/english/impartial">underpinned by two or more</a> opposing opinions. So consider the BBC’s reporting of the vaccination programme during the (ongoing) COVID pandemic. A small but vocal proportion of the population opposes the vaccination programme, but the BBC did not present their views on an equal footing with those of medical experts. </p>
<p>This is what the BBC should do on a wider scale – even if there are political ramifications. In the age of information overload, impartial reporting carries little value. It is time for the BBC to shelve the concept and replace it with a public interest approach that is led by science. While experts do get things wrong, a consensus view of an overwhelming majority of experts, such as those of the 60 economists who feared Brexit, has a high likelihood of getting the facts right.</p>
<p class="fine-print"><em><span>Dorje C. Brody does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The disinformation age is changing what it means to produce fair or balanced reporting.Dorje C. Brody, Professor of Mathematics, University of SurreyLicensed as Creative Commons – attribution, no derivatives.