tag:theconversation.com,2011:/africa/topics/misinformation-28847/articlesMisinformation – The Conversation2024-03-19T19:10:23Ztag:theconversation.com,2011:article/2249292024-03-19T19:10:23Z2024-03-19T19:10:23ZThe Online Harms Act doesn’t go far enough to protect democracy in Canada<figure><img src="https://images.theconversation.com/files/582655/original/file-20240318-20-5g48qj.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5648%2C3762&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The Online Harms Act aims to protect Canadians from harmful content.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>The Liberal government’s recent proposal for regulating social media platforms, <a href="https://www.canada.ca/en/canadian-heritage/services/online-harms.html">the Online Harms Act (Bill C-63)</a>, comes as the final act in a <a href="https://nationalpost.com/news/politics/the-first-100-days-major-battle-over-free-speech-internet-regulation-looms-when-parliament-returns">promised trilogy of bills</a> aimed at bringing some order to the digital world. </p>
<p>After contentious <a href="https://www.canada.ca/en/canadian-heritage/services/online-news.html">attempts to address the fallout from the Online News Act</a> and the <a href="https://www.canada.ca/en/radio-television-telecommunications/news/2023/09/crtc-takes-major-step-forward-to-modernize-canadas-broadcasting-framework.html">threat from online streaming platforms to Canadian content</a>, this final bill attempts to identify and regulate harmful content. The Online Harms Act follows <a href="https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en">Europe</a>, the <a href="https://www.legislation.gov.uk/ukpga/2023/50/contents/enacted">United Kingdom</a> and <a href="https://www.esafety.gov.au/newsroom/whats-on/online-safety-act">Australia</a> in setting up a new regulator in an attempt to address the spread of what is considered harmful content.</p>
<p>The idea that such efforts are necessary is not controversial — content that sexually exploits children, for instance, has already been <a href="https://calgarysun.com/news/crime/edmonton-man-who-lured-92-children-into-sending-child-porn-sentenced-to-18-years-in-prison">a target for law enforcement</a>, and hate speech has been illegal for decades in <a href="https://doi.org/10.1080/17577632.2022.2092261">most industrialized democracies</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/9HsPnK9HMT0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">CBC News looks at the Online Harms Act.</span></figcaption>
</figure>
<h2>Platform responsibility</h2>
<p>Online harms laws are based on the idea of “<a href="https://cyberlaw.stanford.edu/focus-areas/intermediary-liability">intermediary liability</a>”: making the platforms legally responsible when users use them to distribute content that breaks laws. </p>
<p>Under the Online Harms Act, platforms will be required to promptly remove two forms of content — that which “sexually victimizes a child or revictimizes a survivor” and “intimate images posted without consent” — or face large fines. </p>
<p>But it also includes less strict measures to deal with other forms of harmful content, including promotion of terrorism or genocide, incitement to violence or hate speech. Platforms will be required to develop, and make public, plans to “mitigate the risk that users will be exposed to harmful content on the services,” and to submit digital safety plans to the Digital Safety Commission of Canada.</p>
<h2>Crime and punishment</h2>
<p>There are also <a href="https://www.cbc.ca/news/politics/liberals-table-online-harms-legislation-1.7126080">new criminal offences and penalties</a> for users who upload these forms of content. These provisions have been the subject of <a href="https://www.theglobeandmail.com/politics/article-justice-minister-defends-house-arrest-power-for-people-feared-to/">much of the debate over the bill</a>. </p>
<p>Many civil libertarians argue that they <a href="https://www.michaelgeist.ca/2024/02/why-the-criminal-code-and-human-rights-act-provisions-should-be-removed-from-the-online-harms-act/">go too far</a>, while advocates for marginalized groups believe that they are <a href="https://www.thestar.com/opinion/contributors/finally-a-tool-to-combat-online-hate/article_41ec2db0-d664-11ee-b404-bf5272436be5.html">long overdue</a>. </p>
<p>But much of the debate over these specific details misses a deeper failing of the bill, which derives from the way the idea of “online harm” is understood.</p>
<h2>‘Lawful but awful’</h2>
<p>For much of the last decade, digital media scholars have also been directing attention to different ways in which platform communication <a href="https://doi.org/10.1386/jdmp_00061_1">ought to be considered harmful</a>. The definition of harmful content in Bill C-63 focuses on harms that are experienced by users when they encounter particular forms of content posted by others. </p>
<p>But platforms aren’t merely empty spaces for users to send messages to other users — <a href="https://arstechnica.com/tech-policy/2020/12/the-christchurch-shooter-and-youtubes-radicalization-trap/">they play an active role</a> in shaping the communication that takes place, determining how messages are combined and sorted, and how their distribution is prioritized and limited. </p>
<p>For this reason, <a href="https://heinonline.org/HOL/Page?handle=hein.journals/jtelhtel13&id=227&collection=journals&index=">algorithms that amplify or suppress particular kinds of messages should also be seen as a source of harm</a>.</p>
<p>This is often understood as the reason why fake news or hyper-partisan political commentary is so problematic on platforms. Even perfectly legal communication — what is called “<a href="https://www.cbc.ca/radio/thecurrent/online-harms-act-arif-virani-1.7127037">lawful but awful</a>” content — can contribute to a pattern of serious harm. </p>
<p>One person <a href="https://theconversation.com/close-to-home-the-canadian-far-right-covid-19-and-social-media-178714">denying the scientific consensus on vaccines</a>, promoting <a href="https://theconversation.com/qanon-is-spreading-outside-the-us-a-conspiracy-theory-expert-explains-what-that-could-mean-198272">entirely baseless conspiracy theories about political figures</a> or <a href="https://www.npr.org/2020/10/24/927300432/robocalls-rumors-and-emails-last-minute-election-disinformation-floods-voters">discouraging people from voting</a>, might not be “harmful” in the sense that Bill C-63 defines the concept. </p>
<p>But when social media algorithms ensure that many users don’t see counter-evidence from outside their “<a href="https://www.penguinrandomhouse.com/books/309214/the-filter-bubble-by-eli-pariser/">filter bubble</a>,” the dangers are real. This is also true of any number of other kinds of <a href="https://academic.oup.com/book/26406?login=false">platformed deception</a>, such as <a href="https://www.washingtonpost.com/technology/2023/12/17/ai-fake-news-misinformation/">AI-generated deep fake videos</a> of political candidates.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a face with a long nose, at the end is a mask of a face with a regular nose" src="https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=504&fit=crop&dpr=1 754w, https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=504&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/582654/original/file-20240318-26-nc5uy0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=504&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Misinformation, such as deepfakes of politicians, can spread unregulated on online platforms.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Democracy at risk</h2>
<p>Democracy relies on open and rational deliberation. The conditions for that kind of communication can be degraded by the way that algorithms operate. That algorithms are operated by private, for-profit corporations that seek to maximize “engagement” makes the problem even worse; this creates an <a href="https://www.theguardian.com/technology/2021/oct/22/facebook-whistleblower-hate-speech-illegal-report">incentive for content that provokes outrage</a> and further <a href="https://doi.org/10.1177/14614448231161880">polarizes political opinion</a>.</p>
<p>Exactly how algorithms should be regulated is not a simple question. Some of the provisions in Bill C-63 might be a step in the right direction: requirements for risk mitigation plans, an ombudsperson who can help the public submit complaints about platforms to a regulator and obligations to provide information about content. And importantly, all of this can be done without unnecessarily violating users’ freedom of expression.</p>
<p>But a more specific legal obligation on platforms to deprioritize content that is clearly false — such as misleading public health messaging or misinformation about elections — would be necessary to keep them from deepening online polarization and promoting <a href="https://www.uwestminsterpress.co.uk/site/books/e/10.16997/book30/">anti-democratic populism</a>. </p>
<p>While the Online Harms Act might protect individuals from being exposed to specific kinds of content, protecting the democratic nature of our society will require a more robust set of regulations than what has been proposed.</p>
<p class="fine-print"><em><span>Derek Hrynyshyn does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Algorithms that amplify or suppress particular kinds of messages should be seen as a source of harm.Derek Hrynyshyn, Contract Faculty, Communication & Media Studies, York University, CanadaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2247862024-03-19T18:17:34Z2024-03-19T18:17:34ZDeepfakes are still new, but 2024 could be the year they have an impact on elections<figure><img src="https://images.theconversation.com/files/580733/original/file-20240308-30-tf2e5r.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C3865%2C2582&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/deep-fake-ai-face-swap-video-2376208005">Tero Vesalainen / Shutterstock</a></span></figcaption></figure><p>Disinformation caught many people off guard during the <a href="https://www.europarl.europa.eu/RegData/etudes/ATAG/2018/620230/EPRS_ATA(2018)620230_EN.pdf">2016 Brexit referendum</a> and <a href="https://www.nature.com/articles/s41467-018-07761-2">US presidential election</a>. Since then, a mini-industry has developed to analyse and counter it.</p>
<p>Yet despite that, we have entered 2024 – a year of <a href="https://en.wikipedia.org/wiki/List_of_elections_in_2024">more than 40 elections</a> worldwide – more fearful than ever about disinformation. In many ways, the problem is more challenging than it was in 2016. </p>
<p>Advances in technology since then are one reason for that, in particular the development of synthetic media, otherwise known as deepfakes. It is increasingly difficult to know whether a piece of media has been fabricated by a computer or records something that really happened. </p>
<p>We’ve yet to really understand how big an impact deepfakes could have on elections. But a number of examples point the way to how they may be used. This may be the year when lots of mistakes are made and lessons learned.</p>
<p>Since the disinformation propagated around the votes in 2016, researchers have produced countless books and papers, journalists have retrained as <a href="https://www.poynter.org/fact-checking/2022/391-global-fact-checking-outlets-slow-growth-2022/">fact checking and verification experts</a>, and governments have convened <a href="https://www.igcd.org/">“grand committees”</a> and established centres of excellence. Additionally, <a href="https://royalsociety.org/blog/2022/03/how-libraries-can-fight-disinformation/">libraries</a> have become the focus of resilience-building strategies and a range of new bodies has emerged to provide analysis, training, and resources.</p>
<p>This activity hasn’t been fruitless. We now have a more nuanced understanding of disinformation as a social, psychological, political, and technological phenomenon. Efforts to support public interest journalism and the cultivation of critical thinking through education are also promising. Most notably, major tech companies <a href="https://www.reuters.com/technology/meta-set-up-team-counter-disinformation-ai-abuse-eu-elections-2024-02-26/">no longer pretend to be neutral platforms</a>. </p>
<p>In the meantime, policymakers have rediscovered their duty to <a href="https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en">regulate technology</a> in the public interest. </p>
<h2>AI and synthetic media</h2>
<p>Regulatory discussions have added urgency now that AI tools to create synthetic media – media partially or fully generated by computers – have gone mainstream. These deepfakes can be used to imitate the voice and appearance of real people. The results are impressively realistic and do not require much skill or many resources to produce. </p>
<p>This is the culmination of the wider digital revolution whereby successive technologies have made high-quality content production accessible to almost anyone. In contrast, regulatory structures and institutional standards for media were mostly designed in an era when only a minority of professionals had access to production.</p>
<p>Political deepfakes can take different forms. The recent Indonesian election saw a <a href="https://edition.cnn.com/2024/02/12/asia/suharto-deepfake-ai-scam-indonesia-election-hnk-intl/index.html">deepfake video “resurrecting” the late President Suharto</a>. This was ostensibly to encourage people to vote, but it was accused of being propaganda because it was produced by the political party that he led.</p>
<p>Perhaps a more obvious use of deepfakes is to spread lies about political candidates. For example, <a href="https://ipi.media/slovakia-deepfake-audio-of-dennik-n-journalist-offers-worrying-example-of-ai-abuse/">fake AI-generated audio</a> released days before Slovakia’s parliamentary election in September 2023 attempted to portray the leader of Progressive Slovakia, Michal Šimečka, as having discussed with a journalist how to rig the vote.</p>
<p>Aside from the obvious effort to undermine a political party, it is worth noting how this deepfake, whose origin was unclear, exemplifies wider efforts to scapegoat minorities and demonise mainstream journalism. </p>
<p>Fortunately, in this instance, the audio was not high-quality, which made it quicker and easier for fact checkers to confirm its inauthenticity. However, the integrity of democratic elections cannot rely on the ineptitude of the fakers.</p>
<p>Deepfake audio technology is at a level of <a href="https://www.scientificamerican.com/article/ai-audio-deepfakes-are-quickly-outpacing-detection/">sophistication that makes detection difficult</a>. Deepfake videos still struggle with certain human features, such as the representation of hands, but the technology is still young.</p>
<p>It is also important to note the Slovakian audio was released during the final days of the election campaign. This is a prime time to launch disinformation and manipulation attacks because the targets and independent journalists have their hands full and therefore have little time to respond.</p>
<p>If it is also expensive, time-consuming, and difficult to investigate deepfakes, then it’s not clear how electoral commissions, political candidates, the media, or indeed the electorate should respond when potential cases arise. After all, a false accusation from a deepfake can be as troubling as the actual deepfake.</p>
<p>Another way deepfakes could be used to affect elections can be seen in the way they are already widely used to <a href="https://www.euronews.com/next/2023/04/22/a-lifelong-sentence-the-women-trapped-in-a-deepfake-porn-hell">harass and abuse</a> women and girls. This kind of sexual harassment fits an <a href="https://theconversation.com/online-abuse-could-drive-women-out-of-political-life-the-time-to-act-is-now-214301">existing pattern</a> of abuse that limits political participation by women. </p>
<h2>Questioning electoral integrity</h2>
<p>The difficulty is that it’s not yet clear exactly what impact deepfakes could have on elections. It’s very possible we could see other, similar uses of deepfakes in upcoming elections this year. And we could even see deepfakes used in ways not yet conceived of.</p>
<p>But it’s also worth remembering that not all disinformation is high-tech. There are other ways to attack democracy. Rumours and conspiracy theories about the integrity of the electoral process are an insidious trend. <a href="https://www.ft.com/content/1abd7fde-20b4-11e9-a46f-08f9738d6b2b">Electoral fraud is a global concern</a> given that many countries are only democracies in name. </p>
<p>Clearly, social media platforms enable and drive disinformation in many ways, but it is a mistake to assume the problem begins and ends online. One way to think about the challenge of disinformation during upcoming elections is to think about the strength of the systems that are supposed to uphold democracy. </p>
<p>Is there an independent media system capable of providing high quality investigations in the public interest? Are there independent electoral administrators and bodies? Are there independent courts to adjudicate if necessary? </p>
<p>And is there sufficient commitment to democratic values over self-interest amongst politicians and political parties? In this year of elections, we may well find out the answers to these questions.</p>
<p class="fine-print"><em><span>Eileen Culloty coordinates the Ireland Hub of the European Digital Media Observatory, which is part-funded by the European Commission to undertake fact-checks, analysis, and media literacy.</span></em></p>As technology has advanced, AI-generated deepfakes have become more convincing.Eileen Culloty, Assistant Professor, School of Communications, Dublin City UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2241192024-03-17T12:56:19Z2024-03-17T12:56:19ZOnline wellness content: 3 ways to tell evidence-based health information from pseudoscience<figure><img src="https://images.theconversation.com/files/582218/original/file-20240315-20-1ijga2.jpg?ixlib=rb-1.1.0&rect=374%2C66%2C6941%2C4649&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Health information is increasingly being shared online, and often the borders between legitimate health expertise and pseudoscience aren't clear.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>“I drink borax!” proclaims the smiling TikToker. Holding up a box of the laundry additive, she rhymes off a list of its supposed health benefits: “Balances testosterone and estrogen. It’s a powerhouse anti-inflammatory…. It’s amazing for arthritis, osteoporosis…. And obviously it’s great for your gut health.” </p>
<p>Videos like these <a href="https://globalnews.ca/news/9860780/borax-drinking-tiktok-trend/">prompted health authorities to warn the public</a> about the dangers of ingesting this toxic detergent, and to steer people away from viral messaging that promotes unsubstantiated and medically dangerous health claims.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-new-tiktok-trend-has-people-drinking-toxic-borax-an-expert-explains-the-risks-and-how-to-read-product-labels-210278">A new TikTok trend has people drinking toxic borax. An expert explains the risks – and how to read product labels</a>
</strong>
</em>
</p>
<hr>
<p>Health information is increasingly being shared online, and often the borders between legitimate health expertise and pseudoscience aren’t clear. While the internet can be a valuable and accessible way to learn about health, it’s also a place rife with disinformation and grift, as unscrupulous <a href="https://doi.org/10.1249/FIT.0000000000000829">influencers exploit</a> people’s fears about their bodies. </p>
<h2>Evidence and influencers</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Collage of quotes about drinking borax" src="https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Some TikTokers claimed drinking borax had health benefits. In fact, borax is toxic and shouldn’t be ingested.</span>
<span class="attribution"><span class="source">(Michelle Cohen)</span></span>
</figcaption>
</figure>
<p>In my medical practice, I can usually track online wellness trends, such as a patient refusing a medication because of online claims — many of which are false — that it <a href="https://thefeelgoodagaininstitute.com/medications-that-lower-testosterone/">lowers testosterone</a>, or the several months when it seemed everyone was <a href="https://theconversation.com/turmeric-heres-how-it-actually-measures-up-to-health-claims-205613">taking turmeric</a> for joint pain, or the patients who request an <a href="https://theconversation.com/ivermectin-whether-formulated-for-humans-or-horses-is-not-a-treatment-for-covid-19-167340">ivermectin prescription</a> in case they catch COVID. </p>
<p>So how does someone who simply wants to learn more about the human body sift through the information? How to separate bad-faith grift from good advice? </p>
<p>Wellness influencers tap into a truth about how we process information: it’s <a href="https://lab.research.sickkids.ca/anthony/wp-content/uploads/sites/75/2019/07/Health-misinformation-and-the-power-of-narrative-messaging-in-the-public-sphere..pdf">more trustworthy</a> when it comes from a person we feel like we know. That’s why a charismatic personality’s Instagram account that uses <a href="https://doi.org/10.1177/1440783319846188">intimate stories</a> to promote <a href="https://digitalcommons.liberty.edu/doctoral/4920/">parasocial attachment</a> — the sense of being part of a community — is more memorable than a website offering dry recitations of evidence.</p>
<p>But as social media has become ubiquitous, <a href="https://www.instagram.com/daniellebelardomd/?hl=en">health experts</a> have caught on that sharing their personal side alongside reliable advice can be a good use of their platform. At first glance, these two groups may seem similar, but the following tips can help determine if the person posting health advice is actually knowledgeable on the topic:</p>
<h2>1) Are they selling something?</h2>
<p>Rarely do popular wellness influencers post out of the goodness of their hearts. Almost invariably these accounts are <a href="https://www.conspirituality.net/transmissions/the-wellness-grift-of-jp-sears">trying to profit</a> from the <a href="https://doi.org/10.1002/ace.20486">virality of their content</a>. </p>
<p>Whether it’s a <a href="https://doi.org/10.1080%2F08998280.2022.2124767">supplement store</a>, a <a href="https://www.independent.co.uk/news/health/social-media-weight-loss-diet-twitter-influencers-bloggers-glasgow-university-a8891971.html">diet book</a>, a subscription to a lifestyle community or a Masterclass series, the end goal is the same: transform social media influence into sales. Gushing over life-changing benefits from something the promoter is selling should always prompt skepticism. </p>
<p>Some legitimate health experts also sell advice, usually in the form of newsletters, books or <a href="https://www.bodyofevidence.ca/">podcasts</a>, and this is worth keeping in mind. However, there’s a big difference between selling a subscription to a <a href="https://vajenda.substack.com/">health newsletter</a> that discusses evidence and promoting your own supplement shop, where your financial motives shape how you present the information.</p>
<h2>2) What are the boundaries of their expertise?</h2>
<p>True expertise in a subject requires years of dedicated study and practice. That’s why people are rarely experts in more than one or two domains, and no one is a pan-expert on everything. </p>
<p>If a <a href="https://doi.org/10.1080/17439884.2021.2006691">wellness influencer</a> promotes themselves as erudite on all health topics, that’s actually an excellent indication of their lack of knowledge. A real health expert knows the limitations of their knowledge and can call on others’ expertise when needed. So the podcast host who opines on every health issue is substantially less worthwhile to listen to than the podcast host who brings on guest experts for topics outside their scope. </p>
<h2>3) How do they talk about science?</h2>
<p>Science is a process of discovery, not a static philosophy, so scientists emphasize talking about current evidence rather than “truth”, which is more of a faith-based concept. </p>
<p>If someone wants to post about their personal wellness philosophy or their spiritual journey and how it makes them feel, that’s fine. But dropping in biology jargon without explanation or name-checking one or two questionable studies without thorough discussion isn’t a meaningful way to engage with the evidence on a health topic. </p>
<p>Science-based information should acknowledge where data are uncertain and where more research is needed. Using the pretext of science to lend credence to a personal “truth” is a <a href="https://www.mcgill.ca/oss/article/critical-thinking-pseudoscience/whats-trending-world-pseudoscience">form of pseudoscience</a> and should raise red flags.</p>
<p>These three principles are a good framework for deciding whether an influencer’s health content is worth consuming or whether they’re simply trying to sell a new supplement or spread viral disinformation about something like borax. </p>
<p>As online health information becomes easier to find (or harder to avoid), this framework can help people quickly scan a wellness influencer’s profile and make a more informed decision about engaging with their content. This is an important type of media literacy that anyone spending time online should cultivate — for the sake of their health.</p>
<p class="fine-print"><em><span>Michelle Cohen does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>How do we distinguish between valuable information from legitimate health experts, and pseudoscientific nonsense from unscrupulous wellness influencers?Michelle Cohen, Adjunct Assistant Professor, Department of Family Medicine, Queen's University, OntarioLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2258712024-03-15T21:14:11Z2024-03-15T21:14:11ZDoes TikTok pose a security threat to Canadians?<figure><img src="https://images.theconversation.com/files/582280/original/file-20240315-30-xv5fae.jpg?ixlib=rb-1.1.0&rect=0%2C65%2C5472%2C3571&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">TikTok poses no more of a threat to democracy than other social media platforms.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Concerns about the threats TikTok poses to privacy and liberty were raised again, as a bill to divest TikTok of its Chinese ownership or ban it <a href="https://www.cbc.ca/player/play/2318307395724">gathered steam</a> in the United States Congress. And Canada’s federal government revealed that it began investigating months ago whether foreign control of the app <a href="https://www.cbc.ca/news/politics/tiktok-national-security-review-1.7143574">poses a threat to national security</a>. </p>
<p>Government officials see TikTok posing a threat to Canadians in two ways: violating our personal privacy by collecting too much data, and sabotaging our democracy through misinformation and manipulation.</p>
<p>Are these threats theoretical or real? And is there any proof supporting the concerns that the Chinese government exerts control over <a href="https://www.bloomberg.com/profile/company/1774397D:CH">ByteDance Ltd.</a>, the Beijing-based company that owns TikTok?</p>
<p>There is good reason to believe TikTok may pose a threat to our privacy, but not to our democracy. The platform may collect too much data, but fears that China will use TikTok to misinform or manipulate us for political purposes are misplaced. </p>
<p>China doesn’t need to control TikTok to influence our elections; it can do that quite easily without it. Canada’s ongoing efforts to minimize the national security threat TikTok poses won’t neutralize the broader threat social media poses to democracy.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/CETjQv8aqw0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">CBC News looks into the potential ban of TikTok.</span></figcaption>
</figure>
<h2>Privacy concerns are real</h2>
<p>TikTok does, however, pose a threat to our privacy. European regulators have <a href="https://www.priv.gc.ca/en/privacy-and-transparency-at-the-opc/proactive-disclosure/opc-parl-bp/ethi_20231025/is_20231025/">fined TikTok for collecting data</a> from users too young to provide valid consent, for misusing data and for “nudging” users toward privacy-invading conduct through default settings. </p>
<p>Class actions in Canada and the U.S. have made a <a href="https://www.priv.gc.ca/en/privacy-and-transparency-at-the-opc/proactive-disclosure/opc-parl-bp/ethi_20231025/is_20231025/">similar case</a>.</p>
<p>Cybersecurity experts have <a href="https://www.cbc.ca/news/canada/canada-tiktok-western-scrutiny-1.6760037">warned of how invasive the app can be</a>, as it tracks user location, incoming messages and which networks a user accessed. Permissions for this are buried deep in the app’s settings, but <a href="https://www.forbes.com/sites/emilsayegh/2022/11/09/tiktok-users-are-bleeding-data/">most users are unaware</a> or don’t bother to check.</p>
<p>In late March, Canada’s Privacy Commissioner and three provincial counterparts are set to <a href="https://www.priv.gc.ca/en/opc-news/news-and-announcements/2023/an_230223/">table a report on an investigation</a> into how TikTok gathers and uses our data. The commissioners will most likely recommend following Europe’s lead: legislation requiring greater transparency about the data TikTok collects and further restrictions on how the company can use it.</p>
<h2>Fears that China will interfere</h2>
<p>On March 1, the federal government issued a new <a href="https://www.canada.ca/en/innovation-science-economic-development/news/2024/03/canada-strengthens-guidelines-on-foreign-investments-in-the-interactive-digital-media-sector.html">policy that foreign-owned platforms</a> like TikTok would face “enhanced scrutiny” under powers in the <a href="https://laws-lois.justice.gc.ca/eng/acts/I-21.8/index.html">Investment Canada Act</a>. Under the act, the government can impose conditions on foreign investors or companies where there are “reasonable grounds to believe” their involvement in Canada “could be injurious to national security.”</p>
<p>Cabinet ministers were <a href="https://www.canadianlawyermag.com/practice-areas/crossborder/federal-government-issues-additional-directions-for-interactive-digital-media/384566">clear and direct about their concerns</a>: “hostile state-sponsored or state-influenced actors may try to leverage foreign investments in the interactive digital media sector to spread disinformation and manipulate information.”</p>
<p><a href="https://www.cbc.ca/news/canada/canada-tiktok-western-scrutiny-1.6760037">Twenty-six per cent of Canadians now use TikTok</a>. Could the Canadian subsidiary of TikTok take measures to prevent the Chinese government from engaging in misinformation or manipulation?</p>
<h2>Why concerns are misplaced</h2>
<p>In February, the U.S. Director of National Intelligence issued a <a href="https://www.dni.gov/files/ODNI/documents/assessments/ATA-2024-Unclassified-Report.pdf">threat assessment</a> that revealed TikTok accounts run by a “propaganda arm” of the Chinese government “targeted candidates from both political parties during the U.S. midterm election cycle in 2022.”</p>
<p>But as one commentator noted in the <em>New York Times</em>, the National Intelligence report did not say <a href="https://www.nytimes.com/2024/03/14/opinion/tiktok-ban-house-vote.html">whether TikTok’s algorithms promoted these nefarious accounts</a>. China may have used TikTok to misinform and manipulate, but it didn’t need to do so by directing ByteDance.</p>
<p>A 2021 study by the University of Toronto’s Citizen Lab dug deep into TikTok’s code and data collection abilities; its findings support the view that <a href="https://citizenlab.ca/2021/03/tiktok-vs-douyin-security-privacy-analysis/">TikTok is no more invasive</a> than Facebook, Instagram or other social media platforms. </p>
<p>The study found that both TikTok and its Chinese version, Douyin, “do not appear to exhibit overtly malicious behavior similar to those exhibited by malware.” And although Douyin contains “features that raise privacy and security concerns, such as dynamic code loading and server-side search censorship,” it found “TikTok does not contain these features.”</p>
<p>This doesn’t mean that China is not able to direct ByteDance to do things that could harm Canadians. But it does support the view that China doesn’t have to bother with ByteDance: an agent of the Chinese government (or any other adversary) can readily spread misinformation by posing as an ordinary user.</p>
<p>In short, fears about Chinese interference in Canadian and American elections may be warranted. But just as Russia may have used fake accounts on Facebook to interfere in the <a href="https://www.intelligence.senate.gov/sites/default/files/documents/Report_Volume2.pdf">2016 U.S. presidential election</a>, China can misinform and manipulate us by using any and all social media against us. </p>
<p>This points to the real threat to our democracy: social media we can’t control.</p>
<p class="fine-print"><em><span>Robert Diab does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.</span></em></p>
<p>About 26 per cent of Canadians use TikTok. Regulating the app in Canada might be a better approach to avoiding external political influence.</p>
<p class="fine-print">Robert Diab, Professor, Faculty of Law, Thompson Rivers University. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Where’s Kate? Speculation about the ‘missing’ princess is proof the Palace’s media playbook needs a re-write</h1>
<p class="fine-print">Published 2024-03-13.</p>
<p>Outside of two <a href="https://www.tmz.com/2024/03/04/kate-middleton-seen-spotted-public-first-time-mystery-hospitalization/">grainy</a> <a href="https://www.dailymail.co.uk/news/article-13184069/kate-middleton-photo-windsor-castle-prince-william-palace-royal-expert-theory.html">paparazzi</a> photos, Catherine, Princess of Wales, hasn’t been seen in public since Christmas Day 2023, when she <a href="https://www.usmagazine.com/celebrity-news/pictures/royal-family-attends-2023-sandringham-christmas-church-service/">attended a church service</a> at Sandringham.</p>
<p>In <a href="https://www.salon.com/2024/03/11/the-kate-middleton-mystery-a-complete-timeline-of-the-princess-of-wales-royal-family-pr-disaster/">January</a>, <a href="https://time.com/6899819/kate-middleton-appearances-surgery/">Kensington Palace announced</a> Kate Middleton (as she’s more popularly known) was to undergo “planned abdominal surgery” and wasn’t expected to return to public duties until after Easter.</p>
<p>Social media have been awash with speculation about Catherine’s health and whereabouts. Limited information has dripped out of Kensington Palace, inadvertently intensifying scrutiny. The information void has prompted onlookers to fill the space with their own theories.</p>
<p>As scrutiny reaches a fever pitch, we ask: why is the Palace’s typical media playbook no longer working? </p>
<h2>Not so ‘unprecedented’</h2>
<p>This isn’t the first time rumours about the British royal family have attracted public interest.</p>
<p>Anne Boleyn (circa 1501-1536), the second of six wives of Henry VIII, was executed after being found guilty of adultery, incest and treason. While historians differ in their interpretation of <a href="https://www.theguardian.com/books/2010/feb/23/anne-boleyn-guilty-adultery-biography-claims">Anne’s guilt or innocence</a>, it’s clear the charges were at least partially the result of gossip <a href="https://www.hrp.org.uk/tower-of-london/history-and-stories/anne-boleyn/#gs.5qeecd">instigated by rival factions</a> seeking power at the English court.</p>
<p>The long-reigning Queen Victoria (1819-1901) was widely regarded as a <a href="https://www.hrp.org.uk/kensington-palace/history-and-stories/queen-victoria/#gs.5qc1ar">loyal wife and mother</a>. Yet she too became the target of gossip regarding her close friendship <a href="https://www.theguardian.com/uk/2004/dec/16/monarchy.stephenbates">with Scottish servant John Brown</a> after her husband, Prince Albert, died in 1861. </p>
<p>Then there were the rumours about Diana, Princess of Wales: that her son Harry was the <a href="https://www.vanityfair.com/style/2017/03/james-hewitt-prince-harry-father-princess-diana">product of an affair</a>, that she was <a href="https://www.dailymail.co.uk/news/article-2905239/Diana-pregnant-Dodi-s-child-died-Paris-car-smash-sensational-West-End-play-claim.html">pregnant with Dodi Fayed’s child</a> at the time of her death in 1997, and that <a href="https://www.independent.co.uk/life-style/royal-family/princess-diana-death-conspiracy-theories-b2248362.html">her death wasn’t accidental</a>.</p>
<p>The Palace typically refuses to comment on these kinds of sensational rumours. Sometimes, though, it will reject gossip via trusted media sources, as was the case in late 2018 when <a href="https://www.mirror.co.uk/news/uk-news/palace-denies-kate-middleton-slapped-13670961">it denied there was a feud</a> between Catherine and Meghan Markle, Duchess of Sussex.</p>
<h2>The Palace’s strategic communications</h2>
<p>The royal family has gradually adjusted to new media and technologies, though not as quickly as the public might like. </p>
<p>On one hand, the Palace <a href="https://www.prweek.com/article/1798408/queen-elizabeths-death-announced-mix-old-new">continues its age-old tradition</a> of announcing major news on a noticeboard at the gates of Buckingham Palace. On the other, a previous tendency to hide serious illnesses – such as the cancer that claimed <a href="https://www.independent.co.uk/health-and-wellbeing/king-charles-cancer-doctor-secrets-b2491426.html">King George VI’s life in 1952</a> – has been tempered by a more forthcoming approach. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-royals-have-historically-been-tight-lipped-about-their-health-but-that-never-stopped-the-gossip-222873">The royals have historically been tight-lipped about their health – but that never stopped the gossip</a>
</strong>
</em>
</p>
<hr>
<p>When Queen Camilla underwent a <a href="https://www.theguardian.com/uk/2007/mar/05/monarchy">hysterectomy in 2007</a>, the media were informed on the day of the surgery. The Palace was similarly open in its acknowledgement of Catherine’s <a href="https://www.forbes.com/sites/melaniehaiken/2012/12/03/pregnant-princess-kate-hospitalized-for-hyperemesis-gravidarum-which-is-what/?sh=4430a5ee3d55">hospitalisation for hyperemesis gravidarum</a> (severe nausea and vomiting) during her first pregnancy in 2012. It announced her second pregnancy in 2014 earlier than planned <a href="https://www.theguardian.com/uk-news/she-said/2014/sep/27/hyperemesis-gravidarum-kate-middletons-ongoing-condition-is-much-worse-than-just-morning-sickness">due to the same condition</a>. </p>
<p>On some level, we’ve become accustomed to such updates.</p>
<h2>Internet sleuthing and a manipulated image</h2>
<p>In response to limited information about Catherine’s health, memes stepped in to fill the space. Users on X joked about her recovering from a <a href="https://x.com/THISisLULE/status/1764731759340462360?s=20">Brazilian butt lift</a>, or growing out her <a href="https://x.com/VeryBadLlama/status/1762648638684053889?s=20">bangs</a>. </p>
<div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1762648638684053889&quot;}"></div>
<p>There were also more serious claims that she was in <a href="https://www.thelist.com/1526431/concha-calleja-kate-middleton-coma-claims/">a coma</a>, or <a href="https://x.com/holy_schnitt/status/1767283342880223466?s=20">dead</a>, or getting a <a href="https://stylecaster.com/entertainment/celebrity-news/1730035/kate-middleton-photo-wedding-ring/">divorce</a>.</p>
<p>In the midst of this speculation, <a href="https://www.tmz.com/2024/03/04/kate-middleton-seen-spotted-public-first-time-mystery-hospitalization/">TMZ published</a> a grainy photo of Catherine in the passenger seat of a car near Windsor Castle. She wears large, dark glasses in the long-distance shot. It could be anyone, internet sleuths point out. </p>
<div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1764827518471741834&quot;}"></div>
<p>Significantly, no major UK news outlets published the photo, as per <a href="https://www.nytimes.com/2024/03/05/world/europe/princess-kate-middleton-royals.html">Kensington Palace’s request</a>. This is partly driven by a desire to preserve access to the Palace in the long term. UK news outlets are also constrained by the <a href="https://www.ipso.co.uk/editors-code-of-practice/">Editor’s Code of Practice</a> and the UK’s right-to-privacy legislation, <a href="https://theconversation.com/does-the-royal-family-have-a-right-to-privacy-what-the-law-says-224881">which applies to the royal family</a>. Nonetheless, the snap was widely circulated online.</p>
<p>The situation escalated further when Catherine shared a photo of her and her children <a href="https://www.instagram.com/p/C4U_IqTNaqU/?utm_source=ig_web_copy_link&igsh=MzRlODBiNWFlZA==">on Instagram</a> in honour of Mother’s Day. The public quickly realised the image was at best poorly photoshopped or at worst AI-generated. Online sleuths identified strangely shaped and misplaced hands, odd shadows and unseasonal plant life. </p>
<p>The Associated Press, Getty Images, AFP and Reuters subsequently <a href="https://www.bbc.com/news/uk-68526972">issued “kill notices” on the image</a>, stating concerns it had been digitally manipulated. In response, Kensington Palace released a <a href="https://twitter.com/KensingtonRoyal/status/1767135566645092616?ref_src=twsrc%5Etfw">brief statement from Catherine</a>, who explained that as an amateur photographer she likes to “occasionally experiment with editing”. The photo had previously been attributed to the Prince of Wales. </p>
<h2>Old media PR won’t work in a new media world</h2>
<p>The situation with Catherine’s absence from public life exposes the limits of old media strategies in a “new media” world. </p>
<p>The Palace is used to being able to control media coverage through the <a href="https://newsmediauk.org/industry-services/royal-rota/">royal rota</a>, a select group of press outlets in the UK given access to royal events. It typically doesn’t comment on the record in response to gossip and speculation. Yet the interest in Catherine’s health has prompted a <a href="https://www.etonline.com/palace-responds-to-theories-about-kate-middletons-health-and-whereabouts-220728">number</a> of <a href="https://x.com/KensingtonRoyal/status/1751938452721996034?s=20">statements</a> to the <a href="https://www.thesun.co.uk/royals/26254336/prince-william-return-work-thanksgiving-service/">press</a>. </p>
<p>These old media strategies don’t seem to be working, with news outlets that are part of the <a href="https://www.mirror.co.uk/news/uk-news/kate-middleton-photo-given-kill-32320496">royal rota reporting critically</a> on the manipulated image. </p>
<p>In a world increasingly plagued by synthetic and AI-generated images, the Palace’s release of a digitally manipulated image has further undermined public trust in the institution, adding fuel to the fire. </p>
<p>The public has become increasingly sensitised to <a href="https://www.newyorker.com/culture/rabbit-holes/the-uncanny-failures-of-ai-generated-hands">AI-generated images</a> over the past year, and is generally much more sceptical and switched on. At the same time, the release of the first post-surgery image of Catherine was always going to attract scrutiny online. It seems the Palace was unprepared for this. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/yes-kate-middletons-photo-was-doctored-but-so-are-a-lot-of-images-we-see-today-225553">Yes, Kate Middleton's photo was doctored. But so are a lot of images we see today</a>
</strong>
</em>
</p>
<hr>
<p>Most social media users also treat royal rumours similarly to other types of viral celebrity gossip and <a href="https://theconversation.com/the-power-and-pleasure-and-occasional-backlash-of-celebrity-conspiracy-theories-221754">conspiracy</a> <a href="https://journal.media-culture.org.au/index.php/mcjournal/article/view/2871">theorising</a>, and evidence suggests the royal family’s popularity is <a href="https://time.com/6276478/british-monarchy-popularity-explained/">declining over time</a>. </p>
<p>Chaotic, fast-paced social media platforms such as X and TikTok are breeding grounds for misinformation – and #KateGate is arguably the first time the Palace has felt the full force of new-age online conspiracy.</p>
<p>Recent events demonstrate the Palace can no longer rely on favoured newspapers avoiding tricky topics. Now, everyone online can act as a reporter – and a sleuth – and the Palace will need to be much more forthcoming if it wants to preserve its image. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-power-and-pleasure-and-occasional-backlash-of-celebrity-conspiracy-theories-221754">The power and pleasure – and occasional backlash – of celebrity conspiracy theories</a>
</strong>
</em>
</p>
<hr>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p>Rumours are out of control following the Kate Middleton photo controversy. It seems the royal family’s PR train is running off its rails.</p>
<p class="fine-print">Naomi Smith, Lecturer in Sociology, University of the Sunshine Coast; Amy Clarke, Senior Lecturer in History specialising in architectural heritage and material culture, University of the Sunshine Coast. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Yes, Kate Middleton’s photo was doctored. But so are a lot of images we see today</h1>
<p class="fine-print">Published 2024-03-12.</p>
<figure><img src="https://images.theconversation.com/files/581154/original/file-20240312-26-tb4sa3.jpg?ixlib=rb-1.1.0&rect=425%2C221%2C2598%2C1694&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">The Conversation/Instagram/X</span></span></figcaption></figure><p>Rumours and conspiracies have been <a href="https://www.nytimes.com/2024/02/28/style/princess-kate-middleton-health.html">swirling</a> following the abdominal surgery and long recovery period of Catherine, Princess of Wales, earlier this year. They intensified on Monday when Kensington Palace released a photo of the princess with her three children.</p>
<div data-react-class="InstagramEmbed" data-react-props="{&quot;url&quot;:&quot;https://www.instagram.com/p/C4U_IqTNaqU&quot;,&quot;accessToken&quot;:&quot;127105130696839|b4b75090c9688d81dfd245afe6052f20&quot;}"></div>
<p>The photo had clear signs of tampering, and international wire services <a href="https://apnews.com/article/kate-princess-photo-surgery-ca91acf667c87c6c70a7838347d6d4fb">withdrew the image</a> amid concerns around manipulation. The princess later <a href="https://twitter.com/KensingtonRoyal/status/1767135566645092616">apologised for any confusion</a> and said she had “experimented with editing” as many amateur photographers do.</p>
<p>Image editing is extremely common these days, and not all of it is for nefarious purposes. However, in an age of rampant misinformation, how can we stay vigilant around suspicious images?</p>
<h2>What happened with the royal photo?</h2>
<p>A close look reveals at least eight inconsistencies with the image. </p>
<p>Two of these relate to unnatural blur. Catherine’s right hand is unnaturally blurred, even though her left hand is sharp and at the same distance from the camera. The left side of Catherine’s hair is also unnaturally blurred, while the right side of her hair is sharp.</p>
<p>These types of edits are usually made with a blur tool that softens pixels. It is often used to make the background of an image less distracting or to smooth rough patches of texture.</p>
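<p>The effect of such a tool can be sketched in a few lines of Python. The following is a toy box blur over a grid of grayscale pixel values; it is purely illustrative, not the actual algorithm of any editing suite:</p>

```python
# Illustrative box blur: each output pixel is the mean of its 3x3
# neighbourhood, which is what softens detail in a blurred region.
def box_blur(pixels):
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [
                pixels[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
            ]
            out[y][x] = sum(window) // len(window)
    return out

# A hard 0/255 edge softens into intermediate grays after blurring.
sharp = [[0, 0, 255, 255] for _ in range(4)]
soft = box_blur(sharp)
```

<p>Run on a hard black/white edge, the output contains intermediate grays along the boundary: the same loss of sharpness sleuths noticed around Catherine’s hand and hair.</p>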
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/581145/original/file-20240312-26-rhmkk1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/581145/original/file-20240312-26-rhmkk1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/581145/original/file-20240312-26-rhmkk1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=358&fit=crop&dpr=1 600w, https://images.theconversation.com/files/581145/original/file-20240312-26-rhmkk1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=358&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/581145/original/file-20240312-26-rhmkk1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=358&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/581145/original/file-20240312-26-rhmkk1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=450&fit=crop&dpr=1 754w, https://images.theconversation.com/files/581145/original/file-20240312-26-rhmkk1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=450&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/581145/original/file-20240312-26-rhmkk1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=450&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">At least eight logical inconsistencies exist in the doctored image the Prince and Princess of Wales posted on social media.</span>
<span class="attribution"><a class="source" href="https://www.instagram.com/p/C4U_IqTNaqU/">Photo by the Prince of Wales/Chart by T.J. Thomson</a></span>
</figcaption>
</figure>
<p>Five of the edits appear to use the “clone stamp” tool. This is a Photoshop tool that takes part of the same or a different image and “stamps” it onto another part.</p>
<p>You can see this with the repeated pattern on Louis’s (on the left) sweater and the tile on the ground. You can also see it with the step behind Louis’s legs and on Charlotte’s hair and sleeve. The zipper on Catherine’s jacket also doesn’t line up.</p>
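<p>This sort of repetition is exactly what copy-move forensics looks for: identical pixel blocks appearing at more than one position in the same image. A toy sketch in Python follows; real detectors also skip naturally uniform regions and tolerate compression noise, which this deliberately ignores:</p>

```python
# Toy copy-move detector: flag any k-by-k pixel block that appears at
# two different positions in the image. Real forensic tools also skip
# flat regions and allow for recompression noise; this is the bare idea.
def find_duplicate_blocks(grid, k=2):
    seen, dups = {}, []
    for y in range(len(grid) - k + 1):
        for x in range(len(grid[0]) - k + 1):
            block = tuple(
                tuple(grid[y + i][x + j] for j in range(k)) for i in range(k)
            )
            if block in seen:
                dups.append((seen[block], (y, x)))
            else:
                seen[block] = (y, x)
    return dups

# Start from a 4x4 image in which every 2x2 block is unique...
image = [[4 * y + x for x in range(4)] for y in range(4)]
# ...then "clone stamp" the top-left 2x2 patch onto the bottom-right.
for i in range(2):
    for j in range(2):
        image[2 + i][2 + j] = image[i][j]
```

<p>On the stamped image above, the detector reports exactly one duplicated block pair, at positions (0, 0) and (2, 2).</p>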
<p>The most charitable interpretation is that the princess was trying to remove distracting or unflattering elements. But the artefacts could also point to multiple images being blended together. This could either be to try to show the best version of each person (for example, with a smiling face and open eyes), or for another purpose.</p>
<div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1767135566645092616&quot;}"></div>
<h2>How common are image edits?</h2>
<p>Image editing is increasingly common as both photography and editing become more automated.</p>
<p>This sometimes happens without you even knowing.</p>
<p>Take HDR (high dynamic range) images, for example. Point your iPhone or equivalent at a beautiful sunset and watch it capture the scene from the brightest highlights to the darkest shadows. What happens here is your camera makes multiple images and automatically stitches them together to make an image <a href="https://www.adobe.com/creativecloud/photography/hub/guides/what-is-hdr-photography.html">with a wider range of contrast</a>.</p>
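<p>The stitching step can be sketched as exposure fusion: each pixel is weighted by how well exposed it is, then the bracketed exposures are averaged. This toy Python version is only an illustration of the idea, not any vendor’s actual HDR pipeline:</p>

```python
# Toy exposure fusion: pixels near mid-gray (128) count as "well exposed"
# and get more weight; crushed shadows and blown highlights get less.
# Illustrative only -- not any camera vendor's actual HDR algorithm.
def fuse(exposures):
    h, w = len(exposures[0]), len(exposures[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            pairs = []
            for img in exposures:
                v = img[y][x]
                weight = 1.0 - abs(v - 128) / 128.0  # 1 at mid-gray, 0 at extremes
                pairs.append((weight, v))
            total = sum(wt for wt, _ in pairs)
            fused[y][x] = (
                sum(wt * v for wt, v in pairs) / total
                if total
                else sum(v for _, v in pairs) / len(pairs)
            )
    return fused

under = [[10, 40]]   # shadows crushed in the short exposure
over = [[200, 250]]  # highlights nearly blown in the long exposure
result = fuse([under, over])
```

<p>For each pixel, the fused value lands between the two exposures but closer to whichever one was better exposed.</p>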
<p>While face-smoothing or teeth-whitening filters are nothing new, some smartphone camera apps apply them without being prompted. Newer technology like Google’s “Best Take” <a href="https://blog.google/products/photos/how-google-photos-best-take-works/">feature</a> can even combine the best attributes of multiple images to ensure everyone’s eyes are open and faces are smiling in group shots.</p>
<p>On social media, it seems everyone tries to show themselves in their best light, which is partially why so few of the photos on our <a href="https://www.tandfonline.com/doi/abs/10.1080/15551393.2020.1862663">camera rolls</a> make it onto our social media feeds. It is also why we often edit our photos to show our best sides.</p>
<p>But in other contexts, such as press photography, the <a href="https://www.ap.org/about/news-values-and-principles/telling-the-story/visuals">rules are much stricter</a>. The Associated Press, for example, bans all edits beyond simple crops, colour adjustments, and “minor adjustments” that “restore the authentic nature of the photograph”.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/three-images-that-show-wartime-photographs-can-have-greater-impact-than-the-written-word-216508">Three images that show wartime photographs can have greater impact than the written word</a>
</strong>
</em>
</p>
<hr>
<p>Professional photojournalists haven’t always gotten it right, though. While the majority of lens-based news workers adhere to ethical guidelines like those published by the <a href="https://nppa.org/resources/code-ethics">National Press Photographers Association</a>, others have let deadline pressures, competition and the desire for exceptional imagery cloud their judgement.</p>
<p>One such example was in 2017, when British photojournalist Souvid Datta admitted to <a href="https://time.com/4766312/souvid-datta/">visually plagiarising</a> another photographer’s work within his own composition. </p>
<div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;859824132258537472&quot;}"></div>
<p>Concerns around false or misleading visual information are at an all-time high, given advances in <a href="https://theconversation.com/nine-was-slammed-for-ai-editing-a-victorian-mps-dress-how-can-news-media-use-ai-responsibly-222382">generative artificial intelligence (AI)</a>. In fact, this year the World Economic Forum named the risk of misinformation and disinformation as the world’s greatest <a href="https://www.weforum.org/agenda/2024/01/ai-disinformation-global-risks/">short-term threat</a>. It placed this above armed conflict and natural disasters.</p>
<h2>What to do if you’re unsure about an image you’ve found online</h2>
<p>It can be hard to keep up with the more than <a href="https://theconversation.com/3-2-billion-images-and-720-000-hours-of-video-are-shared-online-daily-can-you-sort-real-from-fake-148630">3 billion photos</a> that are shared each day.</p>
<p>But, for the ones that matter, we owe it to ourselves to slow down, zoom in and ask ourselves a few simple <a href="https://www.aap.com.au/factcheck-resources/how-we-check-the-facts/">questions</a>:</p>
<ol>
<li><strong>Who made or shared the image?</strong> This can give clues about reliability and the purpose of making or sharing the image.</li>
<li><strong>What’s the evidence?</strong> Can you find another version of the image, for example, using a <a href="https://tineye.com/">reverse-image search engine</a>?</li>
<li><strong>What do trusted sources say?</strong> Consult resources like <a href="https://www.aap.com.au/factcheck/">AAP FactCheck</a> or <a href="https://factcheck.afp.com/">AFP Fact Check</a> to see if authoritative sources have already weighed in.</li>
</ol>
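<p>Reverse-image search works because engines index compact perceptual fingerprints that survive resizing and recompression. The following toy “average hash”, in pure Python, illustrates the idea; production search engines use far more robust fingerprints:</p>

```python
# Toy "average hash": one bit per pixel, set when the pixel is brighter
# than the image mean. Near-duplicate images produce nearly identical
# bit strings, so they can be matched by Hamming distance.
def average_hash(grid):
    flat = [v for row in grid for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def hamming(a, b):
    # Number of differing bits; a small distance suggests a near-duplicate.
    return sum(x != y for x, y in zip(a, b))

original = [[(x * y) % 256 for x in range(8)] for y in range(8)]
# A mild global brightness shift, as recompression or a filter might cause.
tweaked = [[min(255, v + 3) for v in row] for row in original]
distance = hamming(average_hash(original), average_hash(tweaked))
```

<p>Because the brightness shift moves every pixel and the mean together, the fingerprint is unchanged, which is exactly why a lightly edited repost can still be traced back to its source.</p>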
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/deepfakes-how-to-empower-youth-to-fight-the-threat-of-misinformation-and-disinformation-221171">Deepfakes: How to empower youth to fight the threat of misinformation and disinformation</a>
</strong>
</em>
</p>
<hr>
<p class="fine-print"><em><span>T.J. Thomson receives funding from the Australian Research Council. He is an affiliate with the ARC Centre of Excellence for Automated Decision Making & Society. Thomson collaborated with the Australian Associated Press in 2021 to produce fact-checking resources for its "Check the Facts" campaign.</span></em></p>
<p>The Princess of Wales is caught in a social media storm after the release of a clearly edited photo. But image editing is increasingly common, and your phone can even do it without you knowing.</p>
<p class="fine-print">T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>In 2024, we’ll truly find out how robust our democracies are to online disinformation campaigns</h1>
<p class="fine-print">Published 2024-03-01.</p>
<figure><img src="https://images.theconversation.com/files/579212/original/file-20240301-24-zxaj4p.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/colorful-fo-election-vote-hand-holding-794518426">I'm Friday / Shutterstock</a></span></figcaption></figure><p><a href="https://www.un.org/en/countering-disinformation">Disinformation</a>, sharing false information to deceive and mislead others, can take many forms. From edited “deepfake” videos made on smartphones to vast foreign-led information operations, politics and elections show how varied disinformation can be. </p>
<p>Hailed as <a href="https://www.aljazeera.com/news/2024/1/4/the-year-of-elections-is-2024-democracys-biggest-test-ever">“the year of elections”</a>, with the majority of the world’s population going to the polls, 2024 will also be a year of lessons learned, where we will see whether disinformation can truly subvert our political processes or if we are more resilient than we think.</p>
<p>The dissemination of disinformation and misleading content is not always high-tech. We often think of social networks, manipulated media and sophisticated espionage in this regard, but sometimes the efforts are very low budget. In 2019, publications with <a href="https://news.sky.com/story/general-election-is-it-time-to-ban-fake-newspaper-political-ads-11870963">names that sounded like newspapers</a> were posted through letterboxes across the UK. These publications, however, were not real newspapers. </p>
<p>Bearing headlines such as “90% back remain”, they were imitation newspapers created and disseminated by the UK’s major political parties. Some voters thought they were legitimate news publications, leading the Electoral Commission to <a href="https://www.electoralcommission.org.uk/sites/default/files/2020-04/UKPGE%20election%20report%202020.pdf">describe the technique as “misleading”</a>. </p>
<p>The News Media Association, the body which represents local and regional media, also wrote to the Electoral Commission <a href="https://newsmediauk.org/blog/2021/03/31/nma-launches-campaign-against-politicians-fake-local-newspapers/">calling for the ban of “fake local newspapers”</a>.</p>
<h2>Zone flooding</h2>
<p>Research has shown that for some topics, such as politics and civil rights, all figures across the political spectrum are often <a href="https://benjamins.com/catalog/scl.98.07chr">both attacked and supported</a>, in an attempt to cause confusion and to obfuscate who and what can be believed. </p>
<p>This practice often goes hand-in-hand with <a href="https://www.cambridge.org/core/books/disinformation-age/flooded-zone/388DFBCC7E50B02921023B28E87DD26F">something called “zone flooding”</a>, where the information environment is deliberately overloaded with any and all information, just to confuse people. The aim of these broad disinformation campaigns is to make it difficult for people to believe any information, leading to <a href="https://www.eesc.europa.eu/en/news-media/press-releases/disinformation-and-lack-interest-are-main-reasons-poor-voter-turnout-european-elections">a disengaged and potentially uninformed electorate</a>.</p>
<p><a href="https://www.disinfo.eu/publications/foreign-election-interferences-an-overview-of-trends-and-challenges/">Hostile state information operations</a> and disinformation from abroad will continue to threaten countries such as the UK and US. Adversarial countries such as Russia, China and Iran continuously seek to subvert trust in our institutions and processes with the goal of increasing apathy and resentment. </p>
<p>Just two weeks ago, the US congressional Republicans’ impeachment proceedings against President Joe Biden began to fall apart when it was revealed that a witness was <a href="https://edition.cnn.com/2024/02/20/politics/biden-former-fbi-informant-russian-intelligence/index.html">supplied with false information</a> by Russian <a href="https://www.forbes.com/sites/mollybohannon/2024/02/20/russians-involved-in-passing-a-story-to-key-biden-impeachment-witness-about-hunter-biden-prosecutors-say/">intelligence officials</a>.</p>
<figure class="align-center ">
<img alt="President Joe Biden in the foreground with Donald Trump in the background." src="https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Disinformation is certain to feature in 2024 elections. But are some of the risks overstated?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/democratic-candidate-joe-biden-sharp-foreground-2401520329">Below the Sky / Shutterstock</a></span>
</figcaption>
</figure>
<p>Disinformation can also be found much closer to home. Although it is often uncomfortable for academics and fact checkers to talk about, disinformation can come from the very top, with <a href="https://www.ft.com/content/5da52770-b474-4547-8d1b-9c46a3c3bac9">members of the political elite</a> embracing and promoting false content knowingly. This is further compounded by the reality that fact checks and corrections may not reach the same audience as the original content, causing some disinformation to go unchecked.</p>
<h2>AI-fuelled campaigns</h2>
<p>Recently, there has been increased focus <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/1467-8551.12554">on the role of</a> artificial intelligence (AI) <a href="https://journals.sagepub.com/doi/pdf/10.1177/2056305120903408">in spreading disinformation</a>. AI allows computers to carry out tasks that could previously have only been done by humans. So AI and AI-enabled tools can carry out very sophisticated tasks with low effort from humans and at low cost.</p>
<p>Disinformation can be both mediated and enabled by artificial intelligence. Bad actors can use sophisticated algorithms to identify and target swathes of people with disinformation on social media platforms. One key focus, however, has been on generative AI, the use of this technology to produce text and media that seem as if they were created by a human. </p>
<p>This can vary from using tools such as ChatGPT to write social media posts, to using AI-powered image, video and audio generation tools to create media of <a href="https://www.bbc.co.uk/news/uk-68146053">politicians in embarrassing, but fabricated situations</a>. This encompasses what are known as “deepfakes”, which can vary from poor to convincing in their quality.</p>
<p>While some say that AI will shape the coming elections in ways we can’t yet understand, others think the effects of disinformation are exaggerated. The simple reality is that, at present, we do not know how AI will affect the year of elections. </p>
<p>We could see vast deception at a scale only previously imagined, <a href="https://www.britannica.com/technology/Y2K-bug">or this could be a Y2K moment</a>, where our fears simply do not come to fruition. We are at a pivotal moment and the extent to which these elections are affected, or otherwise, will inform our regulatory and policy decisions for years to come.</p>
<p>If 2024 is the year of elections, then 2025 is likely to be the year of reflections. Reflecting on how susceptible our democracies are to disinformation, whether as societies we are vulnerable to sweeping deception and manipulation, and how we can safeguard our future elections. </p>
<p>Whether it’s profoundly consequential or simply something that bubbles under the surface, disinformation will always exist. But the coming year will determine whether it’s top of the agenda for governments, journalists and educators to tackle, or simply something that we learn to live with.</p><img src="https://counter.theconversation.com/content/224789/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>William Dance does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Low tech or hi-tech, the next year will determine how much action nations take on election interference.William Dance, Senior Research Associate, Lancaster UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2248572024-03-01T07:07:04Z2024-03-01T07:07:04ZFacebook won’t keep paying Australian media outlets for their content. Are we about to get another news ban?<p>Facebook’s parent company, Meta, <a href="https://about.fb.com/news/2024/02/update-on-facebook-news-us-australia/">has announced</a> it will stop paying for news content in Australia when the current deals it has expire. Meta will also cease news aggregation on the site.</p>
<p>Three years ago, the company signed deals with Australian news outlets after the government introduced laws requiring tech companies to pay for the news on their platforms. The law only comes into effect if no commercial deal is struck.</p>
<p>Meta has now decided that the cost of providing news in Australia is too high. Its reason for the change is to “better align our investments to our products and services people value the most”. That is, it saves money. </p>
<p>So what does this mean for news on Facebook? What can users expect to find on the platform?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-news-is-fading-from-sight-on-big-social-media-platforms-where-does-that-leave-journalism-218522">The news is fading from sight on big social media platforms – where does that leave journalism?</a>
</strong>
</em>
</p>
<hr>
<h2>An unsurprising manoeuvre</h2>
<p>This decision was largely predictable, as it’s consistent with Meta’s actions in the UK, France, and Germany in December 2023. The same “deprecation” will occur simultaneously in the US. </p>
<p>Meta’s rationale is that news is “a small part of the Facebook experience for the vast majority of people” and is not a reason for the use of the platform as it “makes up less than 3% of what people around the world see in their Facebook feed”. It does not comment on the percentage in Australia.</p>
<p>Meta says “this does not impact our commitment to connecting people to reliable information on our platforms”. However, this “reliable information” is a reference to fact-checking in the context of misinformation. </p>
<p>Meta does not see a link between reliable information and Australian news. It has not addressed the issue of the sustainability of news journalism in Australia.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebooks-news-blockade-in-australia-shows-how-tech-giants-are-swallowing-the-web-155832">Facebook's news blockade in Australia shows how tech giants are swallowing the web</a>
</strong>
</em>
</p>
<hr>
<h2>So what will Facebook look like?</h2>
<p>Facebook says that it will simply remove the <a href="https://www.facebook.com/business/help/417376132287321?id=204021664031159">dedicated tab</a> on the site for news content. </p>
<p>For many users, this will not have an effect. However, for those who use Facebook as a news aggregator, access to links to news publishers will disappear. </p>
<p>Facebook users will need to go to the Facebook page of their favourite news publishers in order to be able to keep up with events. This means having to “follow” all of the news publishers with which Facebook currently has a commercial agreement.</p>
<p>Unlike the approach in 2021, Facebook is not going to <a href="https://www.abc.net.au/news/2021-02-18/facebook-to-restrict-sharing-or-viewing-news-in-australia/13166208">shut down</a> all of the pages that its systems thought were “media pages” (including emergency services and helplines such as 1-800-RESPECT). </p>
<p>Instead, Meta is encouraging news publishers to buy the tech giant’s services to increase their own traffic. </p>
<p>However, this means Meta expects that the flow of funds will be from news publishers to Meta, rather than the other way around.</p>
<h2>What does this mean for news?</h2>
<p>There is already a concern that social media is replacing legacy news sources.</p>
<p>Meta has consistently argued that news is not a driver of its business. In <a href="https://about.fb.com/wp-content/uploads/2020/08/Facebooks-response-to-Australias-proposed-News-Media-and-Digital-Platforms-Mandatory-Bargaining-Code.pdf">submissions to government</a>, it has sought to differentiate Meta and Google. In fact, news publishers often report having their <a href="https://theconversation.com/the-news-is-fading-from-sight-on-big-social-media-platforms-where-does-that-leave-journalism-218522">content buried</a> by algorithms over which they have no control. </p>
<p>Meta contends that news is so unimportant that it would rather not have news options than pay news publishers for content. </p>
<p>The Facebook news ban of 2021 was largely in response to the government’s <a href="https://www.acma.gov.au/news-media-bargaining-code">News Media Bargaining Code</a> – an arrangement in which news organisations could negotiate with big tech companies over payment and inclusion of their content on digital platforms. </p>
<p>In contrast, Google has previously been willing to enter into commercial deals or to launch news aggregator services rather than having a code imposed on it. </p>
<p>It is not clear whether Google will change its view in Australia as a result of the Meta decision. The News Media Bargaining Code has the potential to apply to both businesses. However, Google relies more on news content than Meta. </p>
<h2>Can the government do anything?</h2>
<p>The relevant ministers, Stephen Jones and Michelle Rowland, have already <a href="https://minister.infrastructure.gov.au/rowland/media-release/metas-news-content-announcement">referred to</a> the decision as a “dereliction of its commitment to the sustainability of Australian news media.” </p>
<p>As a practical matter, the News Media Bargaining Code is only triggered if there is no commercial deal in play. The current commercial deals with news outlets are <a href="https://www.abc.net.au/news/2024-03-01/meta-won-t-renew-deal-with-australian-news-media/103533874">due to expire</a> in a few months. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/this-weeks-changes-are-a-win-for-facebook-google-and-the-government-but-what-was-lost-along-the-way-155865">This week's changes are a win for Facebook, Google and the government — but what was lost along the way?</a>
</strong>
</em>
</p>
<hr>
<p>Meta has said that it “will not offer new Facebook products specifically for news publishers in the future”. It will let the existing commercial agreements lapse in Australia, France and Germany, as they already have in the UK and the US.</p>
<p>The treasurer is now faced with a tough decision. He can “designate” Meta under the code and force it to the bargaining table, or he can agree that news is not a driver of Facebook use. This decision will need to take into account the issue of news journalism sustainability. </p>
<p>However, it also risks a repeat of the 2021 shut down in Australia and a similar one in <a href="https://www.bbc.com/news/world-us-canada-67755133">Canada</a> last year.</p><img src="https://counter.theconversation.com/content/224857/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Rob Nicholls received funding from the Australian Research Council. He has previously received funding from Google (at the University of New South Wales). </span></em></p>The news page on Facebook will go, and with it, the flow of money to some Australian media outlets. But will the news content disappear too?Rob Nicholls, Visiting Fellow, University of Technology SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2246262024-02-29T03:52:46Z2024-02-29T03:52:46ZAlgorithms are pushing AI-generated falsehoods at an alarming rate. How do we stop this?<figure><img src="https://images.theconversation.com/files/578812/original/file-20240229-22-ki29m8.jpg?ixlib=rb-1.1.0&rect=3%2C7%2C2462%2C1608&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/online-news-mobile-phone-close-smartphone-1204164946">Tero Vesalainen/Shutterstock</a></span></figcaption></figure><p>Generative artificial intelligence (AI) tools are supercharging the problem of misinformation, disinformation and fake news. OpenAI’s ChatGPT, Google’s Gemini, and various image, voice and video generators have made it easier than ever to produce content, while making it harder to tell what is factual or real.</p>
<p>Malicious actors looking to spread disinformation can use AI tools to largely automate the generation of <a href="https://cyber.fsi.stanford.edu/io/publication/generative-language-models-and-automated-influence-operations-emerging-threats-and">convincing and misleading text</a>. </p>
<p>This raises pressing questions: how much of the content we consume online is true and how can we determine its authenticity? And can anyone stop this?</p>
<p>It’s not an idle concern. Organisations seeking to covertly influence public opinion or sway elections can now <a href="https://cyber.fsi.stanford.edu/io/publication/generative-language-models-and-automated-influence-operations-emerging-threats-and">scale their operations</a> with AI to unprecedented levels. And their content is being widely disseminated by search engines and social media. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-is-sora-a-new-generative-ai-tool-could-transform-video-production-and-amplify-disinformation-risks-223850">What is Sora? A new generative AI tool could transform video production and amplify disinformation risks</a>
</strong>
</em>
</p>
<hr>
<h2>Fakes everywhere</h2>
<p>Earlier this year, <a href="https://www.techradar.com/computing/search-engines/google-search-might-be-getting-worse-and-ai-threatens-to-ruin-it-entirely">a German study</a> on search engine content quality noted “a trend toward simplified, repetitive and potentially AI-generated content” on Google, Bing and DuckDuckGo.</p>
<p>Traditionally, readers of news media could rely on editorial control to uphold journalistic standards and verify facts. But AI is rapidly changing this space.</p>
<p>In a report published this week, the internet trust organisation NewsGuard <a href="https://www.newsguardtech.com/special-reports/ai-tracking-center/">identified 725 unreliable websites</a> that publish AI-generated news and information “with little to no human oversight”.</p>
<p>Last month, Google <a href="https://www.adweek.com/media/google-paying-publishers-unreleased-gen-ai/">released an experimental AI tool</a> for a select group of independent publishers in the United States. Using generative AI, the publisher can summarise articles pulled from a list of external websites that produce news and content relevant to their audience. As a condition of the trial, the users have to publish three such articles per day.</p>
<p>Platforms hosting content and developing generative AI blur the traditional lines that enable trust in online content. </p>
<h2>Can the government step in?</h2>
<p>Australia has already seen tussles between government and online platforms over the display and moderation of news and content.</p>
<p>In 2019, the Australian government <a href="https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=s1201">amended the criminal code</a> to mandate the swift removal of “abhorrent violent material” by social media platforms. </p>
<p>The Australian Competition and Consumer Commission’s (ACCC) inquiry into power imbalances between Australian news media and digital platforms led to the 2021 implementation of <a href="https://www.accc.gov.au/by-industry/digital-platforms-and-services/news-media-bargaining-code/news-media-bargaining-code">a bargaining code</a> that forced platforms to pay media for their news content.</p>
<p>While these might be considered partial successes, they also demonstrate the scale of the problem and the difficulty of taking action.</p>
<p><a href="https://journals.sagepub.com/doi/full/10.1177/02683962221114408">Our research</a> indicates these conflicts saw online platforms initially open to changes and later resisting them, while the Australian government oscillated from enforcing mandatory measures to preferring voluntary actions.</p>
<p>Ultimately, the government realised that relying on platforms’ “trust us” promises wouldn’t lead to the desired outcomes. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-google-and-meta-owe-news-publishers-much-more-than-you-think-and-billions-more-than-theyd-like-to-admit-216818">Why Google and Meta owe news publishers much more than you think – and billions more than they’d like to admit</a>
</strong>
</em>
</p>
<hr>
<p>The takeaway from our study is that once digital products become integral to millions of businesses and everyday lives, they serve as a tool for platforms, AI companies and big tech to anticipate and push back against government.</p>
<p>With this in mind, it is right to be sceptical of early calls for regulation of generative AI by tech leaders like <a href="https://fortune.com/2023/11/02/elon-musk-ai-regulations-uk-prime-minister-sunak-ai-safety-summit/">Elon Musk</a> and Sam Altman. Such calls have faded as AI takes a hold on our lives and online content.</p>
<p>A challenge lies in the sheer speed of change, which is so swift that safeguards to mitigate the potential risks to society are not yet established. Accordingly, the World Economic Forum’s 2024 Global Risk Report has predicted mis- and disinformation as the <a href="https://www.weforum.org/publications/global-risks-report-2024/">greatest threats</a> in the next two years.</p>
<p>The problem gets worse through generative AI’s ability to create multimedia content. Based on current trends, we can expect an increase in <a href="https://www.nbcnews.com/tech/social-media/emma-watson-deep-fake-scarlett-johansson-face-swap-app-rcna73624">deepfake incidents</a>, although social media platforms like Facebook are responding to these issues. They aim to <a href="https://about.fb.com/news/2024/02/labeling-ai-generated-images-on-facebook-instagram-and-threads/">automatically identify and tag</a> AI-generated photos, video and audio.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-openai-saga-demonstrates-how-big-corporations-dominate-the-shaping-of-our-technological-future-218540">The OpenAI saga demonstrates how big corporations dominate the shaping of our technological future</a>
</strong>
</em>
</p>
<hr>
<h2>What can we do?</h2>
<p>Australia’s eSafety commissioner <a href="https://www.esafety.gov.au/industry/tech-trends-and-challenges/generative-ai">is working on ways to regulate and mitigate</a> the potential harm caused by generative AI while balancing its potential opportunities.</p>
<p>A key idea is “safety by design”, which requires tech firms to place these safety considerations at the core of their products.</p>
<p>Other countries like the US are further ahead with the regulation of AI. For example, US President Joe Biden’s recent executive order <a href="https://www.theguardian.com/technology/2023/oct/30/biden-orders-tech-firms-to-share-ai-safety-test-results-with-us-government">on the safe deployment of AI</a> requires companies to share safety test results with the government, regulates <a href="https://en.wikipedia.org/wiki/Red_team">red-team testing</a> (simulated hacking attacks), and guides watermarking on content.</p>
<p>We call for three steps to help protect against the risks of generative AI in combination with disinformation.</p>
<p>1. Regulation needs <a href="https://www.linkedin.com/posts/noamsp_3-steps-to-reshaping-our-digital-landscape-activity-7152649121189797889-WEct">to set clear rules</a> without allowing for nebulous “best effort” aims or “trust us” approaches.</p>
<p>2. To protect against large-scale disinformation operations, we need to teach media literacy in the same way we teach maths.</p>
<p>3. Safety tech or “safety by design” needs to become a non-negotiable part of every product development strategy.</p>
<p>People are aware AI-generated content is on the rise. In theory, they should adjust their information habits accordingly. However, research shows users <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8196605/">generally tend to underestimate</a> their own risk of believing fake news compared to the perceived risk for others.</p>
<p>Finding trustworthy content shouldn’t involve sifting through AI-generated content to make sense of what is factual.</p><img src="https://counter.theconversation.com/content/224626/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Stan Karanasios receives funding from Emergency Management Victoria, Asia-Pacific Telecommunity, and the International Telecommunications Union.
Stan is a Distinguished Member of the Association for Information Systems.</span></em></p><p class="fine-print"><em><span>Marten Risius is the recipient of an Australian Research Council Australian Discovery Early Career Award (project number DE220101597) funded by the Australian Government.</span></em></p>It’s increasingly hard to tell which content online is fake. As malicious actors use generative AI to fuel disinformation, governments must regulate now before it’s too late.Stan Karanasios, Associate Professor, The University of QueenslandMarten Risius, Senior Lecturer in Business Information Systems, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2205812024-02-26T13:39:02Z2024-02-26T13:39:02ZAs war in Ukraine enters third year, 3 issues could decide its outcome: Supplies, information and politics<figure><img src="https://images.theconversation.com/files/577714/original/file-20240224-28-7jc86z.jpg?ixlib=rb-1.1.0&rect=49%2C37%2C8281%2C5508&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Will war fatigue be a factor?</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/february-2024-ukraine-odessa-a-gepard-anti-aircraft-gun-news-photo/2022536165?adppopup=true">Kay Nietfeld/picture alliance via Getty Images</a></span></figcaption></figure><p>In retrospect, there was perhaps nothing surprising about Russia’s decision to <a href="https://www.france24.com/en/live-news/20230214-february-24-2022-the-day-russia-invaded-ukraine">invade Ukraine on Feb. 24, 2022</a>.</p>
<p>Vladimir Putin’s intentions were, after all, <a href="https://www.washingtonpost.com/national-security/russia-ukraine-invasion/2021/12/03/98a3760e-546b-11ec-8769-2f4ecdf7a2ad_story.html">hiding in plain sight</a> and <a href="https://www.reuters.com/world/europe/putin-says-western-military-backing-ukraine-threatens-russia-2021-10-21/">signaled in the months running up</a> to the incursion.</p>
<p>What could not be foreseen, however, is where the conflict finds itself now. Heading into its third year, the war has become bogged down: Neither is it a stalemate, nor does it look like either side could make dramatic advances any time soon.</p>
<p>Russia appears to be in the ascendancy, having secured the <a href="https://www.bbc.com/news/world-europe-68322527">latest major battlefield victory</a>, but Ukrainian fighters have exceeded military expectations with their doggedness in the past, and may do so again.</p>
<p>But as a <a href="https://facultyprofiles.tufts.edu/tara-sonenshine">foreign policy expert</a> and former journalist who spent many years covering Russia, I share the view of those who argue that the conflict is potentially at a pivotal point: If Washington does not continue to fully support President Volodymyr Zelenskyy and his military, then Ukraine’s very survival could be at risk. I believe it would also jeopardize America’s leadership in the world and global security. </p>
<p>How the conflict develops during the rest of 2024 will depend on many factors, but three may be key: supplies, information and political will.</p>
<h2>The supplies race</h2>
<p>Russia and Ukraine are locked in a race to resupply their war resources – not just in terms of soldiers, but also ammunition and missiles. Both sides are desperately trying to shore up the number of soldiers they can deploy. </p>
<p>In December 2023, Putin <a href="https://apnews.com/article/russia-ukraine-putin-army-expansion-a2bf0b035aabab20c8b120a1c86c9e38">ordered his generals to increase troop numbers</a> by nearly 170,000, taking the total number of soldiers to 1.32 million. Meanwhile, Ukraine is said to be looking at plans to <a href="https://apnews.com/article/ukraine-russia-war-draft-b2ca1d0ecd72019be2217a653989fbc2">increase its military by 500,000 troops</a>.</p>
<p>Of course, here, Russia has the advantage of being able to draw on a population more than three times that of Ukraine. Also, whereas Putin can simply order up more troops, Zelenskyy must get measures approved through parliament.</p>
<p>Aside from personnel, there is also the need for a steady supply of weapons and ammunition – and there have been reports that both sides are <a href="https://www.bbc.com/news/world-europe-68364924">struggling to maintain</a> <a href="https://www.wbaltv.com/article/after-2-years-of-war-questions-abound-on-whether-kyiv-can-sustain-the-fight-against-russia/46940958">sufficient levels</a>.</p>
<p>Russia appears particularly eager to boost its number of ballistic missiles, as they are <a href="https://www.businessinsider.com/russia-sourcing-ballistic-missiles-to-bypass-ukraine-air-defense-isw-2024-1">better equipped for countering Ukraine air defense systems</a> despite being slower than cruise missiles.</p>
<p>Increasingly, Moscow appears to be looking to North Korea and Iran as suppliers. After Kim Jong Un, the North Korean leader, visited Russia in 2023, the U.S. <a href="https://www.bbc.com/news/world-us-canada-67888793">accused Pyongyang of supplying Russia</a> with ballistic missiles. Iran, meanwhile, has <a href="https://www.atlanticcouncil.org/blogs/ukrainealert/arsenal-of-autocracy-north-korea-and-iran-are-arming-russia-in-ukraine/">delivered to Russia</a> a large number of powerful surface-to-surface ballistic missiles and drones.</p>
<figure class="align-center ">
<img alt="Men in suits talk." src="https://images.theconversation.com/files/577743/original/file-20240225-16-y23p92.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/577743/original/file-20240225-16-y23p92.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/577743/original/file-20240225-16-y23p92.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/577743/original/file-20240225-16-y23p92.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/577743/original/file-20240225-16-y23p92.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/577743/original/file-20240225-16-y23p92.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/577743/original/file-20240225-16-y23p92.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Russian President Vladimir Putin and North Korean leader Kim Jong Un on Sept. 13, 2023, in Tsiolkovsky, Russia.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/russian-president-vladimir-putin-and-north-korean-leader-news-photo/1661841029?adppopup=true">Getty Images</a></span>
</figcaption>
</figure>
<p>Ukraine, meanwhile, is <a href="https://www.france24.com/en/live-news/20220610-ukraine-dependent-on-arms-from-allies-after-exhausting-soviet-era-weaponry">dependent on foreign military equipment</a>. </p>
<p>Supplies were stronger at the beginning of the war, but since then, Ukraine’s military has suffered from the slow, bureaucratic nature of NATO and U.S. deliveries. It wasn’t, for example, until the summer of 2023 that the <a href="https://www.cfr.org/europe-and-eurasia/ukraine">U.S. approved Europe’s request</a> to provide F-16s to Ukraine. </p>
<p>Ukraine needs more of everything, including air defense munitions, artillery shells, tanks and missile systems. It is also <a href="https://www.theglobeandmail.com/world/article-ukraine-war-medical-care-frontlines/#:%7E:text=In%20an%20open%20letter%20recently,stabilization%20posts%20with%20supplies%20and">running short of medical supplies</a> and has seen hospital shortages of drugs at a time when <a href="https://thehill.com/opinion/healthcare/4371240-the-invisible-enemy-in-ukraine-superbugs/">rampant infections are proving resistant</a> to antibiotics.</p>
<p>Perhaps the biggest factor that remains in Russia’s favor when it comes to supplies is the onerous restrictions placed on Ukraine from the West, <a href="https://www.reuters.com/world/europe/ukraine-shouldnt-use-us-arms-inside-russia-us-general-says-2023-05-25/">limiting its ability</a> to attack Russian territory with U.S. or NATO equipment to avoid a wider war. For example, the Ukrainian military had a High Mobility Artillery Rocket System with a 50-mile range that could hit targets inside Russia, but it modified the range to <a href="https://www.wsj.com/articles/u-s-altered-himars-rocket-launchers-to-keep-ukraine-from-firing-missiles-into-russia-11670214338">keep the U.S. military satisfied</a> that it would not cross a Russian red line.</p>
<p>If this policy could be relaxed, that might be a game changer for Ukraine, although it would raise the stakes for the U.S.</p>
<h2>The information war</h2>
<p>The Ukraine conflict is also a war of messaging.</p>
<p>To this end, Putin uses <a href="https://www.wsj.com/articles/putins-wartime-russia-propaganda-payouts-and-jail-151bb117">propaganda to bolster support</a> for the campaign at home, while undermining support for Ukraine elsewhere – for example, by planting stories in Europe that cause disenchantment with the war. One outrageous claim in the early weeks of the war was that <a href="https://www.cnn.com/2022/05/19/politics/pro-russia-disinformation-report/index.html">Zelenskyy had taken his own life</a>. The rumor came from pro-Russia online operatives as part of an aggressive effort to harm Ukrainian morale, according to <a href="https://www.mandiant.com/resources/blog/information-operations-surrounding-ukraine">cybersecurity firm Mandiant</a>.</p>
<p>More recently, in France, stories appeared that <a href="https://www.washingtonpost.com/world/2023/12/30/france-russia-interference-far-right/">questioned the value of assistance to Ukraine</a> and reminded the public of the negative impact of Russian sanctions on the French. Stirring dissent in this way is a classic Putin play to raise doubts.</p>
<p>And investigative reporting points toward <a href="https://www.washingtonpost.com/world/2024/02/16/russian-disinformation-zelensky-zaluzhny/">a disinformation network</a> being run out of the Kremlin, which includes social media bots deployed on Ukrainian sites spreading stories of Zelenskyy’s team being corrupt and warning that the war would go badly.</p>
<p>Given that Putin controls the Russian media and is quick to crack down on dissent, it is hard to really know what Russians think. But one reputable polling agency recently reported <a href="https://www.norc.org/research/projects/russian-public-opinion-wartime.html">strong support in Russia</a> for both Putin and the war in Ukraine. </p>
<p>Ukrainians, too, still <a href="https://news.gallup.com/poll/512258/ukrainians-stand-behind-war-effort-despite-fatigue.aspx">support the fight against Russia</a>, polling shows. But some war fatigue has no doubt lowered morale.</p>
<p>There are other signs of domestic strain in Ukraine. At the end of 2023, tensions grew between Zelenskyy and his top military commander, General Valery Zaluzhny, who had complained about shortages of weaponry. Zelenskyy <a href="https://www.nytimes.com/2024/02/08/world/europe/zelensky-general-valery-zaluzhny-ukraine-military.html">ended up firing the military chief</a>, risking political backlash and underscoring that not all is well in the top chain of command.</p>
<p>Should disunity and war fatigue continue into the war’s third year, it could seriously impair Ukraine’s ability to fight back against a resurgent Russian offensive. </p>
<h2>The politics of conflict</h2>
<p>But it isn’t just domestic politics in Ukraine and Russia that will decide the outcome of the war. </p>
<p>U.S. politics and European unity could be a factor in 2024 in determining the future of this conflict.</p>
<p>In the U.S., aid to Ukraine has <a href="https://www.pewresearch.org/short-reads/2023/12/08/about-half-of-republicans-now-say-the-us-is-providing-too-much-aid-to-ukraine/">become an increasingly partisan issue</a>.</p>
<p>In early February, the <a href="https://www.cnn.com/2024/02/12/politics/senate-foreign-aid-bill-ukraine/index.html">Senate finally passed an emergency aid bill</a> for Ukraine and Israel that would see US$60.1 billion go to Kyiv. But the bill’s fate in the House is unknown.</p>
<p>And the looming 2024 presidential elections could complicate matters further. Former president Donald Trump has made no secret of his preference for loans over aid packages, <a href="https://www.nytimes.com/2024/02/14/us/politics/trump-ukraine-biden.html">calling the latter “stupid</a>,” and has long argued that Americans shouldn’t be footing the bill for the conflict. Recently, he has made bombastic statements about NATO and <a href="https://www.economist.com/the-economist-explains/2024/02/13/how-donald-trumps-re-election-would-threaten-natos-article-5">threatened not to adhere</a> to the alliance’s commitment to protect members if they were attacked by Russia.</p>
<p>And uncertainty about American assistance could leave Europe carrying more of the financial load.</p>
<p>European Union members have had to absorb the majority of the <a href="https://www.cfr.org/expert/max-boot?utm_source=twtw&utm_medium=email&utm_campaign=TWTW2024Feb23&utm_term=TWTW%20and%20All%20Staff%20as%20of%207-9-20">6.3 million Ukrainians who have fled the country</a> since the beginning of the conflict. And that puts a strain on resources. Europe’s energy supplies have also suffered from the sanctions against Russian companies.</p>
<p>Whether these potential war determinants – supplies, information and politics – mean that the Ukraine war will not be entering a fourth year in 12 months’ time, however, is far from certain. In fact, one thing that does appear clear is that the war that some predicted would be over in weeks looks set to continue for some time still.</p><img src="https://counter.theconversation.com/content/220581/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Tara Sonenshine does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Russia appears to have seized the battleground initiative as the Ukraine war marks its second anniversary – but the conflict is far from over.Tara Sonenshine, Edward R. Murrow Professor of Practice in Public Diplomacy, Tufts UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2237172024-02-23T00:02:10Z2024-02-23T00:02:10ZHow people get sucked into misinformation rabbit holes – and how to get them out<figure><img src="https://images.theconversation.com/files/576118/original/file-20240216-28-bwac7i.jpeg?ixlib=rb-1.1.0&rect=0%2C35%2C6000%2C3952&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/sleepy-exhausted-woman-lying-bed-using-2142188351">Shutterstock</a></span></figcaption></figure><p>As misinformation and radicalisation rise, it’s tempting to look for something to blame: the internet, social media personalities, sensationalised political campaigns, religion, or conspiracy theories. And once we’ve settled on a cause, solutions usually follow: do more fact-checking, regulate advertising, ban YouTubers deemed to have “gone too far”.</p>
<p>However, if these strategies were the whole answer, we should already be seeing a decrease in people being drawn into fringe communities and beliefs, and less misinformation in the online environment. We’re not.</p>
<p>In new research <a href="https://doi.org/10.1177/14407833241231756">published in the Journal of Sociology</a>, we and our colleagues found radicalisation is a process of increasingly intense stages, and only a small number of people progress to the point where they commit violent acts. </p>
<p>Our work shows the misinformation radicalisation process is a pathway driven by human emotions rather than the information itself – and this understanding may be a first step in finding solutions.</p>
<h2>A feeling of control</h2>
<p>We analysed dozens of public statements from newspapers and online in which former radicalised people described their experiences. We identified different levels of intensity in misinformation and its online communities, associated with common recurring behaviours. </p>
<p>In the early stages, we found people either encountered misinformation about an anxiety-inducing topic through algorithms or friends, or they went looking for an explanation for something that gave them a “bad feeling”. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/three-reasons-why-disinformation-is-so-pervasive-and-what-we-can-do-about-it-188457">Three reasons why disinformation is so pervasive and what we can do about it</a>
</strong>
</em>
</p>
<hr>
<p>Regardless, they often reported finding the same things: a new sense of certainty, a new community they could talk to, and feeling they had regained some control of their lives.</p>
<p>Once people reached the middle stages of our proposed radicalisation pathway, we considered them to be invested in the new community, its goals, and its values. </p>
<h2>Growing intensity</h2>
<p>It was during these more intense stages that people began to report more negative impacts on their own lives. This could include the loss of friends and family, health issues caused by too much time spent on screens and too little sleep, and feelings of stress and paranoia. To soothe these pains, they turned again to their fringe communities for support. </p>
<p>Most people in our dataset didn’t progress past these middle stages. However, their continued activity in these spaces kept the misinformation ecosystem alive. </p>
<figure class="align-center ">
<img alt="Photo showing man and woman lying in bed in the dark, facing away from each other and looking at their phones." src="https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=293&fit=crop&dpr=1 600w, https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=293&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=293&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=368&fit=crop&dpr=1 754w, https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=368&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=368&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Engagement with misinformation proceeds in stages.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/young-asian-couple-using-smartphone-midnight-2131573395">TimeImage / Shutterstock</a></span>
</figcaption>
</figure>
<p>When people did move further and reach the extreme final stages in our model, they were doing active harm. </p>
<p>In their recounting of their experiences at these high levels of intensity, individuals spoke of choosing to break ties with loved ones, participating in public acts of disruption and, in some cases, engaging in violence against other people in the name of their cause. </p>
<p>Once people reached this stage, it took pretty strong interventions to get them out of it. The challenge, then, is how to intervene safely and effectively when people are in the earlier stages of being drawn into a fringe community.</p>
<h2>Respond with empathy, not shame</h2>
<p>We have a few suggestions. For people who are still in the earlier stages, friends and trusted advisers, like a doctor or a nurse, can have a big impact by simply responding with empathy. </p>
<p>If a loved one starts voicing possible fringe views, like a fear of vaccines, or animosity against women or other marginalised groups, a calm response that seeks to understand the person’s underlying concern can go a long way. </p>
<p>The worst response is one that might leave them feeling ashamed or upset. It may drive them back to their fringe community and accelerate their radicalisation. </p>
<p>Even if the person’s views intensify, maintaining your connection with them can turn you into a lifeline that will see them get out sooner rather than later.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/out-of-the-rabbit-hole-new-research-shows-people-can-change-their-minds-about-conspiracy-theories-222507">Out of the rabbit hole: new research shows people can change their minds about conspiracy theories</a>
</strong>
</em>
</p>
<hr>
<p>Once people reached the middle stages, we found third-party online content – not produced by government, but regular users – could reach people without backfiring. Considering that many people in our research sample had their radicalisation instigated by social media, we also suggest the private companies behind such platforms should be held responsible for the effects of their automated tools on society. </p>
<p>By the middle stages, arguments on the basis of logic or fact are ineffective. It doesn’t matter whether they are delivered by a friend, a news anchor, or a platform-affiliated fact-checking tool.</p>
<p>At the most extreme final stages, we found that only heavy-handed interventions worked, such as family members forcibly hospitalising their radicalised relative, or individuals undergoing government-supported deradicalisation programs.</p>
<h2>How not to be radicalised</h2>
<p>After all this, you might be wondering: how do you protect <em>yourself</em> from being radicalised? </p>
<p>As much of society becomes more dependent on digital technologies, we’re going to get exposed to even more misinformation, and our world is likely going to get smaller through online echo chambers. </p>
<p>One strategy is to foster your critical thinking skills by <a href="https://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(23)00198-5">reading long-form texts from paper books</a>. </p>
<p>Another is to protect yourself from the emotional manipulation of platform algorithms by <a href="https://guilfordjournals.com/doi/10.1521/jscp.2018.37.10.751">limiting your social media use</a> to small, infrequent, purposefully directed pockets of time.</p>
<p>And a third is to sustain connections with other humans, and lead a more analogue life – which has other benefits as well.</p>
<p>So in short: log off, read a book, and spend time with people you care about. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-month-at-sea-with-no-technology-taught-me-how-to-steal-my-life-back-from-my-phone-127501">A month at sea with no technology taught me how to steal my life back from my phone</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/223717/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Emily Booth is supported by funding from the Australian Department of Home Affairs and the Defence Innovation Network.</span></em></p><p class="fine-print"><em><span>Marian-Andrei Rizoiu receives funding from the Australian Department of Home Affairs, the Defence Science and Technology Group, the Defence Innovation Network and the Australian Academy of Science.</span></em></p>People who dive into misinformation are driven to satisfy an emotional need, according to our new research.Emily Booth, Research assistant, University of Technology SydneyMarian-Andrei Rizoiu, Associate Professor in Behavioral Data Science, University of Technology SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2233922024-02-15T15:58:50Z2024-02-15T15:58:50ZDisinformation threatens global elections – here’s how to fight back<figure><img src="https://images.theconversation.com/files/575950/original/file-20240215-22-at0x1v.jpg?ixlib=rb-1.1.0&rect=180%2C90%2C5826%2C3890&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Some Republicans still believe the 2020 election was "stolen" from Donald Trump.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/helena-montana-nov-7-2020-protesters-1849449790">Lyonstock/Shutterstock</a></span></figcaption></figure><p>With over half the world’s population heading to the polls in 2024, disinformation season is upon us — and the warnings are dire. The World Economic Forum <a href="https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf">declared</a> misinformation a top societal threat over the next two years and major news organisations <a href="https://www.nbcnews.com/tech/misinformation/disinformation-unprecedented-threat-2024-election-rcna134290">caution</a> that disinformation poses an unprecedented threat to democracies worldwide. </p>
<p>Yet, some scholars and pundits have <a href="https://theconversation.com/disinformation-is-often-blamed-for-swaying-elections-the-research-says-something-else-221579">questioned</a> whether disinformation can really sway election outcomes. Others think concern over disinformation is just a <a href="https://undark.org/2023/10/26/opinion-misinformation-moral-panic/">moral panic</a> or merely a <a href="https://iai.tv/articles/misinformation-is-the-symptom-not-the-disease-daniel-walliams-auid-2690">symptom</a> rather than the cause of our societal ills. Pollster Nate Silver even thinks that misinformation “<a href="https://twitter.com/NateSilver538/status/1745556135157899389">isn’t a coherent concept</a>”.</p>
<p>But we argue the evidence tells a different story.</p>
<p>A 2023 study showed that the vast majority of academic <a href="https://misinforeview.hks.harvard.edu/article/a-survey-of-expert-views-on-misinformation-definitions-determinants-solutions-and-future-of-the-field/">experts</a> are in agreement about how to define misinformation (namely as false and misleading content) and what this looks like (for example lies, conspiracy theories and pseudoscience). Although the study didn’t cover disinformation, such experts generally agree that this can be defined as intentional misinformation.</p>
<p>A recent paper <a href="https://www.nature.com/articles/s44271-023-00054-5">clarified</a> that misinformation can both be a symptom and the disease. In 2022, nearly 70% of Republicans still <a href="https://www.politifact.com/article/2022/jun/14/most-republicans-falsely-believe-trumps-stolen-ele/">endorsed</a> the false conspiracy theory that the 2020 US presidential election was “stolen” from Donald Trump. If Trump had never floated this theory, how would millions of people have possibly acquired these beliefs?</p>
<p>Moreover, although it is clear that people do not always act on dangerous beliefs, the January 6 US Capitol riots, incited by false claims, serve as an important reminder that a <a href="https://www.politifact.com/article/2021/jun/30/misinformation-and-jan-6-insurrection-when-patriot/">misinformed</a> crowd can disrupt and undermine democracy. </p>
<p>Given that nearly 25% of elections are decided by a margin of <a href="https://www.pnas.org/doi/full/10.1073/pnas.1419828112">under 3%</a>, mis- and disinformation can have important influence. One <a href="https://www.sciencedirect.com/science/article/pii/S0261379418303019">study</a> found that among previous Barack Obama voters who did not buy into any fake news about Hillary Clinton during the 2016 presidential election, 89% voted for Clinton. By contrast, among prior Obama voters who believed at least two fake headlines about Clinton, only 17% voted for her. </p>
<p>While this doesn’t necessarily prove that the misinformation caused the voting behaviour, we do know that <a href="https://www.channel4.com/news/revealed-trump-campaign-strategy-to-deter-millions-of-black-americans-from-voting-in-2016">millions</a> of black voters were targeted with misleading ads discrediting Clinton in key swing states ahead of the election. </p>
<p>Research has shown that such micro-targeting of specific audiences based on
variables such as their personality not only influences <a href="https://www.pnas.org/doi/full/10.1073/pnas.1710966114">decision-making</a> but also impacts <a href="https://journals.sagepub.com/doi/full/10.1177/0093650220961965">voting intentions</a>. A recent <a href="https://academic.oup.com/pnasnexus/advance-article/doi/10.1093/pnasnexus/pgae035/7591134">paper</a> illustrated how large language models can be deployed to craft micro-targeted ads at scale, estimating that for every 100,000 individuals targeted, at least several thousand can be persuaded.</p>
<p>We also know that not only are people bad at <a href="https://www.cell.com/iscience/pdf/S2589-0042(21)01335-3.pdf">discerning</a> deepfakes (AI-generated images of fake events) from genuine content, but studies also find that deepfakes do influence <a href="https://journals.sagepub.com/doi/full/10.1177/1940161220944364">political</a> attitudes among a small target group. </p>
<p>There are more indirect consequences of disinformation too, such as eroding public <a href="https://journals.sagepub.com/doi/full/10.1177/1461444820943878">trust</a> and <a href="https://www.pnas.org/doi/abs/10.1073/pnas.2115900119">participation</a> in elections.</p>
<p>Other than hiding under our beds and worrying, what can we do to protect ourselves?</p>
<h2>The power of prebunking</h2>
<p>Many efforts have focused on fact-checking and debunking false beliefs. In contrast, <a href="https://www.tandfonline.com/doi/full/10.1080/10463283.2021.1876983">“prebunking”</a> is a new way to prevent false beliefs from forming in the first place. Such “inoculation” involves warning people not to fall for a false narrative or propaganda tactic, together with an explanation as to why. </p>
<p>Misinforming rhetoric has clear <a href="https://journals.sagepub.com/doi/full/10.1177/09579265221076609">markers</a>, such as scapegoating or use of false dichotomies (there are many others), that people can learn to identify. Like a medical vaccine, the prebunk exposes the recipient to a “weakened dose” of the infectious agent (the disinformation) and refutes it in a way that confers protection. </p>
<p>For example, we created an online <a href="https://www.vice.com/en/article/dy8vzm/homeland-security-funded-this-game-about-destabilizing-a-small-us-town">game</a> for the Department of Homeland Security to empower Americans to spot foreign influence techniques during the 2020 presidential election. The weakened dose? <a href="https://www.nbcnews.com/news/us-news/u-s-cybersecurity-agency-uses-pineapple-pizza-demonstrate-vulnerability-foreign-n1035296">Pineapple pizza</a>.</p>
<p>How could pineapple pizza possibly be the way to tackle misinformation? It shows how bad-faith actors can take an innocuous issue such as whether or not to put pineapple on pizza, and use this to try to start a culture war. They might claim it’s offensive to Italians or urge Americans not to let anybody restrict their pizza-topping freedom.</p>
<p>They can then buy bots to amplify the issue on both sides, disrupt debate – and sow chaos. Our <a href="https://misinforeview.hks.harvard.edu/article/breaking-harmony-square-a-game-that-inoculates-against-political-misinformation/">results</a> showed that people improved in their ability to recognise these tactics after playing our inoculation game. </p>
<p>In 2020, <a href="https://www.npr.org/2022/10/28/1132021770/false-information-is-everywhere-pre-bunking-tries-to-head-it-off-early">Twitter</a> identified false election tropes as potential “vectors of misinformation” and sent out prebunks to millions of US users warning them of fraudulent claims, such as that voting by mail is not safe. </p>
<p>These prebunks armed people with a fact — that experts agree that voting by mail is reliable — and it worked insofar as the prebunks inspired confidence in the election process and motivated users to seek out more factual information. Other social media companies, such as <a href="https://medium.com/jigsaw/prebunking-to-build-defenses-against-online-manipulation-tactics-in-germany-a1dbfbc67a1a">Google</a> and <a href="https://sustainability.fb.com/blog/2022/10/24/climate-science-literacy-initiative/">Meta</a> have followed suit across a range of issues. </p>
<p>A new <a href="https://bpb-us-e1.wpmucdn.com/sites.dartmouth.edu/dist/5/2293/files/2024/02/voter-fraud-corrections-e163369556a2d7a4.pdf">paper</a> tested inoculation against false claims about the election process in the US and Brazil. It found not only that prebunking worked better than traditional debunking, but also that the inoculation improved discernment between true and false claims, effectively reduced election-fraud beliefs and improved confidence in the integrity of the upcoming 2024 elections. </p>
<p>In short, inoculation is a <a href="https://futurefreespeech.org/background-report-empowering-audiences-against-misinformation-through-prebunking/">free speech</a>-empowering intervention that can work on a global scale. When Russia was looking for a pretext to invade Ukraine, US president Joe Biden used this approach to “<a href="https://www.deseret.com/opinion/2022/3/2/22955870/opinion-how-the-white-house-prebunked-putins-lies-disinformation-joe-biden-donald-trump-russia">inoculate</a>” the world against Putin’s plan to stage and film a fabricated Ukrainian atrocity, complete with actors, a script and a movie crew. Biden declassified the intelligence and exposed the plot.</p>
<p>In effect, he warned the world not to fall for fake videos with actors pretending to be Ukrainian soldiers on Russian soil. Forewarned, the international community was <a href="https://www.economist.com/united-states/2022/02/26/deploying-reality-against-putin">unlikely</a> to fall for it. Russia found another pretext to invade, of course, but the point remains: forewarned is forearmed.</p>
<p>But we need not rely on government or tech firms to build <a href="https://harpercollins.co.uk/products/mental-immunity-infectious-ideas-mind-parasites-and-the-search-for-a-better-way-to-think-andy-norman?variant=39295503597646">mental immunity</a>. We can all <a href="https://interventions.withgoogle.com/static/pdf/A_Practical_Guide_to_Prebunking_Misinformation.pdf">learn</a> how to spot misinformation by studying the markers accompanying misleading rhetoric.</p>
<p>Remember that polio was a highly infectious disease that was eradicated through vaccination and herd immunity. Our challenge now is to build herd immunity to the tricks of disinformers and propagandists. </p>
<p>The future of our democracy may depend on it.</p><img src="https://counter.theconversation.com/content/223392/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sander van der Linden consults for or receives funding from the UK Government's Cabinet Office, The U.S. State Department, the American Psychological Association, the US Center for Disease Control, the European Commission, the Templeton World Charity Foundation, the United Nations, the World Health Organization, Google, and Meta. </span></em></p><p class="fine-print"><em><span>Lee McIntyre advises the UK Government on how to fight disinformation.</span></em></p><p class="fine-print"><em><span>Stephan Lewandowsky receives funding from the European Research Council (ERC Advanced Grant 101020961 PRODEMINFO), the
Humboldt Foundation through a research award, the Volkswagen Foundation (grant “Reclaiming individual autonomy and democratic discourse online: How to rebalance human and algorithmic decision making”), and the European Commission (Horizon 2020 grants 964728 JITSUVAX and 101094752 SoMe4Dem). He also receives funding from Jigsaw (a technology incubator created by Google) and from UK Research and Innovation (through EU Horizon replacement funding grant number 10049415). He collaborates with the European Commission's Joint Research Centre.</span></em></p>Scientists estimate that for every 100,000 people targeted with specific political ads, several thousand can be persuaded.Sander van der Linden, Professor of Social Psychology in Society, University of CambridgeLee McIntyre, Research Fellow, Center for Philosophy and History of Science, Boston UniversityStephan Lewandowsky, Chair of Cognitive Psychology, University of BristolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2158152024-02-15T01:53:26Z2024-02-15T01:53:26ZCan we be inoculated against climate misinformation? Yes – if we prebunk rather than debunk<figure><img src="https://images.theconversation.com/files/575202/original/file-20240213-24-2257zy.jpg?ixlib=rb-1.1.0&rect=239%2C58%2C4606%2C2971&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/montreal-canada-september-27-2019-woman-1547586671">Adrien Demers/Shutterstock</a></span></figcaption></figure><p>Last year, the world experienced the hottest day <a href="https://www.washingtonpost.com/climate-environment/2023/07/05/hottest-day-ever-recorded">ever recorded</a>, as we endured the first year in which temperatures were 1.5°C warmer than the pre-industrial era. The link between extreme events and climate change is <a href="https://www.worldweatherattribution.org/extreme-heat-in-north-america-europe-and-china-in-july-2023-made-much-more-likely-by-climate-change/#:%7E:text=July%202023%20saw%20extreme%20heatwaves,China%20(CNN%2C2023).">clearer than ever</a>. But that doesn’t mean climate misinformation has stopped. Far from it. </p>
<p>Misleading or incorrect information on climate still spreads like wildfire, even during the angry northern summer of 2023. Politicians falsely claimed the heatwaves were “<a href="https://www.politico.com/news/magazine/2023/08/09/phoenix-heat-wave-republicans-00110325">normal</a>” for summer. Conspiracy theorists claimed the devastating fires in Hawaii were ignited by <a href="https://www.forbes.com/sites/mattnovak/2023/08/11/conspiracy-theorists-go-viral-with-claim-space-lasers-are-to-blame-for-hawaii-fires/?sh=1d46579e4529">government lasers</a>. </p>
<p>People producing misinformation have shifted tactics, too, often moving from the old denial (claiming climate change isn’t happening) to the <a href="https://edition.cnn.com/2024/01/16/climate/climate-denial-misinformation-youtube/index.html">new denial</a> (questioning climate solutions). Spreading doubt and scepticism has hamstrung our response to the enormous threat of climate change. And with sophisticated generative AI making it easy to generate plausible lies, it could become an <a href="https://www.stockholmresilience.org/download/18.889aab4188bda3f44912a32/1687863825612/SRC_Climate%20misinformation%20brief_A4_.pdf">even bigger issue</a>.</p>
<p>The problem is, debunking misinformation <a href="https://www.nature.com/articles/s41562-023-01623-8">is often not sufficient</a> and you run the risk of giving false information <a href="https://link.springer.com/article/10.1007/s12144-024-05651-z">credibility</a> when you have to debunk it. Indeed, a catchy lie can often stay in people’s heads while sober facts are forgotten. </p>
<p>But there’s a new option: the <a href="https://interventions.withgoogle.com/static/pdf/A_Practical_Guide_to_Prebunking_Misinformation.pdf">prebunking method</a>. Rather than waiting for misinformation to spread, you lay out clear, accurate information in advance – along with describing common manipulation techniques. Prebunking often has a better chance of success, according to <a href="https://harpercollins.co.uk/products/foolproof-why-we-fall-for-misinformation-and-how-to-build-immunity-sander-van-der-linden?variant=39973011980366">recent research</a> from co-author Sander van der Linden. </p>
<h2>How does prebunking work?</h2>
<p><a href="https://engineering.stanford.edu/magazine/article/how-fake-news-spreads-real-virus">Misinformation spreads</a> much like a virus. The way to protect ourselves and everyone else is similar: through vaccination. Psychological inoculation via prebunking acts like a vaccine and reduces the probability of infection. (We focus on misinformation here, which is shared accidentally, not <a href="https://frontline.thehindu.com/news/what-is-climate-misinformation-and-why-does-it-matter-disinformation-opponents-of-climate-science-greenwashing/article67771776.ece">disinformation</a>, which is where people deliberately spread information they know to be false). </p>
<p>If you’re forewarned about dodgy claims and questionable techniques, you’re more likely to be sceptical when you come across a YouTube video claiming electric cars are dirtier than those with internal combustion engines, or a Facebook page suggesting offshore wind turbines will kill whales. </p>
<p>Inoculation is not just a metaphor. By exposing us to a weakened form of the types of misinformation we might see in the future and giving us ways to identify it, we reduce the chance false information takes root in our psyches. </p>
<p>Scientists have tested these methods with some success. In <a href="https://publichealth.jmir.org/2022/6/e34615/">one study</a> exploring ways of countering anti-vaccination misinformation, researchers created simple videos to warn people that manipulators might try to influence their thinking about vaccination with anecdotes or scary images rather than evidence. </p>
<p>They also gave people relevant facts about how low the actual injury rate from vaccines is (around two injuries per million). The result: compared to a control group, people with the psychological inoculation were more likely to recognise misleading rhetoric, less likely to share this type of content with others, and more likely to want to get vaccinated. </p>
<p>Similar studies have <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/gch2.201600008">been conducted</a> on climate misinformation. Here, one group was forewarned that politically motivated actors would try to make it seem as if there were a lot of disagreement on the causes of climate change by appealing to fake experts and bogus petitions, when in fact <a href="https://theconversation.com/the-97-climate-consensus-is-over-now-its-well-above-99-and-the-evidence-is-even-stronger-than-that-170370">97% or more</a> of climate scientists have concluded humans are causing climate change. This inoculation proved effective. </p>
<p>The success of these early studies has spurred social media companies <a href="https://sustainability.fb.com/blog/2022/10/24/climate-science-literacy-initiative/">such as Meta</a> to adopt the technique. You can now find prebunking efforts on Meta sites such as Facebook and Instagram intended to protect people against common misinformation techniques, such as cherry-picking isolated data. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/youtube-how-a-team-of-scientists-worked-to-inoculate-a-million-users-against-misinformation-189007">YouTube: how a team of scientists worked to inoculate a million users against misinformation</a>
</strong>
</em>
</p>
<hr>
<h2>Prebunking in practice</h2>
<p>A hotter world will experience increasing climate extremes and <a href="https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2020RG000726">more fire</a>. Even though many of the fires we have seen in recent years in Australia, Hawaii, Canada and <a href="https://www.theguardian.com/global-development/2024/feb/10/chile-wildfires-vina-del-mar-achupallas">now Chile</a> are the worst on record, climate misinformation actors routinely try to minimise their severity. </p>
<p>As an example, let’s prebunk claims likely to circulate after the next big fire. </p>
<p><strong>1. The claim: “Climate change is a hoax – wildfires have always been a part of nature.”</strong></p>
<p>How to prebunk it: ahead of fire seasons, scientists can demonstrate claims like this rely on the “<a href="https://newslit.org/tips-tools/news-lit-tip-false-equivalence/">false equivalence</a>” logical fallacy. Misinformation falsely equates the recent rise in extreme weather events with natural events of the past. A devastating fire 100 years ago does not disprove <a href="https://www.unep.org/resources/report/spreading-wildfire-rising-threat-extraordinary-landscape-fires">the trend</a> towards more fires and larger fires. </p>
<p><strong>2. Claim: “Bushfires are caused by arsonists.”</strong> </p>
<p>How to prebunk it: media professionals have an important responsibility here in fact-checking information before publishing or broadcasting. Media can give information on the most common causes of bushfires, from lightning (about 50%) to accidental fires to arson. <a href="https://www.theaustralian.com.au/nation/bushfires-firebugs-fuelling-crisis-as-national-arson-toll-hits-183/news-story/52536dc9ca9bb87b7c76d36ed1acf53f#:%7E:text=Victoria's%20Crime%20Statistics%20agency%20told,older%20men%20in%20their%2060s.">Media claims</a> arsonists were the main cause of the unprecedented 2019-2020 Black Summer fires in Australia were used by climate deniers worldwide, even though arson was <a href="https://www.abc.net.au/news/2020-01-11/australias-fires-reveal-arson-not-a-major-cause/11855022">far from the main cause</a>.</p>
<p><strong>3. Claim: “The government is using bushfires as an excuse to bring in climate regulations.”</strong> </p>
<p>How to prebunk it: explain this recycled conspiracy theory is likely to circulate. Point out how it was used to claim COVID-19 lockdowns were a government ploy to soften people up for <a href="https://www.nbcnews.com/news/world/climate-lockdowns-became-new-battleground-conspiracy-driven-protest-mo-rcna80370">climate lockdowns</a> (which never happened). Show how government agencies can and do communicate openly about why climate regulations <a href="https://www.dcceew.gov.au/climate-change/strategies">are necessary</a> and how they are intended to stave off the worst damage. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="firefighter putting out bushfire" src="https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=220&fit=crop&dpr=1 600w, https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=220&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=220&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=276&fit=crop&dpr=1 754w, https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=276&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=276&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">False information on bushfires can spread like a bushfire.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/australia-bushfires-fire-fueled-by-wind-1566620281">Toa55/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Misinformation isn’t going away</h2>
<p>Social media and the open internet have made it possible to broadcast information to millions of people, regardless of whether it’s true. It’s no wonder it’s a golden age for misinformation. Misinformation actors have found effective ways to sow scepticism about established science and then sell a false alternative. </p>
<p>We have to respond. Doing nothing means the lies win. And getting on the front foot with prebunking is one of the best tools we have. </p>
<p>As the world gets hotter, prebunking offers a way to anticipate new variants of lies and misinformation and counter them – before they take root. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/7-ways-to-avoid-becoming-a-misinformation-superspreader-157099">7 ways to avoid becoming a misinformation superspreader</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/215815/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Chris Turney receives funding from the Australian Research Council. He is a scientific adviser and holds shares in cleantech biographite company, CarbonScape. Chris is affiliated with the virtual Climate Recovery Institute, is a volunteer firefighter with the New South Wales Rural Fire Service (the NSW RFS), and is a Non-Executive Director on the boards of the NSW Environment Protection Authority (EPA) and deeptech incubator, Cicada.</span></em></p><p class="fine-print"><em><span>Sander van der Linden consults for or has received funding from Google, the EU Commission, the United Nations (UN), the World Health Organization (WHO), the Alfred Landecker Foundation, Omidyar Network India, the American Psychological Association, the Centers for Disease Control, UK Government, Facebook/Meta, and the Gates Foundation.</span></em></p>When we see false information circulating, we might move to debunk it. But prebunking lies and explaining manipulation techniques can work better.Christian Turney, Pro Vice-Chancellor of Research, University of Technology SydneySander van der Linden, Professor of Social Psychology in Society, University of CambridgeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2232532024-02-14T14:26:07Z2024-02-14T14:26:07ZWagner Group is now Africa Corps. What this means for Russia’s operations on the continent<p><em>In August 2023, Wagner Group leader Yevgeny Prigozhin died after <a href="https://www.theguardian.com/world/2023/oct/05/hand-grenade-explosion-caused-plane-crash-that-killed-wagner-boss-says-putin">his private jet crashed</a> about an hour after taking off in Moscow. He had been Russia’s pointman in Africa since the Wagner Group <a href="https://www.cfr.org/in-brief/what-russias-wagner-group-doing-africa">began operating on the continent in 2017</a>.</em></p>
<p><em>The group is known for <a href="https://theconversation.com/wagner-group-in-africa-russias-presence-on-the-continent-increasingly-relies-on-mercenaries-198600">deploying paramilitary forces, running disinformation campaigns and propping up influential political leaders</a>. It has had a destabilising effect. Prigozhin’s death – and his <a href="https://www.aljazeera.com/news/2023/6/24/timeline-how-wagner-groups-revolt-against-russia-unfolded">aborted mutiny</a> against Russian military commanders two months earlier – has led to a shift in Wagner Group’s activities.</em></p>
<p><em>What does this mean for Africa? <a href="https://scholar.google.com/citations?hl=en&user=fvXhZxQAAAAJ&view_op=list_works&sortby=pubdate">Alessandro Arduino’s research</a> includes mapping the evolution of <a href="https://rowman.com/ISBN/9781538170311/Money-for-Mayhem-Mercenaries-Private-Military-Companies-Drones-and-the-Future-of-War">mercenaries</a> and private military companies across Africa. He provides some answers.</em></p>
<h2>What is the current status of the Wagner Group?</h2>
<p>Following Yevgeny Prigozhin’s death, the Russian ministries of foreign affairs and defence quickly reassured Middle Eastern and African states that it would be <a href="https://jamestown.org/program/the-wagner-group-evolves-after-the-death-of-prigozhin/">business as usual</a> – meaning unofficial Russian boots on the ground would keep operating in these regions.</p>
<p><a href="https://adf-magazine.com/2024/01/with-new-name-same-russian-mercenaries-plague-africa/">Recent reports</a> on the Wagner Group suggest a <a href="https://www.cnbc.com/2024/02/12/russias-wagner-group-expands-into-africas-sahel-with-a-new-brand.html#:%7E:text=Wagner%20Group%20has%20been%20replaced,its%20new%20leader%20has%20confirmed.">transformation</a> is underway. </p>
<p>The group’s activities in Africa are now under the <a href="https://www.brookings.edu/articles/what-is-the-fallout-of-russias-wagner-rebellion/">direct supervision</a> of the Russian ministry of defence. </p>
<p>Wagner commands an estimated force of <a href="https://www.cfr.org/in-brief/what-russias-wagner-group-doing-africa#:%7E:text=Rather%20than%20a%20single%20entity%2C%20Wagner%20is%20a,of%20former%20Russian%20soldiers%2C%20convicts%2C%20and%20foreign%20nationals.">5,000 operatives</a> deployed throughout Africa, from Libya to Sudan. As part of the transformation, the defence ministry has renamed it the <a href="https://www.bloomberg.com/news/newsletters/2024-01-30/russia-raises-the-stakes-in-tussle-over-africa">Africa Corps</a>. </p>
<p>The choice of <a href="https://www.businessinsider.com/new-russian-military-unit-recruiting-former-wagner-fighters-ukraine-veterans-2023-12?r=US&IR=T">name</a> could be an attempt to add a layer of obfuscation to cover what has been in plain sight for a long time: that Russian mercenaries in Africa <a href="https://www.theglobeandmail.com/business/article-canadian-owned-mine-seized-by-russian-mercenaries-in-africa-is-helping/">serve one master</a> – the Kremlin. </p>
<p>Nevertheless, the direct link to Russia’s ministry of defence will make it difficult for Russia to argue that a foreign government has requested the services of a Russian private military company without the Kremlin’s involvement. The head of the Russian ministry of foreign affairs <a href="https://www.reuters.com/world/africa/mali-asked-private-russian-military-firm-help-against-insurgents-ifx-2021-09-25/">attempted to use this defence in Mali</a>.</p>
<p>The notion of transforming the group into the Africa Corps may have been inspired by World War II German field marshal <a href="https://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/afrika-korps">Erwin Rommel’s Afrika Korps</a>. Nazi Germany wove myths around his <a href="https://academic.oup.com/ahr/article-abstract/115/4/1243/35179?redirectedFrom=fulltext">strategic and tactical successes in north Africa</a>.</p>
<p>But will the Wagner Group under new leadership uphold the <a href="https://nationalinterest.org/feature/wagner-group-africa-where-rubber-meets-road-206202">distinctive modus operandi</a> that propelled it to infamy during Prigozhin’s reign? This included the intertwining of boots on the ground with propaganda and disinformation. It also leveraged technologies and a sophisticated network of financing to enhance combat capabilities.</p>
<h2>What will happen to Wagner’s modus operandi now?</h2>
<p>In my recent book, <a href="https://rowman.com/ISBN/9781538170311/Money-for-Mayhem-Mercenaries-Private-Military-Companies-Drones-and-the-Future-of-War">Money for Mayhem: Mercenaries, Private Military Companies, Drones and the Future of War</a>, I record Prigozhin’s adept weaving of disinformation and misinformation. </p>
<p>Numerous meticulously orchestrated campaigns flooded Africa’s online social platforms <a href="https://www.state.gov/disarming-disinformation/yevgeniy-prigozhins-africa-wide-disinformation-campaign/">promoting</a> the removal of French and western influence across the Sahel. </p>
<p>Prigozhin oversaw the creation of the Internet Research Agency, which operated as the propaganda arm of the group. It supported Russian disinformation campaigns and was sanctioned in 2018 by the US government for meddling in American elections. Prigozhin <a href="https://edition.cnn.com/2023/02/14/europe/russia-yevgeny-prigozhin-internet-research-agency-intl/index.html">admitted</a> to founding the so-called troll farm: </p>
<blockquote>
<p>I’ve never just been the financier of the Internet Research Agency. I invented it, I created it, I managed it for a long time.</p>
</blockquote>
<p>From a financial perspective, Prigozhin’s approach involved establishing a <a href="https://home.treasury.gov/news/press-releases/jy1581">convoluted network of lucrative natural resources mining operations</a>. These spanned gold mines in the Central African Republic to diamond mines in Sudan. </p>
<p>This strategy was complemented by significant cash infusions from the <a href="https://www.theguardian.com/world/2023/nov/09/how-russia-recruiting-wagner-fighters-continue-war-ukraine">Russian state</a> to support the Wagner Group’s direct involvement in hostilities. This extended from Syria to Ukraine, and across north and west Africa.</p>
<p>My research shows Prigozhin networks are solid enough to last. But only as long as the golden rule of the mercenary remains intact: guns for hire are getting paid.</p>
<p>In Libya and Mali, Russia is unlikely to yield ground due to enduring geopolitical objectives. These include generating revenue from oil fields, securing access to ports for its navy and securing its position as a kingmaker in the region. However, the Central African Republic may see less attention from Moscow. The Wagner Group’s involvement here was <a href="https://foreignpolicy.com/2024/02/07/africa-corps-wagner-group-russia-africa-burkina-faso/">primarily linked</a> to Prigozhin’s personal interests in goldmine revenues.</p>
<p>The Russian ministry of defence will no doubt seek to create a unified and loyal force dedicated to military action. But with the enduring legacy of Soviet-style bureaucracy, marked by excessive paperwork and procrastination in today’s Russian officials, one might surmise that greater allegiance to Moscow will likely come at the cost of reduced flexibility.</p>
<p>History has shown that Africa serves as a <a href="https://theconversation.com/wagner-group-mercenaries-in-africa-why-there-hasnt-been-any-effective-opposition-to-drive-them-out-207318">lucrative arena for mercenaries</a> due to various factors. These include: </p>
<ul>
<li><p>the prevalence of low-intensity conflicts reduces the risks to mercenaries’ lives compared to full-scale wars like in <a href="https://www.aljazeera.com/news/2024/2/13/russia-ukraine-war-list-of-key-events-day-720">Ukraine</a></p></li>
<li><p>the continent’s abundant natural resources are prone to exploitation</p></li>
<li><p>pervasive instability allows mercenaries to operate with relative impunity.</p></li>
</ul>
<p>As it is, countries in Africa once considered allies of the west are looking for alternatives. Russia is increasingly looking like a <a href="https://theconversation.com/five-essential-reads-on-russia-africa-relations-187568">viable candidate</a>. In January 2024, Chad’s junta leader, Mahamat Idriss Deby, met with Russian president Vladimir Putin in Moscow to “<a href="https://www.reuters.com/world/africa/putin-meets-chad-junta-leader-russia-competes-with-france-africa-2024-01-24/">develop bilateral ties</a>”. Chad had previously pursued a pro-western policy.</p>
<p>A month earlier, Russia’s deputy defence minister Yunus-Bek Yevkurov, who’s been tasked with overseeing Wagner’s activities in the Middle East and north Africa, <a href="https://www.africanews.com/2023/12/04/russian-officials-visit-niger-to-strengthen-military-ties/">visited Niger</a>. The two countries <a href="https://theconversation.com/niger-and-russia-are-forming-military-ties-3-ways-this-could-upset-old-allies-221696">agreed to strengthen military ties</a>. Niger is currently led by the military after a <a href="https://www.iiss.org/en/publications/strategic-comments/2023/the-coup-in-niger/">coup in July 2023</a>.</p>
<h2>Where does it go from here?</h2>
<p>There are a number of paths that the newly named Africa Corps could take.</p>
<ul>
<li><p>It gets deployed by Moscow to fight in conflicts meeting Russia’s geopolitical ends. </p></li>
<li><p>It morphs into paramilitary units under the guise of Russian foreign military intelligence agencies.</p></li>
<li><p>It splinters into factions, acting as heavily armed personal guards for local warlords. </p></li>
</ul>
<p>The propaganda machinery built by Prigozhin may falter during the transition. But this won’t signal the immediate disappearance of the Russian disinformation ecosystem. </p>
<p>Russian diplomatic efforts are already mobilising to preserve the status quo. This is clear from Moscow’s <a href="https://jamestown.org/program/brief-russia-deepens-counter-terrorism-ties-to-sahelian-post-coup-regimes/">backing</a> of the recent Alliance of Sahelian States encompassing Mali, Burkina Faso and Niger. All three nations are led by military rulers who overthrew civilian governments and have recently announced <a href="https://www.reuters.com/world/africa/niger-mali-burkina-faso-say-they-are-leaving-ecowas-regional-block-2024-01-28/">plans to exit</a> from the 15-member Economic Community of West African States.</p><img src="https://counter.theconversation.com/content/223253/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Alessandro Arduino is a member of the International Code of Conduct Advisory Group.</span></em></p>Will the Wagner Group under new leadership uphold the ruthless modus operandi that propelled it to the spotlight in Africa?Alessandro Arduino, Affiliate Lecturer, King's College LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2216802024-02-12T13:58:36Z2024-02-12T13:58:36ZHow memes transformed from pics of cute cats to health disinformation super-spreaders<figure><img src="https://images.theconversation.com/files/574520/original/file-20240208-20-bdfm5s.jpg?ixlib=rb-1.1.0&rect=53%2C17%2C5908%2C3909&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.gettyimages.co.uk/detail/news-photo/this-photo-illustration-created-in-washington-dc-on-july-26-news-photo/1558455551?adppopup=true">Stefani Reynolds / AFP</a></span></figcaption></figure><p>If you think memes are simply online images of <a href="https://www.msn.com/en-us/lifestyle/lifestyle-buzz/hysterical-cat-memes-that-remind-us-why-cats-rule/ar-AA1mwoLb">cute cats</a> and <a href="https://www.boredpanda.com/funny-celebrity-memes-peter-parkers-glasses-format-suckertom/">celebrities</a> with funny captions, then you might be surprised to learn that they can have a more sinister function.</p>
<p><a href="https://journals.sagepub.com/doi/full/10.1177/20563051231224729">Our research</a> shows that memes form part of a highly sophisticated strategy to spread and monetise health disinformation. </p>
<p>Memes may appear trivial, but they should be taken seriously. Dismissing them as harmless jokes is to grossly underestimate their influence – and bolsters their power to spread potentially harmful health messages.</p>
<h2>Anti-vaccine memes have a long history</h2>
<p>Memes aren’t a recent invention. They have featured prominently in anti-vaccination messaging for centuries. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/574052/original/file-20240207-30-hbm40.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/574052/original/file-20240207-30-hbm40.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/574052/original/file-20240207-30-hbm40.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=483&fit=crop&dpr=1 600w, https://images.theconversation.com/files/574052/original/file-20240207-30-hbm40.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=483&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/574052/original/file-20240207-30-hbm40.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=483&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/574052/original/file-20240207-30-hbm40.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=607&fit=crop&dpr=1 754w, https://images.theconversation.com/files/574052/original/file-20240207-30-hbm40.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=607&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/574052/original/file-20240207-30-hbm40.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=607&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A monster being fed baskets of infants and excreting them with horns; symbolising vaccination and its effects. Etching by C. Williams, 1802.</span>
<span class="attribution"><a class="source" href="https://wellcomecollection.org/works/vbux8st5">Wellcome Collection</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>When widespread smallpox immunisation began in the early 19th century, political cartoons published in print media used memes (see image below) to evoke fear about the safety of the vaccine.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/574617/original/file-20240209-26-2yorsd.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/574617/original/file-20240209-26-2yorsd.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=421&fit=crop&dpr=1 600w, https://images.theconversation.com/files/574617/original/file-20240209-26-2yorsd.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=421&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/574617/original/file-20240209-26-2yorsd.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=421&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/574617/original/file-20240209-26-2yorsd.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=529&fit=crop&dpr=1 754w, https://images.theconversation.com/files/574617/original/file-20240209-26-2yorsd.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=529&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/574617/original/file-20240209-26-2yorsd.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=529&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Cartoon from an anti-vaccination publication, titled ‘Do not vaccinate!’, 1892.</span>
<span class="attribution"><a class="source" href="https://museumandarchives.redcross.org.uk/objects/46927">The Historical Medical Library of The College of Physicians of Philadelphia</a></span>
</figcaption>
</figure>
<p>The most infamous anti-vaccination meme, however, emerged from a now discredited <a href="https://theconversation.com/the-media-mmr-and-autism-a-cautionary-tale-23321">1998 study</a> that falsely linked the measles, mumps and rubella (MMR) vaccine with autism. </p>
<p>The meme “vaccines cause autism”, which appeared on billboards and was circulated widely in the media, provoked doubts about the safety of the vaccine. <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2831678/">The study</a>, since described as an <a href="https://www.bmj.com/content/342/bmj.c7452">“elaborate fraud”</a>, was published the same year as the launch of Google’s search engine, allowing “vaccines cause autism” to become a global meme.</p>
<p>Today, memes remain an important part of the anti-vaccine movement. <a href="https://academic.oup.com/mit-press-scholarship-online/book/20281">The internet</a> enables memes to be created anonymously, repurposed and shared at scale – making them a highly effective medium for spreading health disinformation.</p>
<p>They are often used as part of a <a href="https://mediamanipulation.org/definitions/meme-war">meme war</a>, defined as “the intentional propagation of political memes on social media for the purpose of political persuasion or community building, or to strategically spread narratives and other messaging crucial to a media manipulation campaign”, according to the disinformation research platform The Media Manipulation Casebook.</p>
<p>Memes play an integral role in disinformation campaigns by facilitating fear, uncertainty and doubt.</p>
<h2>Influencers and money</h2>
<p>Our study analysed how popular anti-vaccine influencers used memes to galvanise the anti-vaccine movement during the COVID pandemic. </p>
<p>We discovered three recurring themes for encouraging vaccine refusal.</p>
<p>First, memes were used to vilify the government and social institutions, portraying them as corrupt and politically compromised. Anti-government sentiments were used to support several claims. These included claims that the government is corrupt and tyrannical; that vaccines are unsafe and ineffective and that the government is using vaccines as a form of state surveillance, for control and profit.</p>
<p>Second, memes depicted unvaccinated people as unfairly <a href="https://www.simonandschuster.com/books/Stigma/Erving-Goffman/9780671622442">stigmatised</a>. Influencers suggested the unvaccinated were being persecuted, using evocative imagery to imply a false equivalence between those who remain unvaccinated by choice and the persecution of Jews during the Holocaust. Such memes portrayed unvaccinated people as victims subject to Nazi-like sanctions and social exclusion.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/574516/original/file-20240208-16-c32rfv.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/574516/original/file-20240208-16-c32rfv.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=506&fit=crop&dpr=1 600w, https://images.theconversation.com/files/574516/original/file-20240208-16-c32rfv.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=506&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/574516/original/file-20240208-16-c32rfv.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=506&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/574516/original/file-20240208-16-c32rfv.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=636&fit=crop&dpr=1 754w, https://images.theconversation.com/files/574516/original/file-20240208-16-c32rfv.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=636&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/574516/original/file-20240208-16-c32rfv.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=636&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A meme representing the Jewish Star to draw parallels between the victimisation of the Jews during the Holocaust and the unvaccinated today.</span>
</figcaption>
</figure>
<p>Third, vaccinated people were depicted as morally and physically inferior to the unvaccinated. Vaccination was associated with infertility, low sex drive and a lack of critical thinking. Those opposed to vaccines, however, were portrayed positively as virile, attractive and intellectually superior. </p>
<p>To establish group membership and promote a sense of belonging, influencers referred to those who oppose vaccines as their “soul family”. But our research suggests there may be a more cynical motivation behind this apparently warm sentiment. </p>
<h2>Going viral – and avoiding challenge</h2>
<p>Influencers were strategic in their use of memes for political persuasion and commercial gain. </p>
<p>Several influencers provided their followers with “meme drops”: packages of memes with dissemination instructions. These memes were tested and produced in <a href="https://theconversation.com/pivot-to-coronavirus-how-meme-factories-are-crafting-public-health-messaging-135557">meme factories</a>, then distributed monthly to a mass audience via personal newsletters and websites, encouraging followers to spread anti-vaccination content. By adapting memes to current affairs, influencers increased their relevance and likelihood of going viral.</p>
<p>Memes weren’t just a method of self-promotion for anti-vaccination influencers, however. They were also a way to profit financially from pandemic anxieties.</p>
<p>Anti-vaccine sentiment became a powerful gateway to promote <a href="https://journals.sagepub.com/doi/full/10.1177/13675494211062623">potentially harmful health products</a>.
We found that memes were used to market unauthorised medical products by directing consumers to online stores. For example, we found that clicking on satirical COVID themed memes directed consumers to purchase hydroxychloroquine (a treatment for autoimmune disorders) and veterinary Ivermectin (used to treat parasites in animals). Both medicines are <a href="https://pubmed.ncbi.nlm.nih.gov/35314650/">unapproved for the treatment</a> of COVID. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/574057/original/file-20240207-22-u5cz68.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/574057/original/file-20240207-22-u5cz68.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/574057/original/file-20240207-22-u5cz68.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/574057/original/file-20240207-22-u5cz68.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/574057/original/file-20240207-22-u5cz68.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/574057/original/file-20240207-22-u5cz68.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/574057/original/file-20240207-22-u5cz68.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A meme depicting the U.S. House of Representatives that refers to elected officials as parasites and links to the anti-parasitic drug ivermectin, which some anti-vaccine influencers promoted as an alternative (and unapproved) treatment for COVID-19.</span>
</figcaption>
</figure>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/574619/original/file-20240209-18-zn1xhe.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/574619/original/file-20240209-18-zn1xhe.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/574619/original/file-20240209-18-zn1xhe.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=421&fit=crop&dpr=1 600w, https://images.theconversation.com/files/574619/original/file-20240209-18-zn1xhe.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=421&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/574619/original/file-20240209-18-zn1xhe.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=421&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/574619/original/file-20240209-18-zn1xhe.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=530&fit=crop&dpr=1 754w, https://images.theconversation.com/files/574619/original/file-20240209-18-zn1xhe.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=530&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/574619/original/file-20240209-18-zn1xhe.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=530&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The ivermectin pills sold by the influencer for US$90 were intended for animal use only.</span>
</figcaption>
</figure>
<p>Memes are powerful propagators of disinformation because they allow influencers to claim <a href="https://journals.sagepub.com/doi/full/10.1177/1329878X20951301">plausible deniability</a>. Under the protective guise of humour and satire, memes can evade fact checkers and content moderators while promoting anti-vaccine myths and unauthorised treatments. </p>
<p>Influencers promoting vaccine hesitancy use memes to build their online following, sow distrust of health authorities and profit from the promotion of unapproved medicines. This enables them to evade responsibility for any negative consequences of their messaging. </p>
<p>Memes may not look threatening – but that’s why they are such effective super spreaders of health disinformation.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>Memes have featured in anti-vaccine messaging for centuries and their power to spread harmful health disinformation is growing.</em></p>
<p>Stephanie Alice Baker, Senior Lecturer in Sociology, City, University of London; Michael James Walsh, Associate Professor in Social Sciences, University of Canberra. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Gaza is now the frontline of a global information war</h1>
<p><em>Published February 7, 2024.</em></p>
<p>The conflicts in Gaza and Ukraine have become key battlegrounds in an information war that goes far wider than their tightly drawn physical borders. Carefully crafted social media posts and other online propaganda are fighting to make people around the world take sides, harden their positions and even <a href="https://time.com/6549544/israel-and-hamas-the-media-war/">move broader public opinion</a>. </p>
<p>Propaganda has always been a weapon of war, but the digital <a href="https://www.ieworldconference.org/content/WP2023/Papers/GDRKMCC23_4.pdf">revolution</a> has increased its reach, immediacy and effectiveness, making it a far more potent tool. This makes it harder and harder for everyone, from the average person to professionals with expertise, to work out what is true and what isn’t. </p>
<p>To understand this information war, we need to understand where and how arguments and ideologies are promoted and developed online. </p>
<p>In some instances, online propaganda simply involves the framing of real events, <a href="https://www.isdglobal.org/digital_dispatches/capitalising-on-crisis-russia-china-and-iran-use-x-to-exploit-israel-hamas-information-chaos/?cmplz-force-reload=1705683801885">violent images and videos, and hate speech</a> to emphasise the guilt of one side and vindicate the other.</p>
<p>But much material relies on the creation of what’s commonly referred to as fake news. This often takes the form of fabricated stories published on social media that repurpose or mislabel real photos or videos. </p>
<p>For example, <a href="https://perma.cc/5H76-YBBP">one post</a> on X (formerly Twitter) that was viewed 300,000 times used a photo of an accidental fire at a McDonald’s restaurant in New Zealand to falsely claim the company had been attacked by pro-Palestinian protestors for its perceived support of Israel. Despite <a href="https://factcheck.afp.com/doc.afp.com.34GE6ZA">being debunked</a>, the story was still the focus of heated <a href="https://twitter.com/search?q=mcdonalds%20IDF&src=typed_query&f=live">discussions</a> on social media channels. </p>
<p>There are also <a href="https://news.sky.com/story/its-important-to-separate-the-facts-from-speculation-what-we-actually-know-about-the-viral-report-of-beheaded-babies-in-israel-12982329">reports of excerpts from video</a> games and old TikToks being shared with claims they are from real current events in Gaza, and fake government agency <a href="https://www.euronews.com/my-europe/2023/11/07/israel-hamas-war-fake-mossad-account-creates-online-confusion">social media accounts</a> posting disinformation. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-houthis-four-things-you-will-want-to-know-about-the-yemeni-militia-targeted-by-uk-and-us-military-strikes-221040">The Houthis: four things you will want to know about the Yemeni militia targeted by UK and US military strikes</a>
</strong>
</em>
</p>
<hr>
<p>Advances in AI are also playing a role. Experts in digital <a href="https://apnews.com/article/artificial-intelligence-hamas-israel-misinformation-ai-gaza-a1bb303b637ffbbb9cbc3aa1e000db47">forensics</a> have shown how AI-faked photographs of bloodied babies and abandoned children in Gaza were being widely used in November 2023. These were being published at the same time as the media was trying to investigate allegations that <a href="https://news.sky.com/story/its-important-to-separate-the-facts-from-speculation-what-we-actually-know-about-the-viral-report-of-beheaded-babies-in-israel-12982329">babies</a> had been beheaded in the Hamas attack of October 7. </p>
<p>Deepfake videos have been used in the Gaza conflict to show prominent <a href="https://fortune.com/2023/12/04/deepfakes-israel-hamas-war-ai-detection-tech-startups/">figures</a> in the Middle East saying things they never said and, it is reasonable to think, do not believe. Edited battlefield footage from Ukraine and modified footage from high-end military computer games have also been used as deepfaked “Gazan footage”, with the Associated Press keeping an extensive <a href="https://apnews.com/article/artificial-intelligence-hamas-israel-misinformation-ai-gaza-a1bb303b637ffbbb9cbc3aa1e000db47">archive</a> of examples.</p>
<p>Based on what we know about misinformation on other subjects, it’s likely that much of this online propaganda about Gaza isn’t being generated by individual supporters posting randomly on social media. <a href="https://www.zdnet.com/article/the-dark-webs-latest-offering-disinformation-as-a-service">Misinformation contractors</a> now make their <a href="https://www.theguardian.com/world/2023/feb/15/revealed-disinformation-team-jorge-claim-meddling-elections-tal-hanan">services available</a> on the dark web (an encrypted part of the web that makes it very difficult to identify users) to people looking to mount widespread campaigns.</p>
<p>Inside the dark web, those developing mis- and disinformation can use techniques honed by legitimate marketing companies in the outside world. They can <a href="https://www.tandfonline.com/doi/pdf/10.1080/23738871.2020.1797135?casa_token=BqdCKN1_d5YAAAAA:Fn7CF8QaYy62jxJFJOKOfkiK7yneTQ_Tz7PMcR7B6KHBHdBa_xHDP0A8S1VWMcLGbl-gBCTFukY">experiment</a> with messages and test the responses they receive. On dark web forums, <a href="https://dl.acm.org/doi/pdf/10.1145/3366424.3385775?casa_token=1EVIZafzOkQAAAAA:O0_x_p8Teo-BifB8gkMRs7T247ebOH08wO7QkFLgqDLLARJERRguRHwAjCdAwvDowiC3fE6AYk0">groups of activists</a> can collaborate on <a href="https://www.tandfonline.com/doi/pdf/10.1080/10584609.2019.1661888?casa_token=DFHnxgRL-mAAAAAA:3UYaV5i58bAcJ1i_5S_XWzIkOPcdO1Qe8OeIW8A5U8o-myGRu1ZXDuqiVNiMdIhqkbL8V_iaGeg">messaging</a>, imagery, timing and targeting to best effect.</p>
<p>Another source of much misinformation is <a href="https://www.nato.int/nato_static_fl2014/assets/pdf/2020/5/pdf/2005-deepportal2-troll-factories.pdf">“troll farms”</a>, which are staffed by government agents or their proxies in <a href="https://www.rollingstone.com/politics/politics-features/china-internet-trolls-russia-copycat-1234728307/">China</a>, <a href="https://www.rand.org/content/dam/rand/pubs/research_reports/RR4300/RR4373z1/RAND_RR4373z1.pdf">North Korea</a> and <a href="https://www.cnn.com/2023/02/14/europe/russia-yevgeny-prigozhin-internet-research-agency-intl/index.html">Russia</a>, among other countries. These groups identify the messages they think will change attitudes and amplify them through coordinated social media campaigns.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/israel-now-ranks-among-the-worlds-leading-jailers-of-journalists-we-dont-know-why-theyre-behind-bars-221411">Israel now ranks among the world’s leading jailers of journalists. We don't know why they're behind bars</a>
</strong>
</em>
</p>
<hr>
<p>They are increasingly using AI-driven bots programmed to spread particular narratives or key words or phrases. “Viral” bots magnify the reach of their content by getting networks of other bots to repost it, which in turn encourages <a href="https://www.econstor.eu/bitstream/10419/214101/1/IntPolRev-2019-4-1442.pdf">search engine</a> and <a href="https://oro.open.ac.uk/66155/8/70-Article%20Text-258-2-10-20190906.pdf">social media</a> algorithms that favour popular and provocative posts to give it greater prominence. </p>
<p>The dark web origins of misinformation make it much harder for governments to track and stop the people creating it, as does the use of encrypted messaging services such as WhatsApp and <a href="https://www.rollingstone.com/politics/politics-features/telegram-fueling-israel-hamas-war-misinformation-1234854300/">Telegram</a> to share content. By the time the authorities have identified a piece of misinformation, it may have been seen by many thousands of people across multiple channels. </p>
<p>The traditional media is also struggling to sift through and counter the weight of misinformation about Gaza, which appears in social media much faster than journalists can verify or debunk it. And the death of <a href="https://www.theguardian.com/world/2023/dec/21/israel-idf-accused-targeting-journalists-gaza#:%7E:text=The%20Committee%20to%20Protect%20Journalists,workers%20in%20any%20recent%20conflict">so many journalists</a> in Gaza is making accurate news harder to gather. </p>
<p>Media outlets are often <a href="https://www.theguardian.com/media/2023/oct/16/bbc-gets-1500-complaints-over-israel-hamas-coverage-split-50-50-on-each-side">accused of bias</a> in both directions. So when traditional news is seen as inadequate or hard to come by, people are more likely to turn to social media and its flood of dark web-created misinformation. </p>
<p>The information war in Gaza is a war of values and of behaviours, of establishing who is “them” and who is “us”. The war in Ukraine is exactly the same. The danger is that, in shaping public opinion, the information war could have an impact on governments and on the battlefield.</p>
<p class="fine-print"><em><span>Robert M. Dover does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>Viral bots are ‘tricking’ social media algorithms to get more coverage for disinformation.</em></p>
<p>Robert M. Dover, Professor of Intelligence and National Security, University of Hull. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Deepfakes: How to empower youth to fight the threat of misinformation and disinformation</h1>
<p><em>Published January 28, 2024.</em></p>
<figure><img src="https://images.theconversation.com/files/571710/original/file-20240126-23-6oiuw5.jpg?ixlib=rb-1.1.0&rect=0%2C73%2C8171%2C4464&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Deepfakes pose a profound social threat, and education along with technology and legislation matters for containing and addressing this. </span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure>
<p>The <a href="https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf">World Economic Forum’s Global Risks Report 2024</a> has issued a stark warning: misinformation and disinformation, primarily driven by <a href="https://www.merriam-webster.com/dictionary/deepfake">deepfakes</a>, are ranked as the most severe global short-term risks the world faces in the next two years.</p>
<p>In October 2023, the Innovation Council of Québec <a href="https://conseilinnovation.quebec/wp-content/uploads/2023/10/CIQ_Impacts_societe_IA_EDS-1.pdf">shared the same realization</a> after months of <a href="https://conseilinnovation.quebec/intelligence-artificielle/publications-de-la-reflexion-collective/">consultations</a> with experts and the public.</p>
<p>This <a href="https://arxiv.org/pdf/2208.10913.pdf">digital deception</a>, which leverages artificial intelligence and, more recently, generative AI to create hyper-realistic fabrications, extends beyond being a technological marvel; it <a href="https://www.canada.ca/en/security-intelligence-service/corporate/publications/the-evolution-of-disinformation-a-deepfake-future/democracys-new-challenge-navigating-the-era-of-generative-ai.html">poses a profound societal threat</a>. </p>
<p>In response to the gap in effectively combating deepfakes with technology and legislation alone, a <a href="https://pedagogienumerique.chaire.ulaval.ca/en/projets/lagentivite-numerique-pour-contrecarrer-la-desinformation-exploration-du-cas-des-hypertrucages/">research project</a> led by my team and me sheds light on a vital solution: human intervention through education.</p>
<h2>Technological solutions alone are inadequate</h2>
<p>Despite ongoing development of <a href="https://doi.org/10.1080/23742917.2023.2192888">deepfake detection tools</a>, these <a href="https://doi.org/10.1007/s10489-022-03766-z">technological solutions</a> are racing to catch up with the rapidly advancing capabilities of deepfake algorithms. </p>
<p><a href="https://repository.uclawsf.edu/cgi/viewcontent.cgi?article=2150&context=hastings_constitutional_law_quaterly">Legal systems</a> and <a href="https://www.canada.ca/en/security-intelligence-service/corporate/publications/the-evolution-of-disinformation-a-deepfake-future/deepfakes-a-real-threat-to-a-canadian-future.html">governments</a> are struggling to keep pace with this swift advancement of digital deception.</p>
<p>There is an urgent need for education to adopt a more serious, aggressive and strategic approach in equipping youth to combat this imminent threat.</p>
<h2>Political disinformation concerns</h2>
<p>The <a href="https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf">potential for political polarization is particularly alarming</a>. </p>
<p>Nearly three billion people are expected to vote in countries including Bangladesh, India, Indonesia, Mexico, Pakistan, the United Kingdom and <a href="https://montrealgazette.com/news/world/as-social-media-guardrails-fade-and-ai-deepfakes-go-mainstream-experts-warn-of-impact-on-elections">the United States</a> within the next two years. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-can-we-learn-from-the-history-of-pre-war-germany-to-the-atmosphere-today-in-the-u-s-220730">What can we learn from the history of pre-war Germany to the atmosphere today in the U.S.?</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://www.cbc.ca/news/politics/ai-deepfake-election-canada-1.7084398">Disinformation campaigns</a> threaten to undermine the legitimacy of newly elected governments. </p>
<p>Deepfakes of prominent figures like Palestinian American supermodel <a href="https://www.newarab.com/news/pro-israel-activists-deep-fake-bella-hadid-palestine-video">Bella Hadid</a> and <a href="https://www.boomlive.in/fact-check/video-of-jordans-queen-rania-supporting-israel-is-a-deepfake-23493">others</a> have been manipulated to falsify their political statements, exemplifying the technology’s capacity to sway public opinion and skew political narratives. </p>
<p>A deepfake of <a href="https://www.reuters.com/fact-check/greta-thunberg-vegan-grenades-tv-interview-is-deepfake-2023-10-30/#">Greta Thunberg</a> advocating for “vegan grenades” highlights the nefarious use of this technology. </p>
<p><a href="https://www.france24.com/en/technology/20230930-counterfeit-people-the-dangers-posed-by-meta-s-ai-celebrity-lookalike-chatbots">Meta’s unveiling of an AI assistant featuring celebrities’ likenesses</a> raises concerns about misuse and spreading disinformation. </p>
<h2>Financial fraud, pornographic harms</h2>
<p><a href="https://ca.investing.com/news/stock-market-news/ripple-ceo-warns-of-deepfake-scams-93CH-3178269">Deepfake videos</a> are also, unsurprisingly, <a href="https://www.nytimes.com/2023/08/30/business/voice-deepfakes-bank-scams.html">being leveraged</a> to commit <a href="https://www.lapresse.ca/actualites/2024-01-22/hypertrucage-audio/berne-par-la-fausse-voix-de-son-fils.php?fbclid=IwAR3J0MTmXOhx8tAus2Z_4_F72Xh_6WC65bg01FzwYK9i40BMfyYjM5IDcC4">financial fraud</a>.</p>
<p>The popular YouTuber <a href="https://www.bbc.com/news/technology-66993651">MrBeast was impersonated in a deepfake scam on TikTok</a>, falsely promising an iPhone 15 giveaway that led to financial deceit. </p>
<p>These incidents highlight vulnerability to sophisticated <a href="https://www.engadget.com/taylor-swift-deepfake-used-for-le-creuset-giveaway-scam-123231417.html">AI-driven frauds and scams</a> targeting people of all ages.</p>
<p><a href="https://www.wired.com/story/deepfake-porn-is-out-of-control/">Deepfake pornography</a> represents a grave concern for young people and adults alike, where individuals’ faces are <a href="https://www.dexerto.com/entertainment/tiktok-model-mortified-by-ai-deepfake-video-showing-her-getting-dressed-2392928">non-consensually superimposed onto explicit content</a>. Sexually explicit deepfake images of Taylor Swift <a href="https://www.washingtonpost.com/technology/2024/01/26/ai-deepfakes-taylor-swift-nude/">spread on social media before platforms took them down</a>. One was viewed over 45 million times.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/cyberbullying-girls-with-pornographic-deepfakes-is-a-form-of-misogyny-217182">Cyberbullying girls with pornographic deepfakes is a form of misogyny</a>
</strong>
</em>
</p>
<hr>
<h2>Policy and technology approaches</h2>
<p><a href="https://www.bbc.com/news/technology-67366311">Meta’s policy</a> now mandates political advertisers to disclose any AI manipulation in ads, a move mirrored by Google.</p>
<p><a href="https://www.rochester.edu/newscenter/audio-deepfake-detective-developing-new-sleuthing-techniques-573482/">Neil Zhang</a>, a PhD student at the University of Rochester, is developing detection tools for audio deepfakes, including advanced algorithms and watermarking techniques.</p>
<p>The U.S. has introduced several acts: the <a href="https://clarke.house.gov/clarke-leads-legislation-to-regulate-deepfakes/">Deepfakes Accountability Act of 2023</a>, the <a href="https://dean.house.gov/2024/1/representatives-dean-and-salazar-introduce-bipartisan-legislation-to-protect-americans-images-online">No AI FRAUD Act</a> safeguarding identities against AI misuse and the <a href="https://www.upi.com/Top_News/US/2023/05/05/representative-joe-morelle-legislation-bans-deepfakes/7981683327579/">Preventing Deepfakes of Intimate Images Act</a> targeting non-consensual pornographic deepfakes.</p>
<p><a href="https://www.canada.ca/en/campaign/online-disinformation.html">In Canada</a>, legislators <a href="https://www.parl.ca/legisinfo/en/bill/44-1/c-27">have proposed</a> <a href="https://ised-isde.canada.ca/site/innovation-better-canada/en/canadas-digital-charter/bill-summary-digital-charter-implementation-act-2020">Bill C-27</a> and the <a href="https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document">Artificial Intelligence and Data Act (AIDA)</a>, which emphasize AI transparency and data privacy.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/Q5V0yap77yg?wmode=transparent&start=2" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">‘Disinformation can cause harm’ video from the Communications Security Establishment (CSE), a Canadian federal agency devoted to security and intelligence.</span></figcaption>
</figure>
<p>The <a href="https://www.washingtonexaminer.com/news/2442119/uk-adopts-law-forcing-big-tech-to-rein-in-child-pornography-and-deepfakes/">United Kingdom adopted its Online Safety Bill</a>. The EU recently announced a provisional deal surrounding <a href="https://www.technologyreview.com/2023/12/11/1084942/five-things-you-need-to-know-about-the-eus-new-ai-act/">its AI Act</a>; the EU’s <a href="https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf">AI Liability Directive</a> addresses broader online safety and AI regulation issues. </p>
<p>The Indian government announced <a href="https://www.nationalheraldindia.com/science-tech/india-sixth-most-susceptible-country-to-deepfakes-can-laws-tackle-the-menace">plans to draft regulations</a> targeting deepfakes. </p>
<p>These measures reflect growing global commitments to curbing the pernicious effects of deepfakes. However, these efforts are insufficient to contain, let alone stop, the proliferation of deepfake dissemination.</p>
<h2>Research study with youth</h2>
<p><a href="https://doi.org/10.1080/10720537.2023.2294314">Research I have conducted with colleagues</a>, funded by the Social Sciences and Humanities Research Council (SSHRC) and Canadian Heritage, unveils how empowering youth with digital agency can be a force against the rising tide of disinformation fueled by deepfake and artificial intelligence technologies.</p>
<p>Our study focused on how youth perceive the impact of deepfakes on critical issues and their own process of constructing knowledge in digital contexts. We explored their capacity and willingness to effectively counterbalance disinformation.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/ZFcBZVUwm38?wmode=transparent&start=3" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Author Nadia Naffi shares some results of a study on youth digital agency and deepfakes.</span></figcaption>
</figure>
<p>The study brought together Canadian university students, aged 18 to 24, for a series of hands-on workshops, in-depth individual interviews and focus group discussions. </p>
<p>Participants created deepfakes, gaining a firsthand understanding of how easily this technology can be accessed, used and misused. This experiential learning proved invaluable in demystifying how deepfakes are generated.</p>
<p>Participants initially perceived deepfakes as an uncontrollable and inevitable part of the digital landscape. </p>
<p>Through engagement and discussion, they went from being passive deepfake bystanders to developing a deeper realization of the grave threat deepfakes pose. Critically, they also developed a sense of responsibility for preventing and mitigating the spread of deepfakes, and a readiness to counter them. </p>
<p>Students shared recommendations for concrete actions, including urging educational systems to empower youth and help them recognize their actions can make a difference. This includes:</p>
<ul>
<li><p>teaching the detrimental effects of disinformation on society;</p></li>
<li><p>providing spaces for youth to reflect on and challenge societal norms, informing them about social media policies, and outlining permissible and prohibited content;</p></li>
<li><p>training students in recognizing deepfakes through exposure to the technology behind them;</p></li>
<li><p>encouraging involvement in meaningful causes while staying alert to disinformation, and guiding youth in countering disinformation respectfully and productively.</p></li>
</ul>
<figure class="align-center ">
<img alt="Students seen at a laptop." src="https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Educational systems have an important role empowering youth and helping them recognize their actions can make a difference.</span>
<span class="attribution"><span class="source">(Allison Shelley/EDUimages)</span>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<h2>Multifaceted strategy needed</h2>
<p>Based on our research and the participants’ recommendations, we propose a multifaceted strategy to counter the proliferation of deepfakes. </p>
<p>Deepfake education needs to be integrated into educational curricula, along with nurturing critical thinking and digital agency in our youth. Youth need to be encouraged in active, yet safe, well-informed and strategic participation in the fight against malicious deepfakes in digital spaces. </p>
<p>We emphasize the importance of hands-on collaborative learning experiences. We also advocate for an interdisciplinary educational approach that marries technology, psychology, media studies and ethics to fully grasp the implications of deepfakes. </p>
<h2>The human element</h2>
<p>Our research underscores a crucial realization: The human element, particularly the role of education, is indispensable in the fight against deepfakes. We cannot rely solely on technology and legal fixes. </p>
<p>By equipping younger generations, but also every single member of our society, with the skills to critically analyze and challenge disinformation, we are nurturing a digitally literate society resilient enough to withstand the manipulative power of deepfakes. </p>
<p>To do so, we must equip people to understand they have roles and agency in safeguarding the integrity of our digital world.</p>
<p class="fine-print"><em><span>Nadia Naffi receives funding from the National Bank to support the work of her Chair in Educational Leadership (CEL) on Innovative Pedagogical Practices in Digital Contexts. Her project on disinformation is funded by the Social Sciences and Humanities Research Council of Canada (SSHRC) and Canadian Heritage.
Naffi is affiliated with the International Observatory on the Societal Impacts of AI and Digital Technology (OBVIA), the Institute Intelligence and Data (IID), the Centre de recherche et d'intervention sur la réussite scolaire (CRIRES), the Centre de recherche interuniversitaire sur la formation et la profession enseignante (CRIFPE) and the Centre de recherche et d'intervention sur l'éducation et la vie au travail (CRIEVAT).</span></em></p>
<p><em>Youth in a study went from being passive deepfake bystanders to developing a sense of responsibility and readiness to help prevent deepfakes’ spread.</em></p>
<p>Nadia Naffi, Assistant Professor, Educational Technology, Chair in Educational Leadership in the Innovative Pedagogical Practices in Digital Contexts - National Bank, Université Laval. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Disinformation is often blamed for swaying elections – the research says something else</h1>
<p><em>Published January 26, 2024.</em></p>
<figure><img src="https://images.theconversation.com/files/571138/original/file-20240124-29-k5hu7q.jpg?ixlib=rb-1.1.0&rect=50%2C175%2C5575%2C3530&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/color-image-some-people-voting-polling-435657658">Alexandru Nika/Shutterstock</a></span></figcaption></figure>
<p>Many countries <a href="https://en.wikipedia.org/wiki/List_of_elections_in_2024">face general elections</a> this year. Political campaigning will include misleading and even false information. Just days ago, <a href="https://www.bloomberg.com/news/articles/2024-01-23/fake-biden-robocall-message-in-new-hampshire-alarms-election-experts?leadSource=uverify%20wall">it was reported</a> that a robocall impersonating US president Joe Biden had told recipients not to vote in the presidential primary. </p>
<p>But can disinformation significantly influence voting? </p>
<p>There are two typical styles of election campaigning. One is positive, presenting favourable attributes of politicians and their policies, and the other is negative – disparaging the opposition. The latter <a href="https://link.springer.com/article/10.1057/s41253-019-00084-8">can backfire</a>, though, or lead to <a href="https://www.journals.uchicago.edu/doi/abs/10.1111/j.1468-2508.2007.00618.x?casa_token=kG3-EyUhaHYAAAAA:UydVoChML-dFiFC370Su8gRQmPSAMV1E0cqg0cZ2owdl-NSw4uvQvHsjXIpdxpebgYZXAYb5aDWX">voters disengaging</a> from the entire democratic process. </p>
<p>Voters are already <a href="https://www.annualreviews.org/doi/abs/10.1146/annurev.polisci.10.071905.101448?casa_token=a0oggffzdCkAAAAA:61ee1-KZtnN5OvUoordIlQChJwegerDlKfg6q5bCJZXUy-ND70U_4ZcapONNd1mibsDPVD8jjSvHYw">fairly savvy</a> – they know that campaigning tactics often include distortions and untruths. Both types of tactics, positive and negative, <a href="https://www.unhcr.org/innovation/wp-content/uploads/2022/02/Factsheet-4.pdf">can feature misinformation</a>, which loosely refers to inaccurate, false and misleading information. Sometimes this even counts as disinformation, because the details are deliberately designed to be misleading. </p>
<p>Unfortunately, recent research shows that the <a href="https://theconversation.com/misinformation-why-it-may-not-necessarily-lead-to-bad-behaviour-199123">lack of clarity in defining</a> misinformation and disinformation is a problem. There is no consensus. Scientifically and practically, this is bad. It’s hard to chart the scale of a problem if your starting point includes <a href="https://journals.sagepub.com/doi/full/10.1177/17456916221141344">vague or confused</a> concepts. This is a problem for the general public, too, since it makes it harder to decipher and trust research on the topic.</p>
<p>For example, depending on how inclusive the definition is, <a href="https://books.google.com/books?hl=en&lr=&id=hB5sEAAAQBAJ&oi=fnd&pg=PA173&dq=public+perceptions+negative+election+campaigning+%22propaganda%22&ots=i47RTsBtju&sig=JYS30Bjr6Hu17xdxRn50HXlsAPY">propaganda</a>, <a href="https://ideas.repec.org/a/taf/rcybxx/v5y2020i2p199-217.html">deep fakes</a>, <a href="https://www.aeaweb.org/articles?id=10.1257/jep.31.2.211">fake news</a> and <a href="https://pubmed.ncbi.nlm.nih.gov/35039654/">conspiracy theories</a> are all examples of disinformation. But <a href="https://edisciplinas.usp.br/pluginfile.php/4948550/mod_resource/content/1/Fake%20News%20Digital%20Journalism%20-%20Tandoc.pdf">news parody or political satire</a> can be too. </p>
<p>Unfortunately, researchers <a href="https://doi.org/10.1016/j.copsyc.2020.03.014">often fail to provide clear definitions</a>, and do not carefully compare different types of disinformation, adding uncertainty to evidence examining its effect on voting behaviour. </p>
<p>Nevertheless, let’s investigate the research so far on disinformation – generally viewed as more serious than misinformation – to see <a href="https://misinforeview.hks.harvard.edu/article/explaining-beliefs-in-electoral-misinformation-in-the-2022-brazilian-election-the-role-of-ideology-political-trust-social-media-and-messaging-apps/">how much influence it can really have</a> on the way we vote. </p>
<h2>Unconvincing findings</h2>
<p>Consider <a href="https://www.sciencedirect.com/science/article/pii/S0048733322001494">a study published in 2023</a>, investigating the role of fake news in the Italian general elections in 2013 and 2018. It used debunking websites to help create a fake news score for articles published in the run-up to the election.</p>
<p>Then the researchers analysed populist parties’ pre-election Facebook posts containing such news content. This also generated an engagement score based on the number of likes and shares of the posts. </p>
<p>Finally, scores were combined with actual electoral votes for populist parties to gauge the possible influence of fake news on such votes. The researchers estimated that fake news added a small but statistically significant electoral gain for populist parties. But they suggested that fake news could not be the sole cause of the parties’ overall increase in vote share – it only seemed to add a small amount to it.</p>
<p>Similar studies showing <a href="https://www.science.org/doi/10.1126/science.aau2706">low effects</a> of fake news on persuading voters have led some researchers <a href="https://www.nature.com/articles/s41562-020-0833-x">to argue</a> that the panic about fake news is overblown. </p>
<p>Other recent studies have looked at the potential influence of disinformation by asking people how they intended to vote and whether they believed specific pieces of disinformation. This was examined in national or presidential elections in <a href="https://www.martenscentre.eu/wp-content/uploads/2023/04/15.pdf">the Czech Republic in 2021</a>, <a href="https://www.tandfonline.com/doi/abs/10.1080/23743670.2020.1719858?casa_token=G5kslUWsQRkAAAAA:ZW_ghmhO0phxYhgElEnuToqcAK_f_3o2BLrzew-RW0tlNZBX9_UuXgricYyuzZ-qgvZVQUgfoycKXw">Kenya in 2017</a>, <a href="https://www.ajpor.org/article/12982-analysis-of-fake-news-in-the-2017-korean-presidential-election">South Korea in 2017</a>, <a href="https://www.tandfonline.com/doi/abs/10.1080/23743670.2020.1719858">Indonesia in 2019, Malaysia in 2018</a>, <a href="https://www.martenscentre.eu/wp-content/uploads/2023/04/15.pdf">Philippines in 2022</a> and <a href="https://www.ajpor.org/article/12985-does-fake-news-matter-to-election-outcomes-the-case-study-of-taiwan-s-2018-local-elections">Taiwan in 2018</a>. </p>
<p>The general finding among all these studies was that it is hard to establish a reliable causal influence of fake news on voting. One reason is that who people say they will vote for and how they actually vote can be vastly different. </p>
<p>In fact, research has gone into understanding the reasons for dramatic failures of traditional pollsters to predict elections and referendums <a href="https://journalofbigdata.springeropen.com/articles/10.1186/s40537-021-00525-8">in Argentina in 2019</a>, <a href="https://www.cambridge.org/core/journals/canadian-journal-of-political-science-revue-canadienne-de-science-politique/article/abs/quebec-2018-a-failure-of-the-polls/97380BA7567B11B95E88FAA2149BDC51">Quebec in 2018</a>, <a href="https://www.researchgate.net/publication/319982710_Collective_failure_Lessons_from_combining_forecasts_for_the_UK's_referendum_on_EU_membership">the UK in 2016</a> and <a href="https://digitalcommons.unl.edu/sociologyfacpub/543/">the US in 2016</a>. For many reasons, people didn’t reveal their actual voting intentions to pollsters and researchers. </p>
<h2>Who is susceptible?</h2>
<p>What about specific groups of voters, though? Might there be some that are more influenced by disinformation than others? Political affiliation doesn’t seem to matter. People tend <a href="https://doi.org/10.1016/j.copsyc.2020.03.014">to rate fake news as accurate</a> when it’s in line with their own political beliefs. For instance, in the 2016 US presidential elections, both Hillary Clinton and Donald Trump supporters <a href="https://doi.org/10.1111/ajpy.12233">were equally likely</a> to rate fake news about their opposition as accurate. </p>
<p>How about undecided voters? Some studies show that undecided voters are more likely than decided voters to <a href="https://doi.org/10.1080/1369118X.2021.1883706">consider fake news headlines as credible</a>. But the opposite has also been shown – that they are <a href="https://www.aeaweb.org/articles?id=10.1257/jep.31.2.211">less susceptible to political fake news</a>. </p>
<p>Still, to maximise the influence of disinformation in an election, undecided voters would be the obvious target, especially in close-run elections. But accurately profiling undecided voters <a href="https://doi.org/10.1111/rssa.12414">is difficult</a> – especially since people are cautious in revealing their voting intentions and the reasons behind them.</p>
<p>And if politicians or campaign staff use <a href="https://journals.sagepub.com/doi/abs/10.1177/1369148119842038">disinformation in aggressive negative campaigning</a> to sway undecided voters, they can end up increasing disengagement in the election process – making some people even more undecided.</p>
<p>Ultimately, most research suggests that fake news <a href="https://journals.sagepub.com/doi/full/10.1177/17456916221141344">is more likely to enhance existing beliefs</a> and views rather than <a href="https://link.springer.com/article/10.1007/s00146-020-00980-6">radically change voting intentions</a> of the undecided. </p>
<p>Another issue that often gets ignored is a phenomenon known in psychology as <a href="https://psycnet.apa.org/record/2001-16230-004">the third-person effect</a> – that we think that others are more persuadable, and even gullible, than ourselves. </p>
<p>So when it comes to who is susceptible to disinformation, it is likely that those studying it, as well as those participating in the studies, <a href="https://misinforeview.hks.harvard.edu/article/the-presumed-influence-of-election-misinformation-on-others-reduces-our-own-satisfaction-with-democracy/">assume they are immune</a> but that everyone else – such as supporters of the opposing political party – is not, making the evidence harder to interpret. </p>
<p>It would be naive to say that disinformation, <a href="https://books.google.co.uk/books/about/Politics_and_Propaganda.html?id=FTrgh74moswC">such as political propaganda</a>, doesn’t have any influence on voting. But we should be careful not to assign disinformation as the sole explanation for election results that go against predictions.</p>
<p>If we assign disinformation such a high level of influence, we ultimately deny people’s agency in making free voting choices. And studies show that <a href="https://www.researchgate.net/publication/375301055_Folk_beliefs_about_where_manipulation_outside_of_awareness_occurs_and_how_much_awareness_and_free_choice_is_still_maintained">we are aware</a> that manipulative methods are used on us. Still, we all judge that we can maintain <a href="https://psycnet.apa.org/record/2023-13856-001">an ability to make our own choice</a> when voting.</p>
<p>It’s important to take this seriously. Our belief in free will is ultimately a reason so many of us back democracy in the first place. Denying it can arguably be more damaging than a few fake news posts lurking on social media.</p>
<p class="fine-print"><em><span>Magda Osman receives funding from Research England, ESRC, Wellcome Trust, and Turing Institute. </span></em></p>Most studies suggest that fake news is more likely to enhance existing beliefs and views rather than radically change voting intentions of those who are undecided.Magda Osman, Principal Research Associate in Basic and Applied Decision Making, Cambridge Judge Business SchoolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2213552024-01-18T16:49:32Z2024-01-18T16:49:32ZThe maths of rightwing populism: easy answers + confidence = reassuring certainty<figure><img src="https://images.theconversation.com/files/570085/original/file-20240118-17-no0zv8.jpg?ixlib=rb-1.1.0&rect=149%2C77%2C3426%2C1820&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock/Pictrider</span></span></figcaption></figure><p>Rightwing populists appear to be enjoying a <a href="https://theconversation.com/iowa-was-different-this-time-even-if-the-outcome-was-as-predicted-221094">surge</a> across the <a href="https://theconversation.com/far-right-poised-to-score-big-at-next-european-elections-214702">western world</a>. For those who don’t support these parties, their appeal can be baffling and unsettling. They appear to play on people’s fears and offer somewhat trivial answers to difficult issues.</p>
<p>But the mathematics of human inference and cognition can help us understand what makes this a winning formula.</p>
<p>Because politics largely boils down to communication, the mathematics of communication theory can help us understand why voters are drawn to parties that use simple, loud messaging in their campaigning – as well as how they get away with using highly questionable messaging. Traditionally, this is the theory that enables us to listen to radio broadcasts and <a href="https://www.britannica.com/biography/Claude-Shannon#ref666143">make telephone calls</a>. But American mathematician <a href="https://en.wikipedia.org/wiki/Norbert_Wiener">Norbert Wiener</a> went so far as to <a href="https://www.goodreads.com/book/show/153954.The_Human_Use_of_Human_Beings">argue</a> that social phenomena can only be understood via the theory of communication.</p>
<p>Wiener tried to explain different aspects of society by evoking a concept in science known as the <a href="https://www.britannica.com/science/second-law-of-thermodynamics">second law of thermodynamics</a>. In essence, this law says that over time, order will turn into disorder, or, in the present context, reliable information will be overwhelmed by confusion, uncertainties and noise. In mathematics, the degree of disorder is often measured by a quantity called <a href="https://www.britannica.com/science/entropy-physics">entropy</a>, so the second law can be rephrased by saying that over time, and on average, entropy will increase.</p>
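Entropy has a standard definition in Shannon’s communication theory, which the paragraph above draws on. For outcomes occurring with probabilities <em>p<sub>i</sub></em>, it is (a textbook formula, not spelled out in the article):

```latex
H = -\sum_{i} p_i \log_2 p_i ,
\qquad
H(p) = -p \log_2 p - (1-p)\log_2 (1-p) \quad \text{(binary case)}
```

The binary form reaches its maximum of one full bit of uncertainty when the two alternatives are equally likely, at <em>p</em> = 1/2.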
<p>One of Wiener’s arguments is that as technologies for communication advance, people will circulate more and more inessential “noisy” information (think Twitter, Instagram and so on), which will overshadow facts and important ideas. This is becoming more pronounced with AI-generated disinformation. </p>
<p>The effect of the second law is significant in predicting the future form of society over a period of decades. But <a href="https://www.nature.com/articles/s41598-023-43403-4">another aspect</a> of communication theory also comes into play in the more immediate term.</p>
<p>When we analyse information about a topic of interest, we will reach a conclusion that leaves us, on average, with the smallest uncertainty about that topic. In other words, our thought process attempts to minimise entropy. This means, for instance, when two people with opposing views on a topic are presented with an article on that subject, they will often take away different interpretations of the same article, with each confirming the validity of their own initial view. The reason is simple: interpreting the article as questioning one’s opinion will inevitably raise uncertainty.</p>
<p>In psychology, this effect is known as <a href="https://thedecisionlab.com/biases/confirmation-bias">confirmation bias</a>. It is often interpreted as an irrational or illogical trait of our behaviour, but we now understand the science behind it by borrowing concepts from communication theory. I call this a “<a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2022.797904/full">tenacious Bayesian</a>” behaviour because it follows from the <a href="https://plato.stanford.edu/entries/bayes-theorem/">Bayes theorem</a> of probability theory, which tells us how we should update our perspectives of the world as we digest noisy or uncertain information.</p>
<p>A corollary of this is that if someone has a strong belief in one scenario which happens to represent a false reality, then even if factual information is in circulation, it will take a long time for that person to change their belief. This is because a conversion from one certainty to another typically (but not always) requires a path that traverses uncertainties we instinctively try to avoid.</p>
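This “tenacious Bayesian” dynamic can be sketched numerically. The sketch below is illustrative only: the prior (99% confidence in a false claim) and the diagnosticity of each piece of corrective evidence (70% likely if the claim is false, 30% if true) are assumed numbers, not figures from the article.

```python
# Illustrative sketch: an agent holding a strong belief updates it via
# Bayes' theorem as moderately diagnostic contrary evidence arrives.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

belief = 0.99   # agent is 99% sure of a claim that is in fact false
updates = 0
while belief > 0.5:
    # each corrective observation: 30% likely if the claim were true,
    # 70% likely if it is false
    belief = bayes_update(belief, p_evidence_if_true=0.3,
                          p_evidence_if_false=0.7)
    updates += 1

print(updates)  # it takes six consecutive contrary observations to flip
```

Even under steady corrective evidence, the strong prior dominates for several rounds – the path from one certainty to another passes through the uncertainty the article says we instinctively avoid.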
<h2>Polarised society</h2>
<p>When the tenacious Bayesian effect is combined with Wiener’s second law, we can understand how society becomes polarised. The second law says there will be a lot of diverging information and noise around us, creating confusion and uncertainty. We are drawn to information that offers greater certainty, even if it is flawed. </p>
<p>For a binary issue, the greatest uncertainty happens when the two alternatives seem equally likely – and are therefore difficult to choose between. But for an individual person who believes in one of the two alternatives, the path of least uncertainty is to hold steady on that belief. So in a world in which any information can easily be disseminated far and wide but in which people are also immovable, society can easily be polarised.</p>
<h2>Where are the leftwing populists?</h2>
<p>If a society is maximally polarised, then we should find populists surging on both the left and right of the political spectrum. And yet that is not the case at the moment. The right is more dominant. The reason for this is, in part, that the left is not well-positioned to offer certainty. Why? Historically, socialism has rarely been implemented in running a country – not even the Soviet Union or China managed to do so. </p>
<p>At least for now, the left (or centrists, for that matter) also seem a lot more cautious about knowingly offering unrealistic answers to complex problems. In contrast, the right offers (often false) certainty with confidence. It is not difficult to see that in a noisy environment, the loudest are heard the most. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-have-authoritarianism-and-libertarianism-merged-a-political-psychologist-on-the-vulnerability-of-the-modern-self-218949">Why have authoritarianism and libertarianism merged? A political psychologist on 'the vulnerability of the modern self'</a>
</strong>
</em>
</p>
<hr>
<p>Today’s politics plays out against a backdrop of uncertainties: wars in Ukraine and Gaza with little prospect of exit strategies in sight; the continued cost of living crisis; energy, food and water insecurity; migration; and so on. Above all looms the impact of the climate crisis.</p>
<p>The answer to this uncertainty, according to rightwing populists, is to blame everything on outsiders. Remove migrants and all problems will be solved – and all uncertainties eradicated. True or false, the message is simple and clear. </p>
<p>In conveying this message, it is important to instil in the public an exaggerated fear of the impact of migration, so that the message gives people a false sense of certainty. What if there are no outsiders? Then create one. Use the culture war to label the “experts” (judges, scholars, etc.) as the enemy of the people.</p>
<p>For populists to thrive, society needs to be divided so that people can feel certain about where they belong – and so that those on the opposing side of the argument can be ignored. </p>
<p>The problem, of course, is that there are rarely simple solutions to complex issues. Indeed, a political party campaigning for a tough migration policy but weak climate measures is arguably enabling mass migration on a scale unseen in modern history, because climate change will make <a href="https://www.ipcc.ch/srccl/">many parts of the world uninhabitable</a>.</p>
<p>Wiener was already arguing in 1950 that we will pay the price for our actions at a time when it is most inconvenient to do so. Whatever needs to be done to solve complex societal issues, those who wish to implement what they believe are the right measures need to be aware that they have to win an election to do that – and that voters respond to simple and positive messages that will reduce the uncertainties hanging over their thoughts.</p>
<p class="fine-print"><em><span>Dorje C Brody receives funding from the UK Engineering and Physical Science Research Council (EP/X019926/1).</span></em></p>In an uncertain world our natural instinct is to seek out answers that reassure, even when they don’t make sense.Dorje C. Brody, Professor of Mathematics, University of SurreyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2156232024-01-17T17:49:47Z2024-01-17T17:49:47ZSome people who share fake news on social media actually think they’re helping the world<figure><img src="https://images.theconversation.com/files/569339/original/file-20240115-25-gr73c2.jpg?ixlib=rb-1.1.0&rect=692%2C617%2C7020%2C4634&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">'You're welcome!'</span> <span class="attribution"><span class="source">Shutterstock/Roman Samborskyi</span></span></figcaption></figure><p><a href="https://www.weforum.org/publications/global-risks-report-2024/digest/">Misinformation</a> is the number one risk facing society over the next two years, according to the World Economic Forum. With key elections due in the US, UK and many other nations this year, an onslaught of political misinformation can be expected.</p>
<p>Some of this material is distributed through paid advertising on social media, like the AI-generated “deep fake” videos of British prime minister <a href="https://www.theguardian.com/technology/2024/jan/12/deepfake-video-adverts-sunak-facebook-alarm-ai-risk-election">Rishi Sunak</a> doing the rounds. However, we know that much of <a href="https://www.science.org/doi/full/10.1126/science.aap9559">the spread of false material</a> is due to the actions of individual social media users.</p>
<p><a href="https://www.pewresearch.org/politics/wp-content/uploads/sites/4/2022/06/PDL_06.16.22_Twitter_Politics_full_report.pdf">Many people</a> share political news online. Inevitably some of that news is false. Fake political news is, after all, common. It’s not unusual to see it as you scroll through your social media feeds.</p>
<p>One of the main ways in which fake news spreads is when people share it to their own social networks. Some genuinely believe the story to be true and share it by mistake. We’ve <a href="https://www.sciencedirect.com/science/article/pii/S0191886921004487">found</a> that around 20% of people report having shared a story they later found out was untrue. </p>
<p>However, like <a href="https://misinforeview.hks.harvard.edu/wp-content/uploads/2023/08/littrell_knowingly_sharing_false_political_info_20230825.pdf">other researchers</a>, we also find that around one in 10 people admit sharing political information that they knew at the time was untrue. </p>
<p>Why would these people deliberately spread lies? Are they deliberately setting out to do harm? Or do they perhaps think it’s acceptable to spread because it supports ideas they hold strongly and <a href="https://www.sciencedirect.com/science/article/pii/S2352250X24000010">“might as well be true”</a>?</p>
<h2>Meaning well, meaning ill</h2>
<p>Only a minority of people share false information but, given the vast scale of social media platforms, even that can lead to fake stories spreading like wildfire. This makes it harder for people to get news they can trust and leads people to believe things that simply aren’t true.</p>
<p><a href="https://journals.sagepub.com/doi/10.1177/20563051231192032">Our research</a> revealed that some people shared fake stories because they thought they were funny (one said because they thought it was “ludicrous”, for example). Others shared the misinformation specifically to highlight that it was false. Others minimised the harm they were doing by suggesting it wasn’t actually that serious if they shared fake news. </p>
<p>Our findings reveal that some people behave in an antisocial way when it comes to fake news, deliberately sharing false information to achieve some personal objective, even if it means attacking other people or trying to manipulate them. Sharing false stories in this way can be used, for example, to affect people’s political views, whether by supporting a smear campaign against a politician or by boosting a politician’s clout. </p>
<p>People driven by such reasons seem not to be bothered by whether the news they are sharing is true or false, and may even view sharing news as a means of manipulation. At the very least, these people are being uncaring about the harmful effects of their actions. </p>
<figure class="align-center ">
<img alt="A phone showing the news with the word 'fake' written across it." src="https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/569340/original/file-20240115-17-w57daw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Fake news is everywhere, and difficult to spot.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>In sharp contrast to these, some people share political news, whether true or false, with the best intentions. They seem to see sharing fake news as a way to make the world better. </p>
<p>“Good” reasons for sharing can reflect a desire to protect others (for example, by alerting them to potential dangers), to encourage people to “do the right thing”, or even to become socially or politically engaged. Other people may use news sharing as a force for good by pointing out that a particular story is false. Ironically though, that means the false story may spread even further. </p>
<h2>Dealing with fake news</h2>
<p>People can have strong reactions when they see a friend or family member sharing material they know is untrue. This is not a big surprise because misinformation tends to rely on <a href="https://www.nature.com/articles/s41599-022-01174-9">negative sentiment and to appeal to our morals</a>. It is the stories that make us emotional (for example by scaring us) that go viral in the first place.</p>
<p>However, the next time you see someone sharing a story you know to be false, and you think about giving them a piece of your mind or blocking them, remember that they may be unaware that they were doing harm and may even have been trying to do good. It may be that they were thinking only about themselves, but it may also be they have shared that story thinking that it benefits others. </p>
<p>Sharing false stories, even when done with the best intentions, may have implications that <a href="https://theconversation.com/disinformation-campaigns-are-undermining-democracy-heres-how-we-can-fight-back-217539">go beyond</a> people’s personal goals for sharing. When people expose others to misinformation in order to debunk it, they are potentially risking unintended <a href="https://journals.sagepub.com/doi/full/10.1177/1461444820943878">political consequences</a> such as increasing cynical perceptions towards election campaigns and politicians. </p>
<p>One way to reduce this risk and support the battle against misinformation is to follow <a href="https://www.who.int/campaigns/connecting-the-world-to-combat-coronavirus/how-to-report-misinformation-online">guidance on how to report false stories</a>, for example by marking them as false on the platform.</p>
<p>And if you yourself are tempted to share material that might not be true — for whatever reason — it is best to find other ways to get your message across.</p>
<p class="fine-print"><em><span>Tom Buchanan receives funding from The Leverhulme Trust. </span></em></p><p class="fine-print"><em><span>Deborah Husbands and Rotem Perach do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>We asked people why they shared misinformation and a lot of people do it with good intentions.Rotem Perach, Lecturer in Psychology, University of WestminsterDeborah Husbands, Reader, Social Sciences, University of WestminsterTom Buchanan, Professor of Psychology, University of WestminsterLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2167252024-01-17T13:38:02Z2024-01-17T13:38:02ZReining in AI means figuring out which regulation options are feasible, both technically and economically<figure><img src="https://images.theconversation.com/files/569664/original/file-20240116-21-278zrn.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5097%2C2880&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">One form of regulating AI is watermarking its output – the equivalent of AI signing its work.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/an-artificial-intelligence-paints-a-portrait-of-a-royalty-free-image/1427639738">R_Type/iStock via Getty Images</a></span></figcaption></figure><p>Concern about generative artificial intelligence technologies <a href="https://www.pewresearch.org/short-reads/2023/08/28/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life/">seems to be growing</a> almost as fast as the spread of the technologies themselves. 
These worries are driven by unease about the possible spread of disinformation at a scale never seen before, and fears of loss of employment, loss of control over creative works and, more futuristically, AI becoming so powerful that it causes extinction of the human species. </p>
<p>The concerns have given rise to calls for regulating AI technologies. Some governments, for example <a href="https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/">the European Union</a>, have responded to their citizens’ push for regulation, while some, such as the U.K. and India, are taking a more laissez-faire approach.</p>
<p>In the U.S., the White House <a href="https://theconversation.com/biden-administration-executive-order-tackles-ai-risks-but-lack-of-privacy-laws-limits-reach-216694">issued an executive order</a> on Oct. 30, 2023, titled Safe, Secure, and Trustworthy Artificial Intelligence. It sets out guidelines to reduce both immediate and long-term risks from AI technologies. For example, it asks AI vendors to share safety test results with the federal government and calls for Congress to enact consumer privacy legislation in the face of AI technologies soaking up as much data as they can get. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/UF6tmnTUyL4?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The Biden administration’s executive order on artificial intelligence set some key standards, but most of the work of regulating AI falls to Congress and the states.</span></figcaption>
</figure>
<p>In light of the drive to regulate AI, it is important to consider which approaches to regulation are feasible. There are two aspects to this question: what is technologically feasible today and what is economically feasible. It’s also important to look at both the training data that goes into an AI model and the model’s output.</p>
<h2>1. Honor copyright</h2>
<p>One approach to regulating AI is to limit the training data to public domain material and copyrighted material that the AI company has secured permission to use. An AI company can decide precisely what data samples it uses for training and can use only permitted material. This is technologically feasible.</p>
<p>It is partially economically feasible. The quality of the content that AI generates depends on the amount and richness of the training data, so it is economically advantageous for an AI vendor not to limit itself to content it has permission to use. Nevertheless, some generative AI companies today market the fact that they use only permitted content as a selling point. One example is Adobe with its <a href="https://firefly.adobe.com/">Firefly image generator</a>.</p>
<h2>2. Attribute output to a training data creator</h2>
<p>Attributing the output of AI technology to a specific creator – artist, singer, writer and so on – or group of creators so they can be compensated is another potential means of regulating generative AI. However, the complexity of the AI algorithms used makes it <a href="https://doi.org/10.48550/arXiv.1312.6199">impossible to say which input samples the output is based on</a>. Even if that were possible, it would be impossible to determine the extent to which each input sample contributed to the output. </p>
<p>Attribution is an important issue because it’s likely to determine whether creators or the license holders of their creations will embrace or fight AI technology. The 148-day <a href="https://theconversation.com/what-are-hollywood-actors-and-writers-afraid-of-a-cinema-scholar-explains-how-ai-is-upending-the-movie-and-tv-business-210360">Hollywood screenwriters’ strike</a> and the <a href="https://www.theguardian.com/culture/2023/oct/01/hollywood-writers-strike-artificial-intelligence">resultant concessions they won</a> as protections from AI showcase this issue.</p>
<p>In my view, this type of regulation, which is at the output end of AI, is technologically not feasible. </p>
<h2>3. Distinguish human- from AI-generated content</h2>
<p>An immediate worry with AI technologies is that they will unleash automatically generated disinformation campaigns. This has already happened to various extents – for example, <a href="https://thehackernews.com/2023/12/russias-ai-powered-disinformation.html">disinformation campaigns during the Ukraine-Russia war</a>. This is an important concern for democracy, which relies on a public informed through reliable news sources. </p>
<p>There is a lot of activity in the startup space aimed at developing technology that can tell AI-generated content from human-generated content, but so far, this technology is <a href="https://www.nytimes.com/2023/05/18/technology/ai-chat-gpt-detection-tools.html">lagging behind generative AI technology</a>. The current approach focuses on identifying the patterns of generative AI, which is almost by definition fighting a losing battle.</p>
<p>This approach to regulating AI, which is also at the output end, is technologically not currently feasible, though rapid progress on this front is likely. </p>
<h2>4. Attribute output to an AI firm</h2>
<p>It is possible to attribute AI-generated content as coming from a specific AI vendor’s technology. This can be done through the well-understood and mature technology of <a href="https://www.cisa.gov/news-events/news/understanding-digital-signatures">cryptographic signatures</a>. AI vendors could cryptographically sign all output from their systems, and anyone could verify those signatures. </p>
<p>This technology is already embedded in basic computational infrastructure – for example, when a web browser verifies a website you are connecting to. Therefore, AI companies could easily deploy it. It’s a different question whether it’s desirable to rely on AI-generated content from only a handful of big, well-established vendors whose signatures can be verified. </p>
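<p>The sign-and-verify round trip described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual scheme: it uses Python’s standard-library <code>hmac</code> as a stand-in for the asymmetric signatures (such as Ed25519) a real deployment would use, and the key and content strings are placeholders.</p>

```python
import hmac
import hashlib

# Placeholder key for the sketch. In a real asymmetric scheme the vendor
# would keep a private signing key and publish a public verification key;
# HMAC here is a symmetric stdlib-only stand-in for that round trip.
VENDOR_KEY = b"vendor-secret-key"

def sign_output(text: str) -> str:
    """Return a hex signature the vendor attaches to its generated content."""
    return hmac.new(VENDOR_KEY, text.encode(), hashlib.sha256).hexdigest()

def verify_output(text: str, signature: str) -> bool:
    """Check the content came from the vendor and was unchanged since signing."""
    expected = sign_output(text)
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(expected, signature)

content = "This paragraph was produced by an AI system."
sig = sign_output(content)
assert verify_output(content, sig)            # genuine content verifies
assert not verify_output(content + "!", sig)  # any tampering breaks the signature
```

<p>The design point that makes this workable as regulation is the asymmetry in real schemes: the vendor keeps the signing key secret while the verification key is public, so third parties can check provenance without being able to forge signatures.</p>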
<p>So this form of regulation is both technologically and economically feasible. The regulation is geared toward the output end of AI tools. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/cmNFM6AW_Vk?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The stakes are high for being able to distinguish between AI-generated and human-generated content.</span></figcaption>
</figure>
<p>It will be important for policymakers to understand the possible costs and benefits of each form of regulation. But first they’ll need to understand which of these is technologically and economically feasible.</p>
<p class="fine-print"><em><span>Saurabh Bagchi receives research funding from a variety of federal government agencies and a few corporate entities. The total list of current and past funders can be found from his CV which is at:
<a href="https://bagchi.github.io/vita.html">https://bagchi.github.io/vita.html</a>
He is a Professor at Purdue University, the CTO of a cloud computing startup, KeyByte, and is a Board of Governors member of the IEEE Computer Society.</span></em></p>There are many ideas about how to regulate AI, but not all of them are technologically feasible, and some of those that are won’t fly economically.Saurabh Bagchi, Professor of Electrical and Computer Engineering, Purdue UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2185102024-01-09T19:15:38Z2024-01-09T19:15:38ZWanting to ‘move on’ is natural – but women’s pandemic experiences can’t be lost to ‘lockdown amnesia’<p>The COVID-19 pandemic was – and continues to be – hugely disruptive and stressful for individuals, communities and countries. Yet many seem desperate to close the chapter entirely, almost as if it had never happened. </p>
<p>This desire to <a href="https://www.washingtonpost.com/wellness/2023/03/13/brain-memory-pandemic-covid-forgetting/">forget and move on</a> – labelled “<a href="https://www.ft.com/content/be70b24e-8ca0-4681-a23b-0c59c69a2616">lockdown amnesia</a>” by some – is understandable at one level. But it also risks missing the opportunity to learn from what happened.</p>
<p>And while various official enquiries and royal commissions have been established to examine the wider government responses (including in New Zealand), the experiences of ordinary people are equally important to understand.</p>
<p>As researchers interested in women and gender roles, we wanted to capture some of this. For the past three years, our research has focused on what happened to everyday women during this period of uncertainty and disruption – and what lessons might be learned.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/QNZac2mmi7o?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Pandemic amnesia</h2>
<p>Individual memory can become vague as time goes on. But this can also be affected by broader narratives (in the media or official responses) that overwrite our own recollections of the pandemic.</p>
<p>Political calls to “<a href="https://www.mdpi.com/2076-0760/11/8/340">live with the virus</a>”, and <a href="https://www.rnz.co.nz/national/programmes/mediawatch/audio/2018849569/sick-and-tired-of-the-sickness">media hesitancy</a> to publish COVID-related stories due to perceived audience fatigue, can create a collective sense of needing to “move on”. Looking back can be dismissed as questionable, or even attacked outright.</p>
<p>Indeed, misinformation and disinformation have been used, <a href="https://www.routledge.com/Risk/Lupton/p/book/9781032327006">in the words</a> of leading pandemic social scientist Deborah Lupton, to “challenge science and manufacture dissent against attempts to tackle [such] crises”.</p>
<p>But as the memory scholar <a href="https://journals.sagepub.com/doi/pdf/10.1177/17506980231184563?casa_token=Wrs8pMKoFqcAAAAA:N9DN9rb9XNopHSIF2af2q8z4Ue457oW6l-mqPtBlmUQSy6dw53DYhQWxgk8BLe3SyWIzlkXTnvAPrYw">Sydney Goggins has put it</a>, such “public forgetting leads to a cascade of impacts on policy and social wellbeing”.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/jacinda-arderns-resignation-gender-and-the-toll-of-strong-compassionate-leadership-198152">Jacinda Ardern's resignation: gender and the toll of strong, compassionate leadership</a>
</strong>
</em>
</p>
<hr>
<h2>A gendered pandemic</h2>
<p>Responding to the rapidly changing social, cultural and economic impacts of the pandemic, feminist scholars have highlighted the particular <a href="https://www.frontiersin.org/Articles/10.3389/Fgwh.2020.588372/Full">physical and emotional toll</a> on women worldwide.</p>
<p>This has included <a href="https://academic.oup.com/biomedgerontology/article/77/Supplement_1/S31/6463712">social isolation and loneliness</a>, increased <a href="https://www.tandfonline.com/doi/full/10.1080/15487733.2020.1776561?src=recsys">domestic and emotional labour</a>, the rise in <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7262164/">domestic and gender-based violence</a>, <a href="https://www.tandfonline.com/doi/full/10.1080/13545701.2021.1876906">job losses and financial insecurity</a>. Black, Indigenous, minority and migrant women have <a href="https://journals.sagepub.com/doi/full/10.1177/08912432211001302">felt these impacts</a> particularly keenly.</p>
<p>The <a href="https://search.informit.org/doi/abs/10.3316/informit.777013552598989">same trends</a> have been observed in Aotearoa New Zealand. And whereas some countries embraced pandemic recovery strategies that recognised these gender differences, this <a href="https://theconversation.com/nz-budget-2021-women-left-behind-despite-the-focus-on-well-being-161187">hasn’t been the case</a> in New Zealand.</p>
<p>The gendered abuse of women leaders – former prime minister <a href="https://theconversation.com/jacinda-arderns-resignation-gender-and-the-toll-of-strong-compassionate-leadership-198152">Jacinda Ardern</a> and scientist <a href="https://www.rnz.co.nz/national/programmes/atthemovies/audio/2018913516/review-ms-information">Siouxsie Wiles</a>, for example – has been well documented. But the experiences of ordinary women, their struggles and strategies to look after themselves and others, have had much less attention.</p>
<h2>Experiences of everyday women</h2>
<p>Our study involved 110 women in Aotearoa New Zealand. We set out to understand how they adapted their everyday practices – work, leisure, exercise, sport – to maintain or regain wellbeing, social connections and a sense of community.</p>
<p>Despite many differences between the women in our sample, there were also shared experiences. We referred to the ruptures in the patterns, rhythms and routines of their lives as “<a href="https://onlinelibrary.wiley.com/doi/full/10.1111/gwao.12987">gender arrhythmia</a>”.</p>
<p>The women responded to the psycho-social and physical challenges, such as disrupted sleep or weight changes, by creating counter-rhythms – taking up hobbies, exercising, changing diet.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-pandemics-disproportionate-impact-on-women-is-derailing-decades-of-progress-on-gender-equality-180941">The pandemic’s disproportionate impact on women is derailing decades of progress on gender equality</a>
</strong>
</em>
</p>
<hr>
<p>The pandemic also prompted many to reflect on how their pre-pandemic routines and rhythms had caused various forms of “alienation”: from their own health and wellbeing, meaningful social connections, ethical and sustainable work practices, and pleasure.</p>
<p>The disruption of the pandemic caused many to reevaluate the importance of work in their lives. As one reflected: </p>
<blockquote>
<p>COVID-19 has made me reassess what is the most important thing. Is it making money? Actually, no, not at all.</p>
</blockquote>
<p>Others were prompted to question and challenge the gendered demands on women to “do everything” and “be everywhere” for everyone:</p>
<blockquote>
<p>I think as women, because we’re so good at multitasking, we just put so much on our plates. I think we need to learn just to say no, because we’re not superhuman. And ultimately, all of this responsibility is weighing us down.</p>
</blockquote>
<p>Our research also highlighted how the pandemic affected women’s relationships with <a href="https://www.sciencedirect.com/science/article/pii/S1755458623000270?casa_token=KcmGBPnpKLQAAAAA:MmQhDue20CoR0f6lK8rjWfxtBSHsjpzjbJu8tIc03StdccyCvduAs3CUVPwk18rPbklx3_j8DEo">familiar spaces and places</a>. Leaving home for a walk, run or bike ride became important everyday practices that proved highly beneficial for most women’s subjective wellbeing. </p>
<p>Some came to <a href="https://journals.sagepub.com/doi/abs/10.1177/01937235231200288">appreciate physical activity</a> for the general joys of movement and connection with people and places, rather than simply to achieve particular goals like fitness or weight loss. </p>
<h2>Special challenges for young women</h2>
<p>As part of our overall project, we also <a href="https://www.tandfonline.com/doi/full/10.1080/13668803.2023.2268818?needAccess=true">focused on 45 young women</a> (aged 16 to 25). This highlighted the importance of recognising how gender, ethnicity and socioeconomic circumstances intersect. </p>
<p>Listening to their <a href="https://www.tepunahamatatini.ac.nz/2023/11/07/the-invisible-glue-holding-families-together-during-the-pandemic/">pandemic stories</a>, we found young women played important roles in supporting their families and communities. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/covid-19-has-laid-bare-how-much-we-value-womens-work-and-how-little-we-pay-for-it-136042">COVID-19 has laid bare how much we value women's work, and how little we pay for it</a>
</strong>
</em>
</p>
<hr>
<p>In particular, Māori, Pacific and others from diverse ethnic or migrant backgrounds carried increased responsibilities in the home, including childcare, cleaning, cooking and shopping. While many did so willingly, these extra burdens took a toll on their schooling, mental health and wellbeing.</p>
<p>For many young women, the pandemic was a radical disruption to their everyday lives and routines during a critical stage of identity development. They missed key milestones and events, and crucial phases of education and social development. </p>
<p>Many still grieve for some of those losses. And some are struggling to rebuild social connections, motivation and aspirations.</p>
<p>For example, some described being passionate and aspiring athletes before the pandemic. But social anxieties and body-image issues left over from lockdowns have been hard to shake, and have seen them <a href="https://www.mdpi.com/2673-995X/3/3/55">struggle to return</a> to sport. </p>
<h2>The invisible work of migrant women</h2>
<p>We also looked deeply at the experiences of <a href="https://link.springer.com/chapter/10.1007/978-3-031-38797-5_9">12 middle-class migrant women</a>, and how prolonged border closures created real anxiety about “not being there” for families overseas. </p>
<p>As one nurse working on the front line of COVID care in NZ explained:</p>
<blockquote>
<p>About a year ago, the cases of COVID in my homeland were increasing so rapidly. My family were not very well and I was depending on social media […] trying to reach out to them. I was really scared at that time, not being able to see your family when they really need you, not being able to be with them.</p>
</blockquote>
<p>Some of the women in our sample also experienced <a href="https://www.tandfonline.com/doi/full/10.1080/14649365.2023.2275761">increased anti-immigrant sentiments</a> which further affected their health and wellbeing – and their feelings of belonging. As one said:</p>
<blockquote>
<p>I’ve become extremely sensitive. I cry about small things. My doctor said “go and get some fresh air, it’s good for you” […] I went outside for a walk, and someone shouted at me, screamed at me. I got terrified for my life. How do you expect me to have wellbeing when no one in the society accepts you?</p>
</blockquote>
<p>This arm of the research suggests a real need for <a href="https://www.belong.org.nz/migrant-experiences-in-the-time-of-covid">investment in policies and support strategies</a> specifically for migrant women and their communities in any future global health emergency.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/new-zealanders-are-learning-to-live-with-covid-but-does-that-mean-having-to-pay-for-protection-ourselves-219698">New Zealanders are learning to live with COVID – but does that mean having to pay for protection ourselves?</a>
</strong>
</em>
</p>
<hr>
<h2>Communities of care</h2>
<p>A key feature of our study was the highly creative ways women cultivated “<a href="https://journals.sagepub.com/doi/full/10.1177/2043820620934268">communities of care</a>” during the pandemic. Even when they were struggling themselves, they reached out to friends and family – and particularly other women. </p>
<p>The majority of our participants were prompted to think differently about their own health and wellbeing, and what is important in their lives (now and in the future). </p>
<p>Throughout the pandemic, women have worked quietly, behind the scenes, in their families, communities and workplaces, supporting their own and others’ health and wellbeing. This invisible labour is rarely acknowledged or celebrated. </p>
<p>Many still feel the toll of economic hardship, violence and exhaustion. And less tangible feelings of disillusionment remain in a society that has so quickly “moved on” from the pandemic.</p>
<p>Acknowledging and addressing pandemic amnesia – personal and collective – is an important first step in documenting, learning from, and using these experiences to <a href="https://www.sciencedirect.com/science/article/pii/S0277953622008176">better prepare for future events</a>. Next time, we need to ensure the necessary support is available for those most in need.</p>
<hr>
<p><em>The authors wish to acknowledge the other members of the research team: Dr Nikki Barrett, Dr Julie Brice, Dr Allison Jeffrey and Dr Anoosh Soltani.</em></p>
<hr>
<p class="fine-print"><em><span>Holly Thorpe receives funding from a Royal Society Te Apārangi James Cook Research Fellowship.</span></em></p><p class="fine-print"><em><span>Grace O'Leary, Mihi Joy Nemani, and Nida Ahmad do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>COVID was a ‘gendered pandemic’, with women carrying very different burdens to men. A three-year New Zealand research project aimed to overcome the urge to forget, and provide lessons for the future.Holly Thorpe, Professor in Sociology of Sport and Gender, University of WaikatoGrace O'Leary, Research Fellow, University of WaikatoMihi Joy Nemani, Senior Lecturer, Te Huataki Waiora School of Health, University of WaikatoNida Ahmad, Research Fellow, University of WaikatoLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2197252024-01-04T12:51:30Z2024-01-04T12:51:30ZHow subtle forms of misinformation affect what we buy and how much we trust brands<figure><img src="https://images.theconversation.com/files/566367/original/file-20231218-18-bq4prp.jpg?ixlib=rb-1.1.0&rect=42%2C0%2C4700%2C3123&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Both direct and indirect misinformation influence brand trust. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/motion-escalators-modern-shopping-mall-201174746">estherpoon/Shutterstock</a></span></figcaption></figure><p>Misinformation isn’t just blurring political lines anymore. It’s quietly infiltrating our shopping trolleys in subtle ways, shaping our decisions about what we buy and who we trust, as my research shows. </p>
<p>Spurred by political events, misinformation has garnered widespread media coverage and academic research. But most of the attention has been in the fields of <a href="https://www.aeaweb.org/articles?id=10.1257%2Fjep.31.2.211&fbclid=IwAR04My3aiycypMJKSI58e84gDvdrodsB9fqCycH9YfepWDDDwT--fZnVPvo;%20https://www.nyu.edu/about/news-publications/news/2019/january/fake-news-shared-by-very-few--but-those-over-65-more-likely-to-p.html">political science</a>, <a href="https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(21)00051-6?dgcid=raven_jbs_etoc_email">social psychology</a>, <a href="https://www.sciencedirect.com/science/article/pii/S0306457318306794">information technology</a> and <a href="https://www.tandfonline.com/doi/full/10.1080/21670811.2017.1360143">journalism studies</a>. </p>
<p>More recently though, misinformation has also gained traction among <a href="https://www.sciencedirect.com/science/article/abs/pii/S0148296320307852">marketing</a> and <a href="https://myscp.onlinelibrary.wiley.com/doi/abs/10.1002/jcpy.1288">consumer</a> experts. Much of that research has focused on the direct impacts of misinformation on brands and consumer attitudes, but a new perspective on the topic is now emerging.</p>
<p>What if the influence of misinformation extends beyond explicit attacks on brands? What if our choices as consumers are shaped not only by deliberate misinformation campaigns but also by subtle, indirect false information? </p>
<p>My own research has explored the dynamics of misinformation from a consumer standpoint. I have looked at how misinformation <a href="https://www.sciencedirect.com/science/article/abs/pii/S0148296320307852">spreads</a>, why people find it <a href="https://journals.sagepub.com/doi/full/10.1177/07439156221103860">credible</a> and what we can do to try to <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/mar.21479">mitigate its spreading</a>. </p>
<p>However, my latest <a href="https://www.sciencedirect.com/science/article/pii/S2352250X23001616">study</a> looks at direct and indirect forms of misinformation and their consequences for brands and consumers. I have found that one of the major consequences of these types of misinformation is the erosion of trust.</p>
<h2>Direct and indirect misinformation</h2>
<p>Misinformation comes in direct and indirect forms. It can be direct when it purposefully targets brands or their products. Examples of direct misinformation include fabricated customer reviews or fake news campaigns targeting brands. </p>
<p>It was fake news that led to the <a href="https://www.nytimes.com/interactive/2016/12/10/business/media/pizzagate.html">“pizzagate” scandal</a> in 2016, for example. This involved unsubstantiated accusations of child abuse against prominent individuals linked to a Washington DC pizzeria. And last year, the brand Target was <a href="https://www.reuters.com/article/idUSL1N37S2U1/">falsely accused</a> of selling “satanic” children’s clothes on social media. </p>
<p>The consequences of direct misinformation can be far reaching, leading to a breakdown in brand trust. This erosion is particularly pronounced when misinformation originates from seemingly trustworthy sources, forcing brands into crisis management mode. </p>
<p>For example, in late 2022, Eli Lilly’s stock price fell by 4.37% after a <a href="https://www.washingtonpost.com/technology/2022/11/14/twitter-fake-eli-lilly/">fake Twitter</a> account impersonating the pharmaceutical company falsely announced that insulin would be given away for free. Investors were misled and the company was forced to issue multiple statements to regain their trust. </p>
<p>But beyond the realm of blatant brand attacks lies a subtler, less understood territory I call “indirect misinformation”. This type of misinformation doesn’t zero in on specific companies, but instead cloaks itself in topics like politics, social affairs or health.</p>
<p>The constant exposure to misinformation around issues like COVID-19 and politics can have a ripple effect. And my research, which reviewed the academic marketing literature on direct and indirect misinformation, argues that this constant barrage has the potential to impact consumer choices. </p>
<p>Consider the two distinct levels where these effects unfold for a company. At the brand level, reputable names may unwittingly find themselves entangled in disreputable fake news sites through <a href="https://journals.sagepub.com/doi/full/10.1177/0276146718755869">programmatic advertising</a>, in which automated technology is used to buy ad space on these websites. And while the misinformation itself might not directly impact brand trust, the association with dubious websites can cast a shadow over attitudes to brands. It can also <a href="https://journals.sagepub.com/doi/abs/10.1016/j.intmar.2018.09.001">impair</a> consumers’ intentions towards the brand. </p>
<p>Simultaneously, at the consumer level, the impact of indirect misinformation is profound. It breeds confusion, doubt and a general sense of vulnerability. Continuous exposure to misinformation is linked to <a href="https://misinforeview.hks.harvard.edu/article/misinformation-in-action-fake-news-exposure-is-linked-to-lower-trust-in-media-higher-trust-in-government-when-your-side-is-in-power/">decreased trust</a> in mainstream and traditional media brands, for example. </p>
<p>Consequently, people might become wary of all information sources and even fellow consumers. Subconsciously influenced by misinformation, they may make different purchase decisions and hold <a href="https://www.journals.uchicago.edu/doi/full/10.1086/708035">altered views</a> of brands and products.</p>
<h2>What can brands do?</h2>
<p>While the negative repercussions of direct misinformation on brand trust have been well documented, shining a light on the subtler impacts of indirect misinformation marks a crucial step forward. It not only opens new avenues for researchers but also serves as a warning to brands. It urges them to be more proactive in their approach to misinformation. </p>
<p>If indirect misinformation makes consumers mistrustful and sceptical, brands could take preemptive measures. Tailoring specific marketing communications to instil trust in brands, products and offers becomes paramount in a world where trust is continually under siege. Building and maintaining a reputation for trustworthiness is essential for companies.</p>
<p>As we navigate this terrain of hidden influences, the call for a more comprehensive understanding of misinformation’s multifaceted impacts also becomes clearer. Researchers, brands and consumers alike need to decode the hidden messages of misinformation. This could help to fortify the foundations of trust in an era where it has become a precious commodity.</p>
<p class="fine-print"><em><span>Giandomenico Di Domenico is affiliated with the International Panel on the Information Environment. </span></em></p>Trust in brands may be eroded as awareness of misinformation increases according to new research.Giandomenico Di Domenico, Lecturer in Marketing & Strategy, Cardiff UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2195792023-12-29T11:42:01Z2023-12-29T11:42:01ZWhy some people don’t trust science – and how to change their minds<figure><img src="https://images.theconversation.com/files/567234/original/file-20231222-23-r02y8p.png?ixlib=rb-1.1.0&rect=26%2C15%2C1421%2C1035&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Nasa/wikipedia</span></span></figcaption></figure><p>During the pandemic, a third of people in the UK reported that their trust in science had increased, <a href="https://doi.org/10.1371/journal.pone.0278169">we recently discovered</a>. But 7% said that it had decreased. Why is there such a variety of responses?</p>
<p>For many years, it was thought that the main reason some people reject science was a simple deficit of knowledge and a mooted fear of the unknown. Consistent with this, <a href="https://doi.org/10.1177/0963662506070159">many surveys</a> reported that attitudes to science are more positive among those people who know more of the textbook science. </p>
<p>But if that were indeed the core problem, the remedy would be simple: inform people about the facts. This strategy, which dominated science communication through much of the later part of the 20th century, <a href="https://doi.org/10.1177/097172180901400202">has, however, failed</a> at multiple levels. </p>
<p>In <a href="https://link.springer.com/article/10.1023/A:1023695519981">controlled experiments</a>, giving people scientific information was found not to change attitudes. And in the UK, scientific messaging over genetically modified technologies <a href="https://doi.org/10.1177/097172180901400202">has even backfired</a>. </p>
<p>The failure of the information-led strategy may be down to people discounting or avoiding information if it contradicts their beliefs – also known as <a href="https://doi.org/10.1037/1089-2680.2.2.175">confirmation bias</a>. However, a second problem is that some trust neither the message nor the messenger. This means that a distrust in science isn’t necessarily just down to a deficit of knowledge, but a <a href="https://doi.org/10.1177/097172180901400202">deficit of trust</a>. </p>
<p>With this in mind, many research teams including ours decided to find out why some people do and some people don’t trust science. <a href="https://doi.org/10.1371/journal.pone.0278169">One strong predictor</a> for people distrusting science during the pandemic stood out: being distrusting of science in the first place. </p>
<h2>Understanding distrust</h2>
<p>Recent evidence has revealed that people who reject or distrust science are not especially well informed about it, but more importantly, they typically <a href="https://www.nature.com/articles/s41562-018-0520-3">believe that they do understand</a> the science. </p>
<p>This result has, over the past five years, been found over and over in studies investigating attitudes to a plethora of scientific issues, including <a href="https://doi.org/10.1016/j.socscimed.2018.06.032">vaccines</a> and <a href="https://www.nature.com/articles/s41562-018-0520-3">GM foods</a>. It also holds, <a href="https://doi.org/10.1371/journal.pbio.3001915">we discovered</a>, even when no specific technology is asked about. However, this may not apply to certain politicised sciences, such as <a href="https://www.science.org/doi/10.1126/sciadv.abo0038">climate change</a>.</p>
<p>Recent work also found that overconfident people who dislike science tend to <a href="https://doi.org/10.31234/osf.io/d5fz2">have a misguided belief</a> that theirs is the common viewpoint and hence that many others agree with them. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/564799/original/file-20231211-29-fgl8fe.jpg?ixlib=rb-1.1.0&rect=0%2C162%2C3721%2C2329&q=45&auto=format&w=1000&fit=clip"><img alt="Image of a protest of protest by covid-19 sceptics." src="https://images.theconversation.com/files/564799/original/file-20231211-29-fgl8fe.jpg?ixlib=rb-1.1.0&rect=0%2C162%2C3721%2C2329&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/564799/original/file-20231211-29-fgl8fe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/564799/original/file-20231211-29-fgl8fe.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/564799/original/file-20231211-29-fgl8fe.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/564799/original/file-20231211-29-fgl8fe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/564799/original/file-20231211-29-fgl8fe.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/564799/original/file-20231211-29-fgl8fe.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Covid protest in London.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/london-uk-april-24-2021-unite-1966630096">Devis M/Shutterstock</a></span>
</figcaption>
</figure>
<p>Other evidence suggests that some of those who reject science also gain psychological satisfaction by framing their alternative explanations in a manner that <a href="https://ec.europa.eu/info/live-work-travel-eu/coronavirus-response/fighting-disinformation/identifying-conspiracy-theories_en">can’t be disproven</a>. Such is often the nature of conspiracy theories – be it microchips in vaccines or COVID being caused by 5G radiation. </p>
<p>But the whole point of science is to examine and test theories that can be proven wrong – theories scientists call falsifiable. Conspiracy theorists, on the other hand, often reject information that doesn’t align with their preferred explanation and, as a last resort, question the <a href="https://commission.europa.eu/strategy-and-policy/coronavirus-response/fighting-disinformation/identifying-conspiracy-theories_en">motives of the messenger</a> instead. </p>
<p>When a person who trusts the scientific method debates with someone who doesn’t, they are essentially playing by different rules of engagement. This means it is hard to convince sceptics that they might be wrong. </p>
<h2>Finding solutions</h2>
<p>So what can one do with this new understanding of attitudes to science?</p>
<p>The messenger is every bit as important as the message. Our work confirms many prior surveys showing that politicians, for example, aren’t trusted to communicate science, whereas university professors <a href="https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3001915">are</a>. This should be kept in mind.</p>
<p>The fact that some people hold negative attitudes reinforced by a misguided belief that many others agree with them suggests a further potential strategy: tell people what the consensus position is. The advertising industry got there first: statements such as “eight out of ten cat owners say their pet prefers this brand of cat food” are popular.</p>
<p>A recent <a href="https://doi.org/10.1177/09567976221083219">meta-analysis</a> of 43 studies investigating this strategy (these were “randomised control trials” – the gold standard in scientific testing) found support for this approach to alter belief in scientific facts. In specifying the consensus position, it implicitly clarifies what is misinformation or unsupported ideas, meaning it would also address the problem that <a href="https://www.sfi.ie/resources/SFI-Science-in-Ireland-Barometer.pdf">half of people</a> don’t know what is true owing to circulation of conflicting evidence. </p>
<p>A complementary approach is to prepare people for the possibility of misinformation. Misinformation spreads fast and, unfortunately, each attempt to debunk it acts to bring the misinformation more into view. Scientists call this the “<a href="https://doi.org/10.1177/1529100612451018">continued influence effect</a>”. Genies never get put back into bottles. Better is to anticipate objections, or <a href="https://www.science.org/doi/10.1126/sciadv.abo6254">inoculate people</a> against the strategies used to promote misinformation. This is called “prebunking”, as opposed to debunking. </p>
<p>Different strategies may be needed in different contexts, though. It matters whether the science in question is established, with a consensus among experts (such as climate change), or is cutting-edge research into the unknown (such as a completely new virus). For the latter, explaining what we know, what we don’t know and what we are doing – and emphasising that results are provisional – <a href="https://www.nature.com/articles/d41586-020-03189-1">is a good way to go</a>. </p>
<p>By emphasising uncertainty in fast-changing fields, we can prebunk the objection that the sender of a message cannot be trusted because they said one thing one day and something else later.</p>
<p>But no strategy is likely to be 100% effective. We found that even with widely debated <a href="https://genetics.org.uk/wp-content/uploads/2018/06/Copy-of-Public-Perception-of-Genetics.pdf">PCR tests for COVID</a>, 30% of the public said they hadn’t heard of PCR. </p>
<p>A common quandary for much science communication may in fact be that it appeals to those already engaged with science, which may be why you are reading this.</p>
<p>That said, the new science of communication suggests it is certainly worth trying to reach out to those who are disengaged.</p><img src="https://counter.theconversation.com/content/219579/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Laurence D. Hurst receives funding from The Evolution Education Trust. He is affiliated with The Genetics Society.
Dr Cristina Fonseca also contributed to this article as well as to some of the research mentioned that was funded by The Genetics Society.</span></em></p>People who are suspicious of science often assume they understand it well – and that others agree with them.Laurence D. Hurst, Professor of Evolutionary Genetics at The Milner Centre for Evolution, University of BathLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2186712023-12-18T16:17:17Z2023-12-18T16:17:17ZVictorian Britain had its own anti-vaxxers – and they helped bring down a government<figure><img src="https://images.theconversation.com/files/565425/original/file-20231213-31-19s6sa.jpg?ixlib=rb-1.1.0&rect=11%2C5%2C3663%2C2886&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://wellcomecollection.org/works/tr7x4acf/images?id=chsz86gd">E.E. Hillemacher/Wellcome Collection</a></span></figcaption></figure><p>As the 1906 UK general election results <a href="https://www.theguardian.com/politics/1906/jan/15/electionspast.past">rolled in</a>, it became clear that the Conservative party, after 11 years in power, had suffered one of the most disastrous defeats in its history. Of 402 Conservative MPs, 251 lost their seats, including <a href="https://www.gov.uk/government/history/past-prime-ministers/arthur-james-balfour">their candidate for prime minister</a>, defeated on a 22.5% swing against him in the constituency he had held for two decades. </p>
<p>Rising food prices, unpopular taxes and an opposition that promised to spend heavily on an expanded welfare state all contributed to the <a href="https://liberalhistory.org.uk/history/1906-election/">Tory downfall that year</a>. But something else had tipped the opposition Liberal landslide over the edge – compulsory vaccination. </p>
<p>Anti-vaccination campaigner <a href="https://www.bmj.com/content/1/2374/1566.1">Arnold Lupton</a> had taken Sleaford in Lincolnshire for the Liberals on a 12% swing and immediately started his parliamentary campaign to abolish compulsory vaccination against smallpox, a public health policy that had been in place in England and Wales since 1853 (with Scottish and Irish legislation following suit in later years). </p>
<p>Hardly a single Conservative MP was an anti-vaccinator, but 174 of the 397 Liberal MPs in the new parliament signed Lupton’s petition. </p>
<p>Their attempt at changing the law was unsuccessful, but this flexing of parliamentary muscle by the anti-vaccinators persuaded the new Liberal government that the most expedient option was to reach a compromise with its backbench rebels.</p>
<p>In 1907, the law was changed to permit quick and easy opt-out by parents. Vaccination of all babies against smallpox remained theoretically compulsory until 1946, but in practice, it was now optional. A five-decade-long campaign, in the streets, the courts and finally parliament, had resulted in victory for the opponents of vaccination.</p>
<p>This is a sobering story for those of us who are researchers, medical professionals or public health activists campaigning against the spread of vaccine hesitancy in the modern world. </p>
<p>The success of vaccination in saving millions of lives, not just from <a href="https://theconversation.com/eradicating-smallpox-the-global-vaccination-push-that-brought-the-world-arm-to-arm-162091">smallpox</a> but a host of other diseases, seems so obvious that the case scarcely needs to be made. And yet it does, as just a cursory glance at social, <a href="https://www.theguardian.com/media/2023/may/09/gb-news-censured-after-naomi-wolf-compared-covid-jab-to-mass-murder">even at times mainstream</a>, media will reveal. </p>
<p>In response to this tide of dangerous disinformation, vaccine advocacy work often focuses on issues such as the lack of <a href="https://www.cidrap.umn.edu/lack-high-school-education-predicts-vaccine-hesitancy">public comprehension of scientific concepts</a> of “relative risk” and “efficacy”, and the connections of the anti-vaccine activists to more general conspiracy theories and <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9111101/">extreme religious</a> or <a href="https://researchonline.lshtm.ac.uk/id/eprint/4670453/1/Alarcon-etal-2023-The-far-right-and-anti.pdf">political movements</a>. </p>
<p>The conclusion of many vaccine advocacy pieces is often that we must simply educate the public better while simultaneously cutting the flow of disinformation, yet this has often proved to be an uphill struggle. Why? Can vaccine advocates learn anything from the historic defeat of 1906?</p>
<h2>Social media of the Victorian era</h2>
<p>A recently published resource of Victorian anti-vaccination <a href="https://academic.oup.com/dsh/advance-article/doi/10.1093/llc/fqad075/7330453">“street literature”</a> seeks to contribute to this effort by providing free access to 3.5 million words from 133 documents, ranging from short pamphlets to longer publications over the period 1854-1906.</p>
<p>What the 133 sources have in common is that they were all produced for public consumption, designed to strengthen or maintain the beliefs of the converted while reaching out for new converts. Existing outside the conventional publishing industry, this street literature was the social media of the Victorian era.</p>
<figure class="align-center ">
<img alt="Etching of children being vaccinated in East London in a crowded, chaotic room." src="https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=466&fit=crop&dpr=1 600w, https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=466&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=466&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=586&fit=crop&dpr=1 754w, https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=586&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=586&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Children being vaccinated in East London.</span>
<span class="attribution"><a class="source" href="https://wellcomecollection.org/works/fmrb5a8p/images?id=dnmduxyq">Wellcome Collection</a></span>
</figcaption>
</figure>
<p>Computational analysis of these texts reveals anti-vaccination themes that are very similar to those of today. For instance, doubts about the effectiveness of vaccines, what they’re made of and their safety, feature prominently. </p>
<p>Other common themes include complaints that civil liberties are infringed by compulsory vaccination, alongside conspiracy theories of government cover-ups, general distrust of the medical profession, and an orientation towards alternative medicine. </p>
<p>What changes is the detail. For instance, fear of the inadvertent introduction of syphilis, tuberculosis and skin diseases, as very occasionally happened in Victorian times, may be compared to the <a href="https://theconversation.com/under-40s-can-ask-their-gp-for-an-astrazeneca-shot-whats-changed-what-are-the-risks-are-there-benefits-163571">blood clots</a> issue with the COVID vaccine. </p>
<p>Other more spurious scare stories, such as an association between vaccination and tooth decay or mental illness, have their parallels in the <a href="https://theconversation.com/autism-and-vaccines-more-than-half-of-people-in-britain-france-italy-still-think-there-may-be-a-link-101930">discredited autism claims</a> of the present day. Likewise, modern conspiracy theories about big pharma have their Victorian parallel in allegations of medical profiteering from vaccination fees.</p>
<p>This study of the Victorian anti-vaxxers shows us that there are indeed recurrent fears more than two centuries old. But it also teaches us that some of the motivations of vaccine hesitancy stem from social, political and religious beliefs that are equally deep in time and often deeply held. </p>
<figure class="align-center ">
<img alt="A calf being used to make vaccines." src="https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The use of cattle to produce vaccines was one of the first biotechnology industries but drew fire from anti-vaccination activists on grounds of animal cruelty.</span>
<span class="attribution"><a class="source" href="https://wellcomecollection.org/works/ju78dfph">Wellcome Collection</a></span>
</figcaption>
</figure>
<p>For example, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9728709/pdf/homoeopathphys132846-0032.pdf">William Tebb</a>, one of the most prominent anti-vaxxers of Victorian times campaigned with equal energy on a whole raft of causes, from women’s suffrage to the abolition of slavery via vegetarianism, animal rights and mystical religion. </p>
<p>For Tebb and many of his followers, these were intimately connected causes. To reach the root of the problem, we need to untangle these connections in sensitive ways that go beyond conventional public engagement.</p><img src="https://counter.theconversation.com/content/218671/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The project described in this article is funded by the UK Economic & Social Research Council.</span></em></p><p class="fine-print"><em><span>Chris Sanderson received funding from the UK Economic & Social Research Council for this project. </span></em></p><p class="fine-print"><em><span>Alice Deignan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Victorian anti-vaccine literature shows that the fears and concerns remain largely the same today.Derek Gatherer, Lecturer, Biomedical and Life Sciences, Lancaster UniversityAlice Deignan, Professor of Applied Linguistics, University of LeedsChris Sanderson, PhD Candidate, ESRC Centre for Corpus Approaches to Social Science, Lancaster UniversityLicensed as Creative Commons – attribution, no derivatives.