Disinformation – The Conversation (feed updated 2024-03-19)
<h1>Deepfakes are still new, but 2024 could be the year they have an impact on elections</h1>
<figure><img src="https://images.theconversation.com/files/580733/original/file-20240308-30-tf2e5r.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C3865%2C2582&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/deep-fake-ai-face-swap-video-2376208005">Tero Vesalainen / Shutterstock</a></span></figcaption></figure><p>Disinformation caught many people off guard during the <a href="https://www.europarl.europa.eu/RegData/etudes/ATAG/2018/620230/EPRS_ATA(2018)620230_EN.pdf">2016 Brexit referendum</a> and <a href="https://www.nature.com/articles/s41467-018-07761-2">US presidential election</a>. Since then, a mini-industry has developed to analyse and counter it.</p>
<p>Yet despite that, we have entered 2024 – a year of <a href="https://en.wikipedia.org/wiki/List_of_elections_in_2024">more than 40 elections</a> worldwide – more fearful than ever about disinformation. In many ways, the problem is more challenging than it was in 2016. </p>
<p>Advances in technology since then are one reason, in particular the development of synthetic media, otherwise known as deepfakes. It is increasingly difficult to know whether media has been fabricated by a computer or is based on something that really happened. </p>
<p>We’ve yet to really understand how big an impact deepfakes could have on elections. But a number of examples point the way to how they may be used. This may be the year when lots of mistakes are made and lessons learned.</p>
<p>Since the disinformation propagated around the votes in 2016, researchers have produced countless books and papers, journalists have retrained as <a href="https://www.poynter.org/fact-checking/2022/391-global-fact-checking-outlets-slow-growth-2022/">fact checking and verification experts</a>, governments have participated in <a href="https://www.igcd.org/">“grand committees”</a> and centres of excellence. Additionally, <a href="https://royalsociety.org/blog/2022/03/how-libraries-can-fight-disinformation/">libraries</a> have become the focus of resilience building strategies and a range of new bodies has emerged to provide analysis, training, and resources.</p>
<p>This activity hasn’t been fruitless. We now have a more nuanced understanding of disinformation as a social, psychological, political, and technological phenomenon. Efforts to support public interest journalism and the cultivation of critical thinking through education are also promising. Most notably, major tech companies <a href="https://www.reuters.com/technology/meta-set-up-team-counter-disinformation-ai-abuse-eu-elections-2024-02-26/">no longer pretend to be neutral platforms</a>. </p>
<p>In the meantime, policymakers have rediscovered their duty to <a href="https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en">regulate technology</a> in the public interest. </p>
<h2>AI and synthetic media</h2>
<p>Regulatory discussions have gained urgency now that AI tools to create synthetic media – media partially or fully generated by computers – have gone mainstream. These deepfakes can be used to imitate the voice and appearance of real people. Deepfake media are impressively realistic and do not require much skill or many resources to produce. </p>
<p>This is the culmination of the wider digital revolution whereby successive technologies have made high-quality content production accessible to almost anyone. In contrast, regulatory structures and institutional standards for media were mostly designed in an era when only a minority of professionals had access to production.</p>
<p>Political deepfakes can take different forms. The recent Indonesian election saw a <a href="https://edition.cnn.com/2024/02/12/asia/suharto-deepfake-ai-scam-indonesia-election-hnk-intl/index.html">deepfake video “resurrecting” the late President Suharto</a>. This was ostensibly to encourage people to vote, but it was accused of being propaganda because it was produced by the political party that he led.</p>
<p>Perhaps a more obvious use of deepfakes is to spread lies about political candidates. For example, <a href="https://ipi.media/slovakia-deepfake-audio-of-dennik-n-journalist-offers-worrying-example-of-ai-abuse/">fake AI-generated audio</a> released days before Slovakia’s parliamentary election in September 2023 attempted to portray the leader of Progressive Slovakia, Michal Šimečka, as having discussed with a journalist how to rig the vote.</p>
<p>Aside from the obvious effort to undermine a political party, it is worth noting how this deepfake, whose origin was unclear, exemplifies wider efforts to scapegoat minorities and demonise mainstream journalism. </p>
<p>Fortunately, in this instance, the audio was not high-quality, which made it quicker and easier for fact checkers to confirm its inauthenticity. However, the integrity of democratic elections cannot rely on the ineptitude of the fakers.</p>
<p>Deepfake audio technology is at a level of <a href="https://www.scientificamerican.com/article/ai-audio-deepfakes-are-quickly-outpacing-detection/">sophistication that makes detection difficult</a>. Deepfake videos still struggle with certain human features, such as the representation of hands, but the technology is still young.</p>
<p>It is also important to note that the Slovakian audio was released during the final days of the election campaign. This is a prime time to launch disinformation and manipulation attacks, because the targets and independent journalists have their hands full and therefore have little time to respond.</p>
<p>If it is also expensive, time-consuming, and difficult to investigate deepfakes, then it’s not clear how electoral commissions, political candidates, the media, or indeed the electorate should respond when potential cases arise. After all, a false accusation that something is a deepfake can be as troubling as an actual deepfake.</p>
<p>Another way deepfakes could be used to affect elections can be seen in the way they are already widely used to <a href="https://www.euronews.com/next/2023/04/22/a-lifelong-sentence-the-women-trapped-in-a-deepfake-porn-hell">harass and abuse</a> women and girls. This kind of sexual harassment fits an <a href="https://theconversation.com/online-abuse-could-drive-women-out-of-political-life-the-time-to-act-is-now-214301">existing pattern</a> of abuse that limits political participation by women. </p>
<h2>Questioning electoral integrity</h2>
<p>The difficulty is that it’s not yet clear exactly what impact deepfakes could have on elections. It’s very possible we could see other, similar uses of deepfakes in upcoming elections this year. And we could even see deepfakes used in ways not yet conceived of.</p>
<p>But it’s also worth remembering that not all disinformation is high-tech. There are other ways to attack democracy. Rumours and conspiracy theories about the integrity of the electoral process are an insidious trend. <a href="https://www.ft.com/content/1abd7fde-20b4-11e9-a46f-08f9738d6b2b">Electoral fraud is a global concern</a> given that many countries are only democracies in name. </p>
<p>Clearly, social media platforms enable and drive disinformation in many ways, but it is a mistake to assume the problem begins and ends online. One way to think about the challenge of disinformation during upcoming elections is to think about the strength of the systems that are supposed to uphold democracy. </p>
<p>Is there an independent media system capable of providing high quality investigations in the public interest? Are there independent electoral administrators and bodies? Are there independent courts to adjudicate if necessary? </p>
<p>And is there sufficient commitment to democratic values over self-interest amongst politicians and political parties? This year of elections, we may well find out the answer to these questions.</p>
<p class="fine-print"><em><span>Eileen Culloty coordinates the Ireland Hub of the European Digital Media Observatory, which is part-funded by the European Commission to undertake fact-checks, analysis, and media literacy.</span></em></p>As technology has advanced, AI-generated deepfakes have become more convincing.Eileen Culloty, Assistant Professor, School of Communications, Dublin City UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2200362024-03-18T12:31:28Z2024-03-18T12:31:28ZAI vs. elections: 4 essential reads about the threat of high-tech deception in politics<figure><img src="https://images.theconversation.com/files/582204/original/file-20240315-28-p5czjg.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4977%2C6250&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Like it or not, AI is already playing a role in the 2024 presidential election.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/android-celebrating-4th-july-royalty-free-image/499467267?phrase=Robot+Uncle+Sam">kirstypargeter/iStock via Getty Images</a></span></figcaption></figure><p>It’s official. Joe Biden and Donald Trump have <a href="https://www.washingtonpost.com/politics/2024/03/13/few-voters-decide-trump-biden-nominations/">secured the necessary delegates</a> to be their parties’ nominees for president in the 2024 election. Barring unforeseen events, the two will be formally nominated at the party conventions this summer and face off at the ballot box on Nov. 5. </p>
<p>It’s a safe bet that, as <a href="https://theconversation.com/how-tech-firms-have-tried-to-stop-disinformation-and-voter-intimidation-and-come-up-short-148771">in recent elections</a>, this one will play out largely online and feature a potent blend of news and disinformation delivered over social media. New this year are powerful generative artificial intelligence tools such as <a href="https://openai.com/chatgpt">ChatGPT</a> and <a href="https://openai.com/sora">Sora</a> that make it easier to “<a href="https://ssrn.com/abstract=4040800">flood the zone</a>” with propaganda and disinformation and produce convincing deepfakes: words coming from the mouths of politicians that they did not actually say and events replaying before our eyes that did not actually happen.</p>
<p>The result is an increased likelihood of voters being deceived and, perhaps as worrisome, a growing sense that <a href="https://www.researchgate.net/publication/378236203_Profiling_the_Dynamics_of_Trust_Distrust_in_Social_Media_A_Survey_Study">you can’t trust anything you see online</a>. Trump is already taking advantage of the so-called <a href="https://doi.org/10.1017/S0003055423001454">liar’s dividend</a>, the opportunity to discount your actual words and deeds as deepfakes. Trump implied on his Truth Social platform on March 12, 2024, that real videos of him shown by Democratic House members were <a href="https://www.washingtonpost.com/politics/2024/03/13/trump-video-ai-truth-social/">produced or altered using artificial intelligence</a>.</p>
<p>The Conversation has been covering the latest developments in artificial intelligence that have the potential to undermine democracy. The following is a roundup of some of those articles from our archive. </p>
<h2>1. Fake events</h2>
<p>The ability to use AI to make convincing fakes is particularly troublesome for producing false evidence of events that never happened. Rochester Institute of Technology computer security researcher <a href="https://scholar.google.com/citations?user=UxGWcUYAAAAJ&hl=en">Christopher Schwartz</a> has dubbed these <a href="https://theconversation.com/events-that-never-happened-could-influence-the-2024-presidential-election-a-cybersecurity-researcher-explains-situation-deepfakes-206034">situation deepfakes</a>.</p>
<p>“The basic idea and technology of a situation deepfake are the same as with any other deepfake, but with a bolder ambition: to manipulate a real event or invent one from thin air,” he wrote.</p>
<p>Situation deepfakes could be used to boost or undermine a candidate or suppress voter turnout. If you encounter reports on social media of events that are surprising or extraordinary, try to learn more about them from reliable sources, such as fact-checked news reports, peer-reviewed academic articles or interviews with credentialed experts, Schwartz said. Also, recognize that deepfakes can take advantage of what you are inclined to believe.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/events-that-never-happened-could-influence-the-2024-presidential-election-a-cybersecurity-researcher-explains-situation-deepfakes-206034">Events that never happened could influence the 2024 presidential election – a cybersecurity researcher explains situation deepfakes</a>
</strong>
</em>
</p>
<hr>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/S4gd-EpBlS0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">How AI puts disinformation on steroids.</span></figcaption>
</figure>
<h2>2. Russia, China and Iran take aim</h2>
<p>From the question of what AI-generated disinformation can do follows the question of who has been wielding it. Today’s AI tools put the capacity to produce disinformation in reach for most people, but of particular concern are <a href="https://theconversation.com/ai-disinformation-is-a-threat-to-elections-learning-to-spot-russian-chinese-and-iranian-meddling-in-other-countries-can-help-the-us-prepare-for-2024-214358">nations that are adversaries</a> of the United States and other democracies. In particular, Russia, China and Iran have extensive experience with disinformation campaigns and technology.</p>
<p>“There’s a lot more to running a disinformation campaign than generating content,” wrote security expert and Harvard Kennedy School lecturer <a href="https://www.schneier.com/">Bruce Schneier</a>. “The hard part is distribution. A propagandist needs a series of fake accounts on which to post, and others to boost it into the mainstream where it can go viral.”</p>
<p>Russia and China have a history of testing disinformation campaigns on smaller countries, according to Schneier. “Countering new disinformation campaigns requires being able to recognize them, and recognizing them requires looking for and cataloging them now,” he wrote.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-disinformation-is-a-threat-to-elections-learning-to-spot-russian-chinese-and-iranian-meddling-in-other-countries-can-help-the-us-prepare-for-2024-214358">AI disinformation is a threat to elections − learning to spot Russian, Chinese and Iranian meddling in other countries can help the US prepare for 2024</a>
</strong>
</em>
</p>
<hr>
<h2>3. Healthy skepticism</h2>
<p>But it doesn’t require the resources of shadowy intelligence services in powerful nations to make headlines, as the New Hampshire <a href="https://apnews.com/article/ai-robocall-biden-new-hampshire-primary-2024-f94aa2d7f835ccc3cc254a90cd481a99">fake Biden robocall</a> produced and disseminated by two individuals and aimed at dissuading some voters illustrates. That episode prompted the Federal Communications Commission to <a href="https://theconversation.com/fcc-bans-robocalls-using-deepfake-voice-clones-but-ai-generated-disinformation-still-looms-over-elections-223160">ban robocalls that use voices generated</a> by artificial intelligence. </p>
<p>AI-powered disinformation campaigns are difficult to counter because they can be delivered over different channels, including robocalls, social media, email, text message and websites, which complicates the digital forensics of tracking down the sources of the disinformation, wrote <a href="https://scholar.google.com/citations?hl=en&user=yu4Ew7gAAAAJ&view_op=list_works&sortby=pubdate">Joan Donovan</a>, a media and disinformation scholar at Boston University.</p>
<p>“In many ways, AI-enhanced disinformation such as the New Hampshire robocall poses the same problems as every other form of disinformation,” Donovan wrote. “People who use AI to disrupt elections are likely to do what they can to hide their tracks, which is why it’s necessary for the public to remain skeptical about claims that do not come from verified sources, such as local TV news or social media accounts of reputable news organizations.”</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/fcc-bans-robocalls-using-deepfake-voice-clones-but-ai-generated-disinformation-still-looms-over-elections-223160">FCC bans robocalls using deepfake voice clones − but AI-generated disinformation still looms over elections</a>
</strong>
</em>
</p>
<hr>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/L0X3W1utdRQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">How to spot AI-generated images.</span></figcaption>
</figure>
<h2>4. A new kind of political machine</h2>
<p>AI-powered disinformation campaigns are also difficult to counter because they can include bots – automated social media accounts that pose as real people – and can include <a href="https://theconversation.com/how-ai-could-take-over-elections-and-undermine-democracy-206051">online interactions tailored to individuals</a>, potentially over the course of an election and potentially with millions of people.</p>
<p>Harvard political scientist <a href="https://scholar.google.com/citations?user=3Bl9cn8AAAAJ&hl=en">Archon Fung</a> and legal scholar <a href="https://scholar.google.com/citations?user=LxG5YWcAAAAJ&hl=en">Lawrence Lessig</a> described these capabilities and laid out a hypothetical scenario of national political campaigns wielding these powerful tools.</p>
<p>Attempts to block these machines could run afoul of the free speech protections of the First Amendment, according to Fung and Lessig. “One constitutionally safer, if smaller, step, already adopted in part by European internet regulators and in California, is to prohibit bots from passing themselves off as people,” they wrote. “For example, regulation might require that campaign messages come with disclaimers when the content they contain is generated by machines rather than humans.”</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-ai-could-take-over-elections-and-undermine-democracy-206051">How AI could take over elections – and undermine democracy</a>
</strong>
</em>
</p>
<hr>
<p><em>This story is a roundup of articles from The Conversation’s archives.</em></p>
<hr>
<p><em><strong><a href="https://theconversation.com/us/topics/election-2024-disinformation-151606">This article is part of Disinformation 2024:</a></strong> a series examining the science, technology and politics of deception in elections.</em></p>
<p><em>You may also be interested in:</em></p>
<p><a href="https://theconversation.com/disinformation-is-rampant-on-social-media-a-social-psychologist-explains-the-tactics-used-against-you-216598">Disinformation is rampant on social media – a social psychologist explains the tactics used against you</a></p>
<p><a href="https://theconversation.com/misinformation-disinformation-and-hoaxes-whats-the-difference-158491">Misinformation, disinformation and hoaxes: What’s the difference?</a></p>
<p><a href="https://theconversation.com/disinformation-campaigns-are-murky-blends-of-truth-lies-and-sincere-beliefs-lessons-from-the-pandemic-140677">Disinformation campaigns are murky blends of truth, lies and sincere beliefs – lessons from the pandemic</a></p>
<hr><img src="https://counter.theconversation.com/content/220036/count.gif" alt="The Conversation" width="1" height="1" />
Using disinformation to sway elections is nothing new. Powerful new AI tools, however, threaten to give the deceptions unprecedented reach.Eric Smalley, Science + Technology EditorLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2241192024-03-17T12:56:19Z2024-03-17T12:56:19ZOnline wellness content: 3 ways to tell evidence-based health information from pseudoscience<figure><img src="https://images.theconversation.com/files/582218/original/file-20240315-20-1ijga2.jpg?ixlib=rb-1.1.0&rect=374%2C66%2C6941%2C4649&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Health information is increasingly being shared online, and often the borders between legitimate health expertise and pseudoscience aren't clear.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>“I drink borax!” proclaims the smiling TikToker. Holding up a box of the laundry additive, she rhymes off a list of its supposed health benefits: “Balances testosterone and estrogen. It’s a powerhouse anti-inflammatory…. It’s amazing for arthritis, osteoporosis…. And obviously it’s great for your gut health.” </p>
<p>Videos like these <a href="https://globalnews.ca/news/9860780/borax-drinking-tiktok-trend/">prompted health authorities to warn the public</a> about the dangers of ingesting this toxic detergent — and away from such viral messaging that promotes unsubstantiated and medically dangerous health claims.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-new-tiktok-trend-has-people-drinking-toxic-borax-an-expert-explains-the-risks-and-how-to-read-product-labels-210278">A new TikTok trend has people drinking toxic borax. An expert explains the risks – and how to read product labels</a>
</strong>
</em>
</p>
<hr>
<p>Health information is increasingly being shared online, and often the borders between legitimate health expertise and pseudoscience aren’t clear. While the internet can be a valuable and accessible way to learn about health, it’s also a place rife with disinformation and grift, as unscrupulous <a href="https://doi.org/10.1249/FIT.0000000000000829">influencers exploit</a> people’s fears about their bodies. </p>
<h2>Evidence and influencers</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Collage of quotes about drinking borax" src="https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/582219/original/file-20240315-24-myasst.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Some TikTokers claimed drinking borax had health benefits. In fact, borax is toxic and shouldn’t be ingested.</span>
<span class="attribution"><span class="source">(Michelle Cohen)</span></span>
</figcaption>
</figure>
<p>In my medical practice, I can usually track online wellness trends, such as a patient refusing a medication because of online claims — many of which are false — that it <a href="https://thefeelgoodagaininstitute.com/medications-that-lower-testosterone/">lowers testosterone</a>, or the several months when it seemed everyone was <a href="https://theconversation.com/turmeric-heres-how-it-actually-measures-up-to-health-claims-205613">taking turmeric</a> for joint pain, or the patients who request an <a href="https://theconversation.com/ivermectin-whether-formulated-for-humans-or-horses-is-not-a-treatment-for-covid-19-167340">ivermectin prescription</a> in case they catch COVID. </p>
<p>So how does someone who simply wants to learn more about the human body sift through the information? How to separate bad-faith grift from good advice? </p>
<p>Wellness influencers tap into a truth about how we process information: it’s <a href="https://lab.research.sickkids.ca/anthony/wp-content/uploads/sites/75/2019/07/Health-misinformation-and-the-power-of-narrative-messaging-in-the-public-sphere..pdf">more trustworthy</a> when it comes from a person we feel like we know. That’s why a charismatic personality’s Instagram account that uses <a href="https://doi.org/10.1177/1440783319846188">intimate stories</a> to promote <a href="https://digitalcommons.liberty.edu/doctoral/4920/">parasocial attachment</a> — the sense of being part of a community — is more memorable than a website offering dry recitations of evidence.</p>
<p>But as social media has become ubiquitous, <a href="https://www.instagram.com/daniellebelardomd/?hl=en">health experts</a> have caught on that sharing their personal side alongside reliable advice can be a good use of their platform. At first glance, these two groups may seem similar, but the following tips can help determine if the person posting health advice is actually knowledgeable on the topic:</p>
<h2>1) Are they selling something?</h2>
<p>Rarely do popular wellness influencers post out of the goodness of their hearts. Almost invariably these accounts are <a href="https://www.conspirituality.net/transmissions/the-wellness-grift-of-jp-sears">trying to profit</a> from the <a href="https://doi.org/10.1002/ace.20486">virality of their content</a>. </p>
<p>Whether it’s a <a href="https://doi.org/10.1080%2F08998280.2022.2124767">supplement store</a>, a <a href="https://www.independent.co.uk/news/health/social-media-weight-loss-diet-twitter-influencers-bloggers-glasgow-university-a8891971.html">diet book</a>, a subscription to a lifestyle community or a Masterclass series, the end goal is the same: transform social media influence into sales. Gushing over life-changing benefits from something the promoter is selling should always prompt skepticism. </p>
<p>Some legitimate health experts also sell advice, usually in the form of newsletters, books or <a href="https://www.bodyofevidence.ca/">podcasts</a>, and this is worth keeping in mind. However, there’s a big difference between selling a subscription to a <a href="https://vajenda.substack.com/">health newsletter</a> that discusses evidence and promoting your own supplement shop, where your financial motives shape how you present the information.</p>
<h2>2) What are the boundaries of their expertise?</h2>
<p>True expertise in a subject requires years of dedicated study and practice. That’s why people are rarely experts in more than one or two domains, and no one is a pan-expert on everything. </p>
<p>If a <a href="https://doi.org/10.1080/17439884.2021.2006691">wellness influencer</a> promotes themselves as erudite on all health topics, that’s actually an excellent indication of their lack of knowledge. A real health expert knows the limitations of their knowledge and can call on others’ expertise when needed. So the podcast host who opines on every health issue is substantially less worthwhile to listen to than the podcast host who brings on guest experts for topics outside their scope. </p>
<h2>3) How do they talk about science?</h2>
<p>Science is a process of discovery, not a static philosophy, so scientists emphasize talking about current evidence rather than “truth”, which is more of a faith-based concept. </p>
<p>If someone wants to post about their personal wellness philosophy or their spiritual journey and how it makes them feel, that’s fine. But dropping in biology jargon without explanation or name-checking one or two questionable studies without thorough discussion isn’t a meaningful way to engage with the evidence on a health topic. </p>
<p>Science-based information should acknowledge where data are uncertain and where more research is needed. Using the pretext of science to lend credence to a personal “truth” is a <a href="https://www.mcgill.ca/oss/article/critical-thinking-pseudoscience/whats-trending-world-pseudoscience">form of pseudoscience</a> and should raise red flags.</p>
<p>These three principles are a good framework for deciding whether an influencer’s health content is worth consuming or whether they’re simply trying to sell a new supplement or spread viral disinformation about something like borax. </p>
<p>As online health information becomes easier to find (or harder to avoid), this framework can help people quickly scan a wellness influencer’s profile and make a more informed decision about engaging with their content. This is an important type of media literacy that anyone spending time online should cultivate — for the sake of their health.</p>
<p class="fine-print"><em><span>Michelle Cohen does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>How do we distinguish between valuable information from legitimate health experts, and pseudoscientific nonsense from unscrupulous wellness influencers?</em></p>
<p class="fine-print">Michelle Cohen, Adjunct Assistant Professor, Department of Family Medicine, Queen's University, Ontario. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>How conspiracy theories help to maintain Vladimir Putin’s grip on power in Russia</h1>
<p>As Russians head to the polls on March 15 for the <a href="https://theconversation.com/what-can-we-expect-from-six-more-years-of-vladimir-putin-an-increasingly-weak-and-dysfunctional-russia-224259">presidential election</a>, conspiracy theories are swirling everywhere. In this episode of <a href="https://theconversation.com/uk/topics/the-conversation-weekly-98901">The Conversation Weekly podcast</a>, we speak to a disinformation expert about the central role these theories play in Vladimir Putin’s Russia.</p>
<iframe src="https://embed.acast.com/60087127b9687759d637bade/65f2bd789be413001781419f" frameborder="0" width="100%" height="190px"></iframe>
<p><iframe id="tc-infographic-561" class="tc-infographic" height="100" src="https://cdn.theconversation.com/infographics/561/4fbbd099d631750693d02bac632430b71b37cd5f/site/index.html" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>As soon as the <a href="https://theconversation.com/navalny-dies-in-prison-but-his-blueprint-for-anti-putin-activism-will-live-on-223774">death of Russian opposition figure Alexei Navalny</a> in a Siberian penal colony was announced in February, conspiracy theories about who was behind it <a href="https://www.youtube.com/watch?v=QPn2zQWOU70">began circulating in Russia</a>.</p>
<p>“That he was killed by his puppet masters from the west, not the Kremlin. That he was killed by them because his murder would actually make Putin look awful in the eyes of global community,” explains Ilya Yablokov, a lecturer in digital journalism and disinformation at the University of Sheffield in the UK.</p>
<p>Yablokov studies the <a href="https://www.wiley.com/en-us/Fortress+Russia%3A+Conspiracy+Theories+in+the+Post+Soviet+World-p-9781509522651">spread of conspiracy theories in post-Soviet Russia</a>, and says the stories about Navalny are the most prominent of many circulating ahead of a presidential election that looks certain to keep Putin in the Kremlin until at least 2030. </p>
<p>Yablokov tells The Conversation Weekly that Russia’s conspiracy culture has become a key tool for Putin’s regime: “Conspiracy theories are one of the few ways of keeping the society together and to prevent the change of the regime.” </p>
<p>Fear of anti-Russian conspiracy now informs many pieces of domestic legislation, such as the 2022 changes to the <a href="https://cpj.org/wp-content/uploads/2022/07/Guide-to-Understanding-the-Laws-Relating-to-Fake-News-in-Russia.pdf">criminal code</a> that were aimed at censoring criticism of the Russian military, and in particular its actions in Ukraine. Yablokov adds:</p>
<blockquote>
<p>Every possible activity that can shake up the regime and question its actions is forbidden on the grounds of an existing conspiracy against Russia and its regime.</p>
</blockquote>
<p>Conspiracy theories used to exist on the margins of Russian culture. Putin typically avoided mentioning them too much, except at key political moments such as elections or Russia’s 2014 annexation of Crimea. But now, and in particular since the Ukraine war, they have moved to the centre of political debate. </p>
<p>Listen to <a href="https://pod.link/1550643487">The Conversation Weekly</a> podcast to hear Ilya Yablokov talk about Putin’s changing relationship with conspiracy theories, plus an introduction from Grégory Rayko, international editor at The Conversation in France. </p>
<p><em>A transcript of this episode will be available shortly.</em></p>
<p><em>This episode of The Conversation Weekly was written and produced by Gemma Ware and Katie Flood, with assistance from Mend Mariwany. Sound design was by Eloise Stevens, and our theme music is by Neeta Sarl. Stephen Khan is our global executive editor, Alice Mason runs our social media and Soraya Nandy does our transcripts.</em></p>
<p><em>Newsclips in this episode were from <a href="https://www.youtube.com/watch?v=QPn2zQWOU70">Russia Media Monitor</a>, <a href="https://www.youtube.com/watch?v=mgydMTmhs50">BBC News</a>, <a href="https://www.youtube.com/watch?v=9nJGDsOswFc">Guardian News</a>, <a href="https://www.youtube.com/watch?v=0nYAM-Jbfh4">NBC</a> <a href="https://www.youtube.com/watch?v=CAvMgUf8nyo">News</a>, <a href="https://www.youtube.com/watch?v=EdKDrIR8ASY&t=88s">CBS Mornings</a> and <a href="https://www.youtube.com/watch?v=tim9AodGLhU">Channel 4 News</a>.</em> </p>
<p><em>You can find us on Instagram at <a href="https://www.instagram.com/theconversationdotcom/">theconversationdotcom</a> or <a href="mailto:podcast@theconversation.com">via email</a>. You can also subscribe to The Conversation’s <a href="https://theconversation.com/newsletter">free daily email here</a>.</em></p>
<p><em>Listen to The Conversation Weekly via any of the apps listed above, download it directly via our <a href="https://feeds.acast.com/public/shows/60087127b9687759d637bade">RSS feed</a> or find out <a href="https://theconversation.com/how-to-listen-to-the-conversations-podcasts-154131">how else to listen here</a>.</em></p>
<p class="fine-print"><em><span>Ilya Yablokov does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Russian disinformation expert Ilya Yablokov tells The Conversation Weekly podcast about the president’s shifting relationship with conspiracy theories.Gemma Ware, Editor and Co-Host, The Conversation Weekly Podcast, The ConversationLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2245022024-03-05T19:12:15Z2024-03-05T19:12:15ZIs Australia’s golden age of third-party fact checking over?<figure><img src="https://images.theconversation.com/files/579706/original/file-20240304-30-6eo258.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>With the rise of disinformation, third-party fact checking has grown into a billion-dollar global industry. But debunking false claims is time-consuming and costly, and recent developments suggest it may have hit its peak and is slowing down.</p>
<p>The ABC’s recent <a href="https://www.crikey.com.au/2024/02/21/abc-rmit-fact-check-partnership-abc-news-verify/">announcement</a> that it will dissolve its third-party fact-checker partnership with RMIT University, known as RMIT ABC Fact Check, and replace it with an in-house unit called “ABC News Verify”, suggests Australia is not immune to global trends.</p>
<p>Duke Reporters’ Lab’s most recent <a href="https://reporterslab.org/misinformation-spreads-but-fact-checking-has-leveled-off/">census</a> of third-party fact-checking units across the world found the number of active units fell from 424 in 2022 to 417 in 2023. While this is a small drop, it signals the first contraction in the sector since its initial census in 2014, which recorded a mere 59 units. </p>
<p>Is this cause for concern?</p>
<p><a href="https://theconversation.com/misinformation-how-fact-checking-journalism-is-evolving-and-having-a-real-impact-on-the-world-218379">Many studies</a> have shown that third-party fact checking works to disabuse people of false claims in the media and online. </p>
<p>“Third party” refers to the external verification of controversial claims by an organisation independent of the initial publishing outlet. </p>
<p>But a growing number of studies also show the limitations of fact checking in countering the spread of mis- and disinformation.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/misinformation-how-fact-checking-journalism-is-evolving-and-having-a-real-impact-on-the-world-218379">Misinformation: how fact-checking journalism is evolving – and having a real impact on the world</a>
</strong>
</em>
</p>
<hr>
<p>Our recently published <a href="https://ijoc.org/index.php/ijoc/article/view/21078">study</a> found that Australian third-party fact checkers were highly trusted. However, even after receiving and trusting a fact check – in this case about a false social media post involving former prime minister Scott Morrison during the 2022 floods – a third of respondents said they would engage with the false information anyway. </p>
<p>They did so mostly for political reasons, a phenomenon known as <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2703011">motivated reasoning</a>. This tells us that presenting facts alone is not enough to stop people sharing falsehoods, and it may be one reason why global momentum behind third-party fact checking is slowing.</p>
<p><iframe id="kLG47" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/kLG47/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p><iframe id="6qwxN" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/6qwxN/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>The Australian fact checking industry has a short and rocky history, beginning in 2013 – a decade after the United States. Early adopters like PolitiFact Australia, ABC Fact Check and The Conversation’s FactCheck have come and gone, in part because the work is both time- and resource-intensive. </p>
<p>In the case of the ABC, its original in-house unit was <a href="https://www.abc.net.au/news/2016-05-18/abc-fact-check-unit-to-close-14-jobs-to-go/7425638">axed</a> following 2016 Coalition budget cuts. It then got a <a href="https://www.theguardian.com/media/2017/feb/14/abcs-fact-check-unit-relaunched-in-partnership-with-rmit">new lease</a> of life in partnership with RMIT University in 2017. </p>
<p>Our study tested public trust in four current Australian fact checkers: RMIT ABC Fact Check, RMIT Factlab, AAP and Reuters Fact Check – an international fact checker operating in Australia.</p>
<p>Overall, trust was highest in the soon to be disbanded RMIT ABC Fact Check. But there was one important exception: respondents who strongly identified as right-wing on the political spectrum.</p>
<p>These voters regarded RMIT ABC Fact Check as the least trusted. This finding mirrors studies of media trust in Australia, which also find that the ABC is ranked highest overall, but <a href="https://www.canberra.edu.au/research/faculty-research-centres/nmrc/digital-news-report-australia">lower</a> among right-wing partisans. </p>
<p>Our study’s findings suggest that accusations of left-wing bias levelled at the ABC, particularly by right-wing partisans, may have spilled over into its fact-checking partnership with RMIT, and they foreshadow criticisms that its new unit might encounter. </p>
<p>This is because the politicisation of fact checking – a longstanding feature of the sector in the US – has reached Australia. </p>
<p>To counter concerns of fact-checking bias, the International Fact-Checking Network (<a href="https://www.poynter.org/ifcn/">IFCN</a>) was established in 2015 to try to ensure standards of impartiality and rigour. Meta has since made IFCN accreditation a requirement of partnership when it signs up third-party fact-checkers to test doubtful claims on its social media sites.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-wont-keep-paying-australian-media-outlets-for-their-content-are-we-about-to-get-another-news-ban-224857">Facebook won't keep paying Australian media outlets for their content. Are we about to get another news ban?</a>
</strong>
</em>
</p>
<hr>
<p>During the 2023 Voice to Parliament referendum campaign, the impartiality of third-party Australian fact checkers drew headlines. </p>
<p>In its report titled the “<a href="https://www.skynews.com.au/business/media/the-fact-check-files-inside-the-secretive-and-lucrative-fact-checking-industry-behind-a-foreignfunded-bid-to-censor-voice-debate/news-story/31915e1eb03b029b86a2f03aac19338b">Fact Check Files</a>”, Sky News Australia accused RMIT FactLab (a separate entity from RMIT ABC Fact Check) of working with Meta to “censor Voice debate”. As reported by The Conversation at that time, the story was the <a href="https://theconversation.com/voice-referendum-is-the-yes-or-no-camp-winning-on-social-media-advertising-spend-and-in-the-polls-208956">second</a> most-shared article on social media involving the referendum according to Meltwater data, reaching millions of users.</p>
<p>The story focused particularly on RMIT FactLab’s fact-checking of Sky’s own reports, which it found to contain falsehoods. The Sky report also revealed the fact checker’s IFCN accreditation had expired – a breach of Meta’s own terms and conditions. This led the social media giant to temporarily suspend RMIT FactLab from its paid role fact checking Meta’s social media content.</p>
<p>The conservative Institute of Public Affairs (IPA) later added to the controversy, releasing a <a href="https://ipa.org.au/wp-content/uploads/2023/11/IPA-Research-The-Arbiters-of-Truth-Analysis-of-biased-fact-checking-organisations-during-the-2023-Voice-Referendum-FINAL.pdf">report</a> in November 2023 arguing that RMIT ABC Fact Check, RMIT FactLab and AAP FactCheck had all unduly focused their efforts on the “no” campaign’s claims, resulting in a form of censorship.</p>
<p>In a soon-to-be-published survey of 3,825 Australians, conducted in late November after the referendum, we found trust in RMIT FactLab had suffered as a result. The survey also showed about a quarter of respondents reported using third-party fact checkers during the Voice campaign, and overall public trust in fact checkers was high. </p>
<p>However, among self-identified right-wing supporters, the story is different: we see increased levels of distrust, particularly towards RMIT FactLab – the central target of the Sky News reports.</p>
<p>RMIT FactLab recorded the highest levels of distrust among conservatives, followed by RMIT ABC Fact Check.</p>
<p><iframe id="RfN4F" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/RfN4F/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>The claims of bias against RMIT FactLab follow the path of politicisation and polarisation seen in the well-established US fact-checking sector. This trend further underscores the role of motivated reasoning in opinion formation and the insufficiency of relying solely on fact-checkers – whether external or internal – to combat fake news. </p>
<p>Effective <a href="https://opal.latrobe.edu.au/articles/report/Fighting_Fake_News_A_Study_of_Online_Misinformation_Regulation_in_the_Asia_Pacific/14038340">mitigation</a> of misinformation and disinformation requires a multifaceted approach. This includes fact checkers, but also measures such as bolstering public media literacy, regulating platforms, supporting quality journalism, and fostering collaboration among policymakers, politicians, academics, technology platforms, and civil society to promote responsible discourse.</p>
<p class="fine-print"><em><span>Andrea Carson receives funding from the Australian Research Council to research media and political trust DP230101777 and has had research grants with Meta examining fact checking, fake news, future newsrooms and the Voice to Parliament. She serves as an academic expert on Meta's global misinformation advisory group, and is also on the research advisory body for Australia's Public Interest Journalism Initiative.</span></em></p>Third-party fact checking appears to be in decline around the world - and Australia is not immune.Andrea Carson, Professor of Political Communication, Department of Politics, Media and Philosophy, La Trobe UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2244382024-03-04T13:41:42Z2024-03-04T13:41:42ZDemand for computer chips fuelled by AI could reshape global politics and security<figure><img src="https://images.theconversation.com/files/578585/original/file-20240228-18-rudxyy.jpg?ixlib=rb-1.1.0&rect=28%2C0%2C6361%2C3592&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/close-silicon-die-being-extracted-semiconductor-2262331365">IM Imagery / Shutterstock</a></span></figcaption></figure><p>A global race to build powerful computer chips that are essential for the next generation of artificial intelligence (AI) tools could have a major impact on global politics and security. </p>
<p>The US is currently leading the race in the design of these chips, also known as semiconductors. But most of the manufacturing is carried out in Taiwan. The debate has been fuelled by Sam Altman, CEO of ChatGPT’s developer OpenAI, calling for <a href="https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0">a US$5 trillion to US$7 trillion</a> (£3.9 trillion to £5.5 trillion) global investment to <a href="https://venturebeat.com/ai/sam-altman-wants-up-to-7-trillion-for-ai-chips-the-natural-resources-required-would-be-mind-boggling/">produce more powerful chips</a> for the next generation of AI platforms. </p>
<p>The amount of money Altman called for is more than the chip industry has spent in total since it began. Whatever the facts about those numbers, overall projections for the AI market are mind-blowing. The data analytics company GlobalData <a href="https://www.globaldata.com/media/technology/generative-ai-will-go-mainstream-2024-driven-adoption-specialized-custom-models-multimodal-tool-experimentation-says-globaldata/">forecasts that the market will be worth US$909 billion</a> by 2030.</p>
<p>Unsurprisingly, over the past two years, the US, China, Japan and several European countries have increased their budget allocations and put in place measures to secure or maintain a share of the chip industry for themselves. China is catching up fast and is <a href="https://thediplomat.com/2023/09/china-boosts-semiconductor-subsidies-as-us-tightens-restrictions/">subsidising chips, including next-generation ones for AI</a>, to the tune of hundreds of billions of dollars over the next decade to build its own manufacturing supply chain. </p>
<p>Subsidies seem to be the <a href="https://www.reuters.com/technology/germany-earmarks-20-bln-eur-chip-industry-coming-years-2023-07-25/">preferred strategy for Germany too</a>. The UK government has announced its <a href="https://www.ukri.org/news/100m-boost-in-ai-research-will-propel-transformative-innovations/#:%7E:text=%C2%A3100m%20boost%20in%20AI%20research%20will%20propel%20transformative%20innovations,-6%20February%202024&text=Nine%20new%20research%20hubs%20located,help%20to%20define%20responsible%20AI.">plans to invest £100 million</a> to support regulators and universities in addressing challenges around artificial intelligence. </p>
<p>The economic historian Chris Miller, the author of the book Chip War, <a href="https://www.dw.com/en/ai-chip-race-fears-grow-of-huge-financial-bubble/a-68272265">has talked about how powerful chips have become a “strategic commodity”</a> on the global geopolitical stage.</p>
<p>Despite the efforts by several countries to invest in the future of chips, there is currently a shortage of the types needed for AI systems. Miller recently explained that 90% of the chips used to train, or improve, AI systems are <a href="https://www.siliconrepublic.com/future-human/chip-war-semiconductors-supply-tech-geopolitics-chris-miller">produced by just one company</a>.</p>
<p>That company is the <a href="https://www.tsmc.com/english">Taiwan Semiconductor Manufacturing Company (TSMC)</a>. Taiwan’s dominance in the chip manufacturing industry is notable because the island is also the focus for tensions between China and the US. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-microchip-industry-would-implode-if-china-invaded-taiwan-and-it-would-affect-everyone-206335">The microchip industry would implode if China invaded Taiwan, and it would affect everyone</a>
</strong>
</em>
</p>
<hr>
<p>Taiwan has, for the most part, <a href="https://www.taiwan.gov.tw/content_3.php#:%7E:text=The%20ROC%20government%20relocated%20to,rule%20of%20a%20different%20government.">been independent since the middle of the 20th century</a>. However, Beijing believes it should be <a href="https://www.reuters.com/world/asia-pacific/china-calls-taiwan-president-frontrunner-destroyer-peace-2023-12-31/">reunited with the rest of China</a> and US legislation requires Washington to <a href="https://www.congress.gov/bill/96th-congress/house-bill/2479#:%7E:text=Declares%20that%20in%20furtherance%20of,defense%20capacity%20as%20determined%20by">help defend Taiwan if it is invaded</a>. What would happen to the chip industry under such a scenario is unclear, but it is obviously a focus for global concern.</p>
<p>The disruption of supply chains in chip manufacturing has the potential to bring entire industries to a halt. Access to the raw materials used in computer chips, such as rare earth metals, has also proven to be an important bottleneck. For example, China <a href="https://securityconference.org/en/publications/munich-security-report-2024/technology/">controls 60% of the production of gallium metal</a> and 80% of the global production of germanium. Both are critical raw materials used in chip manufacturing.</p>
<figure class="align-center ">
<img alt="Sam Altman" src="https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/578592/original/file-20240228-30-178em0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">OpenAI CEO Sam Altman has called for a US$5 trillion to $7 trillion investment in chips to support the growth in AI.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/openai-ceo-sam-altman-attends-artificial-2412159621">Photosince / Shutterstock</a></span>
</figcaption>
</figure>
<p>And there are other, lesser known bottlenecks. A process called <a href="https://research.ibm.com/blog/what-is-euv-lithography">extreme ultraviolet (EUV) lithography</a> is vital for the ability to continue making computer chips smaller and smaller – and therefore more powerful. <a href="https://www.asml.com/en">A single company in the Netherlands, ASML</a>, is the only manufacturer of EUV systems for chip production.</p>
<p>However, chip factories are increasingly being built outside Asia again – something that has the potential to reduce over-reliance on a few supply chains. Plants in the US are being subsidised to the tune of <a href="https://securityconference.org/en/publications/munich-security-report-2024/technology/">US$43 billion and in Europe, US$53 billion</a>. </p>
<p>For example, the Taiwanese semiconductor manufacturer TSMC is planning to build a multibillion-dollar facility in Arizona. When it opens, that factory <a href="https://theconversation.com/the-microchip-industry-would-implode-if-china-invaded-taiwan-and-it-would-affect-everyone-206335">will not be producing the most advanced chips</a> it is currently possible to make, most of which are still produced in Taiwan.</p>
<p>Moving chip production outside Taiwan could reduce the risk to global supplies in the event that manufacturing were somehow disrupted. But this process could take years to have a meaningful impact. It’s perhaps not surprising that, for the first time, this year’s Munich Security Conference <a href="https://securityconference.org/en/publications/munich-security-report-2024/technology/">created a chapter devoted to technology</a> as a global security issue, with discussion of the role of computer chips. </p>
<h2>Wider issues</h2>
<p>Of course, the demand for chips to fuel AI’s growth is not the only way that artificial intelligence will make a major impact on geopolitics and global security. The growth of disinformation and misinformation online has transformed politics in recent years by inflating prejudices on both sides of debates. </p>
<p>We have seen it <a href="https://www.jstor.org/stable/26675075">during the Brexit campaign</a>, during <a href="https://journals.sagepub.com/doi/10.1177/20563051231177943">US presidential elections</a> and, more recently, during the <a href="https://apnews.com/article/israel-hamas-gaza-misinformation-fact-check-e58f9ab8696309305c3ea2bfb269258e">conflict in Gaza</a>. AI could be the ultimate amplifier of disinformation. Take, for example, deepfakes – AI-manipulated videos, audio or images of public figures. These could easily fool people into thinking a major <a href="https://www.theguardian.com/us-news/2024/feb/26/ai-deepfakes-disinformation-election">political candidate had said something they didn’t</a>.</p>
<p>As a sign of this technology’s growing importance, at the 2024 Munich Security Conference, 20 of the world’s largest tech companies <a href="https://news.microsoft.com/2024/02/16/technology-industry-to-combat-deceptive-use-of-ai-in-2024-elections/">launched something called the “Tech Accord”</a>. In it, they pledged to cooperate to create tools to spot, label and debunk deepfakes. </p>
<p>But should such important issues be left to tech companies to police? Mechanisms such as the EU’s Digital Services Act and the UK’s Online Safety Act, as well as frameworks to regulate AI itself, should help. But it remains to be seen what impact they can have on the issue.</p>
<p>The issues raised by the chip industry, and the growing demand driven by AI, are just one example of how AI is driving change on the global stage. But it is a vitally important one. National leaders and authorities must not underestimate the influence of AI. Its potential to redefine geopolitics and global security could exceed our ability to both predict and plan for the changes.</p>
<p class="fine-print"><em><span>Alina Vaduva is affiliated with the Labour Party, as a member and elected councillor in Dartford, Kent. </span></em></p><p class="fine-print"><em><span>Kirk Chang does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The effects of AI’s growth on global security could be difficult to predict.Kirk Chang, Professor of Management and Technology, University of East LondonAlina Vaduva, Director of the Business Advice Centre for Post Graduate Students at UEL, Ambassador of the Centre for Innovation, Management and Enterprise, University of East LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2247892024-03-01T17:25:36Z2024-03-01T17:25:36ZIn 2024, we’ll truly find out how robust our democracies are to online disinformation campaigns<figure><img src="https://images.theconversation.com/files/579212/original/file-20240301-24-zxaj4p.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/colorful-fo-election-vote-hand-holding-794518426">I'm Friday / Shutterstock</a></span></figcaption></figure><p><a href="https://www.un.org/en/countering-disinformation">Disinformation</a>, sharing false information to deceive and mislead others, can take many forms. From edited “deepfake” videos made on smartphones to vast foreign-led information operations, politics and elections show how varied disinformation can be. </p>
<p>Hailed as <a href="https://www.aljazeera.com/news/2024/1/4/the-year-of-elections-is-2024-democracys-biggest-test-ever">“the year of elections”</a>, with the majority of the world’s population going to the polls, 2024 will also be a year of lessons learned, where we will see whether disinformation can truly subvert our political processes or if we are more resilient than we think.</p>
<p>Disinformation, and the methods used to disseminate it, are not always high-tech. We often think about social networking, manipulated media and sophisticated espionage in this regard, but sometimes efforts can be very low budget. In 2019, publications with <a href="https://news.sky.com/story/general-election-is-it-time-to-ban-fake-newspaper-political-ads-11870963">names that sounded like newspapers</a> were posted through letterboxes across the UK. These publications, however, were not real newspapers. </p>
<p>Bearing headlines such as “90% back remain”, they were imitation newspapers created and disseminated by the UK’s major political parties. These publications, which some voters mistook for legitimate news outlets, led to the Electoral Commission <a href="https://www.electoralcommission.org.uk/sites/default/files/2020-04/UKPGE%20election%20report%202020.pdf">describing this technique as “misleading”</a>. </p>
<p>The News Media Association, the body which represents local and regional media, also wrote to the Electoral Commission <a href="https://newsmediauk.org/blog/2021/03/31/nma-launches-campaign-against-politicians-fake-local-newspapers/">calling for the ban of “fake local newspapers”</a>.</p>
<h2>Zone flooding</h2>
<p>Research has shown that for some topics, such as politics and civil rights, all figures across the political spectrum are often <a href="https://benjamins.com/catalog/scl.98.07chr">both attacked and supported</a>, in an attempt to cause confusion and to obfuscate who and what can be believed. </p>
<p>This practice often goes hand-in-hand with <a href="https://www.cambridge.org/core/books/disinformation-age/flooded-zone/388DFBCC7E50B02921023B28E87DD26F">something called “zone flooding”</a>, where the information environment is deliberately overloaded with any and all information, just to confuse people. The aim of these broad disinformation campaigns is to make it difficult for people to believe any information, leading to <a href="https://www.eesc.europa.eu/en/news-media/press-releases/disinformation-and-lack-interest-are-main-reasons-poor-voter-turnout-european-elections">a disengaged and potentially uninformed electorate</a>.</p>
<p><a href="https://www.disinfo.eu/publications/foreign-election-interferences-an-overview-of-trends-and-challenges/">Hostile state information operations</a> and disinformation from abroad will continue to threaten countries such as the UK and US. Adversarial countries such as Russia, China and Iran continuously seek to subvert trust in our institutions and processes with the goal of increasing apathy and resentment. </p>
<p>Just two weeks ago, the US congressional Republicans’ impeachment proceedings against President Joe Biden began to fall apart when it was revealed that a witness was <a href="https://edition.cnn.com/2024/02/20/politics/biden-former-fbi-informant-russian-intelligence/index.html">supplied with false information</a> by Russian <a href="https://www.forbes.com/sites/mollybohannon/2024/02/20/russians-involved-in-passing-a-story-to-key-biden-impeachment-witness-about-hunter-biden-prosecutors-say/">intelligence officials</a>.</p>
<figure class="align-center ">
<img alt="President Joe Biden in the foreground with Donald Trump in the background." src="https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/579215/original/file-20240301-21-raeco0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Disinformation is certain to feature in 2024 elections. But are some of the risks overstated?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/democratic-candidate-joe-biden-sharp-foreground-2401520329">Below the Sky / Shutterstock</a></span>
</figcaption>
</figure>
<p>Disinformation can also be found much closer to home. Although it is often uncomfortable for academics and fact checkers to talk about, disinformation can come from the very top, with <a href="https://www.ft.com/content/5da52770-b474-4547-8d1b-9c46a3c3bac9">members of the political elite</a> embracing and promoting false content knowingly. This is further compounded by the reality that fact checks and corrections may not reach the same audience as the original content, causing some disinformation to go unchecked.</p>
<h2>AI-fuelled campaigns</h2>
<p>Recently, there has been increased focus <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/1467-8551.12554">on the role of</a> artificial intelligence (AI) <a href="https://journals.sagepub.com/doi/pdf/10.1177/2056305120903408">in spreading disinformation</a>. AI allows computers to carry out tasks that previously only humans could do, meaning AI-enabled tools can perform highly sophisticated tasks with little human effort and at low cost.</p>
<p>Disinformation can be both mediated and enabled by artificial intelligence. Bad actors can use sophisticated algorithms to identify and target swathes of people with disinformation on social media platforms. One key focus, however, has been generative AI: the use of this technology to produce text and media that appear to have been created by a human. </p>
<p>This can vary from using tools such as ChatGPT to write social media posts, to using AI-powered image, video and audio generation tools to create media of <a href="https://www.bbc.co.uk/news/uk-68146053">politicians in embarrassing, but fabricated situations</a>. This encompasses what are known as “deepfakes”, which can vary from poor to convincing in their quality.</p>
<p>While some say that AI will shape the coming elections in ways we can’t yet understand, others think the effects of disinformation are exaggerated. The simple reality is that, at present, we do not know how AI will affect the year of elections. </p>
<p>We could see vast deception at a scale only previously imagined, <a href="https://www.britannica.com/technology/Y2K-bug">or this could be a Y2K moment</a>, where our fears simply do not come to fruition. We are at a pivotal moment and the extent to which these elections are affected, or otherwise, will inform our regulatory and policy decisions for years to come.</p>
<p>If 2024 is the year of elections, then 2025 is likely to be the year of reflections: reflecting on how susceptible our democracies are to disinformation, on whether as societies we are vulnerable to sweeping deception and manipulation, and on how we can safeguard our future elections. </p>
<p>Whether it’s profoundly consequential or simply something that bubbles under the surface, disinformation will always exist. But the coming year will determine whether it’s top of the agenda for governments, journalists and educators to tackle, or simply something that we learn to live with.</p><img src="https://counter.theconversation.com/content/224789/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>William Dance does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Low tech or hi-tech, the next year will determine how much action nations take on election interference.William Dance, Senior Research Associate, Lancaster UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2246262024-02-29T03:52:46Z2024-02-29T03:52:46ZAlgorithms are pushing AI-generated falsehoods at an alarming rate. How do we stop this?<figure><img src="https://images.theconversation.com/files/578812/original/file-20240229-22-ki29m8.jpg?ixlib=rb-1.1.0&rect=3%2C7%2C2462%2C1608&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/online-news-mobile-phone-close-smartphone-1204164946">Tero Vesalainen/Shutterstock</a></span></figcaption></figure><p>Generative artificial intelligence (AI) tools are supercharging the problem of misinformation, disinformation and fake news. OpenAI’s ChatGPT, Google’s Gemini, and various image, voice and video generators have made it easier than ever to produce content, while making it harder to tell what is factual or real.</p>
<p>Malicious actors looking to spread disinformation can use AI tools to largely automate the generation of <a href="https://cyber.fsi.stanford.edu/io/publication/generative-language-models-and-automated-influence-operations-emerging-threats-and">convincing and misleading text</a>. </p>
<p>This raises pressing questions: how much of the content we consume online is true and how can we determine its authenticity? And can anyone stop this?</p>
<p>It’s not an idle concern. Organisations seeking to covertly influence public opinion or sway elections can now <a href="https://cyber.fsi.stanford.edu/io/publication/generative-language-models-and-automated-influence-operations-emerging-threats-and">scale their operations</a> with AI to unprecedented levels. And their content is being widely disseminated by search engines and social media. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-is-sora-a-new-generative-ai-tool-could-transform-video-production-and-amplify-disinformation-risks-223850">What is Sora? A new generative AI tool could transform video production and amplify disinformation risks</a>
</strong>
</em>
</p>
<hr>
<h2>Fakes everywhere</h2>
<p>Earlier this year, <a href="https://www.techradar.com/computing/search-engines/google-search-might-be-getting-worse-and-ai-threatens-to-ruin-it-entirely">a German study</a> on search engine content quality noted “a trend toward simplified, repetitive and potentially AI-generated content” on Google, Bing and DuckDuckGo.</p>
<p>Traditionally, readers of news media could rely on editorial control to uphold journalistic standards and verify facts. But AI is rapidly changing this space.</p>
<p>In a report published this week, the internet trust organisation NewsGuard <a href="https://www.newsguardtech.com/special-reports/ai-tracking-center/">identified 725 unreliable websites</a> that publish AI-generated news and information “with little to no human oversight”.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1761047409243603406&quot;}"></div></p>
<p>Last month, Google <a href="https://www.adweek.com/media/google-paying-publishers-unreleased-gen-ai/">released an experimental AI tool</a> for a select group of independent publishers in the United States. Using generative AI, a publisher can summarise articles pulled from a list of external websites that produce news and content relevant to its audience. As a condition of the trial, the publishers have to post three such articles per day.</p>
<p>Platforms hosting content and developing generative AI blur the traditional lines that enable trust in online content. </p>
<h2>Can the government step in?</h2>
<p>Australia has already seen tussles between government and online platforms over the display and moderation of news and content.</p>
<p>In 2019, the Australian government <a href="https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=s1201">amended the criminal code</a> to mandate the swift removal of “abhorrent violent material” by social media platforms. </p>
<p>The Australian Competition and Consumer Commission’s (ACCC) inquiry into power imbalances between Australian news media and digital platforms led to the 2021 implementation of <a href="https://www.accc.gov.au/by-industry/digital-platforms-and-services/news-media-bargaining-code/news-media-bargaining-code">a bargaining code</a> that forced platforms to pay media for their news content.</p>
<p>While these might be considered partial successes, they also demonstrate the scale of the problem and the difficulty of taking action.</p>
<p><a href="https://journals.sagepub.com/doi/full/10.1177/02683962221114408">Our research</a> indicates these conflicts saw online platforms initially open to changes and later resisting them, while the Australian government oscillated between enforcing mandatory measures and preferring voluntary action.</p>
<p>Ultimately, the government realised that relying on platforms’ “trust us” promises wouldn’t lead to the desired outcomes. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-google-and-meta-owe-news-publishers-much-more-than-you-think-and-billions-more-than-theyd-like-to-admit-216818">Why Google and Meta owe news publishers much more than you think – and billions more than they’d like to admit</a>
</strong>
</em>
</p>
<hr>
<p>The takeaway from our study is that once digital products become integral to millions of businesses and everyday lives, platforms, AI companies and big tech can use that dependence to anticipate and push back against government.</p>
<p>With this in mind, it is right to be sceptical of early calls for regulation of generative AI by tech leaders like <a href="https://fortune.com/2023/11/02/elon-musk-ai-regulations-uk-prime-minister-sunak-ai-safety-summit/">Elon Musk</a> and Sam Altman. Such calls have faded as AI takes hold of our lives and online content.</p>
<p>A challenge lies in the sheer speed of change, which is so swift that safeguards to mitigate the potential risks to society are not yet established. Accordingly, the World Economic Forum’s 2024 Global Risks Report has predicted mis- and disinformation as the <a href="https://www.weforum.org/publications/global-risks-report-2024/">greatest threats</a> in the next two years.</p>
<p>The problem is made worse by generative AI’s ability to create multimedia content. Based on current trends, we can expect an increase in <a href="https://www.nbcnews.com/tech/social-media/emma-watson-deep-fake-scarlett-johansson-face-swap-app-rcna73624">deepfake incidents</a>, although social media platforms such as Facebook are responding to these issues: they aim to <a href="https://about.fb.com/news/2024/02/labeling-ai-generated-images-on-facebook-instagram-and-threads/">automatically identify and tag</a> AI-generated photos, video and audio.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-openai-saga-demonstrates-how-big-corporations-dominate-the-shaping-of-our-technological-future-218540">The OpenAI saga demonstrates how big corporations dominate the shaping of our technological future</a>
</strong>
</em>
</p>
<hr>
<h2>What can we do?</h2>
<p>Australia’s eSafety commissioner <a href="https://www.esafety.gov.au/industry/tech-trends-and-challenges/generative-ai">is working on ways to regulate and mitigate</a> the potential harm caused by generative AI while balancing its potential opportunities.</p>
<p>A key idea is “safety by design”, which requires tech firms to place these safety considerations at the core of their products.</p>
<p>Other countries like the US are further ahead with the regulation of AI. For example, US President Joe Biden’s recent executive order <a href="https://www.theguardian.com/technology/2023/oct/30/biden-orders-tech-firms-to-share-ai-safety-test-results-with-us-government">on the safe deployment of AI</a> requires companies to share safety test results with the government, regulates <a href="https://en.wikipedia.org/wiki/Red_team">red-team testing</a> (simulated hacking attacks), and guides watermarking on content.</p>
<p>We call for three steps to help protect against the risks of generative AI in combination with disinformation.</p>
<p>1. Regulation needs <a href="https://www.linkedin.com/posts/noamsp_3-steps-to-reshaping-our-digital-landscape-activity-7152649121189797889-WEct">to set clear rules</a> without allowing for nebulous “best effort” aims or “trust us” approaches.</p>
<p>2. To protect against large-scale disinformation operations, we need to teach media literacy in the same way we teach maths.</p>
<p>3. Safety tech or “safety by design” needs to become a non-negotiable part of every product development strategy.</p>
<p>People are aware AI-generated content is on the rise. In theory, they should adjust their information habits accordingly. However, research shows users <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8196605/">generally tend to underestimate</a> their own risk of believing fake news compared to the perceived risk for others.</p>
<p>Finding trustworthy content shouldn’t involve sifting through AI-generated content to make sense of what is factual.</p><img src="https://counter.theconversation.com/content/224626/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Stan Karanasios receives funding from Emergency Management Victoria, Asia-Pacific Telecommunity, and the International Telecommunications Union.
Stan is a Distinguished Member of the Association for Information Systems.</span></em></p><p class="fine-print"><em><span>Marten Risius is the recipient of an Australian Research Council Australian Discovery Early Career Award (project number DE220101597) funded by the Australian Government.</span></em></p>It’s increasingly hard to tell which content online is fake. As malicious actors use generative AI to fuel disinformation, governments must regulate now before it’s too late.Stan Karanasios, Associate Professor, The University of QueenslandMarten Risius, Senior Lecturer in Business Information Systems, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2237172024-02-23T00:02:10Z2024-02-23T00:02:10ZHow people get sucked into misinformation rabbit holes – and how to get them out<figure><img src="https://images.theconversation.com/files/576118/original/file-20240216-28-bwac7i.jpeg?ixlib=rb-1.1.0&rect=0%2C35%2C6000%2C3952&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/sleepy-exhausted-woman-lying-bed-using-2142188351">Shutterstock</a></span></figcaption></figure><p>As misinformation and radicalisation rise, it’s tempting to look for something to blame: the internet, social media personalities, sensationalised political campaigns, religion, or conspiracy theories. And once we’ve settled on a cause, solutions usually follow: do more fact-checking, regulate advertising, ban YouTubers deemed to have “gone too far”.</p>
<p>However, if these strategies were the whole answer, we should already be seeing a decrease in people being drawn into fringe communities and beliefs, and less misinformation in the online environment. We’re not.</p>
<p>In new research <a href="https://doi.org/10.1177/14407833241231756">published in the Journal of Sociology</a>, we and our colleagues found radicalisation is a process of increasingly intense stages, and only a small number of people progress to the point where they commit violent acts. </p>
<p>Our work shows the misinformation radicalisation process is a pathway driven by human emotions rather than the information itself – and this understanding may be a first step in finding solutions.</p>
<h2>A feeling of control</h2>
<p>We analysed dozens of public statements from newspapers and online in which former radicalised people described their experiences. We identified different levels of intensity in misinformation and its online communities, associated with common recurring behaviours. </p>
<p>In the early stages, we found people either encountered misinformation about an anxiety-inducing topic through algorithms or friends, or they went looking for an explanation for something that gave them a “bad feeling”. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/three-reasons-why-disinformation-is-so-pervasive-and-what-we-can-do-about-it-188457">Three reasons why disinformation is so pervasive and what we can do about it</a>
</strong>
</em>
</p>
<hr>
<p>Regardless, they often reported finding the same things: a new sense of certainty, a new community they could talk to, and feeling they had regained some control of their lives.</p>
<p>Once people reached the middle stages of our proposed radicalisation pathway, we considered them to be invested in the new community, its goals, and its values. </p>
<h2>Growing intensity</h2>
<p>It was during these more intense stages that people began to report more negative impacts on their own lives. This could include the loss of friends and family, health issues caused by too much time spent on screens and too little sleep, and feelings of stress and paranoia. To soothe these pains, they turned again to their fringe communities for support. </p>
<p>Most people in our dataset didn’t progress past these middle stages. However, their continued activity in these spaces kept the misinformation ecosystem alive. </p>
<figure class="align-center ">
<img alt="Photo showing man and woman lying in bed in the dark, facing away from each other and looking at their phones." src="https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=293&fit=crop&dpr=1 600w, https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=293&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=293&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=368&fit=crop&dpr=1 754w, https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=368&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/577193/original/file-20240222-18-94qg55.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=368&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Engagement with misinformation proceeds in stages.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/young-asian-couple-using-smartphone-midnight-2131573395">TimeImage / Shutterstock</a></span>
</figcaption>
</figure>
<p>When people did move further and reach the extreme final stages in our model, they were doing active harm. </p>
<p>In their recounting of their experiences at these high levels of intensity, individuals spoke of choosing to break ties with loved ones, participating in public acts of disruption and, in some cases, engaging in violence against other people in the name of their cause. </p>
<p>Once people reached this stage, it took pretty strong interventions to get them out of it. The challenge, then, is how to intervene safely and effectively when people are in the earlier stages of being drawn into a fringe community.</p>
<h2>Respond with empathy, not shame</h2>
<p>We have a few suggestions. For people who are still in the earlier stages, friends and trusted advisers, like a doctor or a nurse, can have a big impact by simply responding with empathy. </p>
<p>If a loved one starts voicing possible fringe views, like a fear of vaccines, or animosity against women or other marginalised groups, a calm response that seeks to understand the person’s underlying concern can go a long way. </p>
<p>The worst response is one that might leave them feeling ashamed or upset. It may drive them back to their fringe community and accelerate their radicalisation. </p>
<p>Even if the person’s views intensify, maintaining your connection with them can turn you into a lifeline that will see them get out sooner rather than later.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/out-of-the-rabbit-hole-new-research-shows-people-can-change-their-minds-about-conspiracy-theories-222507">Out of the rabbit hole: new research shows people can change their minds about conspiracy theories</a>
</strong>
</em>
</p>
<hr>
<p>Once people reached the middle stages, we found third-party online content – not produced by government, but regular users – could reach people without backfiring. Considering that many people in our research sample had their radicalisation instigated by social media, we also suggest the private companies behind such platforms should be held responsible for the effects of their automated tools on society. </p>
<p>By the middle stages, arguments on the basis of logic or fact are ineffective. It doesn’t matter whether they are delivered by a friend, a news anchor, or a platform-affiliated fact-checking tool.</p>
<p>At the most extreme final stages, we found that only heavy-handed interventions worked, such as family members forcibly hospitalising their radicalised relative, or individuals undergoing government-supported deradicalisation programs.</p>
<h2>How not to be radicalised</h2>
<p>After all this, you might be wondering: how do you protect <em>yourself</em> from being radicalised? </p>
<p>As much of society becomes more dependent on digital technologies, we’re going to get exposed to even more misinformation, and our world is likely going to get smaller through online echo chambers. </p>
<p>One strategy is to foster your critical thinking skills by <a href="https://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(23)00198-5">reading long-form texts from paper books</a>. </p>
<p>Another is to protect yourself from the emotional manipulation of platform algorithms by <a href="https://guilfordjournals.com/doi/10.1521/jscp.2018.37.10.751">limiting your social media use</a> to small, infrequent, purposefully directed pockets of time.</p>
<p>And a third is to sustain connections with other humans, and lead a more analogue life – which has other benefits as well.</p>
<p>So in short: log off, read a book, and spend time with people you care about. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-month-at-sea-with-no-technology-taught-me-how-to-steal-my-life-back-from-my-phone-127501">A month at sea with no technology taught me how to steal my life back from my phone</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/223717/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Emily Booth is supported by funding from the Australian Department of Home Affairs and the Defence Innovation Network.</span></em></p><p class="fine-print"><em><span>Marian-Andrei Rizoiu receives funding from the Australian Department of Home Affairs, the Defence Science and Technology Group, the Defence Innovation Network and the Australian Academy of Science.</span></em></p>People who dive into misinformation are driven to satisfy an emotional need, according to our new research.Emily Booth, Research assistant, University of Technology SydneyMarian-Andrei Rizoiu, Associate Professor in Behavioral Data Science, University of Technology SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2233922024-02-15T15:58:50Z2024-02-15T15:58:50ZDisinformation threatens global elections – here’s how to fight back<figure><img src="https://images.theconversation.com/files/575950/original/file-20240215-22-at0x1v.jpg?ixlib=rb-1.1.0&rect=180%2C90%2C5826%2C3890&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Some Republicans still believe the 2020 election was "stolen" from Donald Trump.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/helena-montana-nov-7-2020-protesters-1849449790">Lyonstock/Shutterstock</a></span></figcaption></figure><p>With over half the world’s population heading to the polls in 2024, disinformation season is upon us — and the warnings are dire. The World Economic Forum <a href="https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf">declared</a> misinformation a top societal threat over the next two years and major news organisations <a href="https://www.nbcnews.com/tech/misinformation/disinformation-unprecedented-threat-2024-election-rcna134290">caution</a> that disinformation poses an unprecedented threat to democracies worldwide. </p>
<p>Yet, some scholars and pundits have <a href="https://theconversation.com/disinformation-is-often-blamed-for-swaying-elections-the-research-says-something-else-221579">questioned</a> whether disinformation can really sway election outcomes. Others think concern over disinformation is just a <a href="https://undark.org/2023/10/26/opinion-misinformation-moral-panic/">moral panic</a> or merely a <a href="https://iai.tv/articles/misinformation-is-the-symptom-not-the-disease-daniel-walliams-auid-2690">symptom</a> rather than the cause of our societal ills. Pollster Nate Silver even thinks that misinformation “<a href="https://twitter.com/NateSilver538/status/1745556135157899389">isn’t a coherent concept</a>”.</p>
<p>But we argue the evidence tells a different story.</p>
<p>A 2023 study showed that the vast majority of academic <a href="https://misinforeview.hks.harvard.edu/article/a-survey-of-expert-views-on-misinformation-definitions-determinants-solutions-and-future-of-the-field/">experts</a> are in agreement about how to define misinformation (namely as false and misleading content) and what this looks like (for example lies, conspiracy theories and pseudoscience). Although the study didn’t cover disinformation, such experts generally agree that this can be defined as intentional misinformation.</p>
<p>A recent paper <a href="https://www.nature.com/articles/s44271-023-00054-5">clarified</a> that misinformation can be both a symptom and the disease. In 2022, nearly 70% of Republicans still <a href="https://www.politifact.com/article/2022/jun/14/most-republicans-falsely-believe-trumps-stolen-ele/">endorsed</a> the false conspiracy theory that the 2020 US presidential election was “stolen” from Donald Trump. If Trump had never floated this theory, how would millions of people have possibly acquired these beliefs?</p>
<p>Moreover, although it is clear that people do not always act on dangerous beliefs, the January 6 US Capitol riots, incited by false claims, serve as an important reminder that a <a href="https://www.politifact.com/article/2021/jun/30/misinformation-and-jan-6-insurrection-when-patriot/">misinformed</a> crowd can disrupt and undermine democracy. </p>
<p>Given that nearly 25% of elections are decided by a margin of <a href="https://www.pnas.org/doi/full/10.1073/pnas.1419828112">under 3%</a>, mis- and disinformation can have important influence. One <a href="https://www.sciencedirect.com/science/article/pii/S0261379418303019">study</a> found that among previous Barack Obama voters who did not buy into any fake news about Hillary Clinton during the 2016 presidential election, 89% voted for Clinton. By contrast, among prior Obama voters who believed at least two fake headlines about Clinton, only 17% voted for her. </p>
<p>While this doesn’t necessarily prove that the misinformation caused the voting behaviour, we do know that <a href="https://www.channel4.com/news/revealed-trump-campaign-strategy-to-deter-millions-of-black-americans-from-voting-in-2016">millions</a> of black voters were targeted with misleading ads discrediting Clinton in key swing states ahead of the election. </p>
<p>Research has shown that such micro-targeting of specific audiences based on variables such as their personality not only influences <a href="https://www.pnas.org/doi/full/10.1073/pnas.1710966114">decision-making</a> but also impacts <a href="https://journals.sagepub.com/doi/full/10.1177/0093650220961965">voting intentions</a>. A recent <a href="https://academic.oup.com/pnasnexus/advance-article/doi/10.1093/pnasnexus/pgae035/7591134">paper</a> illustrated how large language models can be deployed to craft micro-targeted ads at scale, estimating that for every 100,000 individuals targeted, at least several thousand can be persuaded.</p>
<p>We also know that not only are people bad at <a href="https://www.cell.com/iscience/pdf/S2589-0042(21)01335-3.pdf">discerning</a> deepfakes (AI generated images of fake events) from genuine content, studies find that deepfakes do influence <a href="https://journals.sagepub.com/doi/full/10.1177/1940161220944364">political</a> attitudes among a small target group. </p>
<p>There are more indirect consequences of disinformation too, such as eroding public <a href="https://journals.sagepub.com/doi/full/10.1177/1461444820943878">trust</a> and <a href="https://www.pnas.org/doi/abs/10.1073/pnas.2115900119">participation</a> in elections.</p>
<p>Other than hiding under our beds and worrying, what can we do to protect ourselves?</p>
<h2>The power of prebunking</h2>
<p>Many efforts have focused on fact-checking and debunking false beliefs. In contrast, <a href="https://www.tandfonline.com/doi/full/10.1080/10463283.2021.1876983">“prebunking”</a> is a new way to prevent false beliefs from forming in the first place. Such “inoculation” involves warning people not to fall for a false narrative or propaganda tactic, together with an explanation as to why. </p>
<p>Misinforming rhetoric has clear <a href="https://journals.sagepub.com/doi/full/10.1177/09579265221076609">markers</a>, such as scapegoating or use of false dichotomies (there are many others), that people can learn to identify. Like a medical vaccine, the prebunk exposes the recipient to a “weakened dose” of the infectious agent (the disinformation) and refutes it in a way that confers protection. </p>
<p>For example, we created an online <a href="https://www.vice.com/en/article/dy8vzm/homeland-security-funded-this-game-about-destabilizing-a-small-us-town">game</a> for the Department of Homeland Security to empower Americans to spot foreign influence techniques during the 2020 presidential election. The weakened dose? <a href="https://www.nbcnews.com/news/us-news/u-s-cybersecurity-agency-uses-pineapple-pizza-demonstrate-vulnerability-foreign-n1035296">Pineapple pizza</a>.</p>
<p>How could pineapple pizza possibly be the way to tackle misinformation? It shows how bad-faith actors can take an innocuous issue such as whether or not to put pineapple on pizza, and use this to try to start a culture war. They might claim it’s offensive to Italians or urge Americans not to let anybody restrict their pizza-topping freedom.</p>
<p>They can then buy bots to amplify the issue on both sides, disrupt debate – and sow chaos. Our <a href="https://misinforeview.hks.harvard.edu/article/breaking-harmony-square-a-game-that-inoculates-against-political-misinformation/">results</a> showed that people improved in their ability to recognise these tactics after playing our inoculation game. </p>
<p>In 2020, <a href="https://www.npr.org/2022/10/28/1132021770/false-information-is-everywhere-pre-bunking-tries-to-head-it-off-early">Twitter</a> identified false election tropes as potential “vectors of misinformation” and sent out prebunks to millions of US users warning them of fraudulent claims, such as that voting by mail is not safe. </p>
<p>These prebunks armed people with a fact — that experts agree that voting by mail is reliable — and it worked insofar as the prebunks inspired confidence in the election process and motivated users to seek out more factual information. Other social media companies, such as <a href="https://medium.com/jigsaw/prebunking-to-build-defenses-against-online-manipulation-tactics-in-germany-a1dbfbc67a1a">Google</a> and <a href="https://sustainability.fb.com/blog/2022/10/24/climate-science-literacy-initiative/">Meta</a> have followed suit across a range of issues. </p>
<p>A new <a href="https://bpb-us-e1.wpmucdn.com/sites.dartmouth.edu/dist/5/2293/files/2024/02/voter-fraud-corrections-e163369556a2d7a4.pdf">paper</a> tested inoculation against false claims about the election process in the US and Brazil. Not only did it find that prebunking worked better than traditional debunking, but also that the inoculation improved discernment between true and false claims, effectively reduced election fraud beliefs and improved confidence in the integrity of the upcoming 2024 elections. </p>
<p>In short, inoculation is a <a href="https://futurefreespeech.org/background-report-empowering-audiences-against-misinformation-through-prebunking/">free speech</a>-empowering intervention that can work on a global scale. When Russia was looking for a pretext to invade Ukraine, US president Joe Biden used this approach to “<a href="https://www.deseret.com/opinion/2022/3/2/22955870/opinion-how-the-white-house-prebunked-putins-lies-disinformation-joe-biden-donald-trump-russia">inoculate</a>” the world against Putin’s plan to stage and film a fabricated Ukrainian atrocity, complete with actors, a script and a movie crew. Biden declassified the intelligence and exposed the plot.</p>
<p>In effect, he warned the world not to fall for fake videos with actors pretending to be Ukrainian soldiers on Russian soil. Forewarned, the international community was <a href="https://www.economist.com/united-states/2022/02/26/deploying-reality-against-putin">unlikely</a> to fall for it. Russia found another pretext to invade, of course, but the point remains: forewarned is forearmed.</p>
<p>But we need not rely on government or tech firms to build <a href="https://harpercollins.co.uk/products/mental-immunity-infectious-ideas-mind-parasites-and-the-search-for-a-better-way-to-think-andy-norman?variant=39295503597646">mental immunity</a>. We can all <a href="https://interventions.withgoogle.com/static/pdf/A_Practical_Guide_to_Prebunking_Misinformation.pdf">learn</a> how to spot misinformation by studying the markers accompanying misleading rhetoric.</p>
<p>Remember that polio, a highly infectious disease, has been all but eradicated through vaccination and herd immunity. Our challenge now is to build herd immunity to the tricks of disinformers and propagandists. </p>
<p>The future of our democracy may depend on it.</p>
<p class="fine-print"><em><span>Sander van der Linden consults for or receives funding from the UK Government's Cabinet Office, The U.S. State Department, the American Psychological Association, the US Center for Disease Control, the European Commission, the Templeton World Charity Foundation, the United Nations, the World Health Organization, Google, and Meta. </span></em></p><p class="fine-print"><em><span>Lee McIntyre advises the UK Government on how to fight disinformation.</span></em></p><p class="fine-print"><em><span>Stephan Lewandowsky receives funding from the European Research Council (ERC Advanced Grant 101020961 PRODEMINFO), the
Humboldt Foundation through a research award, the Volkswagen Foundation (grant “Reclaiming individual autonomy and democratic discourse online: How to rebalance human and algorithmic decision making”), and the European Commission (Horizon 2020 grants 964728 JITSUVAX and 101094752 SoMe4Dem). He also receives funding from Jigsaw (a technology incubator created by Google) and from UK Research and Innovation (through EU Horizon replacement funding grant number 10049415). He collaborates with the European Commission's Joint Research Centre.</span></em></p>Scientists estimate that for every 100,000 people targeted with specific political ads, several thousand can be persuaded.Sander van der Linden, Professor of Social Psychology in Society, University of CambridgeLee McIntyre, Research Fellow, Center for Philosophy and History of Science, Boston UniversityStephan Lewandowsky, Chair of Cognitive Psychology, University of BristolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2158152024-02-15T01:53:26Z2024-02-15T01:53:26ZCan we be inoculated against climate misinformation? Yes – if we prebunk rather than debunk<figure><img src="https://images.theconversation.com/files/575202/original/file-20240213-24-2257zy.jpg?ixlib=rb-1.1.0&rect=239%2C58%2C4606%2C2971&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/montreal-canada-september-27-2019-woman-1547586671">Adrien Demers/Shutterstock</a></span></figcaption></figure><p>Last year, the world experienced the hottest day <a href="https://www.washingtonpost.com/climate-environment/2023/07/05/hottest-day-ever-recorded">ever recorded</a>, as we endured the first year where temperatures were 1.5°C warmer than the pre-industrial era. The link between extreme events and climate change is <a href="https://www.worldweatherattribution.org/extreme-heat-in-north-america-europe-and-china-in-july-2023-made-much-more-likely-by-climate-change/#:%7E:text=July%202023%20saw%20extreme%20heatwaves,China%20(CNN%2C2023).">clearer than ever</a>. But that doesn’t mean climate misinformation has stopped. Far from it. </p>
<p>Misleading or incorrect information on climate still spreads like wildfire, even during the angry northern summer of 2023. Politicians falsely claimed the heatwaves were “<a href="https://www.politico.com/news/magazine/2023/08/09/phoenix-heat-wave-republicans-00110325">normal</a>” for summer. Conspiracy theorists claimed the devastating fires in Hawaii were ignited by <a href="https://www.forbes.com/sites/mattnovak/2023/08/11/conspiracy-theorists-go-viral-with-claim-space-lasers-are-to-blame-for-hawaii-fires/?sh=1d46579e4529">government lasers</a>. </p>
<p>People producing misinformation have shifted tactics, too, often moving from the old denial (claiming climate change isn’t happening) to the <a href="https://edition.cnn.com/2024/01/16/climate/climate-denial-misinformation-youtube/index.html">new denial</a> (questioning climate solutions). Spreading doubt and scepticism has hamstrung our response to the enormous threat of climate change. And with sophisticated generative AI making it easy to generate plausible lies, it could become an <a href="https://www.stockholmresilience.org/download/18.889aab4188bda3f44912a32/1687863825612/SRC_Climate%20misinformation%20brief_A4_.pdf">even bigger issue</a>.</p>
<p>The problem is, debunking misinformation <a href="https://www.nature.com/articles/s41562-023-01623-8">is often not sufficient</a>, and you risk lending false information <a href="https://link.springer.com/article/10.1007/s12144-024-05651-z">credibility</a> by repeating it in order to debunk it. Indeed, a catchy lie can often stay in people’s heads while sober facts are forgotten. </p>
<p>But there’s a new option: the <a href="https://interventions.withgoogle.com/static/pdf/A_Practical_Guide_to_Prebunking_Misinformation.pdf">prebunking method</a>. Rather than waiting for misinformation to spread, you lay out clear, accurate information in advance – along with describing common manipulation techniques. Prebunking often has a better chance of success, according to <a href="https://harpercollins.co.uk/products/foolproof-why-we-fall-for-misinformation-and-how-to-build-immunity-sander-van-der-linden?variant=39973011980366">recent research</a> from co-author Sander van der Linden. </p>
<h2>How does prebunking work?</h2>
<p><a href="https://engineering.stanford.edu/magazine/article/how-fake-news-spreads-real-virus">Misinformation spreads</a> much like a virus. The way to protect ourselves and everyone else is similar: through vaccination. Psychological inoculation via prebunking acts like a vaccine and reduces the probability of infection. (We focus on misinformation here, which is shared accidentally, not <a href="https://frontline.thehindu.com/news/what-is-climate-misinformation-and-why-does-it-matter-disinformation-opponents-of-climate-science-greenwashing/article67771776.ece">disinformation</a>, which is where people deliberately spread information they know to be false). </p>
<p>If you’re forewarned about dodgy claims and questionable techniques, you’re more likely to be sceptical when you come across a YouTube video claiming electric cars are dirtier than those with internal combustion engines, or a Facebook page suggesting offshore wind turbines will kill whales. </p>
<p>Inoculation is not just a metaphor. By exposing us to a weakened form of the types of misinformation we might see in the future and giving us ways to identify it, we reduce the chance false information takes root in our psyches. </p>
<p>Scientists have tested these methods with some success. In <a href="https://publichealth.jmir.org/2022/6/e34615/">one study</a> exploring ways of countering anti-vaccination misinformation, researchers created simple videos to warn people that manipulators might try to influence their thinking about vaccination with anecdotes or scary images rather than evidence. </p>
<p>They also gave people relevant facts about how low the actual injury rate from vaccines is (around two injuries per million). The result: compared to a control group, people with the psychological inoculation were more likely to recognise misleading rhetoric, less likely to share this type of content with others, and more likely to want to get vaccinated. </p>
<p>Similar studies have <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/gch2.201600008">been conducted</a> on climate misinformation. Here, one group was forewarned that politically motivated actors would try to make it seem as if there were a lot of disagreement on the causes of climate change by appealing to fake experts and bogus petitions, when in fact <a href="https://theconversation.com/the-97-climate-consensus-is-over-now-its-well-above-99-and-the-evidence-is-even-stronger-than-that-170370">97% or more</a> of climate scientists have concluded humans are causing climate change. This inoculation proved effective. </p>
<p>The success of these early studies has spurred social media companies <a href="https://sustainability.fb.com/blog/2022/10/24/climate-science-literacy-initiative/">such as Meta</a> to adopt the technique. You can now find prebunking efforts on Meta sites such as Facebook and Instagram intended to protect people against common misinformation techniques, such as cherry-picking isolated data. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/youtube-how-a-team-of-scientists-worked-to-inoculate-a-million-users-against-misinformation-189007">YouTube: how a team of scientists worked to inoculate a million users against misinformation</a>
</strong>
</em>
</p>
<hr>
<h2>Prebunking in practice</h2>
<p>A hotter world will experience increasing climate extremes and <a href="https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2020RG000726">more fire</a>. Even though many of the fires we have seen in recent years in Australia, Hawaii, Canada and <a href="https://www.theguardian.com/global-development/2024/feb/10/chile-wildfires-vina-del-mar-achupallas">now Chile</a> are the worst on record, climate misinformation actors routinely try to minimise their severity. </p>
<p>As an example, let’s prebunk claims likely to circulate after the next big fire. </p>
<p><strong>1. The claim: “Climate change is a hoax – wildfires have always been a part of nature.”</strong></p>
<p>How to prebunk it: ahead of fire seasons, scientists can demonstrate claims like this rely on the “<a href="https://newslit.org/tips-tools/news-lit-tip-false-equivalence/">false equivalence</a>” logical fallacy. Misinformation falsely equates the recent rise in extreme weather events with natural events of the past. A devastating fire 100 years ago does not disprove <a href="https://www.unep.org/resources/report/spreading-wildfire-rising-threat-extraordinary-landscape-fires">the trend</a> towards more fires and larger fires. </p>
<p><strong>2. Claim: “Bushfires are caused by arsonists.”</strong> </p>
<p>How to prebunk it: media professionals have an important responsibility here in fact-checking information before publishing or broadcasting. Media can give information on the most common causes of bushfires, from lightning (about 50%) to accidental fires to arson. <a href="https://www.theaustralian.com.au/nation/bushfires-firebugs-fuelling-crisis-as-national-arson-toll-hits-183/news-story/52536dc9ca9bb87b7c76d36ed1acf53f#:%7E:text=Victoria's%20Crime%20Statistics%20agency%20told,older%20men%20in%20their%2060s.">Media claims</a> that arsonists were the main cause of the unprecedented 2019-2020 Black Summer fires in Australia were used by climate deniers worldwide, even though arson was <a href="https://www.abc.net.au/news/2020-01-11/australias-fires-reveal-arson-not-a-major-cause/11855022">far from the main cause</a>.</p>
<p><strong>3. Claim: “The government is using bushfires as an excuse to bring in climate regulations.”</strong> </p>
<p>How to prebunk it: explain this recycled conspiracy theory is likely to circulate. Point out how it was used to claim COVID-19 lockdowns were a government ploy to soften people up for <a href="https://www.nbcnews.com/news/world/climate-lockdowns-became-new-battleground-conspiracy-driven-protest-mo-rcna80370">climate lockdowns</a> (which never happened). Show how government agencies can and do communicate openly about why climate regulations <a href="https://www.dcceew.gov.au/climate-change/strategies">are necessary</a> and how they are intended to stave off the worst damage. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="firefighter putting out bushfire" src="https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=220&fit=crop&dpr=1 600w, https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=220&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=220&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=276&fit=crop&dpr=1 754w, https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=276&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/575160/original/file-20240212-26-6ztcl9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=276&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">False information on bushfires can spread like a bushfire.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/australia-bushfires-fire-fueled-by-wind-1566620281">Toa55/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Misinformation isn’t going away</h2>
<p>Social media and the open internet have made it possible to broadcast information to millions of people, regardless of whether it’s true. It’s no wonder it’s a golden age for misinformation. Misinformation actors have found effective ways to cast scepticism on established science and then sell a false alternative. </p>
<p>We have to respond. Doing nothing means the lies win. And getting on the front foot with prebunking is one of the best tools we have. </p>
<p>As the world gets hotter, prebunking offers a way to anticipate new variants of lies and misinformation and counter them – before they take root. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/7-ways-to-avoid-becoming-a-misinformation-superspreader-157099">7 ways to avoid becoming a misinformation superspreader</a>
</strong>
</em>
</p>
<hr>
<p class="fine-print"><em><span>Chris Turney receives funding from the Australian Research Council. He is a scientific adviser and holds shares in cleantech biographite company, CarbonScape. Chris is affiliated with the virtual Climate Recovery Institute, is a volunteer firefighter with the New South Wales Rural Fire Service (the NSW RFS), and is a Non-Executive Director on the boards of the NSW Environment Protection Authority (EPA) and deeptech incubator, Cicada.</span></em></p><p class="fine-print"><em><span>Sander van der Linden consults for or has received funding from Google, the EU Commission, the United Nations (UN), the World Health Organization (WHO), the Alfred Landecker Foundation, Omidyar Network India, the American Psychological Association, the Centers for Disease Control, UK Government, Facebook/Meta, and the Gates Foundation.</span></em></p>When we see false information circulating, we might move to debunk it. But prebunking lies and explaining manipulation techniques can work better.Christian Turney, Pro Vice-Chancellor of Research, University of Technology SydneySander van der Linden, Professor of Social Psychology in Society, University of CambridgeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2232532024-02-14T14:26:07Z2024-02-14T14:26:07ZWagner Group is now Africa Corps. What this means for Russia’s operations on the continent<p><em>In August 2023, Wagner Group leader Yevgeny Prigozhin died after <a href="https://www.theguardian.com/world/2023/oct/05/hand-grenade-explosion-caused-plane-crash-that-killed-wagner-boss-says-putin">his private jet crashed</a> about an hour after taking off in Moscow. He had been Russia’s pointman in Africa since the Wagner Group <a href="https://www.cfr.org/in-brief/what-russias-wagner-group-doing-africa">began operating on the continent in 2017</a>.</em></p>
<p><em>The group is known for <a href="https://theconversation.com/wagner-group-in-africa-russias-presence-on-the-continent-increasingly-relies-on-mercenaries-198600">deploying paramilitary forces, running disinformation campaigns and propping up influential political leaders</a>. It has had a destabilising effect. Prigozhin’s death – and his <a href="https://www.aljazeera.com/news/2023/6/24/timeline-how-wagner-groups-revolt-against-russia-unfolded">aborted mutiny</a> against Russian military commanders two months earlier – has led to a shift in Wagner Group’s activities.</em></p>
<p><em>What does this mean for Africa? <a href="https://scholar.google.com/citations?hl=en&user=fvXhZxQAAAAJ&view_op=list_works&sortby=pubdate">Alessandro Arduino’s research</a> includes mapping the evolution of <a href="https://rowman.com/ISBN/9781538170311/Money-for-Mayhem-Mercenaries-Private-Military-Companies-Drones-and-the-Future-of-War">mercenaries</a> and private military companies across Africa. He provides some answers.</em></p>
<h2>What is the current status of the Wagner Group?</h2>
<p>Following Yevgeny Prigozhin’s death, the Russian ministries of foreign affairs and defence quickly reassured Middle Eastern and African states that it would be <a href="https://jamestown.org/program/the-wagner-group-evolves-after-the-death-of-prigozhin/">business as usual</a> – meaning unofficial Russian boots on the ground would keep operating in these regions.</p>
<p><a href="https://adf-magazine.com/2024/01/with-new-name-same-russian-mercenaries-plague-africa/">Recent reports</a> on the Wagner Group suggest a <a href="https://www.cnbc.com/2024/02/12/russias-wagner-group-expands-into-africas-sahel-with-a-new-brand.html#:%7E:text=Wagner%20Group%20has%20been%20replaced,its%20new%20leader%20has%20confirmed.">transformation</a> is underway. </p>
<p>The group’s activities in Africa are now under the <a href="https://www.brookings.edu/articles/what-is-the-fallout-of-russias-wagner-rebellion/">direct supervision</a> of the Russian ministry of defence. </p>
<p>Wagner commands an estimated force of <a href="https://www.cfr.org/in-brief/what-russias-wagner-group-doing-africa#:%7E:text=Rather%20than%20a%20single%20entity%2C%20Wagner%20is%20a,of%20former%20Russian%20soldiers%2C%20convicts%2C%20and%20foreign%20nationals.">5,000 operatives</a> deployed throughout Africa, from Libya to Sudan. As part of the transformation, the defence ministry has renamed it the <a href="https://www.bloomberg.com/news/newsletters/2024-01-30/russia-raises-the-stakes-in-tussle-over-africa">Africa Corps</a>. </p>
<p>The choice of <a href="https://www.businessinsider.com/new-russian-military-unit-recruiting-former-wagner-fighters-ukraine-veterans-2023-12?r=US&IR=T">name</a> could be an attempt to add a layer of obfuscation to cover what has been in plain sight for a long time: that Russian mercenaries in Africa <a href="https://www.theglobeandmail.com/business/article-canadian-owned-mine-seized-by-russian-mercenaries-in-africa-is-helping/">serve one master</a> – the Kremlin. </p>
<p>Nevertheless, the direct link to Russia’s ministry of defence will make it difficult for Russia to argue that a foreign government has requested the services of a Russian private military company without the Kremlin’s involvement. The head of the Russian ministry of foreign affairs <a href="https://www.reuters.com/world/africa/mali-asked-private-russian-military-firm-help-against-insurgents-ifx-2021-09-25/">attempted to use this defence in Mali</a>.</p>
<p>The notion of transforming the group into the Africa Corps may have been inspired by World War II German field marshal <a href="https://www.encyclopedia.com/history/encyclopedias-almanacs-transcripts-and-maps/afrika-korps">Erwin Rommel’s Afrika Korps</a>. Nazi Germany wove myths around his <a href="https://academic.oup.com/ahr/article-abstract/115/4/1243/35179?redirectedFrom=fulltext">strategic and tactical successes in north Africa</a>.</p>
<p>But will the Wagner Group under new leadership uphold the <a href="https://nationalinterest.org/feature/wagner-group-africa-where-rubber-meets-road-206202">distinctive modus operandi</a> that propelled it to infamy during Prigozhin’s reign? This included the intertwining of boots on the ground with propaganda and disinformation. It also leveraged technologies and a sophisticated network of financing to enhance combat capabilities.</p>
<h2>What will happen to Wagner’s modus operandi now?</h2>
<p>In my recent book, <a href="https://rowman.com/ISBN/9781538170311/Money-for-Mayhem-Mercenaries-Private-Military-Companies-Drones-and-the-Future-of-War">Money for Mayhem: Mercenaries, Private Military Companies, Drones and the Future of War</a>, I record Prigozhin’s adept weaving of disinformation and misinformation. </p>
<p>Numerous meticulously orchestrated campaigns flooded Africa’s online social platforms <a href="https://www.state.gov/disarming-disinformation/yevgeniy-prigozhins-africa-wide-disinformation-campaign/">promoting</a> the removal of French and western influence across the Sahel. </p>
<p>Prigozhin oversaw the creation of the Internet Research Agency, which operated as the propaganda arm of the group. It supported Russian disinformation campaigns and was sanctioned in 2018 by the US government for meddling in American elections. Prigozhin <a href="https://edition.cnn.com/2023/02/14/europe/russia-yevgeny-prigozhin-internet-research-agency-intl/index.html">admitted</a> to founding the so-called troll farm: </p>
<blockquote>
<p>I’ve never just been the financier of the Internet Research Agency. I invented it, I created it, I managed it for a long time.</p>
</blockquote>
<p>From a financial perspective, Prigozhin’s approach involved establishing a <a href="https://home.treasury.gov/news/press-releases/jy1581">convoluted network of lucrative natural resources mining operations</a>. These spanned gold mines in the Central African Republic to diamond mines in Sudan. </p>
<p>This strategy was complemented by significant cash infusions from the <a href="https://www.theguardian.com/world/2023/nov/09/how-russia-recruiting-wagner-fighters-continue-war-ukraine">Russian state</a> to support the Wagner Group’s direct involvement in hostilities. This extended from Syria to Ukraine, and across north and west Africa.</p>
<p>My research shows Prigozhin networks are solid enough to last. But only as long as the golden rule of the mercenary remains intact: guns for hire are getting paid.</p>
<p>In Libya and Mali, Russia is unlikely to yield ground due to enduring geopolitical objectives. These include generating revenue from oil fields, securing access to ports for its navy and securing its position as a kingmaker in the region. However, the Central African Republic may see less attention from Moscow. The Wagner Group’s involvement here was <a href="https://foreignpolicy.com/2024/02/07/africa-corps-wagner-group-russia-africa-burkina-faso/">primarily linked</a> to Prigozhin’s personal interests in goldmine revenues.</p>
<p>The Russian ministry of defence will no doubt seek to create a unified and loyal force dedicated to military action. But with the enduring legacy of Soviet-style bureaucracy, marked by excessive paperwork and procrastination among today’s Russian officials, one might surmise that greater allegiance to Moscow will likely come at the cost of reduced flexibility.</p>
<p>History has shown that Africa serves as a <a href="https://theconversation.com/wagner-group-mercenaries-in-africa-why-there-hasnt-been-any-effective-opposition-to-drive-them-out-207318">lucrative arena for mercenaries</a> due to various factors. These include: </p>
<ul>
<li><p>the prevalence of low-intensity conflicts reduces the risks to mercenaries’ lives compared to full-scale wars like in <a href="https://www.aljazeera.com/news/2024/2/13/russia-ukraine-war-list-of-key-events-day-720">Ukraine</a></p></li>
<li><p>the continent’s abundant natural resources are prone to exploitation</p></li>
<li><p>pervasive instability allows mercenaries to operate with relative impunity.</p></li>
</ul>
<p>As it is, countries in Africa once considered allies of the west are looking for alternatives. Russia is increasingly looking like a <a href="https://theconversation.com/five-essential-reads-on-russia-africa-relations-187568">viable candidate</a>. In January 2024, Chad’s junta leader, Mahamat Idriss Deby, met with Russian president Vladimir Putin in Moscow to “<a href="https://www.reuters.com/world/africa/putin-meets-chad-junta-leader-russia-competes-with-france-africa-2024-01-24/">develop bilateral ties</a>”. Chad previously had taken a pro-western policy.</p>
<p>A month earlier, Russia’s deputy defence minister Yunus-Bek Yevkurov, who’s been tasked with overseeing Wagner’s activities in the Middle East and north Africa, <a href="https://www.africanews.com/2023/12/04/russian-officials-visit-niger-to-strengthen-military-ties/">visited Niger</a>. The two countries <a href="https://theconversation.com/niger-and-russia-are-forming-military-ties-3-ways-this-could-upset-old-allies-221696">agreed to strengthen military ties</a>. Niger is currently led by the military after a <a href="https://www.iiss.org/en/publications/strategic-comments/2023/the-coup-in-niger/">coup in July 2023</a>.</p>
<h2>Where does it go from here?</h2>
<p>There are a number of paths that the newly named Africa Corps could take.</p>
<ul>
<li><p>It gets deployed by Moscow to fight in conflicts meeting Russia’s geopolitical ends. </p></li>
<li><p>It morphs into paramilitary units under the guise of Russian foreign military intelligence agencies.</p></li>
<li><p>It splinters into factions, acting as heavily armed personal guards for local warlords. </p></li>
</ul>
<p>The propaganda machinery built by Prigozhin may falter during the transition. But this won’t signal the immediate disappearance of the Russian disinformation ecosystem. </p>
<p>Russian diplomatic efforts are already mobilising to preserve the status quo. This is clear from Moscow’s <a href="https://jamestown.org/program/brief-russia-deepens-counter-terrorism-ties-to-sahelian-post-coup-regimes/">backing</a> of the recent Alliance of Sahelian States encompassing Mali, Burkina Faso and Niger. All three nations are led by military rulers who overthrew civilian governments and recently announced <a href="https://www.reuters.com/world/africa/niger-mali-burkina-faso-say-they-are-leaving-ecowas-regional-block-2024-01-28/">plans to exit</a> from the 15-member Economic Community of West African States.</p>
<p class="fine-print"><em><span>Alessandro Arduino is a member of the International Code of Conduct Advisory Group.</span></em></p>Will the Wagner Group under new leadership uphold the ruthless modus operandi that propelled it to the spotlight in Africa?Alessandro Arduino, Affiliate Lecturer, King's College LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2231192024-02-12T11:47:56Z2024-02-12T11:47:56ZDeepfakes and disinformation swirl ahead of Indonesian election – podcast<figure><img src="https://images.theconversation.com/files/574372/original/file-20240208-28-gpe2qm.png?ixlib=rb-1.1.0&rect=28%2C14%2C1866%2C902&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A screenshot from a deepfake video shared on X purporting to show former Indonesian President Suharto. </span> <span class="attribution"><a class="source" href="https://x.com/erwinaksa_id/status/1754370873992360113?s=20">Erwin Aksa via X</a></span></figcaption></figure><p>Indonesia, the world’s third-largest democracy, goes to the polls on February 14 to elect a new president. It’s one of the largest elections to take place since an explosion of generative AI tools became available that can manipulate video and audio – and a number of deepfake videos have gone viral during the campaign. </p>
<p>In this episode of <a href="https://theconversation.com/uk/topics/the-conversation-weekly-98901">The Conversation Weekly</a> podcast, we look at what Indonesia’s experience is revealing about the disinformation battleground ahead in 2024, when an estimated <a href="https://theconversation.com/more-than-4-billion-people-are-eligible-to-vote-in-an-election-in-2024-is-this-democracys-biggest-test-220837">4 billion voters</a> will be eligible to vote in an election. </p>
<iframe src="https://embed.acast.com/60087127b9687759d637bade/65ca010a896a6400158c6dde" frameborder="0" width="100%" height="190px"></iframe>
<p><iframe id="tc-infographic-561" class="tc-infographic" height="100" src="https://cdn.theconversation.com/infographics/561/4fbbd099d631750693d02bac632430b71b37cd5f/site/index.html" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>Some of Indonesia’s deepfake videos are fairly easy to debunk. One, which went <a href="https://www.benarnews.org/english/news/indonesian/suharto-deepfake-used-in-election-campaign-01122024135217.html">viral in January</a>, shows a video of Suharto, the former president of Indonesia, endorsing his former political party, Golkar. Suharto, whose 32 years in power were marked by a brutal military dictatorship, died in 2008. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-professor-the-general-and-the-populist-meet-the-three-candidates-running-for-president-in-indonesia-217811">The professor, the general and the populist: meet the three candidates running for president in Indonesia</a>
</strong>
</em>
</p>
<hr>
<p>Others are a bit more subtle. Lilik Mardjianto, a journalism lecturer at Universitas Multimedia Nusantara in Indonesia, says a few deepfakes use “factual videos but manipulate the voice using AI” to make it sound like a politician is speaking in another language. In one, Joko Widodo, the outgoing president, is depicted speaking in Mandarin. Other videos have depicted <a href="https://factcheck.afp.com/doc.afp.com.342A6RJ">two candidates</a> for the 2024 election <a href="https://factcheck.afp.com/doc.afp.com.34324G7">speaking in Arabic</a>. </p>
<p>For Nuurrianti Jalli, an expert on disinformation in south-east Asia at Oklahoma State University in the US, these deepfakes, even when they’re crude, can influence the political conversation.</p>
<blockquote>
<p>Indonesia is a Muslim majority country, number one in the world. And having presidential candidates speaking fluent Arabic, people see it as a good reflection of Muslim leaders. So that you can see how, in Indonesia, AI-generated content can create more awareness about the presidential candidate and eventually can, perhaps create more positive perception of this candidate. </p>
</blockquote>
<p>Indonesians are no strangers to disinformation spread on social media. “Hoaks”, as they’re called in Indonesia, proliferated during the last presidential election campaign in 2019. Jalli says rumours spread online also contributed to post-election violence in <a href="https://www.aljazeera.com/news/2019/10/29/rights-group-10-unlawfully-killed-in-indonesia-election-riots">which ten people were killed</a> during protests against the re-election of Widodo. </p>
<p>In 2024, teams of journalists are racing to fact check claims and content ahead of the polls, including <a href="https://theconversation.com/tcid-aji-kompas-com-dan-tempo-co-luncurkan-kolaborasi-panel-ahli-cek-fakta-220238">The Conversation</a> in Indonesia.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/200-million-voters-820-000-polling-stations-and-10-000-candidates-indonesias-massive-election-by-the-numbers-222604">200 million voters, 820,000 polling stations and 10,000 candidates: Indonesia's massive election, by the numbers</a>
</strong>
</em>
</p>
<hr>
<p>But Jalli says the factcheckers she’s spoken to worry they’re now playing catch-up with AI-generated hoaxes. “Everything goes viral first” and then journalists try to “debunk that after millions of people watched it”, she says. </p>
<p>Listen to Jalli and Mardjianto, plus Nurul Fitri Ramadhani, politics editor at The Conversation in Indonesia, on <a href="https://podfollow.com/the-conversation-weekly/view">The Conversation Weekly podcast</a>. You can also read more about disinformation in the <a href="https://theconversation.com/uk/topics/indonesia-election-2024-147192">Indonesian election on The Conversation</a>. </p>
<p>A <a href="https://cdn.theconversation.com/static_files/files/3074/Indonesia_Deepfakes_Transcript.docx.pdf?1709054499">transcript of this episode</a> is now available. </p>
<p><em>This episode of The Conversation Weekly was written by Mend Mariwany, and produced by Mend Mariwany and Gemma Ware. Sound design was by Eloise Stevens, and our theme music is by Neeta Sarl. Stephen Khan is our global executive editor, Alice Mason runs our social media and Soraya Nandy does our transcripts.</em></p>
<p><em>You can find us on X, formerly known as Twitter <a href="https://twitter.com/TC_Audio">@TC_Audio</a>, on Instagram at <a href="https://www.instagram.com/theconversationdotcom/">theconversationdotcom</a> or <a href="mailto:podcast@theconversation.com">via email</a>. You can also subscribe to The Conversation’s <a href="https://theconversation.com/newsletter">free daily email here</a>.</em></p>
<p><em>Listen to The Conversation Weekly via any of the apps listed above, download it directly via our <a href="https://feeds.acast.com/public/shows/60087127b9687759d637bade">RSS feed</a> or find out <a href="https://theconversation.com/how-to-listen-to-the-conversations-podcasts-154131">how else to listen here</a>.</em></p><img src="https://counter.theconversation.com/content/223119/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>F.X. Lilik Dwi Mardjianto's research on the fact-checking audience has been supported by the Indonesian Cyber Media Association. Nuurrianti Jalli does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Disinformation experts, Lilik Mardjianto and Nuurrianti Jalli, tell The Conversation Weekly podcast about the deepfakes circulating ahead of the Indonesian election.Gemma Ware, Editor and Co-Host, The Conversation Weekly Podcast, The ConversationLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2228082024-02-09T12:19:12Z2024-02-09T12:19:12ZIt may be too late to stop the great election disinformation campaigns of 2024 but we have to at least try<figure><img src="https://images.theconversation.com/files/573598/original/file-20240205-19-3y7yaz.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C953%2C494&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/human-hand-holding-megaphone-vote-words-2410794649">Shutterstock/Master1305</a></span></figcaption></figure><p>Global liberal democracy faces a near unprecedented list of digital threats in 2024 as the increasing exploitation of AI and the rampant spread of disinformation threaten the integrity of elections in more than 60 countries. And we are woefully unprepared.</p>
<p>Votes are scheduled in India, Pakistan, Mexico and South Africa, to name but a few. A hotly contested election will be held for the European parliament in June and the US presidential elections are on the horizon in November. A general election is also due in the UK at some stage in the coming year.</p>
<p>These elections are all happening at a time when global security and the very foundations of democracy are under significant strain from the rise of populism, far-right ideologies and fascist movements. Meanwhile, trust in politicians and in mainstream institutions such as the media <a href="https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2023">remains extremely low</a>.</p>
<p>What we might have once dismissed as outlandish conspiracy theories, such as that <a href="https://www.npr.org/2024/02/01/1228373511/heres-why-conspiracy-theories-about-taylor-swift-and-the-super-bowl-are-spreadin">Taylor Swift is secretly working for the Pentagon</a> and the Super Bowl is rigged, are gaining traction, and <a href="https://spssi.onlinelibrary.wiley.com/doi/full/10.1111/sipr.12091">social cohesion is fraying</a> as people segregate into isolated echo chambers online. </p>
<p>There is a real danger that unless we act now to protect the public these issues will only be exacerbated by the threats posed by AI, Russian disinformation campaigns, and the invasive use of technology to target voters in the coming months.</p>
<h2>AI, deepfakes and disinformation</h2>
<p>It’s already clear that 2024 will be known as the year of the first AI elections. AI’s ability to distil vast amounts of data into actionable intelligence, and to produce personalised content to sway public opinion, will assuredly be used by mainstream political parties seeking to gain a tactical advantage in campaigning. </p>
<p>We are already seeing parties use <a href="https://www.cnbc.com/2023/12/17/how-2024-presidential-candidates-are-using-ai-in-election-campaigns.html">AI to analyse data on voting patterns</a> and to target voters in real time with <a href="https://theconversation.com/four-trends-youll-see-in-online-election-campaigns-this-year-222433">algorithmically-driven ad placements</a>.</p>
<p>There’s nothing inherently wrong or illegal about that, though it will alarm civil libertarians and does need to be regulated. The malevolent use of AI by rogue actors is far more concerning. Deepfakes – false or manipulated texts, images, video and audio – are already being spread via the gaming of algorithms with the intention of <a href="https://www.cnbc.com/2023/09/20/ai-could-harm-2024-us-election-senate-intelligence-chair-warns.html">manipulating voters</a>.</p>
<p>A deepfake, <a href="https://apnews.com/article/new-hampshire-primary-biden-ai-deepfake-robocall-f3469ceb6dd613079092287994663db5">AI-manipulated voice of US president Joe Biden</a> was deployed in New Hampshire last month, urging voters not to turn out in the state’s primary contest. During Slovakia’s parliamentary elections last year, a deepfake audio recording went viral on social media, falsely depicting a party leader claiming to have <a href="https://www.cfr.org/blog/campaign-roundup-deepfake-threat-2024-election">rigged the election and planning to increase beer prices</a>. </p>
<p>There are allegations that deepfakes were used in an attempt to sway voters in <a href="https://www.context.news/ai/are-ai-deepfakes-a-threat-to-elections">Argentina</a>, <a href="https://www.instagram.com/p/Cr2C5aqJXyy/">New Zealand</a> and <a href="https://www.reuters.com/world/middle-east/erdogan-rival-accuses-russia-deep-fake-campaign-ahead-presidential-vote-2023-05-12/">Turkey</a> in the past year. It’s certain we will see highly sophisticated deepfakes circulated in many countries by rogue actors in the coming months in an attempt to influence voters, sow dissent, and put politicians on the defensive.</p>
<h2>Bad actors</h2>
<p>The potential for state orchestrated disinformation campaigns is evidently also a concern in the democracies holding elections this year. US State Department officials have claimed that <a href="https://www.state.gov/disarming-disinformation/">Russia is planning to use disinformation</a> to try to influence public opinion against Ukraine during the numerous elections scheduled across Europe this year. </p>
<p>In October last year the US sent a <a href="https://www.state.gov/russias-pillars-of-disinformation-and-propaganda-report/">declassified intelligence assessment to more than 100 governments</a> accusing Moscow of using spies, social media and sympathetic media to spread disinformation and erode public faith in the integrity of election outcomes. Just last month the German Foreign Ministry disclosed that its security agencies had <a href="https://www.theguardian.com/world/2024/jan/26/germany-unearths-pro-russia-disinformation-campaign-on-x">exposed an extensive pro-Russian disinformation operation</a>, orchestrated using thousands of fake social media accounts.</p>
<p><a href="https://www.nato.int/cps/en/natohq/115204.htm">NATO</a> and the <a href="https://www.consilium.europa.eu/en/documents-publications/library/library-blog/posts/the-fight-against-pro-kremlin-disinformation/">European Union</a> have also warned against the threats to democratic cohesion caused by Kremlin-fuelled disinformation campaigns.</p>
<p>In India, the ultra-nationalist government of Narendra Modi has been accused of <a href="https://www.washingtonpost.com/world/2023/12/10/india-the-disinfo-lab-discredit-critics/">running a covert disinformation operation,</a> circulating propaganda to discredit foreign critics, attack political opponents and target Muslims and other ethnic and religious minorities. Human Rights Watch <a href="https://www.hrw.org/news/2024/01/11/india-increased-abuses-against-minorities-critics">reports</a> increased attacks against ethnic and religious minorities including Muslims, as well as journalists and opposition leaders.</p>
<h2>Taking action</h2>
<p>Calling for action now is almost moot as it’s probably already too late. The fact that there are so many elections happening simultaneously around the world in 2024 only exacerbates the problem. However, we must at least try.</p>
<p>An urgent global effort among nations is needed to set the ground rules for how the use of AI is to be regulated, particularly around elections. The US Senate is currently considering the <a href="https://www.congress.gov/bill/118th-congress/senate-bill/2770?q=%257B%2522search%2522:%2522deceptive+AI%2522%257D&s=1&r=1">Protecting Elections from Deceptive AI</a> Act, while <a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473">the EU reached a tentative agreement</a> in December to regulate AI, becoming the first major global power to do so. </p>
<p>Laws need to force transparency in how AI models are trained and deployed, and require disclosure for when they are used in political campaigning. The worry is that the pace at which the technology is advancing is outpacing efforts to safeguard the public.</p>
<p>Social media platforms must be held accountable for the disinformation spread on them. Companies like X, Meta and Alphabet have <a href="https://www.cnbc.com/2023/05/26/tech-companies-are-laying-off-their-ethics-and-safety-teams-.html">downsized teams dedicated to integrity</a>, hindering proactive disinformation countermeasures. Tough new laws are needed to force these tech monoliths to tackle disinformation and force transparency in algorithms and political ad targeting.</p>
<p>Proactive strategies like <a href="https://misinforeview.hks.harvard.edu/article/global-vaccination-badnews/">pre-bunking</a> (teaching people to spot fake news) and rapid response strategies are essential to combat election interference. Media outlets also need to learn from past mistakes and balance truthful reporting with free speech, avoiding the “false balance” trap of amplifying disinformation from populist politicians masquerading as legitimate discourse.</p>
<p>Finally, we must find ways to tackle the echo chambers and conspiracy theories that threaten to derail social cohesion. Gaining back public trust in institutions such as the mainstream media and government is not going to be easy. </p>
<p>There are no magic spells to fix this overnight, but we can’t just sit back and accept the status quo. Education in media literacy is vital to defend against disinformation.</p>
<p>But while these steps may keep the mainstream parties honest, they will do nothing to stop the bad actors. Russia, China and Iran are all likely to attempt to shape geo-political outcomes in their favour in 2024 by attempting to <a href="https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2023/11/MTAC-Report-2024-Election-Threat-Assessment-11082023-2-1.pdf">interfere in elections</a>.</p>
<p>The stability of global democracy may well depend on how these emerging threats are navigated in the months to come. When Donald Trump claimed the 2020 election was stolen, thousands of his supporters stormed the US Capitol. He may well be the president of the US again in November.</p><img src="https://counter.theconversation.com/content/222808/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Tom Felle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>With an unprecedented number of votes happening around the world, the information environment will be chaotic, to say the least.Tom Felle, Associate Professor of Journalism, University of GalwayLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2231602024-02-08T20:58:13Z2024-02-08T20:58:13ZFCC bans robocalls using deepfake voice clones − but AI-generated disinformation still looms over elections<figure><img src="https://images.theconversation.com/files/574478/original/file-20240208-22-rxy9j8.jpg?ixlib=rb-1.1.0&rect=532%2C1022%2C3864%2C2116&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The FCC is responding to the threat of deepfakes.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/MediaOwnershipRules/12da8ef2697340328725c4bae9edd719/photo">AP Photo/Andrew Harnik</a></span></figcaption></figure><p>The Federal Communications Commission on Feb. 8, 2024, <a href="https://apnews.com/article/fcc-elections-artificial-intelligence-robocalls-regulations-a8292b1371b3764916461f60660b93e6">outlawed robocalls</a> that use voices generated by artificial intelligence. </p>
<p>The 1991 <a href="https://www.congress.gov/bill/102nd-congress/senate-bill/1462">Telephone Consumer Protection Act</a> bans artificial voices in robocalls. The FCC’s <a href="https://docs.fcc.gov/public/attachments/FCC-24-17A1.pdf">Feb. 8 ruling</a> declares that AI-generated voices, including clones of real people’s voices, are artificial and therefore banned by law. </p>
<p>The move follows on the heels of a robocall on Jan. 21, 2024, from what sounded like President Joe Biden. The <a href="https://soundcloud.com/user-429524614/fake-joe-biden-robocall-nh">call had Biden’s voice</a> urging voters inclined to support Biden and the Democratic Party not to participate in New Hampshire’s Jan. 23 GOP primary election. The call <a href="https://www.nytimes.com/2024/01/22/us/politics/nh-primary-explainer-how-vote.html">falsely implied</a> that a registered Democrat could vote in the Republican primary and that a voter who voted in the primary would be ineligible to vote in the general election in November.</p>
<p>The call, two days before the primary, appears to have been <a href="https://apnews.com/article/new-hampshire-primary-biden-ai-deepfake-robocall-f3469ceb6dd613079092287994663db5">an artificial intelligence deepfake</a>. It also appears to have been <a href="https://www.doj.nh.gov/news/2024/20240122-voter-robocall.html">an attempt to discourage voting</a>. </p>
<p>The FCC and the New Hampshire attorney general’s office are investigating the call. On Feb. 6, 2024, New Hampshire Attorney General John Formella <a href="https://www.doj.nh.gov/news/2024/20240206-voter-robocall-update.html">identified two Texas companies</a>, Life Corp. and Lingo Telecom, as the source and transmitter, respectively, of the call.</p>
<h2>Injecting confusion</h2>
<p>Robocalls in elections are nothing new and <a href="https://www.fcc.gov/rules-political-campaign-calls-and-texts">not illegal</a>; many are simply efforts to get out the vote. But they have also been used in <a href="https://www.thedailybeast.com/michigan-ag-files-felony-charges-again-jack-burkman-jacob-wohl-for-alleged-voter-suppression-scheme">voter suppression</a> campaigns. Compounding this problem in this case is the application of AI to clone Biden’s voice.</p>
<p>In a media ecosystem full of noise, scrambled signals such as deepfake robocalls make it virtually impossible to tell facts from fakes.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/wZYIwHqDJBg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The New Hampshire attorney general’s office is investigating the call.</span></figcaption>
</figure>
<p>Recently, a number of companies have popped up online <a href="https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale">offering impersonation as a service</a>. For users like you and me, it’s as easy as selecting a politician, celebrity or executive such as Joe Biden, Donald Trump or Elon Musk from a menu and typing a script of what you want them to appear to say; the website then creates the deepfake automatically.</p>
<p>Though the audio and video output is usually choppy and stilted, when the audio is delivered via a robocall it’s very believable. You could easily think you are hearing a recording of Joe Biden, but really it’s machine-made misinformation.</p>
<h2>Context is key</h2>
<p>I’m a <a href="https://scholar.google.com/citations?hl=en&user=yu4Ew7gAAAAJ&view_op=list_works&sortby=pubdate">media and disinformation scholar</a>. In 2019, information scientist <a href="https://scholar.google.com/citations?hl=en&user=WHtDxZsAAAAJ&view_op=list_works&sortby=pubdate">Brit Paris</a> and I <a href="https://datasociety.net/library/deepfakes-and-cheap-fakes/#">studied how generative adversarial networks</a> – what most people today think of as AI – would transform the ways institutions assess evidence and make decisions when judging realistic-looking audio and video manipulation. What we found was that no single piece of media is reliable on its face; rather, context matters for making an interpretation.</p>
<p>When it comes to AI-enhanced disinformation, the believability of deepfakes hinges on where you see or hear them or who shares them. Without a valid and confirmed source vouching for it as a fact, a deepfake might be interesting or funny but will never pass muster in a courtroom. However, deepfakes can still be damaging when used in efforts to suppress the vote or shape public opinion on divisive issues. </p>
<p>AI-enhanced disinformation campaigns are difficult to counter because unmasking the source requires tracking the trail of metadata, which is the data about a piece of media. How this is done varies, depending on the method of distribution: robocalls, social media, email, text message or websites. Right now, research on audio and video manipulation is more difficult because many big tech companies have shut down access to their application programming interfaces, which make it possible for researchers to collect data about social media, and the companies have <a href="https://www.cnbc.com/2023/05/26/tech-companies-are-laying-off-their-ethics-and-safety-teams-.html">laid off their trust and safety teams</a>.</p>
<h2>Timely, accurate, local knowledge</h2>
<p>In many ways, AI-enhanced disinformation such as the New Hampshire robocall poses the same problems as every other form of disinformation. People who use AI to disrupt elections are likely to do what they can to hide their tracks, which is why it’s necessary for the public to remain skeptical about claims that do not come from verified sources, such as local TV news or social media accounts of reputable news organizations. </p>
<p>It’s also important for the public to understand what new audio and visual manipulation technology is capable of. Now that the technology has become widely available, and with a pivotal election year ahead, the fake Biden robocall is only the latest of what is likely to be a series of AI-enhanced disinformation campaigns, even though these calls are now explicitly illegal. </p>
<p>I believe society needs to learn to venerate what I call TALK – timely, accurate, local knowledge – and to design social media systems that value such knowledge over disruption and divisiveness.</p>
<p>It’s also important to make it more difficult for disinformers to profit from undermining democracy. For example, the malicious use of technology to suppress voter turnout should be vigorously investigated by federal and state law enforcement authorities. </p>
<p>While deepfakes may catch people by surprise, they should not catch us off guard, no matter how slow the truth is compared with the speed of disinformation.</p>
<p><em>This is an updated version of an article originally published on Jan. 23, 2024.</em></p><img src="https://counter.theconversation.com/content/223160/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Joan Donovan is on the board of Free Press and the founder of the Critical Internet Studies Institute.</span></em></p>Deepfake technology is widely available, and a pivotal election year lies ahead. The FCC banned AI robocalls, but AI-enhanced disinformation campaigns remain a threat.Joan Donovan, Assistant Professor of Journalism and Emerging Media Studies, Boston UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2213562024-02-07T12:03:08Z2024-02-07T12:03:08ZGaza is now the frontline of a global information war<p>The conflicts in Gaza and Ukraine have become key battlegrounds in an information war that goes far wider than their tightly drawn physical borders. Carefully crafted social media posts and other online propaganda are fighting to make people around the world take sides, harden their positions and even <a href="https://time.com/6549544/israel-and-hamas-the-media-war/">move broader public opinion</a>. </p>
<p>Propaganda has always been a weapon of war, but the digital <a href="https://www.ieworldconference.org/content/WP2023/Papers/GDRKMCC23_4.pdf">revolution</a> increases its reach, immediacy and effectiveness and makes it a more potent tool. This makes it harder and harder for the average person, as well as professionals with expertise, to work out what is true and what isn’t. </p>
<p>To understand this information war, we need to understand where and how arguments and ideologies are promoted and developed online. </p>
<p>In some instances, online propaganda simply involves the framing of real events, <a href="https://www.isdglobal.org/digital_dispatches/capitalising-on-crisis-russia-china-and-iran-use-x-to-exploit-israel-hamas-information-chaos/?cmplz-force-reload=1705683801885">violent images and videos, and hate speech</a> to emphasise the guilt of one side and vindicate the other.</p>
<p>But much material relies on the creation of what’s commonly referred to as fake news. This often takes the form of fabricated stories published on social media that repurpose or mislabel real photos or videos. </p>
<p>For example, <a href="https://perma.cc/5H76-YBBP">one post</a> on X (formerly Twitter) that was viewed 300,000 times used a photo of an accidental fire at a McDonald’s restaurant in New Zealand to falsely claim the company had been attacked by pro-Palestinian protestors for its perceived support of Israel. Despite <a href="https://factcheck.afp.com/doc.afp.com.34GE6ZA">being debunked</a>, the story was still the focus of heated <a href="https://twitter.com/search?q=mcdonalds%20IDF&src=typed_query&f=live">discussions</a> on social media channels. </p>
<p>There are also <a href="https://news.sky.com/story/its-important-to-separate-the-facts-from-speculation-what-we-actually-know-about-the-viral-report-of-beheaded-babies-in-israel-12982329">reports of excerpts from video</a> games and old TikToks being shared with claims they are from real current events in Gaza, and fake government agency <a href="https://www.euronews.com/my-europe/2023/11/07/israel-hamas-war-fake-mossad-account-creates-online-confusion">social media accounts</a> posting disinformation. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-houthis-four-things-you-will-want-to-know-about-the-yemeni-militia-targeted-by-uk-and-us-military-strikes-221040">The Houthis: four things you will want to know about the Yemeni militia targeted by UK and US military strikes</a>
</strong>
</em>
</p>
<hr>
<p>Advances in AI are also playing a role. Experts in digital <a href="https://apnews.com/article/artificial-intelligence-hamas-israel-misinformation-ai-gaza-a1bb303b637ffbbb9cbc3aa1e000db47">forensics</a> have shown how AI-faked photographs of bloodied babies and abandoned children in Gaza were being widely used in November 2023. These were being published at the same time as the media was trying to investigate allegations that <a href="https://news.sky.com/story/its-important-to-separate-the-facts-from-speculation-what-we-actually-know-about-the-viral-report-of-beheaded-babies-in-israel-12982329">babies</a> had been beheaded in the Hamas attack of October 7. </p>
<p>Deepfake videos have been used in the Gaza conflict to show prominent <a href="https://fortune.com/2023/12/04/deepfakes-israel-hamas-war-ai-detection-tech-startups/">figures</a> in the Middle East saying things they never said and, it is reasonable to assume, do not believe. Edited battlefield footage from Ukraine and modified footage from high-end military computer games have also been used as deepfaked “Gazan footage”, with the Associated Press keeping an extensive <a href="https://apnews.com/article/artificial-intelligence-hamas-israel-misinformation-ai-gaza-a1bb303b637ffbbb9cbc3aa1e000db47">archive</a> of examples.</p>
<p>Based on what we know about misinformation on other subjects, it’s likely that much of this online propaganda about Gaza isn’t being generated by individual supporters posting randomly on social media. <a href="https://www.zdnet.com/article/the-dark-webs-latest-offering-disinformation-as-a-service">Misinformation contractors</a> now make their <a href="https://www.theguardian.com/world/2023/feb/15/revealed-disinformation-team-jorge-claim-meddling-elections-tal-hanan">services available</a> on the dark web (an encrypted part of the web that makes it very difficult to identify users) to people looking to mount widespread campaigns.</p>
<p>Inside the dark web, those developing mis- and disinformation can use techniques that are used by legitimate marketing companies in the outside world. They can <a href="https://www.tandfonline.com/doi/pdf/10.1080/23738871.2020.1797135?casa_token=BqdCKN1_d5YAAAAA:Fn7CF8QaYy62jxJFJOKOfkiK7yneTQ_Tz7PMcR7B6KHBHdBa_xHDP0A8S1VWMcLGbl-gBCTFukY">experiment</a> with messages, and test the responses they receive to them. On dark web forums, <a href="https://dl.acm.org/doi/pdf/10.1145/3366424.3385775?casa_token=1EVIZafzOkQAAAAA:O0_x_p8Teo-BifB8gkMRs7T247ebOH08wO7QkFLgqDLLARJERRguRHwAjCdAwvDowiC3fE6AYk0">groups of activists</a> can collaborate on <a href="https://www.tandfonline.com/doi/pdf/10.1080/10584609.2019.1661888?casa_token=DFHnxgRL-mAAAAAA:3UYaV5i58bAcJ1i_5S_XWzIkOPcdO1Qe8OeIW8A5U8o-myGRu1ZXDuqiVNiMdIhqkbL8V_iaGeg">messaging</a>, imagery, timing and targeting to best effect.</p>
<p>Another origin of much misinformation is <a href="https://www.nato.int/nato_static_fl2014/assets/pdf/2020/5/pdf/2005-deepportal2-troll-factories.pdf">“troll farms”</a>, which are staffed by government agents or their proxies in <a href="https://www.rollingstone.com/politics/politics-features/china-internet-trolls-russia-copycat-1234728307/">China</a>, <a href="https://www.rand.org/content/dam/rand/pubs/research_reports/RR4300/RR4373z1/RAND_RR4373z1.pdf">North Korea</a> and <a href="https://www.cnn.com/2023/02/14/europe/russia-yevgeny-prigozhin-internet-research-agency-intl/index.html">Russia</a>, among other countries. These are groups who identify the messages they think will change attitudes and amplify them through coordinating social media campaigns.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/israel-now-ranks-among-the-worlds-leading-jailers-of-journalists-we-dont-know-why-theyre-behind-bars-221411">Israel now ranks among the world’s leading jailers of journalists. We don't know why they're behind bars</a>
</strong>
</em>
</p>
<hr>
<p>They are increasingly using AI-driven bots programmed to spread particular narratives or key words or phrases. “Viral” bots magnify the reach of their content by getting networks of other bots to repost it, which in turn encourages <a href="https://www.econstor.eu/bitstream/10419/214101/1/IntPolRev-2019-4-1442.pdf">search engine</a> and <a href="https://oro.open.ac.uk/66155/8/70-Article%20Text-258-2-10-20190906.pdf">social media</a> algorithms that favour popular and provocative posts to give it greater prominence. </p>
<p>The dark web origins of misinformation makes it much harder for governments to track and stop the people creating it, as does the use of encrypted messaging services such as WhatsApp and <a href="https://www.rollingstone.com/politics/politics-features/telegram-fueling-israel-hamas-war-misinformation-1234854300/">Telegram</a> to share content. By the time the authorities have identified a piece of misinformation it may have been seen by many thousands of people across multiple channels. </p>
<p>The traditional media is also struggling to sift through and counter the weight of misinformation about Gaza, which appears in social media much faster than journalists can verify or debunk it. And the death of <a href="https://www.theguardian.com/world/2023/dec/21/israel-idf-accused-targeting-journalists-gaza#:%7E:text=The%20Committee%20to%20Protect%20Journalists,workers%20in%20any%20recent%20conflict">so many journalists</a> in Gaza is making accurate news harder to gather. </p>
<p>Media outlets are often <a href="https://www.theguardian.com/media/2023/oct/16/bbc-gets-1500-complaints-over-israel-hamas-coverage-split-50-50-on-each-side">accused of bias</a> in both directions. So when traditional news is seen as inadequate or hard to come by, people are more likely to turn to social media and its flood of dark web-created misinformation. </p>
<p>The information war in Gaza is a war of values, and a war of behaviours, of establishing who is “them” and who are “us”. The war in Ukraine is exactly the same. The danger is that in shaping the view of the public, the information war could have an impact on governments and on the battlefield.</p><img src="https://counter.theconversation.com/content/221356/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Robert M. Dover does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Viral bots are ‘tricking’ social media algorithms to get more coverage for disinformation.Robert M. Dover, Professor of Intelligence and National Security, University of HullLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2211712024-01-28T13:53:55Z2024-01-28T13:53:55ZDeepfakes: How to empower youth to fight the threat of misinformation and disinformation<figure><img src="https://images.theconversation.com/files/571710/original/file-20240126-23-6oiuw5.jpg?ixlib=rb-1.1.0&rect=0%2C73%2C8171%2C4464&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Deepfakes pose a profound social threat, and education along with technology and legislation matters for containing and addressing this. </span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><iframe style="width: 100%; height: 100px; border: none; position: relative; z-index: 1;" allowtransparency="" allow="clipboard-read; clipboard-write" src="https://narrations.ad-auris.com/widget/the-conversation-canada/deepfakes-how-to-empower-youth-to-fight-the-threat-of-misinformation-and-disinformation" width="100%" height="400"></iframe>
<p>The <a href="https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf">World Economic Forum’s Global Risks Report 2024</a> has issued a stark warning: misinformation and disinformation, primarily driven by <a href="https://www.merriam-webster.com/dictionary/deepfake">deepfakes</a>, are ranked as the most severe global short-term risks the world faces in the next two years.</p>
<p>In October 2023, the Innovation Council of Québec <a href="https://conseilinnovation.quebec/wp-content/uploads/2023/10/CIQ_Impacts_societe_IA_EDS-1.pdf">shared the same realization</a> after months of <a href="https://conseilinnovation.quebec/intelligence-artificielle/publications-de-la-reflexion-collective/">consultations</a> with experts and the public.</p>
<p>This <a href="https://arxiv.org/pdf/2208.10913.pdf">digital deception</a>, which leverages artificial intelligence and, more recently, generative AI to create hyper-realistic fabrications, extends beyond being a technological marvel; it <a href="https://www.canada.ca/en/security-intelligence-service/corporate/publications/the-evolution-of-disinformation-a-deepfake-future/democracys-new-challenge-navigating-the-era-of-generative-ai.html">poses a profound societal threat</a>. </p>
<p>In response to the gap in effectively combating deepfakes with technology and legislation alone, a <a href="https://pedagogienumerique.chaire.ulaval.ca/en/projets/lagentivite-numerique-pour-contrecarrer-la-desinformation-exploration-du-cas-des-hypertrucages/">research project</a> led by my team and me sheds light on a vital solution: human intervention through education.</p>
<h2>Technological solutions alone are inadequate</h2>
<p>Despite ongoing development of <a href="https://doi.org/10.1080/23742917.2023.2192888">deepfake detection tools</a>, these <a href="https://doi.org/10.1007/s10489-022-03766-z">technological solutions</a> are racing to catch up with the rapidly advancing capabilities of deepfake algorithms. </p>
<p><a href="https://repository.uclawsf.edu/cgi/viewcontent.cgi?article=2150&context=hastings_constitutional_law_quaterly">Legal systems</a> and <a href="https://www.canada.ca/en/security-intelligence-service/corporate/publications/the-evolution-of-disinformation-a-deepfake-future/deepfakes-a-real-threat-to-a-canadian-future.html">governments</a> are struggling to keep pace with this swift advancement of digital deception.</p>
<p>There is an urgent need for education to adopt a more serious, aggressive and strategic approach in equipping youth to combat this imminent threat.</p>
<h2>Political disinformation concerns</h2>
<p>The <a href="https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf">potential for political polarization is particularly alarming</a>. </p>
<p>Nearly three billion people are expected to vote in countries including Bangladesh, India, Indonesia, Mexico, Pakistan, the United Kingdom and <a href="https://montrealgazette.com/news/world/as-social-media-guardrails-fade-and-ai-deepfakes-go-mainstream-experts-warn-of-impact-on-elections">the United States</a> within the next two years. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-can-we-learn-from-the-history-of-pre-war-germany-to-the-atmosphere-today-in-the-u-s-220730">What can we learn from the history of pre-war Germany to the atmosphere today in the U.S.?</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://www.cbc.ca/news/politics/ai-deepfake-election-canada-1.7084398">Disinformation campaigns</a> threaten to undermine the legitimacy of newly elected governments. </p>
<p>Deepfakes of prominent figures like Palestinian American supermodel <a href="https://www.newarab.com/news/pro-israel-activists-deep-fake-bella-hadid-palestine-video">Bella Hadid</a> and <a href="https://www.boomlive.in/fact-check/video-of-jordans-queen-rania-supporting-israel-is-a-deepfake-23493">others</a> have been manipulated to falsify their political statements, exemplifying the technology’s capacity to sway public opinion and skew political narratives. </p>
<p>A deepfake of <a href="https://www.reuters.com/fact-check/greta-thunberg-vegan-grenades-tv-interview-is-deepfake-2023-10-30/#">Greta Thunberg</a> advocating for “vegan grenades” highlights the nefarious use of this technology. </p>
<p><a href="https://www.france24.com/en/technology/20230930-counterfeit-people-the-dangers-posed-by-meta-s-ai-celebrity-lookalike-chatbots">Meta’s unveiling of an AI assistant featuring celebrities’ likenesses</a> raises concerns about misuse and spreading disinformation. </p>
<h2>Financial fraud, pornographic harms</h2>
<p><a href="https://ca.investing.com/news/stock-market-news/ripple-ceo-warns-of-deepfake-scams-93CH-3178269">Deepfake videos</a> are also, unsurprisingly, <a href="https://www.nytimes.com/2023/08/30/business/voice-deepfakes-bank-scams.html">being leveraged</a> to commit <a href="https://www.lapresse.ca/actualites/2024-01-22/hypertrucage-audio/berne-par-la-fausse-voix-de-son-fils.php?fbclid=IwAR3J0MTmXOhx8tAus2Z_4_F72Xh_6WC65bg01FzwYK9i40BMfyYjM5IDcC4">financial fraud</a>.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1724139272884883936&quot;}"></div></p>
<p>The popular YouTuber <a href="https://www.bbc.com/news/technology-66993651">MrBeast was impersonated in a deepfake scam on TikTok</a>, falsely promising an iPhone 15 giveaway that led to financial deceit. </p>
<p>These incidents highlight vulnerability to sophisticated <a href="https://www.engadget.com/taylor-swift-deepfake-used-for-le-creuset-giveaway-scam-123231417.html">AI-driven frauds and scams</a> targeting people of all ages.</p>
<p><a href="https://www.wired.com/story/deepfake-porn-is-out-of-control/">Deepfake pornography</a> represents a grave concern for young people and adults alike, where individuals’ faces are <a href="https://www.dexerto.com/entertainment/tiktok-model-mortified-by-ai-deepfake-video-showing-her-getting-dressed-2392928">non-consensually superimposed onto explicit content</a>. Sexually explicit deepfake images of Taylor Swift <a href="https://www.washingtonpost.com/technology/2024/01/26/ai-deepfakes-taylor-swift-nude/">spread on social media before platforms took them down</a>. One was viewed over 45 million times.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/cyberbullying-girls-with-pornographic-deepfakes-is-a-form-of-misogyny-217182">Cyberbullying girls with pornographic deepfakes is a form of misogyny</a>
</strong>
</em>
</p>
<hr>
<h2>Policy and technology approaches</h2>
<p><a href="https://www.bbc.com/news/technology-67366311">Meta’s policy</a> now mandates political advertisers to disclose any AI manipulation in ads, a move mirrored by Google.</p>
<p><a href="https://www.rochester.edu/newscenter/audio-deepfake-detective-developing-new-sleuthing-techniques-573482/">Neil Zhang</a>, a PhD student at the University of Rochester, is developing detection tools for audio deepfakes, including advanced algorithms and watermarking techniques.</p>
<p>The U.S. has introduced several acts: the <a href="https://clarke.house.gov/clarke-leads-legislation-to-regulate-deepfakes/">Deepfakes Accountability Act of 2023</a>, the <a href="https://dean.house.gov/2024/1/representatives-dean-and-salazar-introduce-bipartisan-legislation-to-protect-americans-images-online">No AI FRAUD Act</a> safeguarding identities against AI misuse and the <a href="https://www.upi.com/Top_News/US/2023/05/05/representative-joe-morelle-legislation-bans-deepfakes/7981683327579/">Preventing Deepfakes of Intimate Images Act</a> targeting non-consensual pornographic deepfakes.</p>
<p><a href="https://www.canada.ca/en/campaign/online-disinformation.html">In Canada</a>, legislators <a href="https://www.parl.ca/legisinfo/en/bill/44-1/c-27">have proposed</a> <a href="https://ised-isde.canada.ca/site/innovation-better-canada/en/canadas-digital-charter/bill-summary-digital-charter-implementation-act-2020">Bill C-27</a> and the <a href="https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document">Artificial Intelligence and Data Act (AIDA)</a> which emphasize AI transparency and data privacy.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/Q5V0yap77yg?wmode=transparent&start=2" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">‘Disinformation can cause harm’ video from the Communications Security Establishment (CSE), a Canadian federal agency devoted to security and intelligence.</span></figcaption>
</figure>
<p>The <a href="https://www.washingtonexaminer.com/news/2442119/uk-adopts-law-forcing-big-tech-to-rein-in-child-pornography-and-deepfakes/">United Kingdom adopted its Online Safety Bill</a>. The EU recently announced a provisional deal surrounding <a href="https://www.technologyreview.com/2023/12/11/1084942/five-things-you-need-to-know-about-the-eus-new-ai-act/">its AI Act</a>; the EU’s <a href="https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf">AI Liability Directive</a> addresses broader online safety and AI regulation issues. </p>
<p>The Indian government announced <a href="https://www.nationalheraldindia.com/science-tech/india-sixth-most-susceptible-country-to-deepfakes-can-laws-tackle-the-menace">plans to draft regulations</a> targeting deepfakes. </p>
<p>These measures reflect growing global commitments to curbing the pernicious effects of deepfakes. However, these efforts are insufficient to contain, let alone stop, the proliferation of deepfakes.</p>
<h2>Research study with youth</h2>
<p><a href="https://doi.org/10.1080/10720537.2023.2294314">Research I have conducted with colleagues</a>, funded by the Social Sciences and Humanities Research Council (SSHRC) and Canadian Heritage, unveils how empowering youth with digital agency can be a force against the rising tide of disinformation fueled by deepfake and artificial intelligence technologies.</p>
<p>Our study focused on how youth perceive the impact of deepfakes on critical issues and their own process of constructing knowledge in digital contexts. We explored their capacity and willingness to effectively counterbalance disinformation.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/ZFcBZVUwm38?wmode=transparent&start=3" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Author Nadia Naffi shares some results of a study on youth digital agency and deepfakes.</span></figcaption>
</figure>
<p>The study brought together Canadian university students, aged 18 to 24, for a series of hands-on workshops, in-depth individual interviews and focus group discussions. </p>
<p>Participants created deepfakes, gaining a firsthand understanding of how readily this technology can be accessed and used, and of its potential for misuse. This experiential learning proved invaluable in demystifying how easily deepfakes are generated.</p>
<p>Participants initially perceived deepfakes as an uncontrollable and inevitable part of the digital landscape. </p>
<p>Through engagement and discussion, they went from being passive deepfake bystanders to developing a deeper realization of the grave threat deepfakes pose. Critically, they also developed a sense of responsibility for preventing and mitigating deepfakes’ spread, and a readiness to counter them. </p>
<p>Students shared recommendations for concrete actions, including urging educational systems to empower youth and help them recognize their actions can make a difference. This includes:</p>
<ul>
<li><p>teaching the detrimental effects of disinformation on society;</p></li>
<li><p>providing spaces for youth to reflect on and challenge societal norms, informing them about social media policies and outlining permissible and prohibited content;</p></li>
<li><p>training students in recognizing deepfakes through exposure to the technology behind them;</p></li>
<li><p>encouraging involvement in meaningful causes while staying alert to disinformation and guiding youth in respectfully and productively countering disinformation.</p></li>
</ul>
<figure class="align-center ">
<img alt="Students seen at a laptop." src="https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/571715/original/file-20240126-19-bl9u4h.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Educational systems have an important role empowering youth and helping them recognize their actions can make a difference.</span>
<span class="attribution"><span class="source">(Allison Shelley/EDUimages)</span>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<h2>Multifaceted strategy needed</h2>
<p>Based on our research and the participants’ recommendations, we propose a multifaceted strategy to counter the proliferation of deepfakes. </p>
<p>Deepfake education needs to be integrated into educational curricula, along with nurturing critical thinking and digital agency in our youth. Youth need to be encouraged to take an active yet safe, well-informed and strategic part in the fight against malicious deepfakes in digital spaces. </p>
<p>We emphasize the importance of hands-on collaborative learning experiences. We also advocate for an interdisciplinary educational approach that marries technology, psychology, media studies and ethics to fully grasp the implications of deepfakes. </p>
<h2>The human element</h2>
<p>Our research underscores a crucial realization: The human element, particularly the role of education, is indispensable in the fight against deepfakes. We cannot rely solely on technology and legal fixes. </p>
<p>By equipping not only younger generations but every member of our society with the skills to critically analyze and challenge disinformation, we are nurturing a digitally literate society resilient enough to withstand the manipulative power of deepfakes. </p>
<p>To do so, we must equip people to understand they have roles and agency in safeguarding the integrity of our digital world.</p><img src="https://counter.theconversation.com/content/221171/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Nadia Naffi receives funding from the National Bank to support the work of her Chair in Educational Leadership (CEL) on Innovative Pedagogical Practices in Digital Contexts. Her project on disinformation is funded by the Social Sciences and Humanities Research Council of Canada (SSHRC) and Canadian Heritage.
Naffi is affiliated with the International Observatory on the Societal Impacts of AI and Digital Technology (OBVIA), the Institute Intelligence and Data (IID), the Centre de recherche et d'intervention sur la réussite scolaire (CRIRES), the Centre de recherche interuniversitaire sur la formation et la profession enseignante (CRIFPE) and the Centre de recherche et d'intervention sur l'éducation et la vie au travail (CRIEVAT).</span></em></p>Youth in a study went from being passive deepfake bystanders to developing a sense of responsibility and readiness to help prevent deepfakes’ spread.Nadia Naffi, Assistant Professor, Educational Technology, Chair in Educational Leadership in the Innovative Pedagogical Practices in Digital Contexts - National Bank, Université LavalLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2215792024-01-26T17:57:55Z2024-01-26T17:57:55ZDisinformation is often blamed for swaying elections – the research says something else<figure><img src="https://images.theconversation.com/files/571138/original/file-20240124-29-k5hu7q.jpg?ixlib=rb-1.1.0&rect=50%2C175%2C5575%2C3530&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/color-image-some-people-voting-polling-435657658">Alexandru Nika/Shutterstock</a></span></figcaption></figure><p>Many countries <a href="https://en.wikipedia.org/wiki/List_of_elections_in_2024">face general elections</a> this year. Political campaigning will include misleading and even false information. Just days ago, <a href="https://www.bloomberg.com/news/articles/2024-01-23/fake-biden-robocall-message-in-new-hampshire-alarms-election-experts?leadSource=uverify%20wall">it was reported</a> that a robocall impersonating US president Joe Biden had told recipients not to vote in the presidential primary. </p>
<p>But can disinformation significantly influence voting? </p>
<p>There are two typical styles of election campaigning. One is positive, presenting favourable attributes of politicians and their policies, and the other is negative – disparaging the opposition. The latter <a href="https://link.springer.com/article/10.1057/s41253-019-00084-8;">can backfire</a>, though, or lead to <a href="https://www.journals.uchicago.edu/doi/abs/10.1111/j.1468-2508.2007.00618.x?casa_token=kG3-EyUhaHYAAAAA:UydVoChML-dFiFC370Su8gRQmPSAMV1E0cqg0cZ2owdl-NSw4uvQvHsjXIpdxpebgYZXAYb5aDWX">voters disengaging</a> from the entire democratic process. </p>
<p>Voters are already <a href="https://www.annualreviews.org/doi/abs/10.1146/annurev.polisci.10.071905.101448?casa_token=a0oggffzdCkAAAAA:61ee1-KZtnN5OvUoordIlQChJwegerDlKfg6q5bCJZXUy-ND70U_4ZcapONNd1mibsDPVD8jjSvHYw">fairly savvy</a> – they know that campaigning tactics often include distortions and untruths. Both types of tactics, positive and negative, <a href="https://www.unhcr.org/innovation/wp-content/uploads/2022/02/Factsheet-4.pdf">can feature misinformation</a>, which loosely refers to inaccurate, false and misleading information. Sometimes this even counts as disinformation, because the details are deliberately designed to be misleading. </p>
<p>Unfortunately, recent research shows that the <a href="https://theconversation.com/misinformation-why-it-may-not-necessarily-lead-to-bad-behaviour-199123">lack of clarity in defining</a> misinformation and disinformation is a problem. There is no consensus. Scientifically and practically, this is bad. It’s hard to chart the scale of a problem if your starting point includes <a href="https://journals.sagepub.com/doi/full/10.1177/17456916221141344">vague or confused</a> concepts. This is a problem for the general public, too, given it makes it harder to decipher and trust research on the topic.</p>
<p>For example, depending on how inclusive the definition is, <a href="https://books.google.com/books?hl=en&lr=&id=hB5sEAAAQBAJ&oi=fnd&pg=PA173&dq=public+perceptions+negative+election+campaigning+%22propaganda%22&ots=i47RTsBtju&sig=JYS30Bjr6Hu17xdxRn50HXlsAPY">propaganda</a>, <a href="https://ideas.repec.org/a/taf/rcybxx/v5y2020i2p199-217.html">deep fakes</a>, <a href="https://www.aeaweb.org/articles?id=10.1257/jep.31.2.211">fake news</a> and <a href="https://pubmed.ncbi.nlm.nih.gov/35039654/">conspiracy theories</a> are all examples of disinformation. But <a href="https://edisciplinas.usp.br/pluginfile.php/4948550/mod_resource/content/1/Fake%20News%20Digital%20Journalism%20-%20Tandoc.pdf">news parody or political satire</a> can be too. </p>
<p>Unfortunately, researchers <a href="https://doi.org/10.1016/j.copsyc.2020.03.014">often fail to provide clear definitions</a>, and do not carefully compare different types of disinformation, adding uncertainty to evidence examining its effect on voting behaviour. </p>
<p>Nevertheless, let’s investigate the research so far on disinformation, which is generally viewed as more serious than misinformation, to see <a href="https://misinforeview.hks.harvard.edu/article/explaining-beliefs-in-electoral-misinformation-in-the-2022-brazilian-election-the-role-of-ideology-political-trust-social-media-and-messaging-apps/">how much influence it can really have</a> on the way we vote. </p>
<h2>Unconvincing findings</h2>
<p>Consider <a href="https://www.sciencedirect.com/science/article/pii/S0048733322001494">a study published in 2023</a>, investigating the role of fake news in the Italian general elections in 2013 and 2018. It used debunking websites to help create a fake news score for articles published in the run-up to the election.</p>
<p>Then the researchers analysed populist parties’ pre-election Facebook posts containing such news content. This also generated an engagement score based on the number of likes and shares of the posts. </p>
<p>Finally, scores were combined with actual electoral votes for populist parties to gauge the possible influence of fake news on such votes. The researchers estimated that fake news added a small but statistically significant electoral gain for populist parties. But they cautioned that fake news could not be the sole cause of the overall increase in populist vote share; it appeared to contribute only a small part of that increase.</p>
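The scoring approach the study describes can be sketched roughly as follows. The numbers, tuple layout and weighting rule below are invented for illustration; this is not the study's actual code or data, only a plausible reading of "combine a debunking-based fake news score with engagement (likes and shares)".

```python
# Hypothetical sketch: each pre-election Facebook post gets a fake-news score
# (from debunking sites) and an engagement score (likes + shares); an
# engagement-weighted average then summarises how much fake-news content a
# party's audience actually interacted with. All figures are made up.

posts = [
    # (fake_news_score in [0, 1], likes, shares)
    (0.9, 1200, 300),
    (0.1, 400, 50),
    (0.7, 800, 200),
]


def engagement(likes: int, shares: int) -> int:
    # Engagement score: simple sum of likes and shares, as the article describes.
    return likes + shares


total_eng = sum(engagement(likes, shares) for _, likes, shares in posts)

# Engagement-weighted average fake-news score for the party's posts:
# heavily shared fake content pulls the average up more than ignored content.
weighted_fake = sum(
    score * engagement(likes, shares) for score, likes, shares in posts
) / total_eng
```

A score like `weighted_fake` could then be compared against vote shares across parties or elections; the study's actual estimation was of course more involved than this toy summary statistic.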
<p>Similar studies showing <a href="https://www.science.org/doi/10.1126/science.aau2706">low effects</a> of fake news on persuading voters have led some researchers <a href="https://www.nature.com/articles/s41562-020-0833-x">to argue</a> that the panic about fake news is overblown. </p>
<p>Other recent studies have looked at the potential influence of disinformation by asking people how they intended to vote and whether they believed specific pieces of disinformation. This was examined in national or presidential elections in <a href="https://www.martenscentre.eu/wp-content/uploads/2023/04/15.pdf">the Czech Republic in 2021</a>, <a href="https://www.tandfonline.com/doi/abs/10.1080/23743670.2020.1719858?casa_token=G5kslUWsQRkAAAAA:ZW_ghmhO0phxYhgElEnuToqcAK_f_3o2BLrzew-RW0tlNZBX9_UuXgricYyuzZ-qgvZVQUgfoycKXw">Kenya in 2017</a>, <a href="https://www.ajpor.org/article/12982-analysis-of-fake-news-in-the-2017-korean-presidential-election">South Korea in 2017</a>, <a href="https://www.tandfonline.com/doi/abs/10.1080/23743670.2020.1719858">Indonesia in 2019, Malaysia in 2018</a>, <a href="https://www.martenscentre.eu/wp-content/uploads/2023/04/15.pdf">Philippines in 2022</a> and <a href="https://www.ajpor.org/article/12985-does-fake-news-matter-to-election-outcomes-the-case-study-of-taiwan-s-2018-local-elections">Taiwan in 2018</a>. </p>
<p>The general finding among all these studies was that it is hard to establish a reliable causal influence of fake news on voting. One reason was that who people say they vote for and how they actually vote can be vastly different. </p>
<p>In fact, research has gone into understanding the reasons for dramatic failures of traditional pollsters to predict elections and referendums <a href="https://journalofbigdata.springeropen.com/articles/10.1186/s40537-021-00525-8">in Argentina in 2019</a>, <a href="https://www.cambridge.org/core/journals/canadian-journal-of-political-science-revue-canadienne-de-science-politique/article/abs/quebec-2018-a-failure-of-the-polls/97380BA7567B11B95E88FAA2149BDC51">Quebec in 2018</a>, the <a href="https://www.researchgate.net/publication/319982710_Collective_failure_Lessons_from_combining_forecasts_for_the_UK's_referendum_on_EU_membership">UK in 2016</a> and the <a href="https://digitalcommons.unl.edu/sociologyfacpub/543/">US in 2016</a>. People didn’t, for many reasons, reveal their actual voting intentions to pollsters and researchers. </p>
<h2>Who is susceptible?</h2>
<p>What about specific groups of voters, though? Might there be some that are more influenced by disinformation than others? Political affiliation doesn’t seem to matter. People tend <a href="https://doi.org/10.1016/j.copsyc.2020.03.014">to rate fake news as accurate</a> when it’s in line with their own political beliefs. For instance, in the 2016 US presidential elections, both Hillary Clinton and Donald Trump supporters <a href="https://doi.org/10.1111/ajpy.12233">were equally likely</a> to rate fake news about their opposition as accurate. </p>
<p>How about undecided voters? Some studies show that undecided voters are more likely than decided voters to <a href="https://doi.org/10.1080/1369118X.2021.1883706">consider fake news headlines as credible</a>. But the opposite has also been shown – that they are <a href="https://www.aeaweb.org/articles?id=10.1257/jep.31.2.211">less susceptible to political fake news</a>. </p>
<p>Still, to maximise the influence of disinformation in an election, undecided voters would be the obvious target, especially in close-run elections. But accurately profiling undecided voters <a href="https://doi.org/10.1111/rssa.12414">is difficult</a> – especially since people are cautious in revealing their voting intentions and the reasons behind them.</p>
<p>And if politicians or campaign staff use <a href="https://journals.sagepub.com/doi/abs/10.1177/1369148119842038">disinformation in aggressive negative campaigning</a> to sway undecided voters, they can end up increasing disengagement in the election process – making some people even more undecided.</p>
<p>Ultimately, most research suggests that fake news <a href="https://journals.sagepub.com/doi/full/10.1177/17456916221141344">is more likely to enhance existing beliefs</a> and views rather than <a href="https://link.springer.com/article/10.1007/s00146-020-00980-6">radically change voting intentions</a> of the undecided. </p>
<p>Another issue that often gets ignored is a phenomenon known in psychology as <a href="https://psycnet.apa.org/record/2001-16230-004">the third-person effect</a> – that we think that others are more persuadable, and even gullible, than ourselves. </p>
<p>So when it comes to who is susceptible to disinformation, it is likely that those studying it, as well as those participating in the studies, <a href="https://misinforeview.hks.harvard.edu/article/the-presumed-influence-of-election-misinformation-on-others-reduces-our-own-satisfaction-with-democracy/">assume they are immune</a>, but that anyone else, such as supporters of the opposing political party, are not – making the evidence harder to interpret. </p>
<p>It would be naive to say that disinformation, <a href="https://books.google.co.uk/books/about/Politics_and_Propaganda.html?id=FTrgh74moswC">such as political propaganda</a>, doesn’t have any influence on voting. But we should be careful not to treat disinformation as the sole explanation for election results that go against predictions.</p>
<p>If we assign disinformation such a high level of influence, we ultimately deny people’s agency in making free voting choices. And studies show that <a href="https://www.researchgate.net/publication/375301055_Folk_beliefs_about_where_manipulation_outside_of_awareness_occurs_and_how_much_awareness_and_free_choice_is_still_maintained">we are aware</a> that manipulative methods are used on us. Still, we all judge that we can maintain <a href="https://psycnet.apa.org/record/2023-13856-001">an ability to make our own choice</a> when voting.</p>
<p>It’s important to take this seriously. Our belief in free will is ultimately a reason so many of us back democracy in the first place. Denying it can arguably be more damaging than a few fake news posts lurking on social media.</p><img src="https://counter.theconversation.com/content/221579/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Magda Osman receives funding from Research England, ESRC, Wellcome Trust, and Turing Institute. </span></em></p>Most studies suggests that fake news is more likely to enhance existing beliefs and views rather than radically change voting intentions of those who are undecided.Magda Osman, Principal Research Associate in Basic and Applied Decision Making, Cambridge Judge Business SchoolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2217442024-01-23T20:41:07Z2024-01-23T20:41:07ZFake Biden robocall to New Hampshire voters highlights how easy it is to make deepfakes − and how hard it is to defend against AI-generated disinformation<figure><img src="https://images.theconversation.com/files/570887/original/file-20240123-23-3sxfsm.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5343%2C3559&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The fake robocall urged Democratic voters in New Hampshire not to vote in the Jan. 23, 2024, primary election.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/campaign-signs-asking-voters-to-write-in-president-joe-news-photo/1945939451">Chip Somodevilla/Getty Images</a></span></figcaption></figure><p><em>An updated version of this article was published on Feb. 8, 2024. <a href="https://theconversation.com/fcc-bans-robocalls-using-deepfake-voice-clones-but-ai-generated-disinformation-still-looms-over-elections-223160">Read it here</a>.</em></p>
<p>An unknown number of New Hampshire voters received a phone call on Jan. 21, 2024, from what sounded like President Joe Biden. A <a href="https://soundcloud.com/user-429524614/fake-joe-biden-robocall-nh">recording contains Biden’s voice</a> urging voters inclined to support Biden and the Democratic Party not to participate in New Hampshire’s Jan. 23 GOP primary election.</p>
<blockquote>
<p>Republicans have been trying to push nonpartisan and Democratic voters to participate in their primary. What a bunch of malarkey. We know the value of voting Democratic when our votes count. It’s important that you save your vote for the November election. We’ll need your help in electing Democrats up and down the ticket. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday. If you would like to be removed from future calls, please press two now.</p>
</blockquote>
<p>The call <a href="https://www.nytimes.com/2024/01/22/us/politics/nh-primary-explainer-how-vote.html">falsely implies</a> that a registered Democrat could vote in the Republican primary and that a voter who votes in the primary would be ineligible to vote in the general election in November. The state does allow undeclared voters to participate in either the Republican or Democratic primary.</p>
<p>The call, two days before the primary, appears to have been <a href="https://apnews.com/article/new-hampshire-primary-biden-ai-deepfake-robocall-f3469ceb6dd613079092287994663db5">an artificial intelligence deepfake</a>. It also appears to have been <a href="https://www.doj.nh.gov/news/2024/20240122-voter-robocall.html">an attempt to discourage voting</a>. Biden is <a href="https://thehill.com/homenews/campaign/4418655-biden-new-hampshire-democratic-primary-ballot/">not on the ballot</a> because of a dispute between the Democratic National Committee and New Hampshire Democrats about New Hampshire’s position in the primary schedule, but there is <a href="https://www.npr.org/2024/01/23/1226172266/biden-nh-write-in-ballot">a write-in campaign</a> for Biden.</p>
<p>Robocalls in elections are nothing new and not illegal; many are simply efforts to get out the vote. But they have also been used in <a href="https://www.thedailybeast.com/michigan-ag-files-felony-charges-again-jack-burkman-jacob-wohl-for-alleged-voter-suppression-scheme">voter suppression</a> campaigns. Compounding the problem in this case is what I believe to be the application of AI to clone Biden’s voice.</p>
<p>In a media ecosystem full of noise, scrambled signals such as deepfake robocalls make it virtually impossible to tell facts from fakes.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/wZYIwHqDJBg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The New Hampshire attorney general’s office is investigating the call.</span></figcaption>
</figure>
<p>Recently, a number of companies have popped up online <a href="https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale">offering impersonation as a service</a>. For users like you and me, it’s as easy as selecting a politician, celebrity or executive like Joe Biden, Donald Trump or Elon Musk from a menu and typing a script of what you want them to appear to say, and the website creates the deepfake automatically. Though the audio and video output is usually choppy and stilted, when the audio is delivered via a robocall it’s very believable. You could easily think you are hearing a recording of Joe Biden, but really it’s machine-made misinformation.</p>
<h2>Context is key</h2>
<p>I’m a <a href="https://scholar.google.com/citations?hl=en&user=yu4Ew7gAAAAJ&view_op=list_works&sortby=pubdate">media and disinformation scholar</a>. In 2019, information scientist <a href="https://scholar.google.com/citations?hl=en&user=WHtDxZsAAAAJ&view_op=list_works&sortby=pubdate">Brit Paris</a> and I <a href="https://datasociety.net/library/deepfakes-and-cheap-fakes/#">studied how generative adversarial networks</a> – what most people today think of as AI – would transform the ways institutions assess evidence and make decisions when judging realistic-looking audio and video manipulation. What we found was that no single piece of media is reliable on its face; rather, context matters for making an interpretation.</p>
<p>When it comes to AI-enhanced disinformation, the believability of deepfakes hinges on where you see or hear it or who shares it. Without a valid and confirmed source vouching for it as a fact, a deepfake might be interesting or funny but will never pass muster in a courtroom. However, deepfakes can still be damaging when used in efforts to suppress the vote or shape public opinion on divisive issues. </p>
<p>AI-enhanced disinformation campaigns are difficult to counter because unmasking the source requires tracking the trail of metadata, which is the data about a piece of media. How this is done varies, depending on the method of distribution: robocalls, social media, email, text message or websites. Right now, research on audio and video manipulation is more difficult because many big tech companies have shut down access to their application programming interfaces, which make it possible for researchers to collect data about social media, and the companies have <a href="https://www.cnbc.com/2023/05/26/tech-companies-are-laying-off-their-ethics-and-safety-teams-.html">laid off their trust and safety teams</a>.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1749521325394129296"}"></div></p>
<h2>Timely, accurate, local knowledge</h2>
<p>In many ways, AI-enhanced disinformation such as the New Hampshire robocall poses the same problems as every other form of disinformation. People who use AI to disrupt elections are likely to do what they can to hide their tracks, which is why it’s necessary for the public to remain skeptical about claims that do not come from verified sources, such as local TV news or social media accounts of reputable news organizations. </p>
<p>It’s also important for the public to understand what new audio and visual manipulation technology is capable of. Now that the technology has become widely available, and with a pivotal election year ahead, the fake Biden robocall is only the latest of what is likely to be a series of AI-enhanced disinformation campaigns.</p>
<p>I believe society needs to learn to venerate what I call TALK: timely, accurate, local knowledge. And I believe it’s important to design social media systems that value such knowledge over disruption and divisiveness.</p>
<p>It’s also important to make it more difficult for disinformers to profit from undermining democracy. For example, the malicious use of technology to suppress voter turnout should be vigorously investigated by federal and state law enforcement authorities. </p>
<p>While deepfakes may catch people by surprise, they should not catch us off guard, no matter how slow the truth is compared with the speed of disinformation.</p>
<p class="fine-print"><em><span>Joan Donovan is on the board of Free Press and the founder of the Critical Internet Studies Institute.</span></em></p>Deepfake technology is widely available, and a pivotal election year lies ahead. The fake Biden robocall is likely to be just the latest of a series of AI-enhanced disinformation campaigns.Joan Donovan, Assistant Professor of Journalism and Emerging Media Studies, Boston UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2181712024-01-16T14:13:09Z2024-01-16T14:13:09ZUganda’s battle for the youth vote – how Museveni keeps Bobi Wine’s reach in check<p>Uganda is one of the youngest countries in the world, with an <a href="https://www.worldbank.org/en/news/factsheet/2020/02/25/uganda-jobs-strategy-for-inclusive-growth#:%7E:text=Uganda%20is%20one%20of%20the,working%20age%20population%20is%20rapid.">average age of 15.9 years</a>. Young people aged below 30 make up about <a href="https://www.issuelab.org/resources/4998/4998.pdf#page=1">77%</a> of the country’s population of <a href="https://data.worldbank.org/indicator/SP.POP.TOTL?locations=UG">47 million</a> people.</p>
<p>Young people have legitimate and wide-ranging grievances, from unemployment to disenfranchisement. Opportunities remain limited, with <a href="https://www.worldbank.org/en/news/factsheet/2020/02/25/uganda-jobs-strategy-for-inclusive-growth">two-thirds of Ugandans</a> working for themselves or doing family-based agricultural work.</p>
<p>Yet, young people in Uganda haven’t coalesced as an electoral bloc. This is despite the emergence of a presidential candidate who champions youth issues. In the last presidential election in 2021, those aged between 18 and 30 made up <a href="https://www.monitor.co.ug/uganda/oped/commentary/what-young-voting-population-means-for-2021-elections-3206502">41%</a> of the total voter roll of 18 million. </p>
<p>Robert Kyagulanyi, the 41-year-old musician-turned-politician popularly known as Bobi Wine, leads the National Unity Platform. It is Uganda’s largest opposition party, known for its <a href="https://www.theguardian.com/world/2018/sep/21/young-africa-new-wave-of-politicians-challenges-old-guard">youth appeal</a>. </p>
<p><a href="https://theconversation.com/bobi-wine-has-already-changed-the-ugandan-opposition-can-he-change-the-government-150231">Bobi Wine’s run at the presidency in the 2021 election</a> highlights the reality that capturing the youth vote in Uganda is complex. And that this broad category and the role it plays in Ugandan politics is poorly understood.</p>
<p>As it is, the term “youth” lacks a clear definition. Uganda’s government defines the youth as those aged between 18 and 30. However, in practice the “youth” category is much more amorphous. It tends to <a href="https://opendocs.ids.ac.uk/opendocs/handle/20.500.12413/13550">encompass</a> those who are no longer considered children, but haven’t yet realised the “social markers” that signify adulthood. These include financial independence, marriage and children.</p>
<p>The outcome of the 2021 elections defied expectations, given Uganda’s <a href="https://www.ubos.org/wp-content/uploads/publications/11_2022NLFS_2021_main_report.pdf#page=135">large and underemployed youth population</a> and the emergence of Bobi Wine. In a recent <a href="https://www.tandfonline.com/doi/full/10.1080/17531055.2023.2235661">paper</a>, we examined youth political mobilisation in this election. </p>
<p>Despite widespread “youth wave” optimism, we identified diverse, embedded strategies and tactics from the ruling party, the <a href="https://www.nrm.ug/manifesto-2021-2026">National Resistance Movement</a>, that obstructed Bobi Wine’s efforts to build a powerful national youth constituency. </p>
<p>The strategies were:</p>
<ul>
<li><p>the structural capture of youth representation in Ugandan politics</p></li>
<li><p>diverse economic incentives for political loyalty in the form of loan schemes, grants and short-term employment </p></li>
<li><p>well-spun political narratives that draw on entrenched views of youth as beholden to their elders and the state. </p></li>
</ul>
<h2>New wine, old bottles</h2>
<p>When Bobi Wine ran in the presidential election, he was aged 38. Commentators worldwide suggested his candidacy represented a <a href="https://www.csmonitor.com/World/Africa/2019/1003/A-rapper-s-quest-to-be-president">real</a> and <a href="http://democracyinafrica.org/bobi_wine_threat_museveni/">unprecedented threat</a> to Yoweri Museveni’s longstanding rule. Museveni, 79, has been Uganda’s president since 1986.</p>
<p>Bobi Wine got <a href="https://www.theguardian.com/world/2021/jan/16/uganda-president-wins-decisive-election-as-bobi-wine-alleges">35%</a> of the vote. This is about the <a href="https://academic.oup.com/afraf/article-abstract/120/481/629/6406415?redirectedFrom=fulltext">same proportion of votes</a> that has accrued to the main opposition candidates in Uganda since multi-party elections resumed in 2006. </p>
<p>For a new entrant on the political scene, this was an impressive achievement – particularly in the light of political repression and patronage that make the <a href="https://time.com/5913625/bobi-wine-uganda-presidential-candidate/">playing field far from fair</a> in Uganda. </p>
<p>Bobi Wine’s <a href="https://www.amnesty.org/en/latest/press-release/2020/12/uganda-stop-killings-and-human-rights-violations-ahead-of-election-day/">violent arrest</a> in November 2020 gained international attention, as did the government’s aggressive response to protests calling for his release. These resulted in the <a href="https://www.hrw.org/news/2021/01/21/uganda-elections-marred-violence">deaths of at least 54 National Unity Platform supporters</a>. Security forces perpetrated <a href="https://www.hrw.org/news/2021/01/21/uganda-elections-marred-violence">widespread violence and human rights abuses</a> in the run-up to the election.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/black-november-remembering-ugandas-massacre-of-the-opposition-three-years-on-217847">Black November: remembering Uganda's massacre of the opposition three years on</a>
</strong>
</em>
</p>
<hr>
<p>On the eve of the election, the government ordered a <a href="https://apnews.com/article/kampala-elections-coronavirus-pandemic-uganda-united-states-65942284f4e73dbf120ace23775baae4">five-day internet shutdown</a>. There were also <a href="https://www.monitor.co.ug/uganda/special-reports/elections/nrm-dishes-out-money-to-locals-ahead-of-polls-3248892">reports</a> of the ruling party dishing out money to potential voters, with instructions to vote for Museveni. </p>
<p><a href="https://www.tandfonline.com/doi/full/10.1080/17531055.2023.2235661">Our research</a> reviewed Ugandan history since its independence from the British in 1962. We found that the possibility of a national youth constituency had been a concern of Uganda’s post-colonial governments. Regimes have long sought to integrate the youth into their political project, while keeping them fragmented and regionally embedded to prevent broader political mobilisation. </p>
<p>Contemporary tactics used by the ruling party to co-opt the youth converge with these historically rooted methods of regime consolidation. </p>
<h2>Splitting the youth</h2>
<p>The National Resistance Movement has an elaborate set of measures in place – from state level to the villages – to prevent youth discontent from becoming a national political threat. </p>
<p>First, the youth are organised into a “special interest group” <a href="https://www.jstor.org/stable/41653703">reinforced through quota systems</a>. These are closely allied with the ruling party’s leadership. Political structures, such as youth MPs and representatives, absorb youth representation under regime authority and entrench regional divisions. </p>
<p>Second, the ruling party uses patronage networks and tactics to mobilise young voters. It offers economic rewards for allegiance and generous material compensation for “party-switching” – which is when supporters defect from the opposition to the National Resistance Movement, often quite publicly. Ahead of the 2021 election, Museveni <a href="https://observer.ug/news/headlines/62550-inside-museveni-s-war-on-the-ghetto">gave state appointments to popular musicians with wide youth appeal</a> who had been working closely with Bobi Wine’s party. </p>
<p>The ruling party also offers young people <a href="https://www.monitor.co.ug/uganda/news/national/opposition-cries-foul-as-museveni-gives-shs741m-in-cash-donations-1484578">economic incentives</a> during campaigns. These include short-term employment, loans and cash handouts. Youth are often recruited as election workers, special police constables and crime preventers. In these short-term positions, tens of thousands of youth survey their communities and share local intelligence with the authorities, acting as the <a href="https://www.tandfonline.com/doi/full/10.1080/17531055.2016.1272283">state’s eyes and ears</a> at a village level. Among young, economically precarious men, this is seen as <a href="https://www.tandfonline.com/doi/full/10.1080/17531055.2023.2235661">an opportunity</a>, even though they become engaged in supporting the re-election of a regime they may oppose. </p>
<p>Third, during the last election, campaign observers were optimistic about the power of social media to amplify Bobi Wine’s message and increase support. But social media is also a tool the National Resistance Movement uses adeptly. Beyond internet shutdowns and disinformation campaigns, we found that Museveni and the National Resistance Movement used social media channels to promote powerful narratives that linked social order and prosperity to a culture of gerontocracy. This refers to a system of governance in which older people dominate.</p>
<h2>What hope for Bobi Wine?</h2>
<p>Well-developed structures, practices and narratives that fragment national youth mobilisation have been seen in recent Ugandan history. In northern Uganda, for example, young people have lived through a recent history of <a href="https://theconversation.com/managing-life-after-war-how-young-people-in-uganda-are-coping-108351">devastating conflict</a> and still struggle with its legacies. </p>
<p>This, combined with long-standing regional and ethnic tensions throughout the country, means that his opponents often describe Bobi Wine first as a <a href="https://academic.oup.com/afraf/article/120/481/629/6406415?login=true">political agitator</a> who could tear the country apart, not as the youth’s best chance for <a href="https://www.tandfonline.com/doi/full/10.1080/17531055.2023.2235661">political liberation and progress</a>. </p>
<p>Against this backdrop, if Bobi Wine contests in 2026, he is likely to struggle again. He may attract global media attention, but Museveni and the National Resistance Movement are familiar with his brand of <a href="https://academic.oup.com/afraf/article/120/481/629/6406415?login=true">“defiance-based” opposition politics</a>. </p>
<p>As commentators increasingly note, the big question remains whether Bobi Wine and the National Unity Platform, without experience in government and in the absence of strong links to powerful military and state players, can realistically achieve a political <a href="https://academic.oup.com/afraf/article/120/481/629/6406415?login=true">transition</a> in Uganda. </p>
<p>The overall picture is one in which the elite have long seen the youth as an important resource and potential threat – and as such fear and value them. While Uganda’s young people have real and legitimate grievances, they lack modes of political and social organisation – by long-standing design.</p>
<p><em>Arthur Owor, the director for research and operations at the Centre for African Research, is a co-author of this article.</em></p>
<p class="fine-print"><em><span>Rebecca Tapscott receives funding from the ESRC-funded Centre for Public Authority and International Development (CPAID) and the Gerda Henkel Foundation's Special Programme for Security, Society and the State.</span></em></p><p class="fine-print"><em><span>Anna Macdonald receives funding from the ESRC-funded Centre for Public Authority and International Development (CPAID). </span></em></p>Bobi Wine’s run at the presidency in 2021 had appeared to present an unprecedented threat to Yoweri Museveni’s longstanding rule.Rebecca Tapscott, Lecturer, University of YorkAnna Macdonald, Associate Professor, Global Development, University of East AngliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2185102024-01-09T19:15:38Z2024-01-09T19:15:38ZWanting to ‘move on’ is natural – but women’s pandemic experiences can’t be lost to ‘lockdown amnesia’<p>The COVID-19 pandemic was – and continues to be – hugely disruptive and stressful for individuals, communities and countries. Yet many seem desperate to close the chapter entirely, almost as if it had never happened. </p>
<p>This desire to <a href="https://www.washingtonpost.com/wellness/2023/03/13/brain-memory-pandemic-covid-forgetting/">forget and move on</a> – labelled “<a href="https://www.ft.com/content/be70b24e-8ca0-4681-a23b-0c59c69a2616">lockdown amnesia</a>” by some – is understandable at one level. But it also risks missing the opportunity to learn from what happened.</p>
<p>And while various official enquiries and royal commissions have been established to examine the wider government responses (including in New Zealand), the experiences of ordinary people are equally important to understand.</p>
<p>As researchers interested in women and gender roles, we wanted to capture some of this. For the past three years, our research has focused on what happened to everyday women during this period of uncertainty and disruption – and what lessons might be learned.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/QNZac2mmi7o?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<h2>Pandemic amnesia</h2>
<p>Individual memory can become vague as time goes on. But this can also be affected by broader narratives (in the media or official responses) that overwrite our own recollections of the pandemic.</p>
<p>Political calls to “<a href="https://www.mdpi.com/2076-0760/11/8/340">live with the virus</a>”, and <a href="https://www.rnz.co.nz/national/programmes/mediawatch/audio/2018849569/sick-and-tired-of-the-sickness">media hesitancy</a> to publish COVID-related stories due to perceived audience fatigue, can create a collective sense of needing to “move on”. Looking back can be seen as questionable, or even attacked.</p>
<p>Indeed, misinformation and disinformation have been used, <a href="https://www.routledge.com/Risk/Lupton/p/book/9781032327006">in the words</a> of leading pandemic social scientist Deborah Lupton, to “challenge science and manufacture dissent against attempts to tackle [such] crises”.</p>
<p>But as the memory scholar <a href="https://journals.sagepub.com/doi/pdf/10.1177/17506980231184563?casa_token=Wrs8pMKoFqcAAAAA:N9DN9rb9XNopHSIF2af2q8z4Ue457oW6l-mqPtBlmUQSy6dw53DYhQWxgk8BLe3SyWIzlkXTnvAPrYw">Sydney Goggins has put it</a>, such “public forgetting leads to a cascade of impacts on policy and social wellbeing”.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/jacinda-arderns-resignation-gender-and-the-toll-of-strong-compassionate-leadership-198152">Jacinda Ardern's resignation: gender and the toll of strong, compassionate leadership</a>
</strong>
</em>
</p>
<hr>
<h2>A gendered pandemic</h2>
<p>Responding to the rapidly changing social, cultural and economic impacts of the pandemic, feminist scholars have highlighted the particular <a href="https://www.frontiersin.org/Articles/10.3389/Fgwh.2020.588372/Full">physical and emotional toll</a> on women worldwide.</p>
<p>This has included <a href="https://academic.oup.com/biomedgerontology/article/77/Supplement_1/S31/6463712">social isolation and loneliness</a>, increased <a href="https://www.tandfonline.com/doi/full/10.1080/15487733.2020.1776561?src=recsys">domestic and emotional labour</a>, the rise in <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7262164/">domestic and gender-based violence</a>, <a href="https://www.tandfonline.com/doi/full/10.1080/13545701.2021.1876906">job losses and financial insecurity</a>. Black, Indigenous, minority and migrant women have <a href="https://journals.sagepub.com/doi/full/10.1177/08912432211001302">felt these impacts</a> particularly keenly.</p>
<p>The <a href="https://search.informit.org/doi/abs/10.3316/informit.777013552598989">same trends</a> have been observed in Aotearoa New Zealand. And whereas some countries embraced pandemic recovery strategies that recognised these gender differences, this <a href="https://theconversation.com/nz-budget-2021-women-left-behind-despite-the-focus-on-well-being-161187">hasn’t been the case</a> in New Zealand.</p>
<p>The gendered abuse of women leaders – former prime minister <a href="https://theconversation.com/jacinda-arderns-resignation-gender-and-the-toll-of-strong-compassionate-leadership-198152">Jacinda Ardern</a> and scientist <a href="https://www.rnz.co.nz/national/programmes/atthemovies/audio/2018913516/review-ms-information">Siouxsie Wiles</a>, for example – have been well documented. But the experiences of ordinary women, their struggles and strategies to look after themselves and others, have had much less attention.</p>
<h2>Experiences of everyday women</h2>
<p>Our study involved 110 women in Aotearoa New Zealand. We set out to understand how they adapted their everyday practices – work, leisure, exercise, sport – to maintain or regain wellbeing, social connections and a sense of community.</p>
<p>Despite many differences between the women in our sample, there were also shared experiences. We referred to the ruptures in the patterns, rhythms and routines of their lives as “<a href="https://onlinelibrary.wiley.com/doi/full/10.1111/gwao.12987">gender arrhythmia</a>”.</p>
<p>The women responded to the psycho-social and physical challenges, such as disrupted sleep or weight changes, by creating counter-rhythms – taking up hobbies, exercising, changing diet.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-pandemics-disproportionate-impact-on-women-is-derailing-decades-of-progress-on-gender-equality-180941">The pandemic’s disproportionate impact on women is derailing decades of progress on gender equality</a>
</strong>
</em>
</p>
<hr>
<p>The pandemic also prompted many to reflect on how their pre-pandemic routines and rhythms had caused various forms of “alienation”: from their own health and wellbeing, meaningful social connections, ethical and sustainable work practices, and pleasure.</p>
<p>The disruption of the pandemic caused many to reevaluate the importance of work in their lives. As one reflected: </p>
<blockquote>
<p>COVID-19 has made me reassess what is the most important thing. Is it making money? Actually, no, not at all.</p>
</blockquote>
<p>Others were prompted to question and challenge the gendered demands on women to “do everything” and “be everywhere” for everyone:</p>
<blockquote>
<p>I think as women, because we’re so good at multitasking, we just put so much on our plates. I think we need to learn just to say no, because we’re not superhuman. And ultimately, all of this responsibility is weighing us down.</p>
</blockquote>
<p>Our research also highlighted how the pandemic affected women’s relationships with <a href="https://www.sciencedirect.com/science/article/pii/S1755458623000270?casa_token=KcmGBPnpKLQAAAAA:MmQhDue20CoR0f6lK8rjWfxtBSHsjpzjbJu8tIc03StdccyCvduAs3CUVPwk18rPbklx3_j8DEo">familiar spaces and places</a>. Leaving home for a walk, run or bike ride became important everyday practices that proved highly beneficial for most women’s subjective wellbeing. </p>
<p>Some came to <a href="https://journals.sagepub.com/doi/abs/10.1177/01937235231200288">appreciate physical activity</a> for the general joys of movement and connection with people and places, rather than simply to achieve particular goals like fitness or weight loss. </p>
<h2>Special challenges for young women</h2>
<p>As part of our overall project, we also <a href="https://www.tandfonline.com/doi/full/10.1080/13668803.2023.2268818?needAccess=true">focused on 45 young women</a> (aged 16 to 25). This highlighted the importance of recognising how gender, ethnicity and socioeconomic circumstances intersect. </p>
<p>Listening to their <a href="https://www.tepunahamatatini.ac.nz/2023/11/07/the-invisible-glue-holding-families-together-during-the-pandemic/">pandemic stories</a>, we found young women played important roles in supporting their families and communities. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/covid-19-has-laid-bare-how-much-we-value-womens-work-and-how-little-we-pay-for-it-136042">COVID-19 has laid bare how much we value women's work, and how little we pay for it</a>
</strong>
</em>
</p>
<hr>
<p>In particular, Māori, Pacific and others from diverse ethnic or migrant backgrounds carried increased responsibilities in the home, including childcare, cleaning, cooking and shopping. While many did so willingly, these extra burdens took a toll on their schooling, mental health and wellbeing.</p>
<p>For many young women, the pandemic was a radical disruption to their everyday lives and routines during a critical stage of identity development. They missed key milestones and events, and crucial phases of education and social development. </p>
<p>Many still grieve for some of those losses. And some are struggling to rebuild social connections, motivation and aspirations.</p>
<p>For example, some described being passionate and aspiring athletes before the pandemic. But social anxieties and body-image issues left over from lockdowns have been hard to shake, and have seen them <a href="https://www.mdpi.com/2673-995X/3/3/55">struggle to return</a> to sport. </p>
<h2>The invisible work of migrant women</h2>
<p>We also looked deeply at the experiences of <a href="https://link.springer.com/chapter/10.1007/978-3-031-38797-5_9">12 middle-class migrant women</a>, and how prolonged border closures created real anxiety about “not being there” for families overseas. </p>
<p>As one nurse working on the front line of COVID care in NZ explained:</p>
<blockquote>
<p>About a year ago, the cases of COVID in my homeland were increasing so rapidly. My family were not very well and I was depending on social media […] trying to reach out to them. I was really scared at that time, not being able to see your family when they really need you, not being able to be with them.</p>
</blockquote>
<p>Some of the women in our sample also experienced <a href="https://www.tandfonline.com/doi/full/10.1080/14649365.2023.2275761">increased anti-immigrant sentiments</a> which further affected their health and wellbeing – and their feelings of belonging. As one said:</p>
<blockquote>
<p>I’ve become extremely sensitive. I cry about small things. My doctor said “go and get some fresh air, it’s good for you” […] I went outside for a walk, and someone shouted at me, screamed at me. I got terrified for my life. How do you expect me to have wellbeing when no one in the society accepts you?</p>
</blockquote>
<p>This arm of the research suggests a real need for <a href="https://www.belong.org.nz/migrant-experiences-in-the-time-of-covid">investment in policies and support strategies</a> specifically for migrant women and their communities in any future global health emergency.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/new-zealanders-are-learning-to-live-with-covid-but-does-that-mean-having-to-pay-for-protection-ourselves-219698">New Zealanders are learning to live with COVID – but does that mean having to pay for protection ourselves?</a>
</strong>
</em>
</p>
<hr>
<h2>Communities of care</h2>
<p>A key feature of our study was the highly creative ways women cultivated “<a href="https://journals.sagepub.com/doi/full/10.1177/2043820620934268">communities of care</a>” during the pandemic. Even when they were struggling themselves, they reached out to friends and family – and particularly other women. </p>
<p>The majority of our participants were prompted to think differently about their own health and wellbeing, and what is important in their lives (now and in the future). </p>
<p>Throughout the pandemic, women have worked quietly, behind the scenes, in their families, communities and workplaces, supporting their own and others’ health and wellbeing. This invisible labour is rarely acknowledged or celebrated. </p>
<p>Many still feel the toll of economic hardship, violence and exhaustion. And less tangible feelings of disillusionment remain in a society that has so quickly “moved on” from the pandemic.</p>
<p>Acknowledging and addressing pandemic amnesia – personal and collective – is an important first step in documenting, learning from, and using these experiences to <a href="https://www.sciencedirect.com/science/article/pii/S0277953622008176">better prepare for future events</a>. Next time, we need to ensure the necessary support is available for those most in need.</p>
<hr>
<p><em>The authors wish to acknowledge the other members of the research team: Dr Nikki Barrett, Dr Julie Brice, Dr Allison Jeffrey and Dr Anoosh Soltani.</em></p>
<hr><img src="https://counter.theconversation.com/content/218510/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Holly Thorpe receives funding from a Royal Society Te Apārangi James Cook Research Fellowship.</span></em></p><p class="fine-print"><em><span>Grace O'Leary, Mihi Joy Nemani, and Nida Ahmad do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>COVID was a ‘gendered pandemic’, with women carrying very different burdens to men. A three-year New Zealand research project aimed to overcome the urge to forget, and provide lessons for the future.Holly Thorpe, Professor in Sociology of Sport and Gender, University of WaikatoGrace O'Leary, Research Fellow, University of WaikatoMihi Joy Nemani, Senior Lecturer, Te Huataki Waiora School of Health, University of WaikatoNida Ahmad, Research Fellow, University of WaikatoLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2186712023-12-18T16:17:17Z2023-12-18T16:17:17ZVictorian Britain had its own anti-vaxxers – and they helped bring down a government<figure><img src="https://images.theconversation.com/files/565425/original/file-20231213-31-19s6sa.jpg?ixlib=rb-1.1.0&rect=11%2C5%2C3663%2C2886&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://wellcomecollection.org/works/tr7x4acf/images?id=chsz86gd">E.E. Hillemacher/Wellcome Collection</a></span></figcaption></figure><p>As the 1906 UK general election results <a href="https://www.theguardian.com/politics/1906/jan/15/electionspast.past">rolled in</a>, it became clear that the Conservative party, after 11 years in power, had suffered one of the most disastrous defeats in its history. Of 402 Conservative MPs, 251 lost their seats, including <a href="https://www.gov.uk/government/history/past-prime-ministers/arthur-james-balfour">their candidate for prime minister</a>, defeated on a 22.5% swing against him in the constituency he had held for two decades. </p>
<p>Rising food prices, unpopular taxes and an opposition that promised to spend heavily on an expanded welfare state all contributed to the <a href="https://liberalhistory.org.uk/history/1906-election/">Tory downfall that year</a>. But something else had tipped the opposition Liberal landslide over the edge – compulsory vaccination. </p>
<p>Anti-vaccination campaigner <a href="https://www.bmj.com/content/1/2374/1566.1">Arnold Lupton</a> had taken Sleaford in Lincolnshire for the Liberals on a 12% swing and immediately started his parliamentary campaign to abolish compulsory vaccination against smallpox, a public health policy that had been in place in England and Wales since 1853 (with Scottish and Irish legislation following suit in later years). </p>
<p>Hardly a single Conservative MP was an anti-vaccinator, but 174 of the 397 Liberal MPs in the new parliament signed Lupton’s petition. </p>
<p>Their attempt at changing the law was unsuccessful, but this flexing of parliamentary muscle by the anti-vaccinators persuaded the new Liberal government that the most expedient option was to reach a compromise with its backbench rebels.</p>
<p>In 1907, the law was changed to permit quick and easy opt-out by parents. Vaccination of all babies against smallpox remained theoretically compulsory until 1946, but in practice, it was now optional. A five-decade-long campaign, in the streets, the courts and finally parliament, had resulted in victory for the opponents of vaccination.</p>
<p>This is a sobering story for those of us who are researchers, medical professionals or public health activists campaigning against the spread of vaccine hesitancy in the modern world. </p>
<p>The success of vaccination in saving millions of lives, not just from <a href="https://theconversation.com/eradicating-smallpox-the-global-vaccination-push-that-brought-the-world-arm-to-arm-162091">smallpox</a> but a host of other diseases, seems so obvious that the case scarcely needs to be made. And yet it does, as just a cursory glance at social, <a href="https://www.theguardian.com/media/2023/may/09/gb-news-censured-after-naomi-wolf-compared-covid-jab-to-mass-murder">even at times mainstream</a>, media will reveal. </p>
<p>In response to this tide of dangerous disinformation, vaccine advocacy work often focuses on issues such as the lack of <a href="https://www.cidrap.umn.edu/lack-high-school-education-predicts-vaccine-hesitancy">public comprehension of scientific concepts</a> of “relative risk” and “efficacy”, and the connections of the anti-vaccine activists to more general conspiracy theories and <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9111101/">extreme religious</a> or <a href="https://researchonline.lshtm.ac.uk/id/eprint/4670453/1/Alarcon-etal-2023-The-far-right-and-anti.pdf">political movements</a>. </p>
<p>Many vaccine advocacy pieces conclude that we must simply educate the public better while cutting off the flow of disinformation, yet this has often proved an uphill struggle. Why? And can vaccine advocates learn anything from the historic defeat of 1906?</p>
<h2>Social media of the Victorian era</h2>
<p>A recently published resource of Victorian anti-vaccination <a href="https://academic.oup.com/dsh/advance-article/doi/10.1093/llc/fqad075/7330453">“street literature”</a> seeks to contribute to this effort by providing free access to 3.5 million words from 133 documents, ranging from short pamphlets to longer publications over the period 1854-1906.</p>
<p>What the 133 sources have in common is that they were all produced for public consumption, designed to strengthen or maintain the beliefs of the converted while reaching out for new converts. Existing outside the conventional publishing industry, this street literature was the social media of the Victorian era.</p>
<figure class="align-center ">
<img alt="Etching of children being vaccinated in East London in a crowded, chaotic room." src="https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=466&fit=crop&dpr=1 600w, https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=466&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=466&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=586&fit=crop&dpr=1 754w, https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=586&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/565150/original/file-20231212-25-cw1fnh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=586&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Children being vaccinated in East London.</span>
<span class="attribution"><a class="source" href="https://wellcomecollection.org/works/fmrb5a8p/images?id=dnmduxyq">Wellcome Collection</a></span>
</figcaption>
</figure>
<p>Computational analysis of these texts reveals anti-vaccination themes that are very similar to those of today. For instance, doubts about the effectiveness of vaccines, what they’re made of and their safety, feature prominently. </p>
<p>Other common themes include complaints that civil liberties are infringed by compulsory vaccination, alongside conspiracy theories of government cover-ups, general distrust of the medical profession, and an orientation towards alternative medicine. </p>
<p>What changes is the detail. For instance, fear of the inadvertent introduction of syphilis, tuberculosis and skin diseases, as very occasionally happened in Victorian times, may be compared to the rare <a href="https://theconversation.com/under-40s-can-ask-their-gp-for-an-astrazeneca-shot-whats-changed-what-are-the-risks-are-there-benefits-163571">blood clots</a> associated with the AstraZeneca COVID vaccine. </p>
<p>Other more spurious scare stories, such as an association between vaccination and tooth decay or mental illness, have their parallels in the <a href="https://theconversation.com/autism-and-vaccines-more-than-half-of-people-in-britain-france-italy-still-think-there-may-be-a-link-101930">discredited autism claims</a> of the present day. Likewise, modern conspiracy theories about big pharma have their Victorian parallel in allegations of medical profiteering from vaccination fees.</p>
<p>This study of the Victorian anti-vaxxers shows us that there are indeed recurrent fears more than two centuries old. But it also teaches us that some of the motivations of vaccine hesitancy stem from social, political and religious beliefs that are equally deep in time and often deeply held. </p>
<figure class="align-center ">
<img alt="A calf being used to make vaccines." src="https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/565148/original/file-20231212-21-thvmg9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The use of cattle to produce vaccines was one of the first biotechnology industries but drew fire from anti-vaccination activists on grounds of animal cruelty.</span>
<span class="attribution"><a class="source" href="https://wellcomecollection.org/works/ju78dfph">Wellcome Collection</a></span>
</figcaption>
</figure>
<p>For example, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9728709/pdf/homoeopathphys132846-0032.pdf">William Tebb</a>, one of the most prominent anti-vaxxers of Victorian times, campaigned with equal energy on a whole raft of causes, from women’s suffrage to the abolition of slavery via vegetarianism, animal rights and mystical religion. </p>
<p>For Tebb and many of his followers, these were intimately connected causes. To reach the root of the problem, we need to untangle these connections in sensitive ways that go beyond conventional public engagement.</p><img src="https://counter.theconversation.com/content/218671/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The project described in this article is funded by the UK Economic & Social Research Council.</span></em></p><p class="fine-print"><em><span>Chris Sanderson received funding from the UK Economic & Social Research Council for this project. </span></em></p><p class="fine-print"><em><span>Alice Deignan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Victorian anti-vaccine literature shows that the fears and concerns remain largely the same today.Derek Gatherer, Lecturer, Biomedical and Life Sciences, Lancaster UniversityAlice Deignan, Professor of Applied Linguistics, University of LeedsChris Sanderson, PhD Candidate, ESRC Centre for Corpus Approaches to Social Science, Lancaster UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2165982023-12-07T13:27:32Z2023-12-07T13:27:32ZDisinformation is rampant on social media – a social psychologist explains the tactics used against you<figure><img src="https://images.theconversation.com/files/564017/original/file-20231206-23-770wja.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C6205%2C4105&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Disinformation campaigns use emotional and rhetorical tricks to try to get you to share propaganda and falsehoods.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/worried-man-working-at-home-looking-in-laptop-royalty-free-image/1451439821">hobo_018/E+ via Getty Images</a></span></figcaption></figure><p>Information warfare <a href="https://www.nytimes.com/2023/10/26/technology/russian-disinformation-us-state-department-campaign.html">abounds, and everyone online has been drafted</a> whether they know it or not. </p>
<p>Disinformation is deliberately generated misleading content disseminated for selfish or malicious purposes. Unlike misinformation, which may be shared unwittingly or with good intentions, disinformation aims to foment distrust, destabilize institutions, <a href="https://www.jstor.org/stable/26508126">discredit good intentions</a>, defame opponents and delegitimize sources of knowledge such as science and journalism. </p>
<p>Many governments engage in disinformation campaigns. For instance, the Russian government has <a href="https://www.wired.com/story/russia-ukraine-taylor-swift-disinformation/">used images of celebrities</a> to attract attention to anti-Ukraine propaganda. Meta, parent company of Facebook and Instagram, warned on Nov. 30, 2023, that China <a href="https://www.npr.org/2023/11/30/1215898523/meta-warns-china-online-social-media-influence-operations-facebook-elections">has stepped up its disinformation operations</a>.</p>
<p>Disinformation is <a href="https://blogs.lse.ac.uk/medialse/2021/10/08/performing-disinformation-a-muddled-history-and-its-consequences/">nothing new</a>, and information warfare has been practiced by many countries, <a href="https://warontherocks.com/2019/06/the-united-states-needs-an-information-warfare-command-a-historical-examination/">including the U.S.</a> But the internet gives disinformation campaigns unprecedented reach. <a href="https://www.nytimes.com/2019/09/26/technology/government-disinformation-cyber-troops.html">Foreign governments</a>, <a href="https://misinforeview.hks.harvard.edu/article/who-knowingly-shares-false-political-information-online/">internet trolls</a>, domestic and international <a href="https://unicri.it/sites/default/files/2020-11/SM%20misuse.pdf">extremists</a>, <a href="https://counterhate.com/research/the-disinformation-dozen/">opportunistic profiteers</a> and even <a href="https://phys.org/news/2021-11-disinformation-realm-spycraft-shady-industry.html">paid disinformation agencies</a> exploit the internet to spread questionable content. Periods of <a href="https://www.psychologytoday.com/us/blog/unpacking-social-relations/202008/why-misinformation-goes-viral">civil unrest</a>, <a href="https://www.nytimes.com/2023/09/11/us/politics/china-disinformation-ai.html">natural disasters</a>, <a href="https://publichealth.jhu.edu/meeting-covid-19-misinformation-and-disinformation-head-on">health</a> crises and <a href="https://www.nytimes.com/2023/09/26/world/europe/ukraine-russia-war-disinformation.html">wars</a> <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8853081/">trigger anxiety</a> and the hunt for information, which disinformation agents take advantage of.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/rbxN6qfbE3w?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Meta has uncovered and blocked sophisticated Chinese disinformation campaigns.</span></figcaption>
</figure>
<p>Certainly it’s worth watching for the warning signs of <a href="https://theconversation.com/10-ways-to-spot-online-misinformation-132246">misinformation</a> and <a href="https://theconversation.com/incitement-to-violence-is-rarely-explicit-here-are-some-techniques-people-use-to-breed-hate-153585">dangerous speech</a>, but disinformation agents employ additional tactics beyond these. </p>
<h2>It’s just a joke</h2>
<p><a href="https://www.cde.ual.es/en/key-narratives-in-pro-kremlin-disinformation-the-hahaganda/">Hahaganda</a> is a <a href="https://stratcomcoe.org/pdfjs/?file=/publications/download/Full-stratcom-laughs-report_web_15-03-2017.pdf?zoom=page-fit">tactic</a> in which disinformation agents use memes, political comedy from state-run outlets, or speeches to make light of serious matters, attack others, <a href="https://euvsdisinfo.eu/lets-laugh-at-political-murder/">minimize violence</a> or <a href="https://www.psychologytoday.com/us/blog/without-prejudice/201307/dehumanizing-others-is-no-joke">dehumanize</a>, and deflect blame. </p>
<p>This approach provides an easy defense: If challenged, the disinformation agents can say, “Can’t you take a joke?” often followed by accusations of being too politically correct.</p>
<h2>Shhh … tell everyone</h2>
<p>Rumor-milling is a tactic in which the disinformation agents <a href="https://doi.org/10.1073/pnas.1517441113">claim to have exclusive access to secrets</a> they allege are being purposefully concealed. They indicate that you will “only hear this here” and will imply that others are unwilling to share the alleged truth – for example, “The media won’t report this” or “The government doesn’t want you to know” and “I shouldn’t be telling you this … .” </p>
<p>But they do not insist that the information be kept secret, and will instead include encouragement to share it – for example, “Make this go viral” or “Most people won’t have the courage to share this.” It’s important to question how an author or speaker could have come by such “secret” information and what their motive is to prompt you to share it.</p>
<h2>People are saying</h2>
<p>Often disinformation has no real evidence, so instead disinformation agents will <a href="https://press.princeton.edu/books/hardcover/9780691188836/a-lot-of-people-are-saying">find or make up people</a> to support their assertions. This impersonation can take multiple forms. Disinformation agents will use anecdotes as evidence, especially sympathetic stories from vulnerable groups such as women or children.</p>
<p>Similarly, they may disseminate “<a href="https://doi.org/10.1080/21670811.2023.2210616">concerned citizens’</a>” perspectives. These layperson experts present their social identity as providing the authority to speak on a matter: “As a mother …,” “As a veteran …,” “As a police officer ….” <a href="https://doi.org/10.1080/21670811.2023.2210616">Convert communicators</a>, or people who allegedly change from the “wrong” position to the “right” one, can be especially persuasive, such as the woman who got an abortion but regretted it. These people often don’t actually exist or may be <a href="https://www.bbc.com/news/world-us-canada-52733886">coerced</a> or paid. </p>
<p>If ordinary people don’t suffice, <a href="https://www.france24.com/en/live-news/20230907-fake-experts-drive-disinformation-before-bangladesh-polls">fake experts</a> may be used. Some are fabricated, and you can watch out for “<a href="https://datajournalism.com/read/handbook/verification-3/investigating-actors-content/3-spotting-bots-cyborgs-and-inauthentic-activity">inauthentic user</a>” behavior, for example, by checking X – formerly Twitter – accounts using the <a href="https://botometer.osome.iu.edu/">Botometer</a>. But fake experts can come in different varieties.</p>
<ul>
<li>A faux expert is someone used for their title but doesn’t have actual relevant expertise. </li>
<li>A pseudoexpert is someone who claims relevant expertise but has <a href="https://doi.org/10.3389/fpsyg.2021.732666">no actual training</a>.</li>
<li>A junk expert is a sellout. They may have had expertise once but now say whatever is profitable. You can often find that these people have supported other dubious claims – for example, that <a href="http://dx.doi.org/10.1177/003335490512000215">smoking doesn’t cause cancer</a> – or work for <a href="https://redwoods.libguides.com/fakenews/thinktanks">institutes</a> that regularly produce questionable “<a href="https://doi.org/10.1038/d41586-021-00733-5">scholarship</a>.”</li>
<li>Echo experts appear when disinformation sources cite each other to lend credence to their claims. China and Russia routinely <a href="https://www.brookings.edu/articles/china-and-russia-are-joining-forces-to-spread-disinformation/">cite one another’s</a> newspapers.</li>
<li>A stolen expert is someone who exists but was never actually contacted, and whose research is misinterpreted. Likewise, disinformation agents also steal credibility from known news sources, such as by <a href="https://stratheia.com/typo-squatting-and-disinformation-campaigns-are-the-growing-threat-to-the-national-security-of-pakistan/">typosquatting</a>, the practice of setting up a domain name that closely resembles a legitimate organization’s.</li>
</ul>
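<p>To make the typosquatting idea concrete, here is a minimal, hypothetical sketch (not a tool any source above describes – the domain list and distance threshold are invented for illustration) of how a look-alike domain could be flagged by comparing it against known legitimate domains using edit distance:</p>

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical allow-list of legitimate news domains, for illustration only.
LEGITIMATE = ["theguardian.com", "bbc.co.uk", "nytimes.com"]

def looks_like_typosquat(domain: str, max_distance: int = 2) -> bool:
    """Flag a domain that is close to, but not identical to, a known one."""
    return any(0 < edit_distance(domain, real) <= max_distance
               for real in LEGITIMATE)
```

<p>On this toy list, <code>looks_like_typosquat("theguardlan.com")</code> is flagged (one character away from a real domain), while the genuine domains themselves are not. Real detection systems also account for homoglyphs and alternate top-level domains, which simple edit distance misses.</p>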
<p>You can check whether accounts, anecdotal or scientific, <a href="https://www.forbes.com/sites/kathycaprino/2014/05/19/are-you-dealing-with-a-real-expert-or-a-fake-7-ways-to-tell/?sh=5e91e90f6dba">have been verified by other reliable sources</a>. Google the name. Check expertise status, source validity and interpretation of research. Remember, <a href="https://www.ncbi.nlm.nih.gov/books/NBK63643/">one story</a> or interpretation is not necessarily representative. </p>
<h2>It’s all a conspiracy</h2>
<p>Conspiratorial narratives involve some malevolent force – for example, “the deep state” – <a href="https://doi.org/10.1037/bul0000392.supp">engaged in covert actions</a> with the aim to cause harm to society. That certain conspiracies such as <a href="https://www.smithsonianmag.com/smart-news/what-we-know-about-cias-midcentury-mind-control-project-180962836/">MK-Ultra</a> and Watergate have been confirmed is often offered as evidence for the validity of new unfounded conspiracies. </p>
<p>Nonetheless, disinformation agents find that constructing a conspiracy is an effective means to remind people of past reasons to <a href="https://doi.org/10.1007/s11109-014-9287-z">distrust governments, scientists or other trustworthy sources</a>. </p>
<p>But extraordinary claims require extraordinary evidence. Remember, the conspiracies that were ultimately unveiled had evidence – often from sources like investigative journalists, scientists and government investigations. Be particularly wary of conspiracies that try to <a href="https://doi.org/10.1057/s41296-019-00372-6">delegitimize knowledge-producing institutions</a> like universities, research labs, government agencies and news outlets by claiming that they are in on a cover-up.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/u8Pg-cD0ytg?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Basic tips for resisting disinformation and misinformation include thinking twice before sharing social media posts that trigger emotional responses like anger and fear and checking the sources of posts that make unusual or extraordinary claims.</span></figcaption>
</figure>
<h2>Good vs. evil</h2>
<p>Disinformation often serves the dual purpose of making the originator look good and their opponents look bad. Good-versus-evil framing takes this further by painting issues as a moral battle, using accusations of evil to <a href="https://doi.org/10.1177/0146167213500997">legitimize violence</a>. Russia is particularly fond of accusing others of being <a href="https://www.nytimes.com/interactive/2022/07/02/world/europe/ukraine-nazis-russia-media.html">secret Nazis</a>, <a href="https://www.voanews.com/a/europe_why-kremlin-tagging-protesters-political-pedophiles/6201377.html">pedophiles</a> or <a href="https://www.atlanticcouncil.org/blogs/ukrainealert/nato-nazis-satanists-putin-is-running-out-of-excuses-for-his-imperial-war/">Satanists</a>. Meanwhile, they often depict their own soldiers as helping children and the elderly. </p>
<p>Be especially wary of <a href="https://doi.org/10.1177/2046147X14542958">accusations of atrocities</a> like genocide, especially under the attention-grabbing “breaking news” headline. <a href="https://www.bbc.com/news/world-39266863">Accusations</a> abound. Verify the facts and how the information was obtained. </p>
<h2>Are you with us or against us?</h2>
<p>A false dichotomy narrative sets up the reader to believe that they have one of two mutually exclusive options: a good or a bad one, a right or a wrong one, a red pill or a blue pill. You can accept their version of reality or be an idiot or “sheeple.” </p>
<p>There are always more options than those being presented, and issues are rarely so black and white. This is just one of the tactics in <a href="https://doi.org/10.3389%2Ffsoc.2023.1141416">brigading</a>, where disinformation agents seek to silence dissenting viewpoints by casting them as the wrong choice. </p>
<h2>Turning the tables</h2>
<p><a href="https://theconversation.com/whataboutism-what-it-is-and-why-its-such-a-popular-tactic-in-arguments-182911">Whataboutism</a> is a classic Russian disinformation technique they use to deflect attention from their own wrongdoings by alleging the wrongdoings of others. These allegations about the actions of others may be <a href="https://hedgehogreview.com/issues/the-use-and-abuse-of-history/articles/whataboutism">true or false but are nonetheless irrelevant</a> to the matter at hand. The potential past wrongs of one group does not mean you should ignore the current wrongs of another. </p>
<p>Disinformation agents also often cast their group as the wronged party. They only engage in disinformation because their “enemy” engages in disinformation against them; they only attack to defend; and their reaction was appropriate, while that of others was an <a href="https://doi.org/10.1177/0146167207311282">overreaction</a>. This type of <a href="https://doi.org/10.1177/0146167207311282">competitive victimhood</a> is particularly pervasive when groups have been embedded in a long-lasting conflict.</p>
<p>In all of these cases, the disinformation agent is aware that they are deflecting, misleading, trolling or outright fabricating. If you don’t believe them, they at least want to make you question what, if anything, you can believe. </p>
<p>Before you hand over your money, you probably look into the things you buy rather than taking the advertising at face value. The same should go for the information you buy into.</p><img src="https://counter.theconversation.com/content/216598/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>H. Colleen Sinclair has received funding from the National Institute of Justice, the Department of Defense, the Gates Foundation, and the Louisiana Department of Corrections</span></em></p>Disinformation campaigns often use a set of rhetorical devices that you can learn to spot, like conspiracy narratives, good versus evil framing, and revealed secrets.H. Colleen Sinclair, Associate Research Professor of Social Psychology, Louisiana State University Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2185222023-11-30T19:03:44Z2023-11-30T19:03:44ZThe news is fading from sight on big social media platforms – where does that leave journalism?<p>According to a <a href="https://newsmediauk.org/blog/2023/11/02/editors-warn-of-existential-threat-to-journalism-from-big-tech/">recent survey</a> by the News Media Association, 90% of editors in the United Kingdom “believe that Google and Meta pose an existential threat to journalism”. </p>
<p>Why the pessimism? Because being in the news business but relying on social media platforms and search engines has become very risky. The big tech companies are de-prioritising news content, making it harder for citizens to find verified information produced by journalists.</p>
<p>Arguably, though, the threat isn’t necessarily existential. News companies are also <a href="https://www.inma.org/blogs/research/post.cfm/the-un-conscious-uncoupling-of-platforms-and-news-publishers-is-happening-quickly">leaving social media platforms</a>, potentially claiming back some control and building resilience into their revenue models. </p>
<p>Leading New Zealand digital publisher Stuff, for example, recently decided to stop <a href="https://www.stuff.co.nz/national/300988705/stuff-group-withdraws-from-x-formerly-twitter">posting its content</a> on X (formerly Twitter), “except stories that are of urgent public interest – such as health and safety emergencies”.</p>
<p>But as I describe in my new book, <a href="https://www.bwb.co.nz/books/from-paper-to-platform/">From Paper to Platform</a>, news organisations that continue to conduct their news business via these platforms will have limited control. As social media companies and search engines change the terms of their services at will, news companies are left to deal with the consequences. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/breaking-news-making-google-and-facebook-pay-nz-media-for-content-could-deliver-less-than-bargained-for-196030">Breaking news: making Google and Facebook pay NZ media for content could deliver less than bargained for</a>
</strong>
</em>
</p>
<hr>
<h2>Risks of ‘platformed publishing’</h2>
<p>Platforms such as Google and Facebook play various roles in the modern media ecosystem. Consequently, their actions create multiple risk points for news media. The impacts differ, of course, depending on each news company’s own goals and strategies.</p>
<p>As one <a href="https://journals.sagepub.com/doi/full/10.1177/14648849211031363">Scandinavian study</a> of media risk management noted, “platforms pose a competitive threat to news organisations”. But that threat varies, depending on how news organisations respond, and how reliant they are on those platforms for audience reach or funding.</p>
<p>News companies distribute their content on platforms such as Facebook or X because that’s where their audience is – at least a large proportion of it, anyway. But news is poorly promoted by those platforms, and Google and Facebook admit news makes up only a tiny fraction of their overall content.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/even-experts-struggle-to-tell-which-social-media-posts-are-evidence-based-so-what-do-we-do-217448">Even experts struggle to tell which social media posts are evidence-based. So, what do we do?</a>
</strong>
</em>
</p>
<hr>
<p>Furthermore, the visibility of news within these platforms is rapidly declining. The result is described by the authors of <a href="https://global.oup.com/academic/product/the-power-of-platforms-9780190908867?lang=en&cc=us">The Power of Platforms</a> as “<a href="https://reutersinstitute.politics.ox.ac.uk/news/power-platforms">platformed publishing</a>”: </p>
<blockquote>
<p>a situation where some news organisations have almost no control over the distribution of their journalism because they publish primarily to platforms defined by coding technologies, business models, and cultural conventions over which they have little influence.</p>
</blockquote>
<p>As a recent <a href="https://www.wired.com/story/facebook-is-giving-up-on-news-again/">Wired article observed</a>, “Facebook is done with news”: its parent company Meta is “killing off the News tab in France, Germany and the UK”, having already temporarily blocked access to news content in Australia in 2021 and more recently in Canada, where the blackout continues.</p>
<p>Instagram’s new Threads app (also owned by Meta) has no appetite for hard news, Google’s search results offer <a href="https://pressgazette.co.uk/media-audience-and-business-data/google-core-update-news-search-october-2023/">less news</a>, and X has stopped showing news headlines and links on tweets.</p>
<h2>Weakening democracy</h2>
<p>The New Zealand news publishers I spoke to generally believe platform algorithms don’t prioritise factual news content. As <a href="https://www.bwb.co.nz/books/from-paper-to-platform/">one observed</a>, the “platforms have the control over algorithms”. Another noted how platforms “can bury or promote you as they like, their tweaks in algorithms determine your fate”.</p>
<p>This has real consequences beyond the impact on media metrics and advertising revenue. Platforms have an influence on democratic processes – including elections.</p>
<p>The same News Media Association survey quoted at the start of this article also reveals 77% of UK editors believe platform antics such as news blackouts will weaken democratic societies. </p>
<p>When people cannot access (or have limited access to) verified and trusted news, other things fill the void. The Israel-Gaza conflict, to take just the most recent example, has seen an <a href="https://www.euronews.com/next/2023/10/11/eus-thierry-breton-gives-elon-musk-24-hour-ultimatum-to-deal-with-israel-hamas-misinformat">increase in disinformation</a> on X – to the extent the European Union’s digital rights chief warned owner Elon Musk he was potentially breaching EU law.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/41-us-states-are-suing-meta-for-getting-teens-hooked-on-social-media-heres-what-to-expect-next-216914">41 US states are suing Meta for getting teens hooked on social media. Here’s what to expect next</a>
</strong>
</em>
</p>
<hr>
<h2>Terms of payment</h2>
<p>There has been some cause for optimism recently, with Google and Facebook becoming funders of journalism and news after being either mandated or coerced to pay publishers for their content. </p>
<p>Australia was first to introduce a law requiring platforms to compensate news companies, followed by Canada. The previous New Zealand government introduced a <a href="https://www.parliament.nz/en/pb/sc/make-a-submission/document/54SCEDSI_SCF_FC7FAAC0-2EC0-4E47-7AB5-08DB9EBB2302/fair-digital-news-bargaining-bill">similar bill</a> to parliament, but there is no certainty it will become law under the new administration. </p>
<p>In Australia and Canada, the platforms implemented news “blackouts” in their services as a response to these laws, effectively making news invisible to their users.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-google-and-meta-owe-news-publishers-much-more-than-you-think-and-billions-more-than-theyd-like-to-admit-216818">Why Google and Meta owe news publishers much more than you think – and billions more than they’d like to admit</a>
</strong>
</em>
</p>
<hr>
<p>And while these platform payments have brought additional revenue to many news publishers, the terms of the payments are not public. It’s hard to estimate how much Google and Facebook have actually paid for news content, but it has been <a href="https://cepr.org/voxeu/columns/logic-behind-australias-news-media-bargaining-code">estimated in Australia</a> to be A$200 million annually. </p>
<p>If that sounds substantial, consider this: <a href="https://policydialogue.org/publications/working-papers/paying-for-news-what-google-and-meta-owe-us-publishers-draft-working-paper/">a recent US study</a> suggested Google and Meta should be paying far more than they do, estimating Facebook owes news publishers US$1.9 billion and Google US$10-12 billion annually.</p>
<p>It’s hard to see those platforms agreeing to such figures, or increasing any payments for news. More likely, the payments will gradually dwindle as Google and Meta continue prioritising other services and products over news. </p>
<p>Newsrooms will likely have to say goodbye to platformed publishing and social media news distribution. It’s clear it isn’t working as well as many hoped, and it will almost certainly not work in the long term.</p>
<p class="fine-print"><em><span>Merja Myllylahti does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Social media platforms are abandoning news – which is bad news for traditional media organisations that have come to rely on them for consumers.Merja Myllylahti, Senior Lecturer, Co-Director Research Centre for Journalism, Media & Democracy, Auckland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2178422023-11-23T08:14:24Z2023-11-23T08:14:24ZDisinformation is part and parcel of social media’s business model, new research shows<figure><img src="https://images.theconversation.com/files/561013/original/file-20231122-21-41uhqo.jpg?ixlib=rb-1.1.0&rect=12%2C8%2C2683%2C1786&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/es/image-photo/two-cute-beautiful-young-women-friends-295469396">fizkes/Shutterstock</a></span></figcaption></figure><p>Deceptive online content is big business. The digital advertising market is now worth <a href="https://www.statista.com/outlook/dmo/digital-advertising/worldwide#ad-spending">€625 billion</a>, and its business model is simple: more clicks, views or engagement means more money from advertisers. Incendiary, shocking content – whether it is true or not – is an easy way to get our attention, which means advertisers can end up funding <a href="https://www.propublica.org/article/google-alphabet-ads-fund-disinformation-covid-elections">fake news</a> and <a href="https://www.mediamatters.org/twitter/musk-endorses-antisemitic-conspiracy-theory-x-has-been-placing-ads-apple-bravo-ibm-oracle">hate speech</a>. </p>
<p>This is not an accident – social media platforms <a href="https://www.vice.com/en/article/8xw575/heres-how-google-sends-advertising-dollars-to-fake-news-sites">know</a> they profit from the spread of disinformation, while advertisers turn <a href="https://www.newsguardtech.com/special-reports/brands-send-billions-to-misinformation-websites-newsguard-comscore-report/">a blind eye</a>.</p>
<p>Disinformation aims to confuse, paralyse and polarise society for political, military or commercial purposes, using <a href="https://www.disinformationindex.org/blog/2022-06-22-disinformation-as-adversarial-narrative-conflict/">orchestrated campaigns</a> that strategically spread <a href="https://doi.org/10.1177/07439156221103852">deceptive or manipulative media content</a>. On social media, <a href="https://datasociety.net/pubs/oh/DataAndSociety_LexiconofLies.pdf">disinformation tools</a> include bots, deepfakes, fake news and conspiracy theories.</p>
<p>Up to now, most disinformation research has focused on how the system is abused by <a href="https://doi.org/10.1177/0163443716686672">national interests and authoritarian leaders</a>. My research shows that <a href="https://doi.org/10.1177/14614448231207644">disinformation is, in fact, a likely and predictable outcome</a> of this market system rather than an unforeseen consequence.</p>
<h2>A business model that rewards engagement</h2>
<p>Social media platforms were designed not to convey information but to entertain. They were designed to identify things like the most amusing cat videos, and then recommend them to people who would share them. However, marketing researchers have since found that content that evokes strong positive emotions like awe, or negative emotions like anger and anxiety, is <a href="https://doi.org/10.1509/jmr.10.0353">more likely to go viral</a>. Platforms have taken note of this and built it into their business models.</p>
<p><a href="https://doi.org/10.1177/14614448231207644">The business model of social media</a> works as follows. Platforms provide us with free “infotainment” (information and entertainment), and do everything in their power to keep us engaged. While we consume the content, the platform harvests our data, which is then processed into predictive analytics – the information that is used to target adverts. Advertisers pay for these analytics to power their targeted advertising campaigns. </p>
<p>There is a financial incentive for most platforms to maximise online engagement, which means that any content, factual or not, that receives clicks, likes and comments is highly valued. Influencers who share incendiary, controversial content can become wealthy as a result, often leading others to replicate their style. Therefore, it is unsurprising that many creators publish confrontational, simplistic and emotionally charged content with us-against-them narratives. </p>
<p>Stoking social anxieties and fuelling tribalism is also <a href="https://doi.org/10.1177/07439156221103852">how conspiracy theories circulate</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/i-watched-hundreds-of-flat-earth-videos-to-learn-how-conspiracy-theories-spread-and-what-it-could-mean-for-fighting-disinformation-184589">I watched hundreds of flat-Earth videos to learn how conspiracy theories spread – and what it could mean for fighting disinformation</a>
</strong>
</em>
</p>
<hr>
<h2>Digital marketing and disinformation</h2>
<p>Digital marketing is a commercial practice by which firms create value over the internet. It includes search optimisation, content marketing, influencers, pay-per-click adverts, affiliate programs, and ordinary advertising. Brands hire digital marketing agencies and firms known as <a href="https://www.techopedia.com/definition/31293/ad-tech">ad tech</a>, which operate the software that makes <a href="https://www.lifewire.com/ads-online-why-are-they-following-you-around-the-web-4063788">adverts follow us around the internet</a>. </p>
<p>ad tech firms <a href="https://www.bandt.com.au/how-programmatic-advertising-funds-an-increasingly-polarised-world/">operate without accountability or oversight</a>, so when a brand pays an ad tech firm to place their ads, they also outsource their responsibility. A brand might therefore unknowingly end up funding disinformation about major global events like the <a href="https://doi.org/10.1080/15252019.2023.2173991">Russia-Ukraine war</a> and the <a href="https://www.aljazeera.com/news/2023/11/10/tiktok-faces-renewed-calls-for-a-ban-amid-pro-hamas-anti-israel-claims">Israel-Palestine war</a>. Even after being presented with <a href="https://www.disinformationindex.org/disinfo-ads/2022-10-18-advertising-week-new-york-ad-techs-disinformation-problem/">evidence</a>, brands remain silent.</p>
<p>Influencers play an especially important role in this cutthroat digital market. Driven by the promise of advertising money, they seek engagement at any cost, even going as far as promoting <a href="https://doi.org/10.1353/jod.2019.0002">content that undermines democratic institutions</a>. If an influencer has to be demonetised or banned for publishing hate speech, it makes no difference to the platform, because <a href="https://www.wired.com/story/meta-is-making-millions-from-fake-accounts/">the platforms get to keep the advertising revenue</a>. </p>
<h2>Democratic governance of digital platforms</h2>
<p>Most brands do not want to be associated with hate speech and bot farms, but <a href="https://www.tandfonline.com/doi/full/10.1080/21670811.2018.1556314">they are</a>. It is easy to look the other way in such a technically complicated market, but marketers have a responsibility. Brands become complicit by remaining silent.</p>
<p>Policymakers and activists are pushing to <a href="https://www.unesco.org/en/articles/online-disinformation-unesco-unveils-action-plan-regulate-social-media-platforms">reform digital platforms to counter disinformation</a>. Most efforts focus on content moderation and fact checking, but little attention is being paid to reforming the digital advertising market. </p>
<p>Platforms and ad tech firms must work to reform a market that profits from disinformation, though it appears they are often <a href="https://theconversation.com/how-tech-firms-have-tried-to-stop-disinformation-and-voter-intimidation-and-come-up-short-148771">unwilling or unable to lead the way</a>. </p>
<p>Brand managers can use their budgets to hold platforms accountable, especially if they act in large numbers, as demonstrated by the recent <a href="https://www.bbc.com/news/world-us-canada-67460386">X (formerly known as Twitter) ad boycott following Elon Musk’s antisemitic remarks</a>. If all else fails, <a href="https://theconversation.com/regulating-political-misinformation-isnt-easy-but-its-necessary-to-protect-democracy-216537">policymakers must step in</a> to ensure that the profits of these tech giants do not come at the cost of our democracy.</p>
<p class="fine-print"><em><span>Carlos Diaz Ruiz does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.</span></em></p>Deceptive content on social media is being monetised by digital platforms, advertisers, and influencersCarlos Diaz Ruiz, Assistant Professor, Hanken School of EconomicsLicensed as Creative Commons – attribution, no derivatives.