Bots – The Conversation

Gaza is now the frontline of a global information war
<p>The conflicts in Gaza and Ukraine have become key battlegrounds in an information war that goes far wider than their tightly drawn physical borders. Carefully crafted social media posts and other online propaganda are fighting to make people around the world take sides, harden their positions and even <a href="https://time.com/6549544/israel-and-hamas-the-media-war/">move broader public opinion</a>. </p>
<p>Propaganda has always been a weapon of war, but the digital <a href="https://www.ieworldconference.org/content/WP2023/Papers/GDRKMCC23_4.pdf">revolution</a> has increased its reach, immediacy and potency. This makes it ever harder for the average person, and even for professionals with expertise, to work out what is true and what isn’t. </p>
<p>To understand this information war, we need to understand where and how arguments and ideologies are promoted and developed online. </p>
<p>In some instances, online propaganda simply involves the framing of real events, <a href="https://www.isdglobal.org/digital_dispatches/capitalising-on-crisis-russia-china-and-iran-use-x-to-exploit-israel-hamas-information-chaos/?cmplz-force-reload=1705683801885">violent images and videos, and hate speech</a> to emphasise the guilt of one side and vindicate the other.</p>
<p>But much material relies on the creation of what’s commonly referred to as fake news. This often takes the form of fabricated stories published on social media that repurpose or mislabel real photos or videos. </p>
<p>For example, <a href="https://perma.cc/5H76-YBBP">one post</a> on X (formerly Twitter) that was viewed 300,000 times used a photo of an accidental fire at a McDonald’s restaurant in New Zealand to falsely claim the company had been attacked by pro-Palestinian protestors for its perceived support of Israel. Despite <a href="https://factcheck.afp.com/doc.afp.com.34GE6ZA">being debunked</a>, the story was still the focus of heated <a href="https://twitter.com/search?q=mcdonalds%20IDF&src=typed_query&f=live">discussions</a> on social media channels. </p>
<p>There are also <a href="https://news.sky.com/story/its-important-to-separate-the-facts-from-speculation-what-we-actually-know-about-the-viral-report-of-beheaded-babies-in-israel-12982329">reports of excerpts from video</a> games and old TikToks being shared with claims they are from real current events in Gaza, and fake government agency <a href="https://www.euronews.com/my-europe/2023/11/07/israel-hamas-war-fake-mossad-account-creates-online-confusion">social media accounts</a> posting disinformation. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-houthis-four-things-you-will-want-to-know-about-the-yemeni-militia-targeted-by-uk-and-us-military-strikes-221040">The Houthis: four things you will want to know about the Yemeni militia targeted by UK and US military strikes</a>
</strong>
</em>
</p>
<hr>
<p>Advances in AI are also playing a role. Experts in digital <a href="https://apnews.com/article/artificial-intelligence-hamas-israel-misinformation-ai-gaza-a1bb303b637ffbbb9cbc3aa1e000db47">forensics</a> have shown how AI-faked photographs of bloodied babies and abandoned children in Gaza were being widely used in November 2023. These were being published at the same time as the media was trying to investigate allegations that <a href="https://news.sky.com/story/its-important-to-separate-the-facts-from-speculation-what-we-actually-know-about-the-viral-report-of-beheaded-babies-in-israel-12982329">babies</a> had been beheaded in the Hamas attack of October 7. </p>
<p>Deepfake videos have been used in the Gaza conflict to show prominent <a href="https://fortune.com/2023/12/04/deepfakes-israel-hamas-war-ai-detection-tech-startups/">figures</a> in the Middle East saying things they never said and, it is reasonable to assume, do not believe. Edited battlefield footage from Ukraine and modified footage from high-end military computer games have also been passed off as deepfaked “Gazan footage”, with the Associated Press keeping an extensive <a href="https://apnews.com/article/artificial-intelligence-hamas-israel-misinformation-ai-gaza-a1bb303b637ffbbb9cbc3aa1e000db47">archive</a> of examples.</p>
<p>Based on what we know about misinformation on other subjects, it’s likely that much of this online propaganda about Gaza isn’t being generated by individual supporters posting randomly on social media. <a href="https://www.zdnet.com/article/the-dark-webs-latest-offering-disinformation-as-a-service">Misinformation contractors</a> now make their <a href="https://www.theguardian.com/world/2023/feb/15/revealed-disinformation-team-jorge-claim-meddling-elections-tal-hanan">services available</a> on the dark web (an encrypted part of the web that makes it very difficult to identify users) to people looking to mount widespread campaigns.</p>
<p>Inside the dark web, those developing mis- and disinformation can use techniques employed by legitimate marketing companies in the outside world. They can <a href="https://www.tandfonline.com/doi/pdf/10.1080/23738871.2020.1797135?casa_token=BqdCKN1_d5YAAAAA:Fn7CF8QaYy62jxJFJOKOfkiK7yneTQ_Tz7PMcR7B6KHBHdBa_xHDP0A8S1VWMcLGbl-gBCTFukY">experiment</a> with messages and test the responses they receive. On dark web forums, <a href="https://dl.acm.org/doi/pdf/10.1145/3366424.3385775?casa_token=1EVIZafzOkQAAAAA:O0_x_p8Teo-BifB8gkMRs7T247ebOH08wO7QkFLgqDLLARJERRguRHwAjCdAwvDowiC3fE6AYk0">groups of activists</a> can collaborate on <a href="https://www.tandfonline.com/doi/pdf/10.1080/10584609.2019.1661888?casa_token=DFHnxgRL-mAAAAAA:3UYaV5i58bAcJ1i_5S_XWzIkOPcdO1Qe8OeIW8A5U8o-myGRu1ZXDuqiVNiMdIhqkbL8V_iaGeg">messaging</a>, imagery, timing and targeting to best effect.</p>
<p>Another origin of much misinformation is <a href="https://www.nato.int/nato_static_fl2014/assets/pdf/2020/5/pdf/2005-deepportal2-troll-factories.pdf">“troll farms”</a>, which are staffed by government agents or their proxies in <a href="https://www.rollingstone.com/politics/politics-features/china-internet-trolls-russia-copycat-1234728307/">China</a>, <a href="https://www.rand.org/content/dam/rand/pubs/research_reports/RR4300/RR4373z1/RAND_RR4373z1.pdf">North Korea</a> and <a href="https://www.cnn.com/2023/02/14/europe/russia-yevgeny-prigozhin-internet-research-agency-intl/index.html">Russia</a>, among other countries. These groups identify the messages they think will change attitudes and amplify them through coordinated social media campaigns.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/israel-now-ranks-among-the-worlds-leading-jailers-of-journalists-we-dont-know-why-theyre-behind-bars-221411">Israel now ranks among the world’s leading jailers of journalists. We don't know why they're behind bars</a>
</strong>
</em>
</p>
<hr>
<p>They are increasingly using AI-driven bots programmed to spread particular narratives or key words or phrases. “Viral” bots magnify the reach of their content by getting networks of other bots to repost it, which in turn encourages <a href="https://www.econstor.eu/bitstream/10419/214101/1/IntPolRev-2019-4-1442.pdf">search engine</a> and <a href="https://oro.open.ac.uk/66155/8/70-Article%20Text-258-2-10-20190906.pdf">social media</a> algorithms that favour popular and provocative posts to give it greater prominence. </p>
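<p>As a rough illustration of the mechanism described above, the short Python sketch below shows how a burst of coordinated reposts can inflate a naive engagement-based ranking score. The scoring formula and the numbers are assumptions chosen for illustration, not any platform’s actual algorithm.</p>
<pre><code>
# Minimal sketch (illustrative only): how coordinated bot reposts can inflate
# a naive popularity-based ranking score. The formula below is an assumption,
# not any platform's real ranking algorithm.

def popularity_score(reposts: int, likes: int, age_hours: float) -> float:
    """Toy engagement score: more reposts and likes rank higher, older posts decay."""
    return (reposts * 2 + likes) / (1 + age_hours)

# An ordinary post after three hours.
organic = popularity_score(reposts=15, likes=40, age_hours=3)

# The same post after a network of 500 bot accounts reposts it.
boosted = popularity_score(reposts=15 + 500, likes=40, age_hours=3)

print(f"organic score: {organic:.1f}, boosted score: {boosted:.1f}")
# The boosted post now far outranks typical organic content, so a
# popularity-favouring algorithm surfaces it to many more real users.
</code></pre>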
<p>The dark web origins of misinformation makes it much harder for governments to track and stop the people creating it, as does the use of encrypted messaging services such as WhatsApp and <a href="https://www.rollingstone.com/politics/politics-features/telegram-fueling-israel-hamas-war-misinformation-1234854300/">Telegram</a> to share content. By the time the authorities have identified a piece of misinformation it may have been seen by many thousands of people across multiple channels. </p>
<p>The traditional media is also struggling to sift through and counter the weight of misinformation about Gaza, which appears in social media much faster than journalists can verify or debunk it. And the death of <a href="https://www.theguardian.com/world/2023/dec/21/israel-idf-accused-targeting-journalists-gaza#:%7E:text=The%20Committee%20to%20Protect%20Journalists,workers%20in%20any%20recent%20conflict">so many journalists</a> in Gaza is making accurate news harder to gather. </p>
<p>Media outlets are often <a href="https://www.theguardian.com/media/2023/oct/16/bbc-gets-1500-complaints-over-israel-hamas-coverage-split-50-50-on-each-side">accused of bias</a> in both directions. So when traditional news is seen as inadequate or hard to come by, people are more likely to turn to social media and its flood of dark web-created misinformation. </p>
<p>The information war in Gaza is a war of values, and a war of behaviours, of establishing who is “them” and who are “us”. The war in Ukraine is exactly the same. The danger is that in shaping the view of the public, the information war could have an impact on governments and on the battlefield.</p>
<p class="fine-print"><em><span>Robert M. Dover does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Viral bots are ‘tricking’ social media algorithms to get more coverage for disinformation.Robert M. Dover, Professor of Intelligence and National Security, University of HullLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2109562023-10-27T12:17:55Z2023-10-27T12:17:55ZWhy Elon Musk is obsessed with casting X as the most ‘authentic’ social media platform<figure><img src="https://images.theconversation.com/files/555929/original/file-20231025-19-mfd5h2.jpg?ixlib=rb-1.1.0&rect=8%2C8%2C5521%2C3772&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">X CEO Elon Musk has argued that his social media platform allows users to 'be their true selves.'</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/elon-musk-ceo-of-tesla-and-x-arrives-for-the-ai-insight-news-photo/1678314548?adppopup=true">Nathan Howard/Getty Images</a></span></figcaption></figure><p>With X, formerly known as Twitter, hitting the <a href="https://www.nytimes.com/2022/10/27/technology/elon-musk-twitter-deal-complete.html">one-year anniversary</a> of Elon Musk’s US$44 billion takeover of the social media platform, it can feel disorienting to try to make sense of all that’s gone down. </p>
<p>Blue check-mark verifications <a href="https://www.nytimes.com/2023/03/31/technology/personaltech/twitter-blue-check-musk.html">got hawked</a>. Internal company documents about content moderation policies <a href="https://www.npr.org/2022/12/14/1142666067/elon-musk-is-using-the-twitter-files-to-discredit-foes-and-push-conspiracy-theor">got laundered</a>. A puzzling rebrand to “X” <a href="https://www.washingtonpost.com/technology/2023/07/24/elon-musk-x-twitter-rebrand-logo/">got hatched</a>. And a literal cage match with Meta head Mark Zuckerberg was on again and, ultimately, <a href="https://www.nytimes.com/2023/08/13/business/zuckerberg-musk-cage-fight.html">off again</a>.</p>
<p>It remains unclear what, precisely, Musk’s ambitions are for the platform. But when a threatening competitor, Threads, emerged in summer 2023, he may have offered a brief window of insight.</p>
<p>A clone of X, Threads <a href="https://www.washingtonpost.com/technology/2023/07/10/threads-meta-twitter-zuckerberg/">rolled up 100 million users</a> in less than a week after its June launch, becoming the fastest-growing app of all time. Musk promptly erupted with two attacks on Zuckerberg’s creation.</p>
<p>The first was catty and, as such, invited notice within digital spaces programmed to promote outrage. <a href="https://twitter.com/elonmusk/status/1676770522200252417?lang=en">Musk declared</a>, “It is infinitely preferable to be attacked by strangers on Twitter, than indulge in the false happiness of hide-the-pain Instagram.” </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1676770522200252417"}"></div></p>
<p><a href="https://twitter.com/elonmusk/status/1678686570122199040">The second</a> – “You are free to be your true self here” – was more overlooked, yet revealed an essential premise that social media companies must sell to all their users.</p>
<p>As I argue in my new book, “<a href="https://www.sup.org/books/title/?id=36333">The Authenticity Industries</a>,” authenticity represents the central battle for social media companies. They design their platforms to demonstrate and facilitate genuine self-performance from users. That’s what makes for dependable data, and dependable data – sold to advertisers – is <a href="https://slate.com/technology/2019/10/mark-zuckerberg-facebook-georgetown-speech-authentic.html">what makes the internet economy hum</a>.</p>
<p>Silicon Valley’s commitment to the ideal of authenticity remains ironclad, even as more and more people are starting to recognize that <a href="https://theconversation.com/taylor-swifts-eras-tour-is-a-potent-reminder-that-the-internet-is-not-real-life-209325">the internet isn’t real life</a>.</p>
<h2>A life performed</h2>
<p>Over the past decade, Instagram – with its glossy, obsessively manicured tableaux – became the aesthetic antithesis against which all other social media platforms measure that authenticity. </p>
<p>Instagram tinted life by allowing users to apply sun-kissed, nostalgic filters to their photographs. To scrub clean any blemishes on selfies posted there, add-ons like Facetune enabled magazine-quality Photoshopping <a href="https://digitalnative.substack.com/p/the-rejection-of-internet-perfection?s=r.">and topped paid-app charts</a>. Instagram became your highlight reel: galleries of far-flung travels and mouth-watering food porn exquisitely curated – a life performed as much as lived.</p>
<p>“[Instagram’s] basically almost designed to make your friends jealous,” one executive at TikTok <a href="https://www.sup.org/books/title/?id=36333">confided to me</a>. “It kind of makes me depressed a little bit sometimes when I go on Instagram and I feel, like, ‘Oh, I’m not fit enough. I’m not successful enough.’”</p>
<p>Over time, #NoFilter caveats, blurry photo dumps and shameless “finsta” accounts – a portmanteau of “fake” and “Instagram” – <a href="https://www.refinery29.com/en-gb/bereal-authenticity-performance-online-instagram">arose as forms of authenticity backlash</a> to the “false happiness” of the posed lifestyles appearing on users’ feeds.</p>
<p>Heck, even Instagram knew it had a problem, copy-and-pasting Snapchat’s signature ephemerality and <a href="https://about.instagram.com/blog/announcements/introducing-instagram-stories">launching its disappearing Stories feature</a> to lower the pressure on users to post perfection.</p>
<p>If ever a platform, then, has been deserving of <a href="https://www.nytimes.com/2019/06/17/business/media/miquela-virtual-influencer.html">Reddit co-founder Alexis Ohanian’s 2019 quip</a> that “social media, to date, has largely been the domain of real humans being fake,” it’s probably Instagram.</p>
<h2>Different flavors of the same thing</h2>
<p>Recall Musk’s second, <a href="https://twitter.com/elonmusk/status/1678686570122199040">more revelatory rejoinder</a> on behalf of X: “You are free to be your true self here.”</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1678686570122199040"}"></div></p>
<p>For two decades, this has been the first commandment of social media promotion – both by platforms and on them.</p>
<p>More broadly, all online communication bears the burden of proof in this vein: It must compensate for the absence of face-to-face verifiability, which a 1993 Peter Steiner <a href="https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog">cartoon for The New Yorker</a> satirized with the caption, “On the internet, nobody knows you’re a dog.”</p>
<p>Research confirms this. One <a href="https://www.mdpi.com/2076-0760/6/1/10">clever study</a> by media scholars Meredith Salisbury and Jefferson Pooley scoured the publicity pablum, CEO platitudes and app store copy from Friendster onward, finding that nearly every site leans on the same rhetorical clichés – like “real life” and “genuine” – as a means of defining itself against the purported phoniness of other sites.</p>
<p>But this might well be the narcissism of tiny differences at work, with Threads only the latest instance of social media copycatting. </p>
<p>In 2020, Wired <a href="https://www.wired.com/story/social-media-giants-look-the-same-tiktok-twitter-instagram/">incisively tallied</a> how <a href="https://blog.twitter.com/en_us/topics/product/2020/introducing-fleets-new-way-to-join-the-conversation">X’s Fleets</a>, a 24-hour posting-expiration feature, was a copy of Instagram’s Stories, which was itself originally ripped off from Snapchat. <a href="https://influencermarketinghub.com/what-is-snap-spotlight/">Snapchat developed Spotlight</a> for short-form video content, comparable to Instagram’s Reels and YouTube’s Shorts, all of which were an attempt to fend off TikTok, itself a reincarnation of Vine.</p>
<p>And all of these, including last year’s 56 million-times-downloaded viral sensation, <a href="https://www.washingtonpost.com/technology/2022/09/17/bereal-copy-tiktok-instagram-snapchat/">BeReal</a> – where users snap unfiltered, unposed selfies for friends at random times daily – have promised users the opportunity to be their true selves. </p>
<p>In as much as Musk has pursued anything in his first year as Chief Twit, that seems to be his ambition: engineering a space with no social guardrails, where any inhibitions of decorum are ignored in favor of speaking, authentically, from the heart.</p>
<h2>Ambitions don’t match reality</h2>
<p>To a certain kind of personality, that’s probably an alluring offer. Indeed, Zuckerberg’s original – and still most enduring – platform triumph, Facebook, depended on designing a website that induced an online performance of a “true” offline self.</p>
<p>Those norms were embedded in design choices, as Zuckerberg made plain his disregard for our <a href="https://www.penguinrandomhouse.com/books/708488/the-presentation-of-self-in-everyday-life-by-erving-goffman/">multistage, two-faced selves</a> in an <a href="https://www.simonandschuster.com/books/The-Facebook-Effect/David-Kirkpatrick/9781439102121">oft-quoted line</a>, “You have one identity. The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly.”</p>
<p>“Single-identity authenticity” was Facebook’s early market strategy, and the nascent website initially required users to register with a college email address. The design choice may well have been critical to Facebook vanquishing its closest early competitors, <a href="https://www.mentalfloss.com/article/556413/friendster-rise-and-fall-jonathan-abrams">Friendster</a> and <a href="https://www.theatlantic.com/technology/archive/2011/01/the-rise-and-fall-of-myspace/69444/">Myspace</a>.</p>
<p>“The .edu email system served as this authenticating clearinghouse,” one early Facebook executive <a href="https://www.sup.org/books/title/?id=36333">explained to me</a>, a phrasing that could as easily be applied to the utility of Instagram accounts today for Threads. “Really, users 0 through 10 million were all verified and authenticated by the .edu email system, [while] Myspace had 57 Jennifer Anistons.”</p>
<p>That authenticating clearinghouse would soon vanish as Facebook opened itself up to users not enrolled in college – like, say, <a href="https://www.theguardian.com/technology/2017/oct/30/facebook-russia-fake-accounts-126-million">the disinformation agents</a> who have meddled in U.S. elections from Russia.</p>
<h2>A regression to the meanest</h2>
<p>All this competition makes for authenticity jockeying: Musk attempted to parry Zuckerberg’s Threads threat with his invitation to convene strangers who will stop being polite and <a href="https://en.wikipedia.org/wiki/The_Real_World_(TV_series)">start getting real</a>. </p>
<p>But in an ominous echo of Rupert Murdoch’s $500 million <a href="https://www.theatlantic.com/technology/archive/2011/06/as-myspace-sells-for-35-million-a-history-of-the-networks-valuation/241224/">write-off</a> of Myspace, Musk’s $44 billion purchase has struggled with those bot-and-blue check mark difficulties of user verification.</p>
<p>None of this is to say Threads will eventually triumph over X, even as the crisis in the Middle East – and the misinformation circulating because of it – <a href="https://slate.com/technology/2023/10/x-twitter-elon-musk-israel-hamas-gaza-misinformation-meta-threads.html">seems to have initiated</a> another exodus of defectors from X. After all, a month after its launch, Threads had already lost <a href="https://gizmodo.com/threads-has-lost-more-than-80-of-daily-active-users-1850707329">an estimated</a> 80% of its daily active users.</p>
<p>Threads’ vibes may have been cheerful and friendly at the outset – disingenuously so, according to Musk – but it may well prove that, eventually, all social media sites regress toward the meanest. </p>
<p>Musk would probably call that “authenticity.” On X, you might not be able to trust the veracity of the user or the information they’re spreading. But you can be sure that they don’t feel like they have to bite their tongue and act nice.</p>
<p>Social media company names may change. But when identity is the most lucrative commodity they trade in, their fetishization of authenticity won’t.</p>
<p class="fine-print"><em><span>Michael Serazio does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>With identity the most lucrative commodity social media platforms trade in, their fetishization of authenticity remains ironclad.Michael Serazio, Associate Professor of Communication, Boston CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2161932023-10-23T15:58:53Z2023-10-23T15:58:53ZX/Twitter: imposing a US$1 bot tax on new customers will only make the platform’s problems worse<p>X, formerly known as Twitter, <a href="https://help.twitter.com/en/using-x/not-a-bot">is testing</a> a subscription plan called “Not a Bot” of US$1 equivalent per annum in New Zealand and the Philippines. Those who don’t subscribe will still be able to log in to view content and follow other accounts, but won’t be able to interact through tweeting, liking, sharing or bookmarking content. The plan is limited to new accounts and only the browser version of the platform, as opposed to the mobile app. </p>
<p>As the name of the plan suggests, X has positioned Not a Bot as a means to deter bots. Bots are fake accounts running on automated scripts, usually created by malicious actors to spread fake news and drive advertising traffic. They are present in <a href="https://www.statista.com/statistics/1264226/human-and-bot-web-traffic-share/">large numbers</a> not just on X but also other platforms such as <a href="https://www.comparitech.com/blog/information-security/inside-facebook-bot-farm/">Facebook</a>. </p>
<p>Bots have been a dominant theme since Musk took charge of Twitter a year ago. His <a href="https://www.businessinsider.com/twitter-chaos-elon-musk-lash-out-blue-check-reversal-2023-4?r=US&IR=T">troubled move</a> to start charging for blue checkmarks was part of the same battle, while he also <a href="https://www.theguardian.com/technology/2022/may/17/elon-musk-twitter-deal-bot-tesla">previously tried</a> to get out of buying the company on the grounds that the previous board hadn’t been clear with him about bot levels. Introducing a broader system of charges to tackle bots is a bold move, but there’s a good chance it will only make the company’s problems worse. </p>
<h2>The bot arms race</h2>
<p>Ever since the Musk takeover, X’s financial woes seem to <a href="https://www.reuters.com/technology/elon-musk-says-twitters-cash-flow-still-negative-ad-revenue-drops-2023-07-15/">have only increased</a>. Despite aggressive layoffs of thousands of employees and a <a href="https://arstechnica.com/tech-policy/2022/11/musk-to-gut-twitter-infrastructure-cut-costs-by-1b-annually/">sharp reduction</a> in server capacity, the company has yet to reach positive cashflow (though new CEO <a href="https://techcrunch.com/2023/08/10/ceo-says-x-formerly-twitter-is-close-to-break-even/#:%7E:text=X%20CEO%20Linda%20Yaccarino%20claims,pretty%20close%20to%20break%20even.%E2%80%9D">Linda Yaccarino has said</a> breakeven is close). This is because its revenues have shrunk just as fast, falling <a href="https://www.nytimes.com/2023/06/05/technology/twitter-ad-sales-musk.html">approximately 59%</a> in the year to May as advertisers have exited due to increased hate speech and ads featuring things like online gambling and marijuana products. This, combined with the increased debt burden from the takeover, has put X under serious pressure to achieve a turnaround. </p>
<p><a href="https://www.gamedeveloper.com/business/-em-runescape-em-developer-wins-suit-against-bot-maker">Internet companies</a> have long been in an <a href="https://phys.org/news/2019-05-fake-facebook-accounts-never-ending-bots.html">arms race</a> with bot makers. Every time they implement a new means of detecting fake accounts, bot makers find ways of countering them. Older bot-detection algorithms sought to purge internet platforms using machine learning and searching for unusual language patterns, but weren’t <a href="https://mitsloan.mit.edu/ideas-made-to-matter/study-finds-bot-detection-software-isnt-accurate-it-seems">particularly successful</a>. </p>
<p>Things have gotten worse with the rise of <a href="https://mitsloan.mit.edu/ideas-made-to-matter/study-finds-bot-detection-software-isnt-accurate-it-seems">generative AI</a>. It has changed the face of fake news, videos and images, reducing the ability to detect bots, whether by algorithms or by human moderators.</p>
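<p>To make the earlier point about older detection methods concrete, here is a minimal, purely illustrative sketch of the kind of simple heuristic such systems relied on. The features and thresholds are assumptions for illustration, not any platform’s real rules, and content written by generative AI is exactly the kind of varied, human-sounding text that slips past them.</p>
<pre><code>
# Illustrative sketch only: a crude bot-detection heuristic of the kind older
# systems used. The thresholds are invented for the example.

from collections import Counter

def looks_like_bot(posts: list[str], hours_active: float) -> bool:
    """Flag accounts that post unusually fast or repeat the same text."""
    if not posts:
        return False
    posts_per_hour = len(posts) / max(hours_active, 1.0)
    duplicate_ratio = Counter(posts).most_common(1)[0][1] / len(posts)
    return posts_per_hour > 20 or duplicate_ratio > 0.5

print(looks_like_bot(["Buy now!"] * 300, hours_active=10))                       # True
print(looks_like_bot(["morning", "lunch pic", "great match"], hours_active=10))  # False
# Generative AI can produce varied, human-sounding posts at speed, which is
# precisely why simple heuristics like these have stopped working well.
</code></pre>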
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/555319/original/file-20231023-19-uu7b0j.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Illustration of a social media bot on a phone" src="https://images.theconversation.com/files/555319/original/file-20231023-19-uu7b0j.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/555319/original/file-20231023-19-uu7b0j.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=463&fit=crop&dpr=1 600w, https://images.theconversation.com/files/555319/original/file-20231023-19-uu7b0j.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=463&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/555319/original/file-20231023-19-uu7b0j.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=463&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/555319/original/file-20231023-19-uu7b0j.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=582&fit=crop&dpr=1 754w, https://images.theconversation.com/files/555319/original/file-20231023-19-uu7b0j.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=582&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/555319/original/file-20231023-19-uu7b0j.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=582&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">No ifs, no bots.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/digital-chatbot-assistant-conversation-provide-access-2232434469">Hodoimg</a></span>
</figcaption>
</figure>
<p>Musk’s US$1/year proposal represents a different approach. The main problem is bot farms, where thousands of accounts are created and then run on different social media platforms via large servers. This makes it possible through economies of scale to create a bot account for a <a href="https://www.aljazeera.com/news/2023/9/19/elon-musk-says-x-formerly-twitter-could-charge-a-monthly-fee-for-access">fraction of a penny</a>, according to Musk. By charging new X accounts, it will become much more expensive for bot farms to achieve the scale needed to be profitable. </p>
<p>Restricting the charge only to the browser version of X will hurt them the most: whereas the mobile app only allows a single account login at a time, bot accounts need multiple logins on different browser windows running on the same machine to be useful.</p>
<h2>A decent idea in theory …</h2>
<p>On the face of it, the proposal sounds a smart move. Having fewer bots may help X win back some of the advertisers who have <a href="https://www.vox.com/technology/2023/3/23/23651151/twitter-advertisers-elon-musk-brands-revenue-fleeing">quit the platform</a>. </p>
<p>It is also a new revenue stream. Musk said shortly after his takeover that X was signing up <a href="https://www.forbes.com/sites/siladityaray/2022/11/27/musk-touts-all-time-high-twitter-signups-and-daily-active-users-on-as-he-promises-new-features/?sh=59c529c612d2">2 million new accounts</a> per week, which points to some potential, but let’s not get carried away. It is unclear how many genuine users will agree to pay the charge, but since browsers account for <a href="https://www.cnet.com/tech/services-and-software/twitters-users-base-keeps-growing-particularly-on-mobile-devices/">approximately 20%</a> of the monthly total active base, we could generously assume that 10% of new accounts will pay. This would result in a paltry US$10.4 million (£8.6 million) increase in annual revenues, less than 0.25% of the company’s 2022 total. </p>
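<p>A quick back-of-the-envelope check of those figures is below. The sign-up rate and the 10% take-up come from the article; the 2022 revenue figure used for the final comparison is a rough public estimate, included only to show the percentage is of the right order.</p>
<pre><code>
# Back-of-the-envelope check of the revenue estimate above (illustrative only).

new_accounts_per_week = 2_000_000              # Musk's reported sign-up rate
new_accounts_per_year = new_accounts_per_week * 52

paying_share = 0.10                            # the article's generous assumption
fee_per_year_usd = 1.0                         # the "Not a Bot" charge

extra_revenue = new_accounts_per_year * paying_share * fee_per_year_usd
print(f"Extra annual revenue: ${extra_revenue / 1e6:.1f}m")          # about $10.4m

# Rough public estimate of Twitter/X's 2022 revenue, used only for scale.
revenue_2022 = 4.4e9
print(f"Share of 2022 revenue: {extra_revenue / revenue_2022:.2%}")  # under 0.25%
</code></pre>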
<p>This also has to be offset against potential users deciding to walk away. Particularly if the scheme becomes more aggressive in size and scope, it could become the undoing of the platform. Musk has already hinted at charging all X users a <a href="https://www.standard.co.uk/news/tech/twitter-x-pay-fee-use-elon-musk-b1107980.html">small monthly</a> fee. Supposing it were US$1 per month, it could prompt a steep drop not only in bots but in genuine users. </p>
<p>Let’s not forget <a href="https://www.businessinsider.com/twitter-chaos-elon-musk-lash-out-blue-check-reversal-2023-4">the chaos</a> around Musk’s blue check mark subscription, for which lots of celebrities refused to pay and the platform partially backtracked to prevent an exodus. That system also didn’t stop malicious actors <a href="https://www.theguardian.com/technology/2023/aug/27/consumers-complaining-x-targeted-scammers-verification-changes-twitter">from scamming</a> genuine X users. With X users already having felt the whiplash of Musk’s previous decisions, charging them for what has been free could be the straw that breaks the camel’s back. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/555320/original/file-20231023-15-a4f3ss.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Screenshot of Elon Musk's blue check mark on Twitter" src="https://images.theconversation.com/files/555320/original/file-20231023-15-a4f3ss.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/555320/original/file-20231023-15-a4f3ss.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/555320/original/file-20231023-15-a4f3ss.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/555320/original/file-20231023-15-a4f3ss.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/555320/original/file-20231023-15-a4f3ss.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/555320/original/file-20231023-15-a4f3ss.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/555320/original/file-20231023-15-a4f3ss.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Let’s not play it again, Sam.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/kaunas-lithuania-2023-march-27-close-2280621197">Rokas Tenys</a></span>
</figcaption>
</figure>
<p>To emphasise, a platform’s value lies in its user base. The more users interact with each other, the more content they produce and vice versa. This induces other users to join the platform, which makes it attractive to advertisers who can reach a bigger, more engaged audience. This virtuous cycle is known as “<a href="https://www.investopedia.com/terms/n/network-effect.asp">network effects</a>”, and drives the success not only of social media but also services like telecoms, ride sharing, messaging, streaming and multiplayer online games.</p>
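<p>One common way to formalise this idea, not cited in the article but useful as an illustration, is Metcalfe’s law, which values a network roughly in proportion to the number of possible connections between its users. The short sketch below shows why user numbers matter more than linearly.</p>
<pre><code>
# Illustrative sketch only: Metcalfe's law counts the potential connections in
# a network, n*(n-1)/2, as a rough (and debated) proxy for its value.

def potential_connections(n_users: int) -> int:
    return n_users * (n_users - 1) // 2

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} users -> {potential_connections(n):>13,} potential connections")
# A tenfold increase in users yields roughly a hundredfold increase in possible
# interactions, which is why platforms fight so hard to keep their user base.
</code></pre>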
<h2>High risk, low return</h2>
<p>While bots are a very real issue, X’s “bot tax” is not the best way forward. This is akin to a government raising taxes on tax-paying citizens because it is incapable of getting tax avoiders to pay their share. </p>
<p>Unfortunately there’s no quick fix with bots. The best way forward is leveraging advanced technologies such as generative AI, sharing more transparent data with academic institutions to push research forward, and sharper human moderation. </p>
<p>X needs to remain in that arms race, while sticking to the broader reinvention strategy already outlined by Musk. This includes turning X into a <a href="https://www.theverge.com/2023/7/26/23808796/elon-musks-x-everything-app-vision">super-app</a>, akin to WeChat, Grab and others from Asia, offering new services such as telephony, remittances, deliveries, mini games and video channels. Giving users more things to do in one place is key to the platform’s future. But that requires more patience and cash, which both X and Musk might be running out of.</p>
<p class="fine-print"><em><span>Hamza Mudassir does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>It’s a year since Elon Musk took charge of Twitter. His plan for charging new customers looks like another mis-step.Hamza Mudassir, Lecturer in Strategy, Cambridge Judge Business SchoolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2071872023-06-27T18:14:43Z2023-06-27T18:14:43ZChatbots can be used to create manipulative content — understanding how this works can help address it<figure><img src="https://images.theconversation.com/files/533248/original/file-20230621-14002-1plwfd.jpg?ixlib=rb-1.1.0&rect=0%2C17%2C5772%2C3827&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Artificial intelligence can be used to produce persuasive texts that influence behaviour.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Manipulative communication includes disinformation, propaganda, con artistry and fraud. With the increasing use of artificial intelligence (AI), manipulative communication is not only being aided by AI, automation and machine learning, but will likely be dominated by such practices in the near future.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="book cover with the title Social Engineering How Crowdmasters, Phreaks, Hackers, and Trolls Created a New Form of Manipulative Communication" src="https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=900&fit=crop&dpr=1 600w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=900&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=900&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1131&fit=crop&dpr=1 754w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1131&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/533218/original/file-20230621-27-ue0mbs.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1131&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">‘Social Engineering’ looks at how the internet affected manipulative communication.</span>
</figcaption>
</figure>
<p>As communication scholars, we were interested in the applications of AI in supporting manipulative communication efforts. Our recent book, <a href="https://mitpress.mit.edu/9780262543453/"><em>Social Engineering: How Crowdmasters, Phreaks, Hackers, and Trolls Created a New Form of Manipulative Communication</em></a> looks at how the internet affected manipulative communication.</p>
<h2>AI chatbots</h2>
<p>ChatGPT, one of the most popular and widely used chatbots, is an artificial intelligence system capable of parsing text input and quickly providing text responses. </p>
<p>While it has been <a href="https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html">critiqued for being misleading</a>, when we asked it about the possibilities for manipulative communication, its response was remarkably prescient:</p>
<blockquote>
<p>AI can be used for manipulative communication in a few ways:</p>
<p>• Social media bots: AI-powered bots can be programmed to mimic human behavior and spread false or misleading information to manipulate public opinion.</p>
<p>• Deepfake technology: AI algorithms can be used to generate fake audio or video content that appears to be real, making it easier to spread false information or manipulate individuals.</p>
<p>• Personalized advertising: AI algorithms can analyze data about a person’s behavior, preferences, and relationships to create targeted advertisements that manipulate their emotions and influence their purchasing decisions.</p>
</blockquote>
<p>As we argue in <em>Social Engineering</em>, manipulative communication can be understood as a form of “masspersonal social engineering.” It’s “masspersonal” in that it can oscillate between targeted messages or societal-scale manipulation. It’s “social engineering” in that it combines the respective tools and techniques of hackers and propagandists in an attempt to shape the perceptions and actions of audiences.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/chatgpts-greatest-achievement-might-just-be-its-ability-to-trick-us-into-thinking-that-its-honest-202694">ChatGPT's greatest achievement might just be its ability to trick us into thinking that it's honest</a>
</strong>
</em>
</p>
<hr>
<p>Masspersonal social engineering typically involves three stages: trashing, pretexting and bullshitting.</p>
<p>Each of these stages can be automated, with new AI tools increasing the pace and intensity.</p>
<h2>Trashing</h2>
<p>Trashing is the stage where the masspersonal social engineer gathers information on potential targets. We use the term “trashing” because it hearkens back to a mid-20th century hacker process of literally <a href="https://hackcur.io/trashing-the-phone-company-with-suzy-thunder/">going through corporate trash</a> to find passwords and restricted information.</p>
<p>While social engineers <a href="https://doi.org/10.1016/B978-1-59749-215-7.X0001-7">still go through physical trash</a>, these days trashing takes place in digital environments.</p>
<p>For example, trashing was key to the Russian hack of former White House Chief of Staff John Podesta’s emails in 2016. Podesta, who was in charge of Hillary Clinton’s 2016 presidential campaign, <a href="https://www.vice.com/en/article/mg7xjb/how-hackers-broke-into-john-podesta-and-colin-powells-gmail-accounts">fell victim to a phishing attack</a>. </p>
<p>Podesta wasn’t the first target — the <a href="https://apnews.com/dea73efc01594839957c3c9a6c962b8a">Russian hackers worked their way</a> through several email addresses used by Clinton staffers, including staffers who were no longer part of her campaign and who had abandoned their email accounts years before. </p>
<p>In other words, they had to work their way through the digital detritus of old and abandoned emails until they were able to find active ones – including Podesta’s – and then they could send a phishing email.</p>
<p>Digital trashing has already been automated. Facebook/Meta, Twitter and especially LinkedIn have been <a href="https://portal.research.lu.se/en/publications/the-weaponization-of-social-media-spear-phishing-and-cyberattacks">ripe targets for the automated gathering of data on potential targets</a>. </p>
<p>Beyond social media, websites — particularly those that have organizational structures, names of employees and email addresses — <a href="https://nostarch.com/practical-social-engineering">are targets</a>. </p>
<h2>Pretexting</h2>
<p>A pretext is the role a masspersonal social engineer plays when trying to get information or manipulate a target. For example, in a phishing email, the phisher is playing a role as a bank or government representative. The most effective pretexts are developed based on the information gathered in trashing — the more information a social engineer has on their target, the more likely the social engineer can construct a compelling role to play.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a man sits in the dark in front of a laptop and additional screen. he is wearing headphones" src="https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/533801/original/file-20230623-2626-fqvcib.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">When phishing for information, a social engineer may play a deceptive role.</span>
<span class="attribution"><span class="source">(Jefferson Santos/Unsplash)</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>And pretexts can be automated. We’ve already seen the effects of <a href="https://doi.org/10.1177/0894439320908190">socialbots on discourse in social media</a>. And for several years people have sounded alarms about <a href="https://doi.org/10.1109/ACCESS.2021.3131517">deepfake videos and audio</a> of political figures.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-to-combat-the-unethical-and-costly-use-of-deepfakes-184722">How to combat the unethical and costly use of deepfakes</a>
</strong>
</em>
</p>
<hr>
<p>But evidence from security professionals shows that automated imitations of everyday people are happening, too. <a href="https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402">A case of fraud</a> involving an AI-based imitation of a CEO’s voice has already occurred, and there are <a href="https://www.npr.org/2023/03/22/1165448073/voice-clones-ai-scams-ftc">reports of fraudsters using AI-generated voices</a> of relatives to scam their loved ones.</p>
<h2>Bullshitting</h2>
<p>The third and final stage, bullshitting, is the actual engagement with the target. All the trashing and development of a pretext leads to this point: trashing gives the social engineer background information, and the pretext provides a role-playing framework, but in any back-and-forth engagement with the target, the social engineer engages in improvisation.</p>
<p>As moral philosopher <a href="https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit">Harry Frankfurt famously defines it</a>, “bullshit” is not lying — it’s the indifference to truth. A bullshitter may or may not speak truth. The truth is beside the point; it’s the <em>effect</em> of the communication that matters.</p>
<p>AI could produce bullshit content — including deepfakes — that floods a media system at a much larger scale than a person, or group of people, working together. The primary concern here is the production of seemingly real content that is meant to deceive or muddy debate.</p>
<p>And we are already seeing interest among content marketers, who are <a href="https://www.entrepreneur.com/science-technology/how-can-companies-use-chatgpt-for-content-marketing/450831">using AI</a> to help them crank out more content for their blogs. </p>
<p>Even if no one piece is particularly effective, the flood of such content online will further add to the “<a href="https://doi.org/10.7249/PE198">firehose of falsehood</a>.” This could have the effect of further muddying the waters of online discourse, and eroding our sense of what is true, false and authentic online.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1671586620665995266"}"></div></p>
<h2>Increased intensity</h2>
<p>Manipulative communication isn’t new. But automated manipulative communication is a new development, increasing the pace and intensity of disinformation and misinformation. </p>
<p>We hope that this framework, which breaks down the manipulative communication process into stages, helps future researchers and policymakers come to grips with this development. </p>
<p>Reducing trashing behaviours involves better privacy regulations and cybersecurity to prevent data breaches, and enhanced penalties for organizations that do leak private data. </p>
<p>Addressing pretexting can involve more transparency in the funding for advertising campaigns, particularly in the case of political advertising on social media. </p>
<p>And to combat bullshitting, we should support projects that teach digital media literacy.</p>
<p class="fine-print"><em><span>Robert W. Gehl received funding from the Fulbright Commission. </span></em></p><p class="fine-print"><em><span>Sean Lawson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Artificial intelligence could be used to generate content intended to manipulate people. Addressing this problem means understanding how communication works to influence people.Robert W. Gehl, Ontario Research Chair of Digital Governance for Social Justice, York University, CanadaSean Lawson, Professor, Communication, University of UtahLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1942692023-01-18T12:02:53Z2023-01-18T12:02:53ZHow to spot a cyberbot – five tips to keep your device safe<figure><img src="https://images.theconversation.com/files/504652/original/file-20230116-12-67n97i.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4704%2C2682&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Malware is designed to hide in your device </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/illustration-insecure-network-world-wide-computer-1093142627">Jaiz Anuar/Shutterstock</a></span></figcaption></figure><p>You may know nothing about it, but your phone – or your laptop or tablet – could be taken over by someone else who has found their way in through a back door. They could have infected your device with malware to make it a “bot” or a “zombie” and be using it – perhaps with hundreds of other unwitting victims’ phones – to launch a cyberattack. </p>
<p>Bot is short for robot. But cyberbots don’t look like the robots of science fiction such as R2-D2. They are software applications that perform repetitive tasks they have been programmed to do. They only become malicious when a human operator (a “botmaster”) uses them to infect other devices. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/504005/original/file-20230111-12-lolh45.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/504005/original/file-20230111-12-lolh45.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=222&fit=crop&dpr=1 600w, https://images.theconversation.com/files/504005/original/file-20230111-12-lolh45.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=222&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/504005/original/file-20230111-12-lolh45.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=222&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/504005/original/file-20230111-12-lolh45.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=279&fit=crop&dpr=1 754w, https://images.theconversation.com/files/504005/original/file-20230111-12-lolh45.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=279&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/504005/original/file-20230111-12-lolh45.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=279&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The botmaster controls their zombies via a command and control server (C&C)</span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>Botmasters use thousands of zombies to form a network (a “botnet”), unknown to their owners. The botnet lies dormant until the number of infected computers reaches a critical mass. This is when the botmaster initiates an attack. An attack could involve hundreds of thousands of bots, which target a single victim or a very small number of victims. </p>
<p>This type of attack is called a <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">distributed denial-of-service (DDoS)</a> attack. Its aim is to overwhelm the resources of a website or service with network data traffic.</p>
<p>Attacks are measured by how many connection requests (for example website/browser connections) and by how much data they can generate per second. Usually a lone bot can only generate a few Mbps of traffic. The power of a botnet is in its numbers.</p>
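<p>The arithmetic behind that point is simple, as the sketch below shows. The figures are assumptions chosen for illustration, not measurements from the article.</p>
<pre><code>
# Minimal illustration of why a botnet's power lies in numbers
# (per-bot bandwidth and botnet size are illustrative assumptions).

per_bot_mbps = 5                 # a single compromised home device: a few Mbps
bots = 100_000                   # a mid-sized botnet

aggregate_gbps = per_bot_mbps * bots / 1_000
print(f"Aggregate attack traffic: {aggregate_gbps:.0f} Gbps")   # 500 Gbps
# Individually negligible devices add up to traffic that can overwhelm
# most unprotected websites or services.
</code></pre>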
<h2>Are bots illegal?</h2>
<p>Not entirely. Anyone can buy a botnet. “Botnets-for-hire” services <a href="https://www.imperva.com/learn/ddos/booters-stressers-ddosers/">start from $23.99</a> (£19.70) monthly from private vendors. The largest botnets tend to be sold by referral. These services are ostensibly sold so you can test your personal or company service against such attacks. However, it wouldn’t take much effort to launch an illegal attack on someone you disagree with later on. </p>
<p>Other <a href="https://netacea.com/glossary/good-bots-vs-bad-bots/">legitimate uses</a> of bots include chatting online to customers with automated responses as well as collecting and aggregating data, such as digital marketing. Bots can also be used for online transactions.</p>
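<p>For a sense of what a benign bot amounts to in practice, here is a minimal, illustrative customer-service auto-responder: a small program performing a repetitive, pre-programmed task. The keywords and replies are invented for the example.</p>
<pre><code>
# Illustrative sketch of a benign bot: a keyword-based auto-responder of the
# kind used for customer chat. Keywords and replies are invented placeholders.

CANNED_REPLIES = {
    "opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 working days.",
}

def auto_reply(message: str) -> str:
    text = message.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in text:
            return reply
    return "Thanks for your message - a human will get back to you shortly."

print(auto_reply("What are your opening hours?"))
</code></pre>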
<p>Botnet malware is designed to work undetected. It acts like a sleeper agent, keeping a low profile on your system once it’s installed. However, there are some simple ways to check if you think you might be part of a botnet.</p>
<h2>Antivirus protection</h2>
<p>Computer operating systems (such as Windows) come with antivirus protection installed by default, which offers the first line of defence. Antivirus software uses signature analysis. When a security company detects malware, it will make a unique signature for the malware and add it to a database. </p>
<p>But not all malware is known. </p>
<p>More advanced types of antivirus detection solutions include “heuristic” and “behaviour” techniques. Heuristic detection scans program code for suspect segments. Behaviour detection tracks programs to check if they’re doing something they should not (such as Microsoft Word trying to change antivirus rules). Most antivirus packages have these features to a greater or lesser degree but <a href="https://www.av-comparatives.org/">compare different products side by side</a> to see if they meet your needs.</p>
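<p>As a minimal illustration of the signature approach described above, the sketch below hashes a file and checks the result against a set of known-malware signatures. The hash in the set is a placeholder (it is simply the SHA-256 of an empty file) and the file name is hypothetical; real antivirus engines are far more sophisticated.</p>
<pre><code>
# Minimal sketch of signature-based detection: hash a file and compare against
# a database of known-malware hashes. Placeholder values only.

import hashlib
from pathlib import Path

KNOWN_MALWARE_SHA256 = {
    # Placeholder signature (the SHA-256 of an empty file), not a real malware hash.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def looks_malicious(path: Path) -> bool:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_MALWARE_SHA256

if __name__ == "__main__":
    suspect = Path("downloaded_file.bin")   # hypothetical file to check
    if suspect.exists():
        print("Matches signature database:", looks_malicious(suspect))
</code></pre>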
<h2>Use a firewall</h2>
<p>Computers are more vulnerable when connected to the internet. Ports, the numbered communication endpoints on your computer, are among the parts that become most exposed. These ports allow your computer to send and receive data. </p>
<p>A firewall will block specific data or ports to keep you safe. But bots are harder to detect if the botmaster uses encrypted channels (the firewall can’t read encrypted data like Hypertext Transfer Protocol Secure (https) data). </p>
<p>Investing in a new broadband router rather than using the one your broadband provider sends can help, especially if it features advanced network-based firewalls, web security/URL filtering, flow detection and intrusion detection and prevention systems.</p>
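<p>If you want to see what is exposed on your own machine, the sketch below, which assumes the third-party psutil library is installed, lists the local ports currently listening for connections. It is a quick visibility check, not a substitute for a firewall, and on some systems it needs to be run with elevated privileges.</p>
<pre><code>
# Illustrative sketch: list local ports currently listening for connections,
# using the third-party psutil library (pip install psutil).

import psutil

for conn in psutil.net_connections(kind="inet"):
    if conn.status == psutil.CONN_LISTEN:
        name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        print(f"port {conn.laddr.port:>5}  pid {conn.pid or '?':>6}  {name}")
# Unfamiliar processes listening on unexpected ports are worth investigating.
</code></pre>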
<h2>Behaviour and decisions</h2>
<p>Ignoring system and software updates leaves you vulnerable to security threats. Your computer data should also be backed up on a regular basis. </p>
<p>Don’t use <a href="https://heimdalsecurity.com/blog/why-removing-admin-rights-closes-critical-vulnerabilities-in-your-organization/">administrator accounts</a> for regular computer access, whether at home or at work. Create a separate user account without admin privileges, even for your personal laptop. It is much easier for attackers to introduce malware via a phishing attack or gain those credentials by using impersonation when you are logged into an administrator account. Think twice before downloading new apps and only install programs that are digitally verified by a trusted company. </p>
<p>Many attacks, such as ransomware, only work when <a href="https://www.ncsc.gov.uk/information/how-cyber-attacks-work">people lack awareness</a>. So keep up to date with the latest information about techniques cybercriminals use. </p>
<h2>Use an alternative domain name service</h2>
<p>Usually your internet provider handles this automatically for you (linking website addresses to network addresses and vice versa). But botnets often use domain name services to distribute malware and issue commands. </p>
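<p>In case the role of the domain name service is unfamiliar, the one-liner below shows what it does: translate a human-readable name into the network address your device actually connects to (the domain is used purely as an example). A filtering resolver such as OpenDNS can refuse lookups for domains known to host botnet command-and-control servers, blocking the connection before it starts.</p>
<pre><code>
# What the domain name service does, in one line: turn a name into an IP address.
# (theconversation.com is used purely as an example domain.)

import socket

print(socket.gethostbyname("theconversation.com"))
# A filtering DNS resolver can refuse this lookup for domains known to host
# botnet command-and-control servers, so your device never connects to them.
</code></pre>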
<figure class="align-center ">
<img alt="Young hacker in the dark breaks the access to steal information and infect computers and systems." src="https://images.theconversation.com/files/504653/original/file-20230116-20-vp6fjx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/504653/original/file-20230116-20-vp6fjx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/504653/original/file-20230116-20-vp6fjx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/504653/original/file-20230116-20-vp6fjx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/504653/original/file-20230116-20-vp6fjx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/504653/original/file-20230116-20-vp6fjx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/504653/original/file-20230116-20-vp6fjx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Botmasters need to infect thousands of devices to create their network of zombies.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/young-hacker-dark-breaks-access-steal-574105036">Artem Oleshko/Shutterstock</a></span>
</figcaption>
</figure>
<p>You can manually check your computer records against patterns of known botnet attacks published <a href="https://www.opendns.com/home-internet-security/">by sites such as OpenDNS</a>. </p>
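<p>One simple form such a manual check can take is comparing the domains your machine has looked up against a published list of known command-and-control domains. The sketch below assumes both are plain text files with one domain per line; the file names are placeholders, not from any particular service.</p>
<pre><code># "botnet_domains.txt" is a hypothetical blocklist of known bad domains;
# "dns_query_log.txt" is a hypothetical log of domains your machine has looked up.
with open("botnet_domains.txt") as f:
    blocklist = {line.strip().lower() for line in f if line.strip()}

with open("dns_query_log.txt") as f:
    for line in f:
        domain = line.strip().lower()
        if domain in blocklist:
            print(f"Possible botnet traffic: {domain}")
</code></pre>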
<h2>What if I think I have a botnet infection?</h2>
<p>Signs your device is a zombie include websites opening slowly, the device running slower than usual, or odd behaviour such as app windows opening unexpectedly. </p>
<p>Have a look at what programs are running. On Windows, for example, open Task Manager for a brief survey of whether anything looks suspicious. Is a web browser running even though you have not opened any websites?</p>
<p>For more information look at guides to <a href="https://thegeekpage.com/view-the-running-processes/">viewing Windows computer processes</a>. Other tools include <a href="https://www.netlimiter.com/">Netlimiter for Windows</a> and <a href="https://www.obdev.at/products/littlesnitch/index.html">Little Snitch for Mac</a>.</p>
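<p>If you prefer a scripted survey, the following rough sketch lists running processes with their memory share, much like glancing down the Task Manager list. It relies on the third-party psutil library, and anything you do not recognise deserves a closer look rather than being proof of infection.</p>
<pre><code>import psutil  # third-party library: pip install psutil

# Print each running process with its memory share; a quick survey only.
for proc in psutil.process_iter(["pid", "name", "memory_percent"]):
    info = proc.info
    mem = info["memory_percent"]
    mem_text = f"{mem:.1f}%" if mem is not None else "n/a"  # some processes deny access
    print(f"{info['pid']:>6}  {info['name']}  mem={mem_text}")
</code></pre>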
<p>When there have been news reports of a botnet attack, you might want to take a look at <a href="https://checkip.kaspersky.com/">reputable botnet status sites</a> which offer <a href="https://capturelabs.sonicwall.com/m/feature/ip-reputation-lookup/">free checks</a> to see if your network has an infected computer.</p>
<p>If your computer has a botnet infection, the malware needs to be removed by antivirus software. Some types of malware with features like <a href="https://www.crowdstrike.com/cybersecurity-101/malware/rootkits/">rootkit functionality</a> are notoriously hard to remove. In that case your computer’s data (including the operating system) should be wiped and restored from backup. This is another reason to back your computer up on a regular basis: anything not backed up will be lost.</p><img src="https://counter.theconversation.com/content/194269/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>How to know if your computers are infected for use in a distributed denial of service attack.Adrian Winckles, Senior Lecturer, School of Computing and Information Science, Anglia Ruskin UniversityAndrew Moore, Senior Lecturer Practitioner in Cyber and Networking, Anglia Ruskin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1962032023-01-09T19:09:21Z2023-01-09T19:09:21ZWhat if your colleague is a bot? Harnessing the benefits of workplace automation without alienating staff<figure><img src="https://images.theconversation.com/files/501716/original/file-20221218-22-mcj5c1.jpg?ixlib=rb-1.1.0&rect=59%2C59%2C7880%2C5237&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Getty Images</span></span></figcaption></figure><p>The need for businesses to adapt to the workplace demands of the COVID-19 pandemic has <a href="https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/how-covid-19-has-pushed-companies-over-the-technology-tipping-point-and-transformed-business-forever">accelerated the adoption</a> of digital technologies, with clear implications for jobs and workers.</p>
<p>But just how much employees worry about the threat of automation – and how real those fears are – can have implications for workplaces beyond the technological change itself.</p>
<p>Our <a href="https://journal.acs.org.au/index.php/ajis/article/view/3833">new research</a> examined how employees feel about the introduction of “robotic process automation” (<a href="https://ieeexplore.ieee.org/document/8070671">RPA</a>) to the workplace. We also looked at how the willingness to embrace these new technologies influenced employees’ assessment of the software bots and their work.</p>
<p>RPA refers to software that interacts with different applications, such as a payroll system or a website, in the same way a human would. </p>
<p>Software robots – the so-called worker bees of RPA – can conduct mundane, repetitive and rule-based tasks such as transferring, <a href="https://link.springer.com/article/10.1007/s12525-019-00365-8">entering and extracting data</a>, accounting reconciliation, and <a href="https://www.sciencedirect.com/science/article/pii/S0166361519304609?casa_token=6TS19ujVpiwAAAAA:lzGoLK708EDwckOFqblKldROqfHIEc4Pr2-0CCW1wy808F26sFYDm7TgveP6tInEgk7SjE5YYw">automated email query processing</a>. And they can do it at a fraction of the cost of employing real people.</p>
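<p>To make the kind of task concrete, the toy sketch below copies active employees from one exported spreadsheet into another system’s import format, the sort of rule-based transfer often handed to a software robot. It is an illustration only, not drawn from the research, and the file and column names are hypothetical.</p>
<pre><code>import csv

# Hypothetical files: an export from one system and an import file for another.
with open("payroll_export.csv", newline="") as src, \
        open("hr_system_import.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.writer(dst)
    writer.writerow(["employee_id", "net_pay"])
    for row in reader:
        if row["status"] == "active":  # the simple rule the robot applies
            writer.writerow([row["employee_id"], row["net_pay"]])
</code></pre>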
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1592497917268484096"}"></div></p>
<h2>The 24/7 worker</h2>
<p>Unsurprisingly, organisations have embraced RPA for its <a href="https://link.springer.com/chapter/10.1007/978-3-030-44999-5_10">cost and productivity benefits</a>, but it’s not without its challenges. As RPA interacts with various applications, for example, it can “break” when one of the <a href="https://www.capco.com/en/Capco-Institute/Journal-46-Automation/Avoiding-pitfalls-and-unlocking-real-business-value-with-RPA">underlying systems is upgraded</a> and the user interface changes. </p>
<p>RPA is also a double-edged sword for employees. On the one hand, with mundane and repetitive tasks outsourced to software robots, workers can focus on more complex tasks that require “soft” skills, empathy and <a href="https://aisel.aisnet.org/ecis2018_rp/66/">decision-making capabilities</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/companies-are-mitigating-labour-shortages-with-automation-and-this-could-drastically-impact-workers-181017">Companies are mitigating labour shortages with automation — and this could drastically impact workers</a>
</strong>
</em>
</p>
<hr>
<p>On the other hand, some feel threatened by the software robots because they are generally more productive, make fewer errors and <a href="https://link.springer.com/chapter/10.1007/978-3-319-66963-2_7">don’t cost as much</a> as human employees. </p>
<p>Employees can also end up having to do additional tasks, picking up the work that used to be completed by the staff replaced by RPA. Paradoxically, fewer human employees can lead to an increased workload rather than the expected decrease. </p>
<p>Similarly, as employees shift from a mix of mundane and complex tasks to mainly complex ones, the variety in their work is reduced. This can lead to feeling <a href="https://scholarspace.manoa.hawaii.edu/items/1ea52ab4-e5f0-4b74-96f5-3c695ced0879">alienated at work</a>, or a sense they lack control over their role. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1575689553897103362"}"></div></p>
<h2>Fear and enthusiasm</h2>
<p>These various perspectives on automation were clear in our research. We interviewed employees and automation team members at a financial institution in New Zealand about their perceptions and responses to RPA and software robots. </p>
<p>We found that reactions to RPA are influenced by what employees imagined would be the consequences of software robots on their jobs. In turn, this influenced their collaboration with the automation team, their attitude towards change in their tasks and work processes, and ultimately their interactions with software robots – including how they judged the bots’ performance. </p>
<p>Perceptions and responses to RPA can be categorised by employees’ views of software robots as burdens and threats, tools, teammates or innovative enablers. </p>
<p>Those who considered software robots as a burden and threat before they were introduced tended to have a negative view of their experience with RPA. They were concerned about job security, had negative reactions to having greater responsibility added to their workload, and were dissatisfied with the robots’ performance. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/can-machines-invent-things-without-human-help-these-ai-examples-show-the-answer-is-yes-196036">Can machines invent things without human help? These AI examples show the answer is ‘yes’</a>
</strong>
</em>
</p>
<hr>
<h2>Lessons for employees and employers</h2>
<p>At the opposite end of the spectrum, those who viewed software robots as enablers of innovation saw the opportunities of RPA and the benefits of using robots to improve work quality. </p>
<p>Some eagerly accepted the robots as team members, even giving them human names and joking that the bot was taking a sick day when it stopped working. This group also appreciated the reduction in their own workloads through RPA. </p>
<p>Little surprise, then, that employees who view software robots as innovative enablers or teammates tended to collaborate closely with the automation team to find the best way to integrate robots and improve their performance. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/brain-computer-interfaces-could-allow-soldiers-to-control-weapons-with-their-thoughts-and-turn-off-their-fear-but-the-ethics-of-neurotechnology-lags-behind-the-science-194017">Brain-computer interfaces could allow soldiers to control weapons with their thoughts and turn off their fear – but the ethics of neurotechnology lags behind the science</a>
</strong>
</em>
</p>
<hr>
<p>In the middle ground, employees who viewed software robots as tools tended to be accepting, but remained sceptical about changes to their workloads and about robot performance. They were reluctant to cooperate fully with the automation team in configuring robot tasks that would affect their own roles. </p>
<p>Some level of automation is inevitable for businesses. To harness the benefits of RPA without alienating staff, organisations should communicate clearly and often, debunking the myths of robots and their capabilities early to avoid unnecessary misunderstandings by employees. </p>
<p>Employers should take the time to understand how different employees feel about the introduction of automation initiatives. And they should consider incorporating employees’ ideas to increase the overall benefits of automation.</p><img src="https://counter.theconversation.com/content/196203/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>AI is already on the payroll in many workplaces – how well human employees interact with it can depend a lot on their existing attitudes and anxieties.Lena Waizenegger, Senior Lecturer in Information Systems, Auckland University of TechnologyAngsana A. Techatassanasoontorn, Professor of Information Systems, Auckland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1834252022-05-23T21:24:07Z2022-05-23T21:24:07ZHow many bots are on Twitter? The question is difficult to answer and misses the point<figure><img src="https://images.theconversation.com/files/464848/original/file-20220523-12-r4u4vp.jpg?ixlib=rb-1.1.0&rect=0%2C251%2C2400%2C1343&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Yes, worry about Twitter, but don't worry whether there are hordes of spambots running rampant there.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/warehouse-of-dormant-super-villains-royalty-free-image/486346426">gremlin/E+ via Getty Images</a></span></figcaption></figure><p>Twitter <a href="https://www.reuters.com/technology/twitter-estimates-spam-fake-accounts-represent-less-than-5-users-filing-2022-05-02/">reports that fewer than 5% of accounts are fakes or spammers</a>, commonly referred to as “bots.” Since his offer to buy Twitter was accepted, Elon Musk has repeatedly questioned these estimates, even dismissing <a href="https://twitter.com/paraga/status/1526237578843672576">Chief Executive Officer Parag Agrawal’s public response</a>. </p>
<p>Later, Musk <a href="https://twitter.com/elonmusk/status/1526465624326782976">put the deal on hold and demanded more proof</a>. </p>
<p>So why are people arguing about the percentage of bot accounts on Twitter?</p>
<p>As the creators of <a href="https://botometer.osome.iu.edu/">Botometer</a>, a widely used bot detection tool, our group at the Indiana University <a href="https://osome.iu.edu/">Observatory on Social Media</a> has been studying inauthentic accounts and manipulation on social media for over a decade. We brought the concept of the “<a href="https://cacm.acm.org/magazines/2016/7/204021-the-rise-of-social-bots/fulltext">social bot</a>” to the foreground and <a href="https://aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/view/15587">first estimated</a> <a href="https://www.cnbc.com/2017/03/10/nearly-48-million-twitter-accounts-could-be-bots-says-study.html">their prevalence</a> on Twitter in 2017. </p>
<p>Based on our knowledge and experience, we believe that estimating the percentage of bots on Twitter has become a very difficult task, and debating the accuracy of the estimate might be missing the point. Here is why.</p>
<h2>What, exactly, is a bot?</h2>
<p>To measure the prevalence of problematic accounts on Twitter, a clear definition of the targets is necessary. Common terms such as “fake accounts,” “spam accounts” and “bots” are used interchangeably, but they have different meanings. Fake or false accounts are those that impersonate people. Accounts that mass-produce unsolicited promotional content are defined as spammers. Bots, on the other hand, are accounts controlled in part by software; they may post content or carry out simple interactions, like retweeting, automatically.</p>
<p>These types of accounts often overlap. For instance, you can create a bot that impersonates a human to post spam automatically. Such an account is simultaneously a bot, a spammer and a fake. But not every fake account is a bot or a spammer, and vice versa. Coming up with an estimate without a clear definition only yields misleading results.</p>
<p>Defining and distinguishing account types can also inform proper interventions. Fake and spam accounts degrade the online environment and violate <a href="https://help.twitter.com/en/rules-and-policies/platform-manipulation">platform policy</a>. Malicious bots are used to <a href="https://doi.org/10.1038/s41467-018-06930-7">spread misinformation</a>, <a href="https://www.nytimes.com/interactive/2018/01/27/technology/social-media-bots.html">inflate popularity</a>, <a href="https://doi.org/10.1073/pnas.1803470115">exacerbate conflict through negative and inflammatory content</a>, <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14127">manipulate opinions</a>, <a href="https://doi.org/10.1371/journal.pone.0214210">influence elections</a>, <a href="https://doi.org/10.1109/TCSS.2021.3059286">conduct financial fraud</a> and <a href="https://doi.org/10.1007/978-3-319-47874-6_19">disrupt communication</a>. However, some bots can be harmless or even <a href="https://doi.org/10.1080/21670811.2015.1081822">useful</a>, for example by helping disseminate news, delivering disaster alerts and <a href="https://doi.org/10.1038/s41467-021-25738-6">conducting research</a>. </p>
<p>Simply banning all bots is not in the best interest of social media users. </p>
<p>For simplicity, researchers use the term “inauthentic accounts” to refer to the collection of fake accounts, spammers and malicious bots. This is also the definition Twitter appears to be using. However, it is unclear what Musk has in mind. </p>
<h2>Hard to count</h2>
<p>Even when a consensus is reached on a definition, there are still technical challenges to estimating prevalence. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/464419/original/file-20220520-15-mt7my.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a network graph showing a circle composed of groups of colored dots with lines connecting some of the dots" src="https://images.theconversation.com/files/464419/original/file-20220520-15-mt7my.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/464419/original/file-20220520-15-mt7my.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=337&fit=crop&dpr=1 600w, https://images.theconversation.com/files/464419/original/file-20220520-15-mt7my.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=337&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/464419/original/file-20220520-15-mt7my.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=337&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/464419/original/file-20220520-15-mt7my.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/464419/original/file-20220520-15-mt7my.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/464419/original/file-20220520-15-mt7my.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Networks of coordinated accounts spreading COVID-19 information from low-credibility sources on Twitter in 2020.</span>
<span class="attribution"><span class="source">Pik-Mai Hui</span></span>
</figcaption>
</figure>
<p>External researchers do not have access to the same data as Twitter, such as IP addresses and phone numbers. This <a href="https://doi.org/10.37016/mr-2020-49">hinders the public’s ability</a> to identify inauthentic accounts. But even Twitter acknowledges that the actual number of inauthentic accounts <a href="https://investor.twitterinc.com/financial-information/annual-reports/default.aspx">could be higher than it has estimated</a>, because <a href="https://twitter.com/paraga/status/1526237581419040768">detection is challenging</a>.</p>
<p>Inauthentic accounts evolve and develop new tactics to evade detection. For example, some fake accounts <a href="https://twitter.com/conspirator0/status/1502772800146313223">use AI-generated faces as their profiles</a>. These faces can be indistinguishable from real ones, <a href="https://doi.org/10.1073/pnas.2120481119">even to humans</a>. Identifying such accounts is hard and requires new technologies. </p>
<p>Another difficulty is posed by <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/18075">coordinated accounts</a> that appear to be normal individually but act so similarly to each other that they are almost certainly controlled by a single entity. Yet they are like needles in the haystack of hundreds of millions of daily tweets. </p>
<p>Finally, inauthentic accounts can evade detection by techniques like <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/18075">swapping handles</a> or automatically <a href="https://arxiv.org/abs/2203.13893">posting and deleting</a> large volumes of content. </p>
<p>The distinction between inauthentic and genuine accounts gets more and more blurry. Accounts can be hacked, <a href="https://www.nytimes.com/interactive/2018/01/27/technology/social-media-bots.html">bought or rented</a>, and some users “donate” their credentials to <a href="https://www.washingtonpost.com/politics/turning-point-teens-disinformation-trump/2020/09/15/c84091ae-f20a-11ea-b796-2dd09962649c_story.html">organizations</a> who post on their behalf. As a result, so-called <a href="https://www.washingtonpost.com/business/economy/as-a-conservative-twitter-user-sleeps-his-account-is-hard-at-work/2017/02/05/18d5a532-df31-11e6-918c-99ede3c8cafa_story.html">“cyborg” accounts</a> are controlled by both algorithms and humans. Similarly, spammers sometimes post legitimate content to obscure their activity. </p>
<p>We have observed a broad spectrum of behaviors mixing the characteristics of bots and people. Estimating the prevalence of inauthentic accounts requires applying a simplistic binary classification: authentic or inauthentic account. No matter where the line is drawn, mistakes are inevitable.</p>
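<p>A toy calculation makes the point about thresholds. The sketch below, with made-up bot scores rather than anything produced by Botometer, shows how the estimated share of inauthentic accounts moves simply by shifting where the line is drawn.</p>
<pre><code># Invented bot scores between 0 (likely human) and 1 (likely automated).
scores = [0.05, 0.10, 0.20, 0.35, 0.40, 0.55, 0.62, 0.70, 0.85, 0.95]

for threshold in (0.5, 0.6, 0.7):
    flagged = sum(1 for s in scores if s >= threshold)
    share = flagged / len(scores)
    print(f"threshold {threshold}: {share:.0%} of accounts classed as inauthentic")
</code></pre>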
<h2>Missing the big picture</h2>
<p>The focus of the recent debate on estimating the number of Twitter bots oversimplifies the issue and misses the point of quantifying the harm of online abuse and manipulation by inauthentic accounts.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/464417/original/file-20220520-11-n2s5da.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="screenshot of a web form" src="https://images.theconversation.com/files/464417/original/file-20220520-11-n2s5da.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/464417/original/file-20220520-11-n2s5da.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=575&fit=crop&dpr=1 600w, https://images.theconversation.com/files/464417/original/file-20220520-11-n2s5da.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=575&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/464417/original/file-20220520-11-n2s5da.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=575&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/464417/original/file-20220520-11-n2s5da.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=722&fit=crop&dpr=1 754w, https://images.theconversation.com/files/464417/original/file-20220520-11-n2s5da.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=722&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/464417/original/file-20220520-11-n2s5da.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=722&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Screenshot of the BotAmp application comparing likely bot activity around two topics on Twitter.</span>
<span class="attribution"><span class="source">Kaicheng Yang</span></span>
</figcaption>
</figure>
<p>Through <a href="https://botometer.osome.iu.edu/botamp">BotAmp</a>, a new tool from the Botometer family that anyone with a Twitter account can use, we have found that the presence of automated activity is not evenly distributed. For instance, the discussion about cryptocurrencies tends to show more bot activity than the discussion about cats. Therefore, whether the overall prevalence is 5% or 20% makes little difference to individual users; their experiences with these accounts depend on whom they follow and the topics they care about.</p>
<p>Recent evidence suggests that inauthentic accounts might not be the only culprits responsible for the spread of misinformation, hate speech, polarization and radicalization. These issues typically involve many human users. For instance, our analysis shows that <a href="https://doi.org/10.1177/20539517211013861">misinformation about COVID-19 was disseminated overtly</a> on both Twitter and Facebook by verified, <a href="https://apnews.com/article/how-rfk-jr-built-anti-vaccine-juggernaut-amid-covid-4997be1bcf591fe8b7f1f90d16c9321e">high-profile accounts</a>. </p>
<p>Even if it were possible to precisely estimate the prevalence of inauthentic accounts, this would do little to solve these problems. A meaningful first step would be to acknowledge the complex nature of these issues. This will help social media platforms and policymakers develop meaningful responses.</p><img src="https://counter.theconversation.com/content/183425/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Filippo Menczer receives funding from Knight Foundation, Craig Newmark Philanthropies, Open Technology Fund, and DoD. He owns a Tesla. </span></em></p><p class="fine-print"><em><span>Kai-Cheng Yang does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Elon Musk’s focus on the number of bots on Twitter, whether genuine or a distraction, does little to address the problems of misinformation and spam. A pair of social media experts explain why.Kai-Cheng Yang, Doctoral Student in Informatics, Indiana UniversityFilippo Menczer, Professor of Informatics and Computer Science, Indiana UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1820232022-05-03T13:10:55Z2022-05-03T13:10:55ZElon Musk’s comments about Twitter don’t square with the social media platform’s reality<figure><img src="https://images.theconversation.com/files/460128/original/file-20220427-24-jhfroy.jpg?ixlib=rb-1.1.0&rect=0%2C7%2C4768%2C3162&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Elon Musk has called Twitter the world's "digital town square."</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/SpaceXCapsuleTest/22a2230808c34b38856b7f4afa01642b/photo">AP Photo/John Raoux</a></span></figcaption></figure><p>On April 25, 2022, Twitter’s board of directors accepted Elon Musk’s <a href="https://www.cnn.com/2022/04/25/tech/elon-musk-twitter-sale-agreement/index.html">US$44 billion hostile takeover bid</a>. Twitter’s <a href="https://www.prnewswire.com/news-releases/elon-musk-to-acquire-twitter-301532245.html">statement announcing the deal</a> included comments from the Tesla and SpaceX CEO:</p>
<blockquote>
<p>“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated. I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans.”</p>
</blockquote>
<p>The problem with Musk’s statement is that it fundamentally misunderstands speech, algorithms, and bots and human authentication. As a researcher who <a href="https://scholar.google.com/citations?user=0LbmlocAAAAJ&hl=en">studies social media</a>, I believe that if anything is cause for concern about this transaction, it is this misunderstanding.</p>
<h2>Digital town square?</h2>
<p>Despite Musk’s comments, Twitter was not designed or intended to be a digital town square. While many platforms tout community-building, Twitter has <a href="https://doi.org/10.1177%2F2056305120926622">not to date made such a claim</a>. Instead, Twitter has <a href="https://doi.org/10.1080/1369118X.2014.902984">prioritized information-sharing</a> over community, making it a space for millions of town criers, but not a town square for people to come together and debate.</p>
<p>Twitter has been a notable epicenter of online vitriol in the past, so much so that when the company was up for sale previously, <a href="https://www.bloomberg.com/news/articles/2016-10-17/disney-said-to-have-dropped-twitter-pursuit-partly-over-image">potential buyers, including Disney, were scared off</a> by the harassment and hate on the platform. A 2017 study found that <a href="https://mashable.com/article/amnesty-study-twitter-abuse-women">women were harassed every 30 seconds</a> on Twitter, with Black women being the <a href="https://www.colorlines.com/articles/new-study-confirms-black-women-are-most-abused-group-twitter">most frequently abused</a>. </p>
<p>Additionally, the ease with which people can create and tweet images of <a href="https://www.rollingstone.com/culture/culture-news/snickers-dick-vein-dont-let-this-flop-podcast-1343163/">doctored news articles</a> and generate <a href="https://www.tweetgen.com/">fake tweets</a> helps spread misinformation – essentially, tools that help amplify the voices of malicious town criers. These are examples of how Twitter is about information-sharing first, community-building second. Someone can shout harassment, hate or misinformation, and then others pile on.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/460849/original/file-20220502-16-n4y3bi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a woman speaks into a microphone and foreground as a man and a woman behind her holdup a poster displaying a message about a Twitter account suspension" src="https://images.theconversation.com/files/460849/original/file-20220502-16-n4y3bi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/460849/original/file-20220502-16-n4y3bi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/460849/original/file-20220502-16-n4y3bi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/460849/original/file-20220502-16-n4y3bi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/460849/original/file-20220502-16-n4y3bi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/460849/original/file-20220502-16-n4y3bi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/460849/original/file-20220502-16-n4y3bi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Twitter has taken steps to counter misinformation on the social media platform, including suspending the accounts of serial offenders like Rep. Marjorie Taylor Greene.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/HouseGreene/4037a7acfd9b404b99c982197c1ca22b/photo">AP Photo/Jacquelyn Martin</a></span>
</figcaption>
</figure>
<p>Also, arguments for free speech raise the question: Free speech for whom? Law and lived experience do not always align – ask any person of color, woman, LGBTQ person or disabled person who has experienced harassment online, <a href="https://www.essence.com/news/black-women-twitter-harassment/">particularly on Twitter</a>. Long-standing conceptions of the public sphere, or town square, feature a <a href="https://www.hks.harvard.edu/publications/long-life-nancy-frasers-rethinking-public-sphere">romanticized conception of white men</a> debating issues, while others are relegated to the margins.</p>
<p>Furthermore, Musk, who has <a href="https://twitter.com/elonmusk">over 90 million Twitter followers</a>, has himself engaged in harmful behavior on Twitter. In 2018, in now-deleted tweets, Musk referred to a diver who was helping rescue children from a flooded cave in Thailand as “<a href="https://web.archive.org/web/20200803133851/https://www.theguardian.com/technology/2018/jul/15/elon-musk-british-diver-thai-cave-rescue-pedo-twitter">a pedo guy</a>.” At the start of the COVID-19 pandemic, Musk tweeted <a href="https://www.bbc.co.uk/news/technology-51975377">erroneous claims</a> that children “are essentially immune” to the coronavirus and <a href="https://twitter.com/elonmusk/status/1239776019856461824?s=20&t=M8rGAs8UommS5FxQ2lUAoA">promoted chloroquine</a>, which is <a href="https://www.who.int/news-room/questions-and-answers/item/coronavirus-disease-(covid-19)-hydroxychloroquine">not recommended</a> as a COVID-19 treatment.</p>
<h2>Open algorithms</h2>
<p>Musk’s pledge to open Twitter’s algorithms to public scrutiny sounds good. </p>
<p>Twitter’s algorithms have been a source of controversy. For example, many conservative politicians claim the algorithms silence them. Research from <a href="https://blog.twitter.com/engineering/en_us/topics/insights/2021/learnings-from-the-first-algorithmic-bias-bounty-challenge">inside</a> and <a href="https://www.washingtonpost.com/business/2021/10/22/twitter-algorithm-right-leaning/">outside</a> Twitter has routinely shown this is not the case, and Twitter algorithms actually <a href="https://doi.org/10.1038/s41467-021-25738-6">amplify conservative tweets</a> over left-leaning ones. Transparency, in theory, could address these concerns.</p>
<p>But transparency doesn’t get at the root of the problem. Algorithms are popular targets in debates about social media platforms, political bias and misinformation because it’s easy to blame opaque technological systems. It’s harder to offer solutions for the political and personal motivations some people have to manipulate algorithms. </p>
<p><a href="https://www.ajl.org/">While algorithmic harm is a real problem</a>, algorithms are always programmed by people. Understanding the human decision-making processes that go into algorithms is a more worthwhile inquiry than simply revealing code. </p>
<h2>Bots and humans</h2>
<p>Like algorithms, bots are often blamed for many of Twitter’s ills. And like algorithms, bots are always programmed by humans. They do not act of their own accord, which means a productive line of inquiry is why people program bots to spam in the first place. </p>
<p>Musk has pledged to eliminate spambots by requiring all Twitter users to be authenticated as real people, but this would eliminate all bots – even the good ones. </p>
<p>Bots can serve important organizational purposes, given the immense amount of information on the internet. They also provide humor and whimsy when programmed for fun – such as <a href="https://twitter.com/ReutersPitchbot">journalism pitch bots</a>, which make up fake headlines for news outlets. There are also amusing bots such as <a href="https://twitter.com/EmojiMashupPlus">Emoji Mashup</a>, which tweets out novel emojis remixed from existing ones.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1520500762786377730"}"></div></p>
<p>While conversations about bots and human authentication often go hand in hand, the latter often involves considerations of multiple accounts and anonymity. Facebook has been a proponent of the Real Name Web, or the <a href="https://www.businessinsider.com/facebook-apologizes-for-real-name-policy-2014-10">push to have one singular identity online</a> that can be tied to one’s offline identity. But platforms like Twitter and Instagram have allowed users to assume multiple identities and mask their identity by having multiple accounts, often referred to as <a href="https://www.nytimes.com/2021/09/30/style/finsta-instagram-accounts-senate.html">Finstas</a> or <a href="https://www.vice.com/en/article/7xgpw9/finsta-twitter-alts-why-secret-social-media-accounts-feature">alts</a>.</p>
<p>[<em>Interested in science headlines but not politics? Or just politics or religion?</em> <a href="https://memberservices.theconversation.com/newsletters/?source=inline-interested">The Conversation has newsletters to suit your interests</a>.]</p>
<p>On these accounts, the identity of the person behind the screen is not always straightforward. And while some argue <a href="https://www.andrewgriffithmp.com/campaigns/ending-online-anonymity">removing anonymity would help solve online problems</a>, research has shown time and time again that removing anonymity <a href="https://www.newstatesman.com/the-explainer/2021/10/would-ending-online-anonymity-reduce-abuse-against-mps">does not stop hate speech, vitriol or racism</a>. </p>
<h2>Fixing Twitter</h2>
<p>At the end of the day, Twitter’s problems are first and foremost human problems. The technological issues are only buttressed by the people who design or misuse them. </p>
<p>In Musk’s statement, he proposes that essentially more Twitter – Twitter as a digital square, transparent Twitter algorithms and Twitter solutions to bots and authentication – is the solution to all the platform’s problems. History shows that simply would not be the case.</p>
<p>If Musk is serious about making Twitter a healthy, vibrant component of the digital public sphere, he’ll need to engage with all users to understand the myriad experiences on the platform – especially those who have faced the most harm. </p>
<p>And I believe he should understand Twitter not as an online microcosm but a symptom representative of larger social and political ills.</p><img src="https://counter.theconversation.com/content/182023/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jessica Maddox does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Elon Musk has an idea of what ails Twitter and what needs to be done to fix it. The problem is his assumptions are wrong.Jessica Maddox, Assistant Professor of Journalism and Creative Media, University of AlabamaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1819232022-04-25T21:07:54Z2022-04-25T21:07:54ZElon Musk’s plans for Twitter could make its misinformation problems worse<figure><img src="https://images.theconversation.com/files/459590/original/file-20220425-13-feqjsz.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C6000%2C4004&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Elon Musk's moment of triumph is a moment of uncertainty for the future of one of the world's leading social media platforms.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/USMuskTwitter/360b354555564c63931e87a4eee568c6/photo">AP Photo/John Raoux</a></span></figcaption></figure><p>Elon Musk, the world’s richest person, <a href="https://www.wsj.com/articles/twitter-and-elon-musk-strike-deal-for-takeover-11650912837">acquired Twitter</a> in a US$44 billion deal on April 25, 2022, 11 days after announcing his bid for the company. Twitter announced that the public company will become <a href="https://www.prnewswire.com/news-releases/elon-musk-to-acquire-twitter-301532245.html">privately held after the acquisition is complete</a>. </p>
<p>In a <a href="https://www.sec.gov/Archives/edgar/data/0001418091/000110465922045641/tm2212748d1_sc13da.htm">filing with the Securities and Exchange Commission</a> for his initial bid for the company, Musk stated, “I invested in Twitter as I believe in its potential to be the platform for free speech around the globe, and I believe free speech is a societal imperative for a functioning democracy.”</p>
<p>As a <a href="https://scholar.google.com/citations?hl=en&user=JpFHYKcAAAAJ">researcher of social media platforms</a>, I find that Musk’s ownership of Twitter and his stated reasons for buying the company raise important issues. Those issues stem from the nature of the social media platform and what sets it apart from others.</p>
<h2>What makes Twitter unique</h2>
<p>Twitter occupies a unique niche. Its short chunks of text and threading foster real-time conversations among thousands of people, which makes it popular with celebrities, media personalities and politicians alike.</p>
<p>Social media analysts talk about the half-life of content on a platform, meaning the time it takes for a piece of content to reach 50% of its total lifetime engagement, usually measured in number of views or popularity based metrics. The average half life of a tweet is <a href="https://www.business2community.com/social-media-articles/how-your-contents-half-life-should-drastically-impact-your-social-media-strategy-in-2020-02290478">about 20 minutes</a>, compared to five hours for Facebook posts, 20 hours for Instagram posts, 24 hours for LinkedIn posts and 20 days for YouTube videos. The much shorter half life illustrates the central role Twitter has come to occupy in driving real-time conversations as events unfold.</p>
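<p>As a minimal sketch of how a half-life could be computed from logged engagement, the snippet below finds the point at which cumulative views pass 50% of the total. The numbers are invented for illustration, not measured data.</p>
<pre><code># Invented view counts for a post, logged per minute after publication.
views_per_minute = [400, 300, 200, 120, 80, 50, 30, 20, 10, 5]

total = sum(views_per_minute)
running = 0
for minute, views in enumerate(views_per_minute, start=1):
    running += views
    if running >= total / 2:  # half of all lifetime engagement reached
        print(f"Half of all engagement arrived within {minute} minute(s)")
        break
</code></pre>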
<p>Twitter’s ability to shape real-time discourse, as well as the ease with which data, including geo-tagged data, can be gathered from Twitter has made it a gold mine for researchers to analyze a variety of societal phenomena, ranging from public health to politics. Twitter data has been used to predict <a href="https://ieeexplore.ieee.org/abstract/document/7045443">asthma-related emergency department visits</a>, measure <a href="https://www.cs.jhu.edu/%7Emdredze/publications/2016_ossm.pdf">public epidemic awareness</a>, and model <a href="https://doi.org/10.1080/1369118X.2016.1218528">wildfire smoke dispersion</a>. </p>
<p>Tweets that are part of a conversation are <a href="https://blog.twitter.com/en_us/a/2013/keep-up-with-conversations-on-twitter">shown in chronological order</a>, and, even though much of a tweet’s engagement is frontloaded, the Twitter archive <a href="https://blog.twitter.com/en_us/a/2015/full-archive-search-api">provides instant and complete access to every public Tweet</a>. This positions Twitter as a <a href="https://twitter.com/sarahkendzior/status/1514590065674047488">historical chronicler of record</a> and a de facto fact checker.</p>
<h2>Changes on Musk’s mind</h2>
<p>A crucial issue is how Musk’s ownership of Twitter, and private control of social media platforms generally, affect the broader public well-being. In a series of deleted tweets, Musk made several <a href="https://www.bloombergquint.com/business/twitter-shares-fall-after-musk-ditches-potential-board-role">suggestions about how to change Twitter</a>, including adding an edit button for tweets and granting automatic verification marks to premium users. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1511143607385874434"}"></div></p>
<p>There is no experimental evidence about how an edit button would change information transmission on Twitter. However, it’s possible to extrapolate from previous research that analyzed deleted tweets. </p>
<p>There are numerous ways to <a href="https://www.tweettabs.com/find-deleted-tweets/">retrieve deleted tweets</a>, which allows researchers to study them. While some studies show <a href="https://www.aaai.org/ocs/index.php/ICWSM/ICWSM16/paper/viewPaper/13133">significant personality differences</a> between users who delete their tweets and those who don’t, these findings suggest that deleting tweets is a <a href="https://doi.org/10.1080/1369118X.2016.1257041">way for people to manage their online identities</a>.</p>
<p>Analyzing deleting behavior can also yield valuable clues about <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14874">online credibility and disinformation</a>. Similarly, if Twitter adds an edit button, analyzing the patterns of editing behavior could provide insights into Twitter users’ motivations and how they present themselves.</p>
<p>Studies of bot-generated activity on Twitter have concluded that <a href="https://www.npr.org/sections/coronavirus-live-updates/2020/05/20/859814085/researchers-nearly-half-of-accounts-tweeting-about-coronavirus-are-likely-bots">nearly half of accounts tweeting about COVID-19 are likely bots</a>. Given <a href="https://doi.org/10.1073/pnas.1804840115">partisanship and political polarization in online spaces</a>, allowing users – whether they are automated bots or actual people – the option to edit their tweets could become another weapon in the disinformation arsenal used by bots and propagandists. Editing tweets could allow users to selectively distort what they said, or deny making inflammatory remarks, which could complicate efforts to trace misinformation.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1514590065674047488"}"></div></p>
<p>Musk has also indicated his intention to combat Twitter bots, or automated accounts that post rapidly and repeatedly in the guise of people. He has called for <a href="https://twitter.com/elonmusk/status/1517215736606957573">authenticating users as real human beings</a>. </p>
<p>Given <a href="https://doi.org/10.1145/3131365.3131385">challenges such as doxxing</a> and other malicious personal harms online, it’s important for user authentication methods to preserve privacy. This is particularly important for activists, dissidents and whistleblowers who face threats for their online activities. Mechanisms such as <a href="https://www.ijert.org/decentralized-access-control-technique-with-anonymous-authentication">decentralized protocols</a> can enable authentication without sacrificing anonymity. </p>
<h2>Twitter’s content moderation and revenue model</h2>
<p>To understand Musk’s motivations and what lies next for social media platforms such as Twitter, it’s important to consider the gargantuan – and opaque – <a href="https://warzel.substack.com/p/the-internets-original-sin?s=r">online advertising ecosystem</a> involving multiple technologies wielded by ad networks, social media companies and publishers. Advertising is the <a href="https://www.wsj.com/articles/social-media-may-have-to-embrace-the-musk-11649691208">primary revenue source for Twitter</a>. </p>
<p>Musk’s vision is to <a href="https://finance.yahoo.com/news/musk-proposes-twitter-blue-subscription-024424750.html">generate revenue for Twitter from subscriptions</a> rather than advertising. Without having to worry about attracting and retaining advertisers, Twitter would have less pressure to focus on content moderation. This could make Twitter a sort of freewheeling opinion site for paying subscribers. In contrast, until now Twitter has been <a href="https://www.techdirt.com/2021/02/10/content-moderation-case-study-twitter-attempts-to-tackle-covid-related-vaccine-misinformation-2020/">aggressive in using content moderation</a> in its attempts to address disinformation.</p>
<p>Musk’s description of a <a href="https://qz.com/2155098/elon-musks-twitter-bid-isnt-about-free-speech/">platform free from content moderation issues</a> is troubling in light of the algorithmic harms caused by social media platforms. Research has shown a host of these harms, such as <a href="https://doi.org/10.1145/3468507.3468512">algorithms that assign gender</a> to users, <a href="https://doi.org/10.1145/3287560.3287587">potential inaccuracies and biases in algorithms</a> used to glean information from these platforms, and the impact on those <a href="https://theconversation.com/biases-in-algorithms-hurt-those-looking-for-information-on-health-140616">looking for health information online</a>. </p>
<p>Testimony by Facebook whistleblower <a href="https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/">Frances Haugen</a> and recent regulatory efforts such as the <a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">online safety bill unveiled in the U.K.</a> show there is broad public concern about the role played by technology platforms in shaping popular discourse and public opinion. Musk’s acquisition of Twitter <a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">highlights a whole host of regulatory concerns</a>. </p>
<p>Because of Musk’s other businesses, Twitter’s <a href="https://www.nasdaq.com/articles/how-does-social-media-influence-financial-markets-2019-10-14">ability to influence public opinion</a> in the sensitive aviation and automobile industries automatically creates a conflict of interest, not to mention affecting the disclosure of <a href="https://www.investopedia.com/terms/m/materialinsiderinformation.asp">material information</a> necessary for shareholders. Musk has already been accused of <a href="https://www.cbsnews.com/news/elon-musk-twitter-shareholder-lawsuit/">delaying disclosure of his ownership stake in Twitter</a>.</p>
<p>Twitter’s own <a href="https://blog.twitter.com/engineering/en_us/topics/insights/2021/learnings-from-the-first-algorithmic-bias-bounty-challenge">algorithmic bias bounty challenge</a> concluded that there needs to be a community-led approach to build better algorithms. A very creative exercise developed by the MIT Media Lab asks middle schoolers to <a href="https://www.media.mit.edu/galleries/youtube-redesign/">re-imagine the YouTube platform with ethics in mind</a>. Perhaps it’s time to ask Musk to do the same with Twitter.</p>
<p><em>This is an updated version of <a href="https://theconversation.com/elon-musks-bid-spotlights-twitters-unique-role-in-public-discourse-and-what-changes-might-be-in-store-181374">an article</a> originally published on April 15, 2022.</em></p><img src="https://counter.theconversation.com/content/181923/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Anjana Susarla receives funding from the National Institute of Health and from the Omura-Saxena Professorship in Responsible AI. </span></em></p>Twitter, more than other social media platforms, fosters real-time discussion about events as they unfold. That could change now that Musk has gained control of the company.Anjana Susarla, Professor of Information Systems, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1813742022-04-15T14:42:22Z2022-04-15T14:42:22ZElon Musk’s bid spotlights Twitter’s unique role in public discourse – and what changes might be in store<figure><img src="https://images.theconversation.com/files/458321/original/file-20220415-22-vd2ph3.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5760%2C3828&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Twitter may not be a darling of Wall Street, but it occupies a unique place in the social media landscape.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/CapitolRiotInvestigationTech/d85dc445f8e84d0c9d08c8402a0d300a/photo">AP Photo/Richard Drew</a></span></figcaption></figure><p>Twitter has been in the news a lot lately, albeit for the wrong reasons. Its stock growth has languished and the platform itself has <a href="https://www.npr.org/2021/11/29/1059756077/jack-dorsey-steps-down-as-twitter-ceo">largely remained the same since its founding</a> in 2006. On April 14, 2022, Elon Musk, the world’s richest person, <a href="https://www.bloomberg.com/news/articles/2022-04-14/elon-musk-launches-43-billion-hostile-takeover-of-twitter">made an offer to buy Twitter</a> and take the public company private. </p>
<p>In a <a href="https://www.sec.gov/Archives/edgar/data/0001418091/000110465922045641/tm2212748d1_sc13da.htm">filing with the Securities and Exchange Commission</a>, Musk stated, “I invested in Twitter as I believe in its potential to be the platform for free speech around the globe, and I believe free speech is a societal imperative for a functioning democracy.”</p>
<p>As a <a href="https://scholar.google.com/citations?hl=en&user=JpFHYKcAAAAJ">researcher of social media platforms</a>, I find that Musk’s potential ownership of Twitter and his stated reasons for buying the company raise important issues. Those issues stem from the nature of the social media platform and what sets it apart from others.</p>
<h2>What makes Twitter unique</h2>
<p>Twitter occupies a unique niche. Its short chunks of text and threading foster real-time conversations among thousands of people, which makes it popular with celebrities, media personalities and politicians alike.</p>
<p>Social media analysts talk about the half-life of content on a platform, meaning the time it takes for a piece of content to reach 50% of its total lifetime engagement, usually measured in number of views or popularity based metrics. The average half life of a tweet is <a href="https://www.business2community.com/social-media-articles/how-your-contents-half-life-should-drastically-impact-your-social-media-strategy-in-2020-02290478">about 20 minutes</a>, compared to five hours for Facebook posts, 20 hours for Instagram posts, 24 hours for LinkedIn posts and 20 days for YouTube videos. The much shorter half life illustrates the central role Twitter has come to occupy in driving real-time conversations as events unfold.</p>
<p>Twitter’s ability to shape real-time discourse, as well as the ease with which data, including geo-tagged data, can be gathered from Twitter has made it a gold mine for researchers to analyze a variety of societal phenomena, ranging from public health to politics. Twitter data has been used to predict <a href="https://ieeexplore.ieee.org/abstract/document/7045443">asthma-related emergency department visits</a>, measure <a href="https://www.cs.jhu.edu/%7Emdredze/publications/2016_ossm.pdf">public epidemic awareness</a>, and model <a href="https://doi.org/10.1080/1369118X.2016.1218528">wildfire smoke dispersion</a>. </p>
<p>Tweets that are part of a conversation are <a href="https://blog.twitter.com/en_us/a/2013/keep-up-with-conversations-on-twitter">shown in chronological order</a>, and, even though much of a tweet’s engagement is frontloaded, the Twitter archive <a href="https://blog.twitter.com/en_us/a/2015/full-archive-search-api">provides instant and complete access to every public Tweet</a>. This positions Twitter as a <a href="https://twitter.com/sarahkendzior/status/1514590065674047488">historical chronicler of record</a> and a de facto fact checker.</p>
<h2>Changes on Musk’s mind</h2>
<p>A crucial issue is how Musk’s ownership of Twitter, and private control of social media platforms generally, affect the broader public well-being. In a series of deleted tweets, Musk made several <a href="https://www.bloombergquint.com/business/twitter-shares-fall-after-musk-ditches-potential-board-role">suggestions about how to change Twitter</a>, including adding an edit button for tweets and granting automatic verification marks to premium users. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1511143607385874434"}"></div></p>
<p>There is no experimental evidence about how an edit button would change information transmission on Twitter. However, it’s possible to extrapolate from previous research that analyzed deleted tweets. </p>
<p>There are numerous ways to <a href="https://www.tweettabs.com/find-deleted-tweets/">retrieve deleted tweets</a>, which allows researchers to study them. While some studies show <a href="https://www.aaai.org/ocs/index.php/ICWSM/ICWSM16/paper/viewPaper/13133">significant personality differences</a> between users who delete their tweets and those who don’t, these findings suggest that deleting tweets is a <a href="https://doi.org/10.1080/1369118X.2016.1257041">way for people to manage their online identities</a>.</p>
<p>Analyzing deleting behavior can also yield valuable clues about <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14874">online credibility and disinformation</a>. Similarly, if Twitter adds an edit button, analyzing the patterns of editing behavior could provide insights into Twitter users’ motivations and how they present themselves.</p>
<p>Studies of bot-generated activity on Twitter have concluded that <a href="https://www.npr.org/sections/coronavirus-live-updates/2020/05/20/859814085/researchers-nearly-half-of-accounts-tweeting-about-coronavirus-are-likely-bots">nearly half of accounts tweeting about COVID-19 are likely bots</a>. Given <a href="https://doi.org/10.1073/pnas.1804840115">partisanship and political polarization in online spaces</a>, allowing users – whether they are automated bots or actual people – the option to edit their tweets could become another weapon in the disinformation arsenal used by bots and propagandists. Editing tweets could allow users to selectively distort what they said, or deny making inflammatory remarks, which could complicate efforts to trace misinformation.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1514590065674047488"}"></div></p>
<h2>Twitter’s content moderation and revenue model</h2>
<p>To understand Musk’s motivations and what lies next for social media platforms such as Twitter, it’s important to consider the gargantuan – and opaque – <a href="https://warzel.substack.com/p/the-internets-original-sin?s=r">online advertising ecosystem</a> involving multiple technologies wielded by ad networks, social media companies and publishers. Advertising is the <a href="https://www.wsj.com/articles/social-media-may-have-to-embrace-the-musk-11649691208">primary revenue source for Twitter</a>. </p>
<p>Musk’s vision is to generate revenue for Twitter from subscriptions rather than advertising. Without having to worry about attracting and retaining advertisers, Twitter would have less pressure to focus on content moderation. This would make Twitter a sort of freewheeling opinion site for paying subscribers. Twitter has been <a href="https://www.techdirt.com/2021/02/10/content-moderation-case-study-twitter-attempts-to-tackle-covid-related-vaccine-misinformation-2020/">aggressive in using content moderation</a> in its attempts to address disinformation.</p>
<p>Musk’s description of a <a href="https://qz.com/2155098/elon-musks-twitter-bid-isnt-about-free-speech/">platform free from content moderation issues</a> is troubling in light of the algorithmic harms caused by social media platforms. Research has shown a host of these harms, such as <a href="https://doi.org/10.1145/3468507.3468512">algorithms that assign gender</a> to users, <a href="https://doi.org/10.1145/3287560.3287587">potential inaccuracies and biases in algorithms</a> used to glean information from these platforms, and the impact on those <a href="https://theconversation.com/biases-in-algorithms-hurt-those-looking-for-information-on-health-140616">looking for health information online</a>. </p>
<p>Testimony by Facebook whistleblower <a href="https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/">Frances Haugen</a> and recent regulatory efforts such as the <a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">online safety bill unveiled in the U.K.</a> show there is broad public concern about the role played by technology platforms in shaping popular discourse and public opinion. Musk’s potential bid for Twitter <a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">highlights a whole host of regulatory concerns</a>. </p>
<p>Because of Musk’s other businesses, Twitter’s <a href="https://www.nasdaq.com/articles/how-does-social-media-influence-financial-markets-2019-10-14">ability to influence public opinion</a> in the sensitive aviation and automobile industries would automatically create a conflict of interest. It could also affect the disclosure of <a href="https://www.investopedia.com/terms/m/materialinsiderinformation.asp">material information</a> necessary for shareholders. Musk has already been accused of <a href="https://www.cbsnews.com/news/elon-musk-twitter-shareholder-lawsuit/">delaying disclosure of his ownership stake in Twitter</a>.</p>
<p>Twitter’s own <a href="https://blog.twitter.com/engineering/en_us/topics/insights/2021/learnings-from-the-first-algorithmic-bias-bounty-challenge">algorithmic bias bounty challenge</a> concluded that there needs to be a community-led approach to build better algorithms. A very creative exercise developed by the MIT Media Lab asks middle schoolers to <a href="https://www.media.mit.edu/galleries/youtube-redesign/">re-imagine the YouTube platform with ethics in mind</a>. Perhaps it’s time to ask Twitter to do the same, whoever owns and manages the company.</p>
<p>[<em>Over 150,000 readers rely on The Conversation’s newsletters to understand the world.</em> <a href="https://memberservices.theconversation.com/newsletters/?source=inline-150ksignup">Sign up today</a>.]</p><img src="https://counter.theconversation.com/content/181374/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Anjana Susarla receives funding from the National Institute of Health and from the Omura-Saxena Professorship in Responsible AI. </span></em></p>Twitter, more than other social media platforms, fosters real-time discussion about events as they unfold. That could change if Musk gains control of the company.Anjana Susarla, Professor of Information Systems, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1791212022-03-30T13:28:08Z2022-03-30T13:28:08ZAlgorithms, bots and elections in Africa: how social media influences political choices<figure><img src="https://images.theconversation.com/files/452825/original/file-20220317-23-1up3n81.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Social media provides spaces for participation -- but also for misinformation.
</span> <span class="attribution"><span class="source">Photo by Omar Marques/SOPA Images/LightRocket via Getty Images</span></span></figcaption></figure><p>The rise in the use of smartphones and an <a href="https://www.statista.com/topics/779/mobile-internet/">increased adoption</a> of mobile internet in Africa are fundamentally altering the media ecology for election campaigns. </p>
<p>As mobile phones become <a href="https://www.gsma.com/mobilefordevelopment/blog/the-state-of-mobile-internet-connectivity-in-sub-saharan-africa/">commonplace</a>, even in Africa’s poorest countries, the uptake of social media has become ubiquitous. Applications like Facebook, Twitter, YouTube, WhatsApp and blogs form an integral part of today’s political communication landscape in much of the continent. </p>
<p>These platforms are becoming a dominant factor in electoral processes, playing a <a href="https://theconversation.com/analysis-across-africa-shows-how-social-media-is-changing-politics-121577">tremendous role</a> in the creation, dissemination and consumption of political content. </p>
<p>Their influence and embedded power over political content invite further scrutiny, which informed <a href="https://link.springer.com/chapter/10.1007/978-3-030-30553-6_2">my research</a>. Is the rise in social media uptake on the continent a game changer in political communications? And if it is, does social media influence political campaigns? </p>
<p>To answer these questions, I considered the interplay between elements in the infrastructure of social media and human agency. </p>
<p>The infrastructure refers to the architecture that makes up social media systems. Even though the infrastructure is not immediately visible, it plays a critical role in the (re)production and dissemination of information. </p>
<p>Human agency entails the choices human beings make when they interact with social media systems.</p>
<p>I found that there are three main ways that political campaigns are influenced via social media: through algorithms, bots and the people who use them.</p>
<h2>The power of algorithms</h2>
<p>Embedded in social media platforms, with the exception of WhatsApp, is a system of software, code and algorithms that manages, interprets and disseminates large quantities of information across social media networks. </p>
<p>The power of the algorithm is in its ability to search, sort, rank, prioritise and recommend the content consumed by users. The system, therefore, influences the choices we make. </p>
<p>Algorithms watch your behaviour when you interact with certain content on the platform, make assumptions and predictions about your preferences, and then recommend similar content in your feed. </p>
<p>For instance, if you constantly interact with posts – by liking, replying or sharing – from certain individuals, you are likely to see more posts from them. If you have shown interest in watching videos from a political outfit, you are likely to get more videos from them. </p>
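<p>As a rough illustration of this mechanism, the short Python sketch below ranks candidate posts by how much a user has interacted with each author in the past. The interaction weights, account names and sample data are all invented; no platform publishes its actual scoring.</p>
<pre><code># Illustrative only: interaction-weighted ranking of a feed.
from collections import Counter

history = [  # (author, interaction) pairs from a user's recent activity
    ("party_a", "like"), ("party_a", "share"), ("party_a", "like"),
    ("friend_1", "reply"), ("news_site", "like"),
]
weights = {"like": 1.0, "reply": 2.0, "share": 3.0}  # assumed weights

affinity = Counter()
for author, action in history:
    affinity[author] += weights[action]

candidate_posts = [
    {"author": "party_a", "text": "New campaign video"},
    {"author": "party_b", "text": "Policy statement"},
    {"author": "friend_1", "text": "Weekend photos"},
]

# Posts from high-affinity authors float to the top of the feed.
ranked = sorted(candidate_posts, key=lambda p: affinity[p["author"]], reverse=True)
for post in ranked:
    print(round(affinity[post["author"]], 1), post["author"], "-", post["text"])
</code></pre>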
<p>Which items are promoted and why? We may never know why the algorithms are coded (by programmers) in such a way as to rank certain items, individuals or political parties higher. What we know is that these algorithms influence what people see or don’t see.</p>
<p>They have the power to amplify and marginalise certain content and, like human gatekeepers in traditional mass media, determine what information users are exposed to. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-social-media-and-fake-news-are-battering-traditional-media-in-kenya-82920">How social media and fake news are battering traditional media in Kenya</a>
</strong>
</em>
</p>
<hr>
<p>For example, Facebook’s EdgeRank algorithm determines what is shown on a user’s Top News by displaying only a subset of stories by one’s friends. These are derived from a set of factors, such as the type of content (links, videos or photos) and the frequency and types of interactions with these friends (like tags or comments). </p>
<p>Similarly, Twitter algorithms display ranked tweets. That is, first they rank them and then display what they think is most relevant to the user.</p>
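<p>Public descriptions of EdgeRank mention three broad ingredients: the user’s affinity with the author, a weight for the content type and a decay with time. The Python sketch below is a schematic reconstruction built only on those descriptions; the numbers are invented, and this is not Facebook’s or Twitter’s actual code.</p>
<pre><code># Schematic EdgeRank-style scoring: affinity x content-type weight x time decay.
import math

CONTENT_WEIGHT = {"photo": 3.0, "video": 2.5, "link": 1.5, "status": 1.0}

def story_score(affinity, content_type, age_hours, half_life_hours=12.0):
    decay = math.exp(-math.log(2) * age_hours / half_life_hours)
    return affinity * CONTENT_WEIGHT.get(content_type, 1.0) * decay

stories = [
    {"friend": "close_friend", "affinity": 5.0, "type": "photo", "age_hours": 20},
    {"friend": "acquaintance", "affinity": 1.0, "type": "link", "age_hours": 1},
]
for s in stories:
    print(s["friend"], round(story_score(s["affinity"], s["type"], s["age_hours"]), 2))
</code></pre>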
<p>These algorithms are not neutral. They encode political choices, influencing the information seen by users. When a user opens his or her social media account, he or she will be met by algorithm-filtered and recommended content, based on prior activities and interactions on the platform. </p>
<p>People are then likely to share visible information on non-algorithm-based applications like WhatsApp and Messenger, as well as in mainstream media. </p>
<h2>Bots and deepfakes</h2>
<p>Social bots can also be deployed to manipulate public opinion and influence votes. They mimic and potentially manipulate humans and their behaviour on social networks. Running automatically from fake accounts, they produce messages, post online and interact with users through likes, comments and follows. </p>
<p>Even more worrisome is the rise of deepfakes. This involves the use of artificial intelligence to fabricate images and videos by replacing the face or voice of someone, usually a public figure, with someone else’s in a way that makes the content look authentic. </p>
<p>The intention is often to mislead the audience and make them believe that the targeted public figure said something (often controversial or provocative). </p>
<p>As noted by Portland Communications, a strategic communications consultancy, in their report, <a href="https://portland-communications.com/publications/how-africa-tweets-2018/">How Africa Tweets</a>, Twitter bots account for more than 20% of influencers in countries like Lesotho and Kenya. </p>
<p>One of the surprising findings in the report was the limited influence of politicians on the conversation. </p>
<h2>Human element</h2>
<p>African political parties are spending huge sums hiring consultancy companies with expertise in digital campaigning and even manipulation of social media content. </p>
<p>International consultancy firms like the <a href="https://www.theguardian.com/uk-news/2018/may/03/cambridge-analytica-closing-what-happened-trump-brexit">now defunct Cambridge Analytica (CA)</a> have been accused of attempting to influence digital campaigns in Africa and in other parts of the world. CA worked on several campaigns in Russia, the UK, USA and Kenya. </p>
<p>In <a href="https://www.washingtonpost.com/news/global-opinions/wp/2018/03/20/how-cambridge-analytica-poisoned-kenyas-democracy/">Kenya</a>, it emerged that President Uhuru Kenyatta had hired CA ahead of the 2013 elections. CA’s activities sparked global outcry when it became known, culminating in its collapse. </p>
<p>It is evident that those with political power and money can easily hire automated systems, like bots, to influence the flow of political content across social media. They can also <a href="https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c">distort information</a>. </p>
<p>The role of non-human actors should be worrying to anyone keen on democratic processes. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/social-media-is-being-misused-in-kenyas-political-arena-why-its-hard-to-stop-it-177586">Social media is being misused in Kenya's political arena. Why it's hard to stop it</a>
</strong>
</em>
</p>
<hr>
<p>There are indications that social media algorithms and bots are slowly changing the dynamics of elections in Africa. This is seen in the number of political parties hiring a new breed of communicators, such as social media managers.</p>
<p>The interplay between media and politics is central to any understanding of political campaigns, given their role as conduits of political information, persuasion and discussion. Social media provides spaces for participation – but also for misinformation and disinformation.</p><img src="https://counter.theconversation.com/content/179121/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Martin N Ndlela does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The system behind apps like Facebook, Twitter, YouTube and WhatsApp isn’t neutral. It encodes political communication, influencing what users see.Martin N Ndlela, Professor of Communication, Inland Norway University of Applied SciencesLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1680542021-11-15T13:11:34Z2021-11-15T13:11:34ZDisinformation is spreading beyond the realm of spycraft to become a shady industry – lessons from South Korea<figure><img src="https://images.theconversation.com/files/431735/original/file-20211112-15738-1loyh8e.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4907%2C3261&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Efforts to reduce tensions between the Koreas, like the 2018 inter-Korean summit, are frequently the target of disinformation campaigns in South Korea.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/SouthKoreaKoreasTensions/3e5e17d7705a4d6983af18f5eb94e8f3/photo?Query=Korea%20border&mediaType=photo&sortBy=arrivaldatetime:desc&dateRange=Anytime&totalCount=5441&currentItemNo=10">AP Photo/Ahn Young-joon</a></span></figcaption></figure><p>Disinformation, the practice of blending real and fake information with the goal of duping a government or influencing public opinion, has its origins in the Soviet Union. But disinformation is no longer the exclusive domain of government intelligence agencies. </p>
<p>Today’s disinformation scene has evolved into a marketplace in which services are contracted, laborers are paid and shameless opinions and fake readers are bought and sold. This industry is emerging around the world. Some of the private-sector players are driven by political motives, some by profit and others by a mix of the two.</p>
<p>Public relations firms have recruited social media influencers in <a href="https://www.nytimes.com/2021/07/25/world/europe/disinformation-social-media.html">France and Germany</a> to spread falsehoods. Politicians have hired staff to create fake Facebook accounts in <a href="https://www.theguardian.com/technology/2021/apr/13/facebook-honduras-juan-orlando-hernandez-fake-engagement">Honduras</a>. And <a href="https://www.wired.com/story/opinion-in-kenya-influencers-are-hired-to-spread-disinformation/">Kenyan Twitter influencers</a> are paid 15 times more than many people make in a day for promoting political hashtags. Researchers at the University of Oxford have tracked government-sponsored disinformation activities in 81 countries and <a href="https://demtech.oii.ox.ac.uk/research/posts/industrialized-disinformation/">private-sector disinformation operations in 48 countries</a>.</p>
<p>South Korea has been at the forefront of online disinformation. Western societies began to raise concerns about disinformation in 2016, triggered by disinformation related to the 2016 U.S. presidential election and Brexit. But in South Korea, media reported the first formal disinformation operation in 2008. As a researcher who <a href="https://scholar.google.com/citations?user=QpNFdIEAAAAJ&hl=en">studies digital audiences</a>, I’ve found that South Korea’s 13-year-long disinformation history demonstrates how technology, economics and culture interact to enable the disinformation industry. </p>
<p>Most importantly, South Korea’s experience offers a lesson for the U.S. and other countries. The ultimate power of disinformation is found more in the ideas and memories that a given society is vulnerable to and how prone it is to fueling the rumor mill than it is in the people perpetrating the disinformation or the techniques they use.</p>
<h2>From dirty politics to dirty business</h2>
<p>The origin of South Korean disinformation can be traced back to the nation’s National Intelligence Service, which is equivalent to the U.S. Central Intelligence Agency. The NIS formed teams in 2010 <a href="https://www.theguardian.com/world/2017/aug/04/south-koreas-spy-agency-admits-trying-rig-election-national-intelligence-service-2012">to interfere in domestic elections</a> by attacking a political candidate it opposed. </p>
<p>The NIS hired more than 70 full-time workers who managed fake, or so-called <a href="https://doi.org/10.1145/3308560.3317598">sock puppet</a>, accounts. The agency recruited a group called Team Alpha, which was composed of civilian part-timers who had ideological and financial interests in working for the NIS. By 2012, the scale of the operation had grown to <a href="https://www.brookings.edu/techstream/lessons-from-south-koreas-approach-to-tackling-disinformation/">3,500 part-time workers</a>.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/431745/original/file-20211112-13043-8xui0h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Two men, one in a suit jacket in the other a windbreaker jacket, stand shoulder to shoulder in a stairwell, photographers behind them" src="https://images.theconversation.com/files/431745/original/file-20211112-13043-8xui0h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/431745/original/file-20211112-13043-8xui0h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/431745/original/file-20211112-13043-8xui0h.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/431745/original/file-20211112-13043-8xui0h.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/431745/original/file-20211112-13043-8xui0h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/431745/original/file-20211112-13043-8xui0h.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/431745/original/file-20211112-13043-8xui0h.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">South Korean President Moon Jae-in (left) campaigning in 2014 for Kim Kyoung-soo (right), who became governor of South Gyeongsang Province in 2018 but was subsequently convicted of opinion rigging.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:KimKyoung-soo.jpg">Udenjan/WikiCommons</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Since then the private sector has moved into the disinformation business. For example, a shadowy publishing company led by an influential blogger was involved in a high-profile <a href="http://www.koreaherald.com/view.php?ud=20210721000615">opinion-rigging scandal</a> between 2016 and 2018. The company’s client was a close political aide of the current president, Moon Jae-in. </p>
<p>In contrast to NIS-driven disinformation campaigns, which use disinformation as a propaganda tool for the government, some of the private-sector players are chameleonlike, changing ideological and topical positions in pursuit of their business interests. These private-sector operations have achieved greater cost effectiveness than government operations by skillfully <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/7301">using bots to amplify fake engagements</a>, involving social media entrepreneurs like <a href="https://restofworld.org/2021/elderly-conservatives-in-south-korea-turn-to-youtube-and-conspiracy-theories/">YouTubers</a> and <a href="https://globalvoices.org/2012/11/19/confessions-of-paid-political-trolls-in-south-korea/">outsourcing trolling to cheap laborers</a>.</p>
<h2>Narratives that strike a nerve</h2>
<p>In South Korea, Cold War rhetoric has been particularly visible across all types of disinformation operations. The campaigns typically portray the conflict with North Korea and the battle against Communism as being at the center of public discourse in South Korea. In reality, nationwide polls have painted a very different picture. For example, even when North Korea’s nuclear threat was at a peak in 2017, <a href="https://www.nytimes.com/2017/04/27/world/asia/north-korea-south-tensions.html">fewer than 10 percent of respondents</a> picked North Korea’s saber-rattling as their priority concern, compared with more than 45 percent who selected economic policy.</p>
<p>Across all types of purveyors and techniques, political disinformation in South Korea has amplified anti-Communist nationalism and denigrated the nation’s dovish diplomacy toward North Korea. My research on <a href="https://doi.org/10.1080/01292986.2015.1130157">South Korean social media rumors</a> in 2013 showed that the disinformation rhetoric continued on social media even after the formal disinformation campaign ended, which indicates how powerful these themes are. Today I and my research team continue to see references to the same themes.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/431748/original/file-20211112-15-13ynxfu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man standing on a stage while holding a microphone tears a flag" src="https://images.theconversation.com/files/431748/original/file-20211112-15-13ynxfu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/431748/original/file-20211112-15-13ynxfu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=392&fit=crop&dpr=1 600w, https://images.theconversation.com/files/431748/original/file-20211112-15-13ynxfu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=392&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/431748/original/file-20211112-15-13ynxfu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=392&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/431748/original/file-20211112-15-13ynxfu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=492&fit=crop&dpr=1 754w, https://images.theconversation.com/files/431748/original/file-20211112-15-13ynxfu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=492&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/431748/original/file-20211112-15-13ynxfu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=492&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Much of the disinformation trafficked in South Korea involves nationalistic anti-Communist narratives similar to this protester’s anti-North Korea message.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/south-korean-protester-tears-a-north-korean-flag-during-a-news-photo/1233640604">Photo by Jung Yeon-je/AFP via Getty Images</a></span>
</figcaption>
</figure>
<h2>The dangers of a disinformation industry</h2>
<p>The disinformation industry is enabled by the three prongs of today’s digital media industry: an attention economy, algorithmic and computational technologies, and a participatory culture. In online media, the most important currency is audience attention. Metrics such as the number of page views, likes, shares and comments quantify attention, which is then converted into economic and social capital. </p>
<p>Ideally, these metrics should be a product of networked users’ spontaneous and voluntary participation. Disinformation operations more often than not manufacture these metrics by using bots, hiring influencers, paying for crowdsourcing and developing computational tricks to game a platform’s algorithms. </p>
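<p>One simple check analysts use when looking for manufactured metrics is whether a post’s engagement is even plausible given the account’s audience. The Python sketch below illustrates that idea with an invented threshold; real forensic work draws on far richer signals, such as timing patterns and the network of accounts doing the engaging.</p>
<pre><code># Illustrative only: flag posts whose engagement outstrips the follower base.
def engagement_looks_inflated(likes, shares, followers, max_rate=0.5):
    """Flag posts whose combined engagement exceeds max_rate of followers."""
    if followers == 0:
        return likes + shares > 0
    return (likes + shares) / followers > max_rate

posts = [
    {"id": "a", "likes": 120, "shares": 30, "followers": 10_000},    # plausible
    {"id": "b", "likes": 9_000, "shares": 4_000, "followers": 800},  # suspicious
]
for p in posts:
    flagged = engagement_looks_inflated(p["likes"], p["shares"], p["followers"])
    print(p["id"], "suspicious" if flagged else "plausible")
</code></pre>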
<p>The expansion of the disinformation industry is troubling because it distorts how public opinion is perceived by researchers, the media and the public itself. Historically, democracies have relied on polls to understand public opinion. Despite their limitations, nationwide polls conducted by credible organizations, such as <a href="https://www.gallup.com/224855/gallup-poll-work.aspx">Gallup</a> and <a href="https://www.pewresearch.org/our-methods/u-s-surveys/u-s-survey-methodology/">Pew Research</a>, follow rigorous methodological standards to represent the distribution of opinions in society in as representative a manner as possible. </p>
<p>Public discourse on social media has emerged as an alternative means of assessing public opinion. Digital audience and web traffic analytic tools are widely available to measure the trends of online discourse. However, people can be misled when purveyors of disinformation manufacture opinions expressed online and falsely amplify the metrics around those opinions. </p>
<p>Meanwhile, the persistence of anti-Communist nationalist narratives in South Korea shows that disinformation purveyors’ rhetorical choices are not random. To counter the disinformation industry wherever it emerges, governments, media and the public need to understand not just the who and the how, but also the what – a society’s controversial ideologies and collective memories. These are the most valuable currency in the disinformation marketplace.</p>
<p>[<em>The Conversation’s science, health and technology editors pick their favorite stories.</em> <a href="https://theconversation.com/us/newsletters/science-editors-picks-71/?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=science-favorite">Weekly on Wednesdays</a>.]</p><img src="https://counter.theconversation.com/content/168054/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>K. Hazel Kwon is a U.S.-Korea NextGen Scholar under the sponsorship of the Korea Foundation. Her work was supported by the National Science Foundation (Award #2027387), Army Research Laboratory-Army Research Office (Award #W911NF1910066), and MIT Lincoln Laboratory (Award #PO 7000506684). </span></em></p>Disinformation is being privatized around the world. This new industry is built on a dangerous combination of cheap labor, high-tech algorithms and emotional national narratives.K. Hazel Kwon, Associate Professor of Journalism and Digital Audiences, Arizona State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1681372021-09-19T12:16:19Z2021-09-19T12:16:19ZFederal election 2021: Why we shouldn’t always trust ‘good’ political bots<figure><img src="https://images.theconversation.com/files/421706/original/file-20210916-29-180xty.jpg?ixlib=rb-1.1.0&rect=529%2C93%2C3698%2C1943&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Political bots are setting the stage and standards for how this kind of AI will be used moving forward. </span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>During the 2019 federal election campaign, <a href="https://calgaryherald.com/news/local-news/canada-wexit-and-the-federal-election-targeted-in-russian-disinformation-campaign-academics-say">concerns about foreign interference and scary “Russian bots” dominated conversation</a>. In contrast, throughout the 2021 election cycle, new political bots <a href="https://policyoptions.irpp.org/magazines/septembe-2021/tracking-online-toxicity-in-elxn44/">have been getting noticed for their potentially helpful contributions</a>. </p>
<p>From detecting online toxicity to replacing traditional polling, political bot creators are experimenting with artificial intelligence (AI) to automate analysis of social media data. <a href="https://policyoptions.irpp.org/fr/magazines/novembre-2017/toward-the-responsible-use-of-bots-in-politics/">These kinds of political bots can be framed as “good” uses of AI</a>, but even if they can be helpful, we need to be critical. </p>
<p>The cases <a href="https://sambot.ca/">of SAMbot</a> <a href="https://advancedsymbolics.com/canadian-federal-election-prediction/">and Polly</a> can help us understand what to expect and demand from people when they choose to use AI in their political activities. </p>
<p>SAMbot was created by Areto Labs in partnership with the Samara Centre for Democracy. It’s a tool that automatically analyzes tweets to assess harassment and toxicity directed at political candidates. </p>
<p>Advanced Symbolics Inc. deployed a tool called Polly to analyze social media data and predict who will win the election. </p>
<p><a href="https://www.cbc.ca/radio/day6/no-knock-warrants-monitoring-the-u-s-election-ai-pollsters-west-wing-reunites-bts-stock-and-more-1.5763944/meet-polly-the-ai-pollster-that-wants-to-predict-elections-using-social-media-1.5763952">Both are receiving media attention and having an impact on election coverage</a>.</p>
<p>We know little about how these tools work, yet we trust them largely because they are being used by non-partisan players. But these bots are setting the stage and standards for how this kind of AI will be used moving forward. </p>
<h2>People make bots</h2>
<p>It is tempting to think of SAMbot or Polly as friends, helping us understand the confusing mess of political chatter on social media. Samara, Areto Labs and Advanced Symbolics Inc. all promote the things their bots do, all the data their bots have analyzed and all the findings their bots have unearthed. </p>
<p>SAMbot is depicted as an adorable robot with big eyes, five fingers on each hand, and a nametag. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1438507909206269955"}"></div></p>
<p>Polly has been personified as a woman. However, these bots are still tools that require humans to use them. People decide what data to collect, what kind of analysis is appropriate and how to interpret the results.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1438445112930017280"}"></div></p>
<p>But when we personify, we risk losing sight of the agency and responsibility bot creators and bot users have. We need to think about these bots as tools used by people.</p>
<h2>The black box approach is dangerous</h2>
<p>AI is a catch-all phrase for a wide range of technology, and the techniques are evolving. Explaining the process is a challenge even in lengthy academic articles, so it’s not surprising most political bots are presented with scant information about how they work. </p>
<p><a href="https://economictimes.indiatimes.com/magazines/panache/black-box-problem-humans-cant-trust-ai-us-based-indian-scientist-feels-lack-of-transparency-is-the-reason/articleshow/72601554.cms">Bots are black boxes</a> — meaning their inputs and operations aren’t visible to users or other interested parties — and right now bot creators are mostly just suggesting: “It’s doing what we want it to, trust us.”</p>
<p>The problem is, what goes on in those black boxes can be extremely varied and messy, and small choices can have massive knock-on effects. For example, <a href="https://medium.com/jigsaw/unintended-bias-and-names-of-frequently-targeted-groups-8e0b81f80a23">Jigsaw’s (Google) Perspective API</a> — aimed at identifying toxicity — <a href="https://www.technologyreview.com/2021/06/04/1025742/ai-hate-speech-moderation/">infamously and unintentionally embedded racist and homophobic tendencies into their tool</a>. </p>
<p>Jigsaw only discovered and corrected the issues once people started asking questions about unexpected results.</p>
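<p>To see why such tools are black boxes for the people who use them, consider what calling one looks like in practice: the caller sends a piece of text and gets back a single score, with no visibility into how that score was produced. The Python sketch below follows the request shape in Perspective’s public documentation, but it is illustrative only; check the current documentation and supply your own API key before relying on it.</p>
<pre><code># Hedged sketch of querying a hosted toxicity classifier (Perspective API).
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text, api_key):
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=payload)
    resp.raise_for_status()
    # A probability-like value between 0 and 1; the model itself stays opaque.
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Example (requires a valid key):
# print(toxicity_score("You are a wonderful person", api_key="YOUR_KEY"))
</code></pre>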
<p>We need to establish a base set of questions to ask when we see new political bots. We must develop digital literacy skills so we can question the information that shows up on our screens.</p>
<h2>Some of the questions we should ask</h2>
<p><em>What data is being used? Does it actually represent the population we think it does?</em></p>
<p>SAMbot is applied only to tweets mentioning incumbent candidates, and we know that better-known politicians are likely to engender higher levels of negativity. The SAMbot website does make this clear, but <a href="https://sambot.ca/in-the-news/">most media coverage of their weekly reports</a> throughout this election cycle misses this point. </p>
<p>Polly is used to analyze social media content. But <a href="https://policyoptions.irpp.org/magazines/september-2018/when-journalists-report-social-media-as-public-opinion/">that data isn’t representative of all Canadians</a>. Advanced Symbolics Inc. works hard to mirror the general population of Canadians in their analysis, but the population that simply never posts on social media is still missing. This means there is an unavoidable bias that needs to be explicitly acknowledged in order for us to situate and interpret the findings.</p>
<p><em>How was the bot trained to analyze the data? Are there regular checks to make sure the analysis is still doing what the creators initially intended?</em></p>
<p>Each political bot might be designed very differently. Look for a clear explanation of what was done and how the bot creators or users check to make sure their automated tool is in fact on target (validity) and consistent (reliability). </p>
<p>The training processes to develop both SAMbot and Polly aren’t explained in detail on their respective websites. Methods data has been added to the SAMbot website throughout the 2021 election campaign, but it’s still limited. In both cases you can find a link to a peer-reviewed academic article <a href="https://arxiv.org/pdf/1911.11025.pdf">that explains part</a>, <a href="https://advancedsymbolics.com/wp-content/uploads/2019/06/Forecasting-Canadian-Elections-Using-Twitter.pdf">but not all</a>, of their approaches. </p>
<p>While it’s a start, linking to often complex academic articles can actually make understanding the tool difficult. Instead, simple language helps.</p>
<p>Some additional questions to ponder: How do we know what counts as “toxic?” Are human beings checking the results to make sure they are still on target? </p>
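<p>One concrete way to answer that last question is to spot-check the automated labels against a small human-coded sample. The Python sketch below computes agreement, precision and recall on invented labels; a real audit would use a larger, carefully drawn sample and report uncertainty.</p>
<pre><code># Illustrative validity check: compare automated labels with human coding.
human = ["toxic", "ok", "ok", "toxic", "ok", "toxic", "ok", "ok"]
model = ["toxic", "ok", "toxic", "toxic", "ok", "ok", "ok", "ok"]

agreement = sum(h == m for h, m in zip(human, model)) / len(human)
tp = sum(h == m == "toxic" for h, m in zip(human, model))
fp = sum(h == "ok" and m == "toxic" for h, m in zip(human, model))
fn = sum(h == "toxic" and m == "ok" for h, m in zip(human, model))

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(f"agreement={agreement:.2f} precision={precision:.2f} recall={recall:.2f}")
</code></pre>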
<figure class="align-center ">
<img alt="A microphone in front of Justin Trudeau." src="https://images.theconversation.com/files/421710/original/file-20210916-19-8jdqfc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/421710/original/file-20210916-19-8jdqfc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=385&fit=crop&dpr=1 600w, https://images.theconversation.com/files/421710/original/file-20210916-19-8jdqfc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=385&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/421710/original/file-20210916-19-8jdqfc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=385&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/421710/original/file-20210916-19-8jdqfc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=484&fit=crop&dpr=1 754w, https://images.theconversation.com/files/421710/original/file-20210916-19-8jdqfc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=484&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/421710/original/file-20210916-19-8jdqfc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=484&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A media microphone is pictured as Liberal leader Justin Trudeau prepares to take questions during a campaign stop in Montréal. We need to be asking questions not just of our political leaders, but of the creators of political bots.</span>
<span class="attribution"><span class="source">THE CANADIAN PRESS/Sean Kilpatrick</span></span>
</figcaption>
</figure>
<h2>Next steps</h2>
<p>SAMbot and Polly are tools created by non-partisan entities with no interest in creating disinformation, sowing confusion or influencing who wins the election on Monday. But the same tools could be used for very different purposes. We need to know how to identify and critique these bots.</p>
<p>Any time a political bot, or indeed any type of AI in politics, is employed, information about how it was created and tested is essential. </p>
<p>It’s important we set expectations for transparency and clarity early. This will help everyone develop better digital literacy skills and will allow us to distinguish between trustworthy and untrustworthy uses of these kinds of tools.</p><img src="https://counter.theconversation.com/content/168137/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Elizabeth Dubois receives funding from the Social Sciences and Humanities Research Council of Canada and previously from the University of Ottawa, and the Government of Canada through the Canada History Fund. She has been an academic advisor for the Samara Centre for Democracy in the past but is not currently affiliated with the organization.</span></em></p>Any time a political bot, or indeed any type of AI in politics, is employed, we need information about how it was created and tested.Elizabeth Dubois, Associate Professor, Communication, L’Université d’Ottawa/University of OttawaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1586662021-04-14T17:04:17Z2021-04-14T17:04:17ZFemale robots are seen as being the most human. Why?<figure><img src="https://images.theconversation.com/files/395473/original/file-20210416-17-1s42eoj.jpg?ixlib=rb-1.1.0&rect=119%2C0%2C1277%2C747&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Why do we perceive female robots as more human than male robots?</span> <span class="attribution"><span class="source">Rafael Matigulin</span>, <span class="license">Author provided</span></span></figcaption></figure><p>With the proliferation of female robots such as <a href="https://www.hansonrobotics.com/sophia/">Sophia</a> and the popularity of female virtual assistants such as Siri (Apple), Alexa (Amazon) and Cortana (Microsoft), artificial intelligence seems to have a gender issue.</p>
<p>This gender imbalance in AI is a pervasive trend that has drawn sharp criticism in the media (even Unesco <a href="https://en.unesco.org/news/new-recommendations-improve-gender-equality-digital-professions-and-eliminate-stereotypes-ai">warned against the dangers of this practice</a>) because it could reinforce stereotypes about women being objects.</p>
<p>But why is femininity injected in artificial intelligent objects? If we want to curb the massive use of female gendering in AI, we need to better understand the deep roots of this phenomenon.</p>
<h2>Making the inhuman more human</h2>
<p>In an <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/mar.21480">article</a> published in the journal <em>Psychology & Marketing</em>, we argue that research on what makes people human can provide a new perspective into why feminization is systematically used in AI. We suggest that if women tend to be more objectified in AI than men, it is not just because they are perceived as the perfect assistant, but also because people attribute more humanness to women (versus men) in the first place. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/PI8XBKb6DQk?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Trailer for <em>Ex Machina</em>, a 2015 film starring Domhnall Gleeson and Oscar Isaac.</span></figcaption>
</figure>
<p>Why? Because women are perceived as warmer and more likely to experience emotions than men, gendering AI objects as female helps to humanize them. Warmth and the capacity for experience (but not competence) are indeed seen as fundamental qualities of being fully human, yet they are qualities machines lack.</p>
<p>Drawing on theories from dehumanization and objectification, we show across five studies with a total sample of more than 3,000 participants that:</p>
<ul>
<li><p>Women are perceived as more human than men, overall and compared to non-human entities (animals and machines).</p></li>
<li><p>Female bots are endowed with more positive human qualities than male bots, and they are perceived as more human than male bots, compared to both animals and machines.</p></li>
<li><p>The inferred humanness of female bots increases perceived uniqueness of treatment from them in a health context, leading to more favorable attitudes toward AI solutions.</p></li>
</ul>
<p>We used several different measures of perceived humanness, compared to both animals and machines. For example, to measure blatant humanness of female and male bots compared to animals, we used the ascent humanization scale based on the classic <a href="https://sites.wustl.edu/prosper/on-the-origins-of-the-march-of-progress/">“march of progress”</a> illustration. We explicitly asked online respondents to indicate how “evolved” they perceived female or male bots to be, using a continuous progression from ancient apes to modern humans. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/395215/original/file-20210415-18-u82kwc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/395215/original/file-20210415-18-u82kwc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/395215/original/file-20210415-18-u82kwc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=776&fit=crop&dpr=1 600w, https://images.theconversation.com/files/395215/original/file-20210415-18-u82kwc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=776&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/395215/original/file-20210415-18-u82kwc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=776&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/395215/original/file-20210415-18-u82kwc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=975&fit=crop&dpr=1 754w, https://images.theconversation.com/files/395215/original/file-20210415-18-u82kwc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=975&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/395215/original/file-20210415-18-u82kwc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=975&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>To measure the blatant perceived humanness of female and male bots compared to machines, we created a scale that measures blatant mechanistic (de)humanization, by picturing man’s evolution from robot to human (instead of ape to human). Of course, we created both a female and a male version of each of these scales.</p>
<p>Other measures captured more subtle and implicit perceptions of humanness, by asking respondents what level of emotion they attributed to male and female bots. Some emotions are said to distinguish humans from machines (for example, “friendly”, “fun-loving”), and other emotions to distinguish humans from animals (for example, “organized”, “polite”). Finally, we also used an <a href="https://implicit.harvard.edu/implicit/">implicit association test</a> to investigate whether female bots are more likely than male bots to be associated with the concept of “human” rather than “machine”.</p>
<h2>The ghost in the machine</h2>
<p>While we found that women and female robots are perceived as more human on most of the subtle and all the blatant and implicit measures of humanness, we also found that men and male robots are perceived as more human on the negative dimensions of the subtle measures of humanness. Taken together, these results indicate that female robots are not only endowed with more positive human qualities than male robots (benevolent sexism), but that they are also perceived as more human and are expected to be more prone to consider our unique needs in a service context. </p>
<p>These findings may point to a new possible explanation of why female bots are favored over their male counterparts, with people preferring female intelligent machines because such machines are more strongly associated with humanness.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/dJTU48_yghs?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Trailer for <em>Her</em>, a 2013 film starring Joaquin Phoenix and Scarlett Johansson.</span></figcaption>
</figure>
<p>If femininity is used to humanize non-human entities, this research suggests that treating women like objects in AI may lie precisely in the recognition that they are not. The popular assumption, though, frequently referred to as the dehumanization hypothesis, is that it is necessary to view outgroup members as animals or instruments before objectifying them. In other words, dehumanization would be a prerequisite for objectification to take place, with targets of objectification typically being denied their humanness. Contrary to this dominant view, the transformation of women into objects in AI might occur not because women are perceived as subhumans, but because they are perceived as superhumans in the first place.</p>
<p>This is in line with Martha C. Nussbaum’s assertion: “Objectification entails making into a thing… something that is really not a thing” (<a href="https://www.jstor.org/stable/2961930?seq=1#metadata_info_tab_contents">Nussbaum, 1995</a>, p. 256–7). It also fits with Kate Manne’s view on misogyny and dehumanization: “Often, it’s not a sense of women’s humanity that is lacking. Her humanity is precisely the problem” (<a href="https://www.penguin.com.au/books/down-girl-9780141990729">Manne, 2018</a>, p. 33). Therefore, the widespread use of female identity in AI artefacts may be rooted in the implicit recognition that women are perceived to be human, and more so than men.</p>
<h2>Objectification of women in the real world?</h2>
<p>This research builds on what makes people human compared to machines to better understand the deep roots of the widespread female gendering of AI. Because feelings are at the very substance of our humanness, and because women are perceived as more likely to experience feelings, we argue that female gendering of AI objects makes them look more human and more likely to consider our unique needs. However, this process of transforming women into objects could lead to women’s objectification by conveying the idea that women are objects and simple tools designed to fulfill their owners’ needs. This may potentially fuel more women’s objectification and dehumanization in the non-digital world.</p>
<p>This research thus highlights the ethical quandary faced by AI designers and policymakers: women are said to be transformed into objects in AI, but injecting women’s humanity into AI objects makes these objects seem more human and acceptable.</p>
<p>These results are not particularly encouraging for the future of gender parity in AI, nor for ending objectification of women in AI. The development of gender-neutral voices could be a way to move away from the female gendering of AI and stop the perpetuation of this benevolent sexism. Another solution, similar to Google’s <a href="https://www.theverge.com/2019/9/18/20870939/google-assistant-new-voices-nine-countries-languages">recent experimentation</a>, would be to impose a default gender voice, assigning randomly and with an equal probability either a male or a female intelligent bot to users.</p>
<hr>
<p><em>The original paper published in Psychology & Marketing was co-written by Sylvie Borau, Tobias Otterbring, Sandra Laporte, and Samuel Fosso-Wamba.</em></p><img src="https://counter.theconversation.com/content/158666/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sylvie Borau ne travaille pas, ne conseille pas, ne possède pas de parts, ne reçoit pas de fonds d'une organisation qui pourrait tirer profit de cet article, et n'a déclaré aucune autre affiliation que son organisme de recherche.</span></em></p>Virtual assistants and robots are frequently given female attributes. To curb the massive use of such gendering in AI, we need to better understand the deep roots of this phenomenon.Sylvie Borau, Professeure en Marketing éthique, TBS EducationLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1396102020-06-24T12:18:10Z2020-06-24T12:18:10ZHow fake accounts constantly manipulate what you see on social media – and what you can do about it<figure><img src="https://images.theconversation.com/files/342464/original/file-20200617-94044-chkdii.jpg?ixlib=rb-1.1.0&rect=0%2C8%2C6000%2C3979&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">All is not as it appears on social media.</span> <span class="attribution"><a class="source" href="https://www.nytimes.com/2017/09/07/us/politics/russia-facebook-twitter-election.html">filadendron/E+ via Getty Images</a></span></figcaption></figure><p>Social media platforms like Facebook, Twitter and Instagram started out as a way to connect with friends, family and people of interest. But anyone on social media these days knows it’s increasingly a divisive landscape. </p>
<p>Undoubtedly you’ve heard reports that hackers and even foreign governments are using social media to manipulate and attack you. You may wonder how that is possible. As a professor of computer science who <a href="https://scholar.google.com/citations?user=HC021GgAAAAJ">researches social media and security</a>, I can explain – and offer some ideas for what you can do about it. </p>
<h2>Bots and sock puppets</h2>
<p>Social media platforms don’t simply feed you the posts from the accounts you follow. They use <a href="https://blog.hootsuite.com/facebook-algorithm/">algorithms to curate</a> what you see based in part on “likes” or “votes.” A post is shown to some users, and the more those people react – positively or negatively – the more it will be highlighted to others. Sadly, lies and extreme content often garner more reactions and so <a href="https://doi.org/10.1126/science.aap9559">spread quickly and widely</a>.</p>
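<p>As a toy illustration of that amplification loop, the Python sketch below offers a post to successive batches of users, with each round of reactions widening the next batch. The growth rule and every number in it are invented; real ranking systems are vastly more complex.</p>
<pre><code># Toy model of reaction-driven amplification, not any platform's algorithm.
def simulate_reach(initial_batch, reaction_rate, rounds=5):
    shown = initial_batch
    total_reached = 0
    for _ in range(rounds):
        total_reached += shown
        reactions = shown * reaction_rate
        shown = int(reactions * 5)  # invented rule: each reaction earns 5 more impressions
    return total_reached

# A post that provokes twice the reactions ends up reaching far more people.
print("measured post:", simulate_reach(100, reaction_rate=0.15))
print("divisive post:", simulate_reach(100, reaction_rate=0.30))
</code></pre>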
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/342981/original/file-20200619-43220-jsqrf1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/342981/original/file-20200619-43220-jsqrf1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/342981/original/file-20200619-43220-jsqrf1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/342981/original/file-20200619-43220-jsqrf1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/342981/original/file-20200619-43220-jsqrf1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/342981/original/file-20200619-43220-jsqrf1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/342981/original/file-20200619-43220-jsqrf1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/342981/original/file-20200619-43220-jsqrf1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A 2018 file photo showing a business center building in St. Petersburg, Russia, known as the ‘troll factory,’ one of a web of companies allegedly controlled by Yevgeny Prigozhin, who has reported ties to Russian President Vladimir Putin.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Election-2018-Russian-Meddling/91870df003cc492494b575682ef911c0/3/0">AP Photo/Dmitri Lovetsky</a></span>
</figcaption>
</figure>
<p>But who is doing this “voting”? Often it’s an army of accounts, called bots, that do not correspond to real people. In fact, they’re controlled by hackers, often on the other side of the world. For example, researchers have reported that <a href="https://www.technologyreview.com/2020/05/21/1002105/covid-bot-twitter-accounts-push-to-reopen-america/">more than half of the Twitter accounts discussing COVID-19 are bots</a>.</p>
<p>As a social media researcher, I’ve seen <a href="https://dl.acm.org/doi/10.1145/2789187.2789206">thousands of accounts with the same profile picture</a> “like” posts in unison. I’ve seen <a href="https://medium.com/@geoffgolberg/twitter-looks-the-other-way-as-trumps-tweets-amplified-by-artificial-network-ce90f119e2d5">accounts post hundreds of times per day</a>, far more than a human being could. I’ve seen an account claiming to be an “All-American patriotic army wife” from Florida post obsessively about immigrants in English, but whose account history showed it used to post in Ukrainian. </p>
<p>Fake accounts like this are called “<a href="https://dl.acm.org/doi/10.1145/3308560.3317598">sock puppets</a>” – suggesting a hidden hand speaking through another identity. In many cases, this deception can easily be revealed with a look at the account history. But in some cases, there is a big investment in making sock puppet accounts seem real. </p>
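<p>The surface signals described above, such as implausible posting rates or many accounts sharing one profile picture, can be turned into simple heuristics. The Python sketch below illustrates the idea with invented thresholds and data; production bot-detection systems rely on far richer behavioural features.</p>
<pre><code># Illustrative heuristics only; thresholds and accounts are invented.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    posts_per_day: float
    account_age_days: int
    profile_image_hash: str  # e.g. a perceptual hash of the avatar

def suspicion_flags(acct, image_hash_counts):
    flags = []
    if acct.posts_per_day > 100:  # more than a person plausibly manages
        flags.append("extreme posting rate")
    if acct.account_age_days < 30 and acct.posts_per_day > 20:
        flags.append("new account, heavy posting")
    if image_hash_counts[acct.profile_image_hash] > 50:
        flags.append("profile picture shared by many accounts")
    return flags

accounts = [
    Account("patriot_wife_1776", 240, 12, "abc123"),
    Account("ordinary_user", 3, 2100, "def456"),
]
hash_counts = Counter({"abc123": 180, "def456": 1})
for a in accounts:
    print(a.handle, suspicion_flags(a, hash_counts) or ["no flags"])
</code></pre>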
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/342841/original/file-20200618-41209-16ye3u.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/342841/original/file-20200618-41209-16ye3u.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/342841/original/file-20200618-41209-16ye3u.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=532&fit=crop&dpr=1 600w, https://images.theconversation.com/files/342841/original/file-20200618-41209-16ye3u.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=532&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/342841/original/file-20200618-41209-16ye3u.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=532&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/342841/original/file-20200618-41209-16ye3u.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=668&fit=crop&dpr=1 754w, https://images.theconversation.com/files/342841/original/file-20200618-41209-16ye3u.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=668&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/342841/original/file-20200618-41209-16ye3u.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=668&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Now defunct, the ‘Jenna Abrams’ account was created by hackers in Russia.</span>
</figcaption>
</figure>
<p>For example, <a href="https://www.forbes.com/sites/chrisladd/2017/11/20/jenna-abrams-is-not-real-and-that-matters-more-than-you-think/#7449caed3b5a">Jenna Abrams, an account with 70,000 followers</a>, was quoted by mainstream media outlets like <a href="https://www.thedailybeast.com/jenna-abrams-russias-clown-troll-princess-duped-the-mainstream-media-and-the-world">The New York Times</a> for her xenophobic and far-right opinions, but was actually an invention controlled by the <a href="https://www.theatlantic.com/international/archive/2018/02/russia-troll-farm/553616/">Internet Research Agency</a>, a <a href="https://www.nytimes.com/2017/09/07/us/politics/russia-facebook-twitter-election.html">Russian government-funded troll farm</a> and not a living, breathing person. </p>
<h2>Sowing chaos</h2>
<p>Trolls often don’t care about the issues as much as they care about <a href="https://medium.com/dfrlab/trolltracker-twitters-troll-farm-archives-d1b4df880ec6">creating division and distrust</a>. For example, researchers in 2018 concluded that some of the most influential accounts on both sides of divisive issues, like <a href="https://twitter.com/katestarbird/status/954804195269361664">Black Lives Matter and Blue Lives Matter</a>, were controlled by troll farms. </p>
<p>More than just fanning disagreement, trolls want to encourage <a href="https://www.theatlantic.com/politics/archive/2020/06/biden-ukraine-recordings-oan/612454/">a belief that truth no longer exists</a>. Divide and conquer. Distrust anyone who might serve as a leader or trusted voice. Cut off the head. Demoralize. Confuse. Each of these is a devastating attack strategy. </p>
<p>Even as a social media researcher, I underestimate the degree to which my opinion is shaped by these attacks. I think I am smart enough to read what I want, discard the rest and step away unscathed. Still, when I see a post that has millions of likes, part of me thinks it must reflect public opinion. The social media feeds I see are affected by it and, what’s more, I am affected by the opinions of my real friends, who are also influenced. </p>
<p>Entire societies are being <a href="https://comprop.oii.ox.ac.uk/research/ira-political-polarization/">subtly manipulated</a> to believe they are on opposite sides of many issues when <a href="https://doi.org/10.1126/science.aap9559">legitimate common ground exists</a>. </p>
<p>I have focused primarily on U.S.-based examples, but the same types of attacks are playing out around the world. By turning the voices of democracies against each other, authoritarian regimes may begin to look preferable to chaos. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/342984/original/file-20200619-43209-yibf92.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/342984/original/file-20200619-43209-yibf92.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/342984/original/file-20200619-43209-yibf92.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/342984/original/file-20200619-43209-yibf92.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/342984/original/file-20200619-43209-yibf92.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/342984/original/file-20200619-43209-yibf92.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/342984/original/file-20200619-43209-yibf92.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/342984/original/file-20200619-43209-yibf92.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Founder and CEO of Facebook Mark Zuckerberg in Brussels, Feb. 17, 2020.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/founder-and-ceo-of-us-online-social-media-and-social-news-photo/1201476988">Kenzo Tribouillard/AFP via Getty Images</a></span>
</figcaption>
</figure>
<p>Platforms have been slow to act. Sadly, misinformation and disinformation drive usage and are <a href="https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499">good for business</a>. Failure to act has often been justified with concerns about <a href="https://www.newyorker.com/news/daily-comment/facebook-and-the-free-speech-excuse">freedom of speech</a>. Does freedom of speech include the right to create 100,000 fake accounts with the express purpose of spreading lies, division and chaos? </p>
<h2>Taking control</h2>
<p>So what can you do about it? You probably already know to check the sources and dates of what you read and forward, but common-sense media literacy advice is not enough. </p>
<p>First, use social media more deliberately. Choose to catch up with someone in particular, rather than consuming only the default feed. You might be amazed to see what you’ve been missing. Help your friends and family find your posts by using features like pinning key messages to the top of your feed. </p>
<p>Second, <a href="https://www.nature.com/articles/d41586-020-01107-z">pressure social media platforms</a> to remove accounts with clear signs of automation. Ask for more controls to manage what you see and which posts are amplified. Ask for more transparency in how posts are promoted and who is placing ads. For example, complain directly about the Facebook news feed <a href="https://www.facebook.com/help/contact/268228883256323">here</a> or tell <a href="https://www.house.gov/representatives/find-your-representative">legislators</a> about your concerns. </p>
<p>Third, be aware of the trolls’ favorite issues and be skeptical of them. They may be most interested in creating chaos, but they also show clear preferences on some issues. For example, trolls want to <a href="https://www.technologyreview.com/2020/05/21/1002105/covid-bot-twitter-accounts-push-to-reopen-america/">reopen economies</a> quickly, without serious measures to flatten the COVID-19 curve. They also clearly supported <a href="https://www.buzzfeednews.com/article/ryanhatesthis/mueller-report-internet-research-agency-detailed-2016">one of the 2016 U.S. presidential candidates</a> over the other. It’s worth asking yourself how these positions might be good for Russian trolls, but bad for you and your family. </p>
<p>Perhaps most importantly, use social media sparingly, like any other addictive, toxic substance, and invest in more real-life, community-building conversations. Listen to real people, real stories and real opinions, and build from there.</p>
<p>[<em>You’re smart and curious about the world. So are The Conversation’s authors and editors.</em> <a href="https://theconversation.com/us/newsletters/weekly-highlights-61?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=weeklysmart">You can get our highlights each weekend</a>.]</p>
<p class="fine-print"><em><span>Jeanna Matthews received a 2018-2019 Magic Grant from the Brown Institute (Columbia University/Stanford University). She was a fellow at Data and Society, a research institute in Manhattan, from 2017 to 2018 and is still an affiliate there.
</span></em></p>A social media researcher explains how bots and sock puppet accounts manipulate and polarize public debate.Jeanna Matthews, Full Professor, Computer Science, Clarkson UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1410442020-06-23T20:16:23Z2020-06-23T20:16:23ZChina’s disinformation threat is real. We need better defences against state-based cyber campaigns<p>The Australian government recently announced plans to establish the country’s first <a href="https://www.abc.net.au/news/2020-06-17/foreign-minister-steps-up-criticism-china-global-cooperation/12362076">taskforce</a> devoted to fighting disinformation campaigns, under the Department of Foreign Affairs and Trade (DFAT).</p>
<p>Last week, Foreign Minister Marise Payne <a href="https://www.smh.com.au/politics/federal/foreign-minister-marise-payne-hits-out-at-chinese-russian-disinformation-20200616-p552y9.html">accused</a> China and Russia of “using the pandemic to undermine liberal democracy” by spreading disinformation to manipulate social media debate.</p>
<p>“Where we see disinformation, whether it’s here, whether it’s in the Pacific, whether it’s in Southeast Asia, where it affects our region’s interests and our values, then we will be shining a light on it,” <a href="https://www.abc.net.au/news/2020-06-17/foreign-minister-steps-up-criticism-china-global-cooperation/12362076">Payne said</a>.</p>
<p>In her <a href="https://www.foreignminister.gov.au/minister/marise-payne/speech/australia-and-world-time-covid-19">speech</a> to Canberra’s National Security College, she claimed Australia is going through an “infodemic”. But is it really? And if so, what can be done about it?</p>
<h2>170,000 accounts removed, but how many missed?</h2>
<p>Disinformation campaigns are coordinated attempts to spread false narratives, fake news and conspiracy theories. They’re characterised by repetitive narratives seemingly emanating from a variety of sources. These narratives are made even more believable when republished by trusted friends, family, community figures or political leaders.</p>
<p>Disinformation campaigns exist along a continuum of different cyber warfare techniques, including the massive state-sponsored cyberattacks targeting Australian government institutions and businesses. These sustained attacks <a href="https://theconversation.com/australia-is-under-sustained-cyber-attack-warns-the-government-whats-going-on-and-what-should-businesses-do-141119">reported on Friday</a> were also purportedly emanating from China. </p>
<p>Social media networks such as Twitter and Facebook provide a perfect forum for disinformation campaigns. They’re easily accessible to foreign actors, who can create fake accounts to spread false but seemingly credible stories. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/meet-sara-sharon-and-mel-why-people-spreading-coronavirus-anxiety-on-twitter-might-actually-be-bots-134802">Meet ‘Sara’, ‘Sharon’ and 'Mel': why people spreading coronavirus anxiety on Twitter might actually be bots</a>
</strong>
</em>
</p>
<hr>
<p>Earlier this month, Twitter <a href="https://www.theguardian.com/technology/2020/jun/12/twitter-deletes-170000-accounts-linked-to-china-influence-campaign">removed</a> more than 170,000 accounts connected to state-run propaganda operations based in China, Russia and Turkey. Of these, about 150,000 were reportedly “amplifier” accounts boosting content.</p>
<p>According to a <a href="https://s3-ap-southeast-2.amazonaws.com/ad-aspi/2020-06/Retweeting%20through%20the%20great%20firewall_0.pdf?zjVSJfAOYGRkguAbufYr8KRSQ610SfRX=">report</a> published this month by the Australian Strategic Policy Institute (ASPI), a “persistent, large-scale influence campaign linked to Chinese state actors” has been targeting Chinese-speaking people outside China. </p>
<p>The campaign is allegedly aimed at swaying online debate surrounding the COVID-19 pandemic and the Hong Kong protests, among other key issues.</p>
<p>Twitter is banned in China, so there would be minimal opportunity for the Chinese government to develop and embed troll accounts into local Twitter networks. Instead, China has <a href="https://www.propublica.org/article/how-china-built-a-twitter-propaganda-machine-then-let-it-loose-on-coronavirus">likely hacked</a>, stolen or purchased legitimate accounts.</p>
<p>Twitter hasn’t revealed exactly how it detected the state-sponsored accounts, presumably because this would give other states a “how-to” guide on circumventing the platform’s security barriers. </p>
<p>But according to a New York Times <a href="https://www.nytimes.com/2020/06/11/technology/twitter-chinese-misinformation.html?referringSource=articleShare">report</a>, one giveaway is when a user logs into many different accounts from the same web address. Twitter has also suggested that accounts able to post from inside China, where the platform is blocked, may be acting maliciously with government approval.</p>
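<p>That kind of signal is simple to approximate. Below is a minimal Python sketch, offered purely as an illustration under assumed inputs: it groups hypothetical login records by source address and flags addresses operating several distinct accounts. The records, field names and threshold are invented for demonstration and are not Twitter’s actual detection logic.</p>
<pre><code>from collections import defaultdict

# Hypothetical login records: (account_handle, source_address).
# Real platform telemetry is far richer; this only illustrates the
# "many accounts, one address" giveaway described above.
logins = [
    ("user_a", "203.0.113.7"),
    ("user_b", "203.0.113.7"),
    ("user_c", "203.0.113.7"),
    ("user_d", "198.51.100.23"),
]

def accounts_per_address(records):
    """Group distinct account handles by the address they logged in from."""
    by_address = defaultdict(set)
    for handle, address in records:
        by_address[address].add(handle)
    return by_address

SUSPICIOUS_THRESHOLD = 3  # illustrative cut-off only

for address, handles in accounts_per_address(logins).items():
    if len(handles) >= SUSPICIOUS_THRESHOLD:
        print(f"{address} operates {len(handles)} accounts: {sorted(handles)}")
</code></pre>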
<h2>Information warfare is a growing threat</h2>
<p>Australia’s Department of Home Affairs has <a href="https://www.theguardian.com/australia-news/2020/jun/11/home-affairs-flags-steps-to-help-australians-identify-fake-news-by-foreign-powers">warned</a> there’s a “realistic prospect” foreign actors could meddle in Australian politics, including in the next federal election – unless steps are taken to prevent this. </p>
<p>The government has warned of this as a future threat. But based on the available evidence, we contend disinformation is already being used to manipulate public debate in Australia.</p>
<p>A University of Oxford <a href="https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf">report</a> published last year suggested organised social media manipulation campaigns have occurred in 70 countries, including Australia. </p>
<p>Earlier this week, analysts at ASPI <a href="https://www.theguardian.com/australia-news/2020/jun/22/foreign-actors-targeted-facebook-users-during-australian-2019-election-thinktank-finds">reiterated how</a> Islamophobic and nationalist content was intentionally <a href="https://firstdraftnews.org/latest/tracking-anti-muslim-tactics-online-australias-election-misinformation/">spread online</a> during last year’s election campaign.</p>
<p>Perhaps the most infamous example of a large-scale disinformation campaign <a href="https://www.wired.com/story/did-russia-affect-the-2016-election-its-now-undeniable/">came from Russia</a> in 2016, when a coordinated campaign was deployed to meddle with the US presidential election. Like Russia, China now appears to be investing substantial resources into disinformation campaigns. </p>
<p>Australia should expect to see further complex attacks conducted by both foreign and internal agents. These may be foreign state-sponsored campaigns, or dirty tactics used on the electoral campaign trail.</p>
<p>During last summer’s horrific bushfires, a large number of Twitter bot accounts were found posting the hashtag #ArsonAttack to perpetuate the idea that the fires were largely attributable to arson rather than climate change. The false claims were <a href="https://www.theguardian.com/media/2020/mar/13/we-didnt-start-the-fire-news-corp-defends-false-arson-claims-that-spread-worldwide">taken up by News Corp publications</a>, which then <a href="https://www.nytimes.com/2020/01/08/world/australia/fires-murdoch-disinformation.html">influenced debate</a> surrounding the crisis. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/bushfires-bots-and-arson-claims-australia-flung-in-the-global-disinformation-spotlight-129556">Bushfires, bots and arson claims: Australia flung in the global disinformation spotlight</a>
</strong>
</em>
</p>
<hr>
<p>Such claims sow confusion among the public. They increase political polarisation, and erode trust in media and political institutions.</p>
<h2>The best defence is a collective one</h2>
<p>While we can hope Twitter builds on efforts to detect malicious accounts that spread lies, we can’t assume state-sponsored actors will sit back and do nothing in response. Governments have invested too much into such attacks, and campaigns have proven successful.</p>
<p>The most readily available means of defence, as with most contemporary cybercrime, is user education. Social media users of all political persuasions should be aware that what they’re seeing online may not be accurate, and should view it with a critical eye. </p>
<p>Some of us are better than others at differentiating between what is real and fake online, and can help filter out content that’s untrustworthy, unverified or plain wrong. <a href="https://theconversation.com/why-people-believe-in-conspiracy-theories-and-how-to-change-their-minds-82514">Simple ways to do this</a> include stating the facts (without specifically focusing on the myths) and offering explanations that align with the other person’s preexisting beliefs. </p>
<p>It’s also important to remember how little actions such as “liking” and “retweeting” content can further spread disinformation, regardless of intent.</p>
<p>Also, while the above steps help, they’re unlikely to completely insulate Australia from the potentially disastrous effects of future disinformation campaigns. We’ll need new solutions from both government and private industry.</p>
<p>Ideally, we’d like to see government regulation around disinformation. And although this hasn’t happened yet, the announcement of a government-run disinformation taskforce is at least one step in the right direction. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/coronavirus-anti-vaxxers-arent-a-huge-threat-yet-how-do-we-keep-it-that-way-138531">Coronavirus anti-vaxxers aren’t a huge threat yet. How do we keep it that way?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/141044/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Unlike the US, Australia hasn’t yet been hit by a large-scale disinformation campaign focussed on meddling with elections. But this is a ‘realistic prospect’ moving forward.Sarah Morrison, PhD Candidate, Swinburne University of TechnologyBelinda Barnet, Senior Lecturer in Media and Communications, Swinburne University of TechnologyJames Martin, Associate Professor in Criminology, Swinburne University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1348022020-04-01T04:20:10Z2020-04-01T04:20:10ZMeet ‘Sara’, ‘Sharon’ and ‘Mel’: why people spreading coronavirus anxiety on Twitter might actually be bots<figure><img src="https://images.theconversation.com/files/324472/original/file-20200401-66120-19cxqfx.jpg?ixlib=rb-1.1.0&rect=16%2C9%2C2180%2C1985&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Recently Facebook, Reddit, Google, LinkedIn, Microsoft, Twitter and YouTube <a href="https://techcrunch.com/2020/03/16/facebook-reddit-google-linkedin-microsoft-twitter-and-youtube-issue-joint-statement-on-misinformation/">committed to removing</a> coronavirus-related misinformation from their platforms. </p>
<p>COVID-19 is being described as <a href="https://www.vox.com/recode/2020/3/12/21175570/coronavirus-covid-19-social-media-twitter-facebook-google">the first major pandemic of the social media age</a>. In troubling times, social media helps distribute vital knowledge to the masses. Unfortunately, it also carries a flood of misinformation, much of which is spread through social media bots.</p>
<p>These fake accounts are common on Twitter, Facebook, and Instagram. They have one goal: to spread fear and fake news. </p>
<p>We witnessed this in the <a href="https://www.adweek.com/digital/as-2020-election-nears-twitter-bots-have-only-gotten-better-at-seeming-human/">2016 United States presidential elections</a> and in the arson rumours spread during the bushfire crisis, and we’re seeing it again in relation to the coronavirus pandemic.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/bushfires-bots-and-arson-claims-australia-flung-in-the-global-disinformation-spotlight-129556">Bushfires, bots and arson claims: Australia flung in the global disinformation spotlight</a>
</strong>
</em>
</p>
<hr>
<h2>Busy busting bots</h2>
<figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/323527/original/file-20200327-146695-8rp8pq.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/323527/original/file-20200327-146695-8rp8pq.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=637&fit=crop&dpr=1 600w, https://images.theconversation.com/files/323527/original/file-20200327-146695-8rp8pq.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=637&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/323527/original/file-20200327-146695-8rp8pq.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=637&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/323527/original/file-20200327-146695-8rp8pq.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=801&fit=crop&dpr=1 754w, https://images.theconversation.com/files/323527/original/file-20200327-146695-8rp8pq.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=801&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/323527/original/file-20200327-146695-8rp8pq.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=801&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">This figure shows the top Twitter hashtags tweeted by bots over 24 hours.</span>
<span class="attribution"><span class="source">Bot Sentinel</span></span>
</figcaption>
</figure>
<p>The exact scale of misinformation is difficult to measure. But its global presence can be felt through snapshots of Twitter bot involvement in COVID-19-related hashtag activity.</p>
<p><a href="https://botsentinel.com/">Bot Sentinel</a> is a website that uses machine learning to identify potential Twitter bots, using a score and rating. According to the site, <a href="https://botsentinel.com/trending-topics?date=2020-3-26&hour=18">on March 26</a> bot accounts were responsible for 828 counts of #coronavirus, 544 counts of #COVID19 and 255 counts of #Coronavirus hashtags within 24 hours.</p>
<p>These hashtags respectively took the 1st, 3rd and 7th positions of all top-trolled Twitter hashtags.</p>
<p>It’s important to note the actual number of coronavirus-related bot tweets is likely much higher, as Bot Sentinel only recognises hashtagged terms (such as #coronavirus) and wouldn’t pick up the same words written without the hash symbol, such as “coronavirus”, “COVID19” or “Coronavirus”. </p>
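<p>The difference is easy to see in a small Python sketch. The tweets and the two counting functions below are illustrative assumptions, not Bot Sentinel’s code; they simply show how a tracker that matches only hashtags misses the same word written without the hash symbol.</p>
<pre><code>import re

# Invented example tweets; only the counting logic matters here.
tweets = [
    "Stock up now! #coronavirus is spreading fast",
    "The coronavirus outbreak is worse than they admit",
    "Hospitals are overwhelmed #COVID19",
    "COVID19 cases exploding, stay scared",
]

HASHTAG = re.compile(r"#(\w+)")

def count_hashtag(posts, tag):
    """Count posts containing the literal hashtag, as a hashtag-based tracker would."""
    return sum(tag.lower() in (h.lower() for h in HASHTAG.findall(p)) for p in posts)

def count_keyword(posts, word):
    """Count posts mentioning the term anywhere, hashtagged or not."""
    return sum(word.lower() in p.lower() for p in posts)

print(count_hashtag(tweets, "coronavirus"))  # 1 -- hashtag occurrences only
print(count_keyword(tweets, "coronavirus"))  # 2 -- plain mentions are counted too
</code></pre>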
<h2>How are bots created?</h2>
<p>Bots are usually managed by automated programs called bot “campaigns”, and these are controlled by human users. The actual process of creating such a campaign is relatively simple. There are several <a href="https://digitalmarketinginstitute.com/en-au/blog/grow-your-business-with-social-bots">websites</a> that teach people how to do this for “marketing” purposes. In the underground hacker economy on the dark web, <a href="https://www.techrepublic.com/article/the-dark-web-where-coronavirus-fraud-profiteering-malware-and-scams-are-discussed/">such services are available for hire</a>. </p>
<p>While it’s difficult to attribute bots to the humans controlling them, the purpose of bot campaigns is obvious: create social disorder by spreading misinformation. This can increase public anxiety, frustration and anger against authorities in certain situations.</p>
<p><a href="https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf">A 2019 report</a> published by researchers from the Oxford Internet Institute revealed a worrying trend in organised “social media manipulation by governments and political parties”. They reported: </p>
<blockquote>
<p>Evidence of organised social media manipulation campaigns which have taken place in 70 countries, up from 48 countries in 2018 and 28 countries in 2017. In each country, there is at least one political party or government agency using social media to shape public attitudes domestically. </p>
</blockquote>
<h2>The modus operandi of bots</h2>
<p>Typically, in the context of COVID-19 messages, bots would spread misinformation through two main techniques.</p>
<p>The first involves <em>content creation</em>, wherein bots start new posts with pictures that validate or mirror existing worldwide trends. Examples include pictures of shopping baskets filled with food, or hoarders emptying supermarket shelves. This generates anxiety and confirms what people are reading from other sources.</p>
<p>The second technique involves <em>content augmentation</em>. In this, bots latch onto official government feeds and news sites to sow discord. They retweet alarming tweets or add false comments and information in a bid to stoke fear and anger among users. It’s common to see bots talking about a “frustrating event”, or some social injustice faced by their “loved ones”. </p>
<p>The example below shows a Twitter post from Queensland Health’s official twitter page, followed by comments from accounts named “Sharon” and “Sara” which I have identified as bot accounts. Many real users reading Sara’s post would undoubtedly feel a sense of injustice on behalf of her “mum”.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/324493/original/file-20200401-66130-1b1evyh.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/324493/original/file-20200401-66130-1b1evyh.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/324493/original/file-20200401-66130-1b1evyh.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=535&fit=crop&dpr=1 600w, https://images.theconversation.com/files/324493/original/file-20200401-66130-1b1evyh.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=535&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/324493/original/file-20200401-66130-1b1evyh.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=535&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/324493/original/file-20200401-66130-1b1evyh.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=672&fit=crop&dpr=1 754w, https://images.theconversation.com/files/324493/original/file-20200401-66130-1b1evyh.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=672&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/324493/original/file-20200401-66130-1b1evyh.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=672&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The official tweet from Queensland Health and the bots’ responses.</span>
</figcaption>
</figure>
<p>While we can’t be 100% certain these are bot accounts, many factors point to this very likely being the case. Our ability to accurately identify bots will get better as machine learning algorithms in programs such as Bot Sentinel improve.</p>
<h2>How to spot a bot</h2>
<p>To learn the characteristics of a bot, let’s take a closer look at Sharon’s and Sara’s accounts.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/324495/original/file-20200401-66155-9mwijy.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/324495/original/file-20200401-66155-9mwijy.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/324495/original/file-20200401-66155-9mwijy.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=521&fit=crop&dpr=1 600w, https://images.theconversation.com/files/324495/original/file-20200401-66155-9mwijy.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=521&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/324495/original/file-20200401-66155-9mwijy.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=521&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/324495/original/file-20200401-66155-9mwijy.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=655&fit=crop&dpr=1 754w, https://images.theconversation.com/files/324495/original/file-20200401-66155-9mwijy.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=655&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/324495/original/file-20200401-66155-9mwijy.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=655&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Screenshots of the accounts of ‘Sharon’ and ‘Sara’.</span>
</figcaption>
</figure>
<p>Both profiles lack human uniqueness, and display some telltale signs they may be bots (a rough scoring sketch follows the list below):</p>
<ul>
<li><p>they have no followers</p></li>
<li><p>they only recently joined Twitter</p></li>
<li><p>they have no last names, and have alphanumeric handles (such as Sara89629382) </p></li>
<li><p>they have only tweeted a few times</p></li>
<li><p>their posts have one theme: spreading alarmist comments</p></li>
</ul>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/324496/original/file-20200401-66148-1wd24hr.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/324496/original/file-20200401-66148-1wd24hr.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/324496/original/file-20200401-66148-1wd24hr.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=855&fit=crop&dpr=1 600w, https://images.theconversation.com/files/324496/original/file-20200401-66148-1wd24hr.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=855&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/324496/original/file-20200401-66148-1wd24hr.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=855&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/324496/original/file-20200401-66148-1wd24hr.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1075&fit=crop&dpr=1 754w, https://images.theconversation.com/files/324496/original/file-20200401-66148-1wd24hr.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1075&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/324496/original/file-20200401-66148-1wd24hr.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1075&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Bot ‘Sharon’ tried to rile others up through her tweets.</span>
</figcaption>
</figure>
<ul>
<li>they mostly follow news sites, government authorities, or human users who are highly influential in a certain subject (in this case, virology and medicine). </li>
</ul>
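<p>Those telltale signs can be expressed as a simple checklist. The Python sketch below scores a profile against them; the profile fields, thresholds and one-point-per-signal weighting are assumptions made for illustration, not a real bot-detection model such as Bot Sentinel’s.</p>
<pre><code>import re
from datetime import date

def bot_signals(profile, today=date(2020, 4, 1)):
    """Score a profile against the telltale signs listed above (illustrative only)."""
    signals = {
        "no_followers": profile["followers"] == 0,
        "recently_joined": (today - profile["joined"]).days < 60,
        "alphanumeric_handle": bool(re.search(r"\d{4,}$", profile["handle"])),
        "few_tweets": profile["tweet_count"] < 20,
    }
    return sum(signals.values()), signals

# A made-up profile resembling the accounts discussed above.
score, detail = bot_signals({
    "handle": "Sara89629382",
    "followers": 0,
    "joined": date(2020, 3, 20),
    "tweet_count": 5,
})
print(score, detail)  # all four signals fire for this invented profile
</code></pre>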
<p>My investigation into Sharon revealed the bot had attempted to exacerbate anger on a news article about the federal government’s coronavirus response. </p>
<p>The language: “Health can’t wait. Economic (sic) can” indicates a potentially non-native English speaker. </p>
<p>It seems Sharon was trying to stoke the flames of public anger by calling out “bad decisions”.</p>
<p>Looking through Sharon’s tweets, I discovered Sharon’s friend “Mel”, another bot with its own programmed agenda. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/324498/original/file-20200401-66163-1synr48.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/324498/original/file-20200401-66163-1synr48.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/324498/original/file-20200401-66163-1synr48.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=554&fit=crop&dpr=1 600w, https://images.theconversation.com/files/324498/original/file-20200401-66163-1synr48.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=554&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/324498/original/file-20200401-66163-1synr48.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=554&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/324498/original/file-20200401-66163-1synr48.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=697&fit=crop&dpr=1 754w, https://images.theconversation.com/files/324498/original/file-20200401-66163-1synr48.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=697&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/324498/original/file-20200401-66163-1synr48.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=697&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Bot ‘Mel’ spread false information about a possible delay in COVID-19 results, and retweeted hateful messages.</span>
</figcaption>
</figure>
<p>What was concerning was that a human user was engaging with Mel.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/324503/original/file-20200401-66155-40yl3n.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/324503/original/file-20200401-66155-40yl3n.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/324503/original/file-20200401-66155-40yl3n.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=803&fit=crop&dpr=1 600w, https://images.theconversation.com/files/324503/original/file-20200401-66155-40yl3n.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=803&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/324503/original/file-20200401-66155-40yl3n.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=803&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/324503/original/file-20200401-66155-40yl3n.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1009&fit=crop&dpr=1 754w, https://images.theconversation.com/files/324503/original/file-20200401-66155-40yl3n.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1009&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/324503/original/file-20200401-66155-40yl3n.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1009&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An account that seemed to belong to a real Twitter user began engaging with ‘Mel’.</span>
</figcaption>
</figure>
<h2>You can help tackle misinformation</h2>
<p>Currently, it’s simply too hard to attribute the true source of bot-driven misinformation campaigns. This can only be achieved with the full cooperation of social media companies. </p>
<p>The motives of a bot campaign can range from creating mischief to exercising geopolitical control. And some researchers still can’t agree on what exactly constitutes a “bot”. </p>
<p>But one thing is for sure: Australia needs to develop legislation and mechanisms to detect and stop these automated culprits. Organisations running legitimate social media campaigns should dedicate time to using a <a href="https://www.rand.org/research/projects/truth-decay/fighting-disinformation/search.html">bot detection tool</a> to weed out and report fake accounts. </p>
<p>And as a social media user in the age of the coronavirus, you can also help by reporting suspicious accounts. The last thing we need is malicious parties making an already worrying crisis worse.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/you-can-join-the-effort-to-expose-twitter-bots-124377">You can join the effort to expose Twitter bots</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/134802/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ryan Ko receives funding from CSIRO Data61 and The University of Queensland. He previously received funding from New Zealand's Ministry of Business Innovation and Employment and the New Zealand Law Foundation. </span></em></p>According to Bot Sentinel, #coronavirus and #COVID19 are among the top hashtags being used by Twitter bot accounts.Ryan Ko, Chair Professor and Director of Cyber Security, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1333352020-03-30T12:14:47Z2020-03-30T12:14:47ZSocial media companies are taking steps to tamp down coronavirus misinformation – but they can do more<figure><img src="https://images.theconversation.com/files/322742/original/file-20200324-136168-2qhm1m.jpg?ixlib=rb-1.1.0&rect=3%2C3%2C2029%2C1349&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Facebook, the least trusted tech company, has taken the lead in fighting coronavirus misinformation.</span> <span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Israel-Facebook/e3c70941f7d84ef38df81454adc4d36a/17/0">AP Photo/Ben Margot</a></span></figcaption></figure><p>As we practice social distancing, our <a href="https://www.adweek.com/digital/reddit-sees-traffic-surge-during-coronavirus-outbreak/">embrace of social media gets only tighter</a>. The major social media platforms have emerged as the <a href="https://www.nytimes.com/2020/03/23/technology/coronavirus-facebook-news.html">critical information purveyors</a> for influencing the choices people make during the expanding pandemic. There’s also reason for worry: the World Health Organization is <a href="https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200202-sitrep-13-ncov-v3.pdf">concerned about an “infodemic,”</a> a glut of accurate and inaccurate information about COVID-19.</p>
<p>The social media companies have been pilloried in recent years for practicing “<a href="https://news.harvard.edu/gazette/story/2019/03/harvard-professor-says-surveillance-capitalism-is-undermining-democracy/">surveillance capitalism</a>” and being a societal menace. The pandemic could be their moment of redemption. How are they rising to this challenge? </p>
<p>Surprisingly, Facebook, which had earned the reputation of being the <a href="https://www.vox.com/2018/4/10/17220060/facebook-trust-major-tech-company">least trusted tech company</a> in recent years, has led with the strongest, most consistent actions during the unfolding COVID-19 crisis. Twitter and Google-owned YouTube have taken steps as well to stem the tide of misinformation. Yet, all three could do better.</p>
<p>As <a href="https://fletcher.tufts.edu/people/bhaskar-chakravorti">an economist who tracks digital technology’s use worldwide</a> at The Fletcher School at Tufts University, I’ve identified three important ways to evaluate the companies’ responses to the pandemic. Are they informing while simultaneously curtailing misinformation? Are they enforcing responsible advertising policies? And are they providing helpful data to public health authorities without compromising privacy? </p>
<h2>Tackling the infodemic</h2>
<p>Social media companies can block, demote or elevate posts. According to Facebook, the average user <a href="https://www.pbs.org/newshour/show/how-facebooks-news-feed-can-be-fooled-into-spreading-misinformation">sees only 10% of their News Feed</a> and the platforms determine what users see <a href="https://www.pbs.org/newshour/show/how-facebooks-news-feed-can-be-fooled-into-spreading-misinformation">by reordering how stories appear</a>. This means demoting and elevating posts could be as essential as blocking them outright.</p>
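<p>Demotion is, in effect, a re-ranking step. The minimal Python sketch below illustrates the idea under assumed inputs: each post carries an engagement score and a fact-check flag, and flagged posts are multiplied by a penalty so they sink in the feed without being removed outright. The scores, the flag and the penalty factor are invented for illustration and do not describe any platform’s actual ranking system.</p>
<pre><code># Hypothetical posts with an engagement score and a fact-checker flag.
posts = [
    {"id": 1, "score": 9.0, "flagged": True},   # viral, but flagged as misinformation
    {"id": 2, "score": 4.0, "flagged": False},
    {"id": 3, "score": 6.5, "flagged": False},
]

DEMOTION_FACTOR = 0.2  # illustrative penalty applied to flagged posts

def ranked_feed(items):
    """Reorder posts so flagged ones sink rather than being blocked."""
    def effective_score(post):
        return post["score"] * (DEMOTION_FACTOR if post["flagged"] else 1.0)
    return sorted(items, key=effective_score, reverse=True)

for post in ranked_feed(posts):
    print(post["id"], post["score"], post["flagged"])
# The flagged post drops below the unflagged ones despite its higher raw score.
</code></pre>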
<p>Blocking is the most difficult decision because it bumps up against First Amendment rights. Facebook, in particular, has recently been criticized for its <a href="https://www.nytimes.com/2020/01/09/technology/facebook-political-ads-lies.html">unwillingness to block false political ads</a>. But Facebook has had the most clear-cut policy on COVID-19 misinformation. It relies on third-party fact-checkers and health authorities flagging problematic content, and <a href="https://about.fb.com/news/2020/03/coronavirus/#limiting-misinfo">removes posts that fail the tests</a>. It also <a href="https://about.fb.com/news/2020/03/coronavirus/#limiting-misinfo">blocks or restricts hashtags that spread misinformation</a> on its sister platform, Instagram.</p>
<p>Twitter and YouTube have taken less decisive positions. Twitter says it has acted to <a href="https://blog.twitter.com/en_us/topics/company/2020/covid-19.html">protect against malicious behaviors</a>. Del Harvey, Twitter’s vice president of trust and safety, told Axios that the company will “remove any pockets of smaller coordinated attempts to <a href="https://www.axios.com/unintentional-coronavirus-misinformation-new-threat-71af67ec-1520-4283-b017-b094b45f84cb.html">distort or inorganically influence the conversation</a>.”
YouTube <a href="https://www.bloomberg.com/news/articles/2020-03-10/dr-google-scrubs-coronavirus-misinformation-on-search-youtube">removes videos claiming to prevent infections</a>. However, neither company has a transparent blocking policy founded on solid fact-checking. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/322746/original/file-20200324-155666-q86ans.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/322746/original/file-20200324-155666-q86ans.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=429&fit=crop&dpr=1 600w, https://images.theconversation.com/files/322746/original/file-20200324-155666-q86ans.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=429&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/322746/original/file-20200324-155666-q86ans.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=429&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/322746/original/file-20200324-155666-q86ans.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=539&fit=crop&dpr=1 754w, https://images.theconversation.com/files/322746/original/file-20200324-155666-q86ans.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=539&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/322746/original/file-20200324-155666-q86ans.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=539&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The wave of misinformation on social media includes dubious preventatives and cures for COVID-19.</span>
<span class="attribution"><a class="source" href="https://flickr.com/photos/jagrap/5681679855/">Robert Patton/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>While all three platforms are demoting problematic content and elevating content from authoritative sources, the absence of consistent fact-checking standards has created a gray area where misinformation can slip through, particularly for Twitter. Panic-producing tweets claimed prematurely <a href="https://www.cnn.com/2020/03/14/tech/twitter-coronavirus-new-york-misinformation/index.html">that New York was under lockdown</a>, and bots or fake accounts <a href="https://www.cnn.com/2020/03/14/tech/twitter-coronavirus-new-york-misinformation/index.html">have slipped in rumors</a>.</p>
<p>Even the principle of deferring to authoritative sources can cause problems. For example, the widely read @realDonaldTrump has <a href="https://www.vox.com/policy-and-politics/2020/3/9/21171582/coronavirus-trump-tweets-stock-market-denial">tweeted misinformation</a>. Influential figures who are not officially designated authoritative sources have also managed to circulate misinformation. Elon Musk, founder of Tesla and SpaceX, tweeted a false assertion about the coronavirus to 32 million followers and Twitter has <a href="https://www.bbc.com/news/technology-51975377">declined to remove his tweet</a>. John McAfee, founder of the eponymous security solutions company, also tweeted <a href="https://archive.vn/U3nj3">a false assertion about the coronavirus</a>. That tweet was removed but not before it had been widely shared.</p>
<h2>Harnessing influence for good</h2>
<p>Besides blocking and re-ordering posts, the social media companies must also ask how people are experiencing their platforms and interpreting the information they encounter there. Social media platforms are meticulously designed to <a href="https://www.wired.com/story/phone-addiction-formula/">anticipate the user’s experience, hold their attention and influence actions</a>. It’s essential that the companies apply similar techniques to influence positive behavior in response to COVID-19. </p>
<p>Consider some examples across each of the three platforms of failing to influence positive behaviors by ignoring the user experience. </p>
<p>For Facebook users, private messaging is, increasingly, a <a href="https://www.nbcnews.com/tech/social-media/facebook-groups-coronavirus-misinformation-thrives-despite-broader-crackdown-n1151466">key source of social influence</a> and information about the coronavirus. Because these groups often bring together more trusted networks – family, friends, classmates – there is a greater risk that people will turn to them during anxious times and become susceptible to misinformation. Facebook-owned Messenger and WhatsApp – both closed platforms in contrast to Twitter – are of particular concern since the company’s <a href="https://www.nbcnews.com/tech/social-media/facebook-groups-coronavirus-misinformation-thrives-despite-broader-crackdown-n1151466">ability to monitor content on these platforms is still limited</a>. </p>
<p>For Twitter, it’s essential to track <a href="https://business.twitter.com/en/blog/secrets-of-social-media-influencers.html">“influencers,” or people with many followers</a>. Content shared by these users has greater impact and ought to pass through additional filters. </p>
<p>YouTube has taken the approach of pairing misleading coronavirus content with a link to an alternative authoritative source, such as the Centers for Disease Control and Prevention or World Health Organization. This juxtaposition can have the opposite of the intended effect. A video from a non-authoritative individual <a href="https://www.buzzfeednews.com/article/ryanhatesthis/the-most-popular-youtube-videos-about-the-coronavirus-are">appears with the CDC or WHO logo beneath it</a>, which could unintentionally give viewers the impression that those public health authorities have approved the videos.</p>
<h2>Responsible advertising</h2>
<p>There is money to be made from ads offering products related to the outbreak. However, some of those ads are not in the public interest. Facebook set a standard by <a href="https://about.fb.com/news/2020/03/coronavirus/#banning-ads">banning ads for medical face masks</a> and <a href="https://www.searchenginejournal.com/google-facebook-ban-ads-for-face-masks-as-coronavirus-spreads/354402/#close">Google followed suit</a>, <a href="https://blog.twitter.com/en_us/topics/company/2020/covid-19.html">as did Twitter</a>.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/323710/original/file-20200327-146712-1bbspbs.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/323710/original/file-20200327-146712-1bbspbs.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=599&fit=crop&dpr=1 600w, https://images.theconversation.com/files/323710/original/file-20200327-146712-1bbspbs.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=599&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/323710/original/file-20200327-146712-1bbspbs.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=599&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/323710/original/file-20200327-146712-1bbspbs.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=753&fit=crop&dpr=1 754w, https://images.theconversation.com/files/323710/original/file-20200327-146712-1bbspbs.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=753&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/323710/original/file-20200327-146712-1bbspbs.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=753&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Social media companies are giving the CDC and WHO free advertising to promote coronavirus-related messages like this WHO Facebook post.</span>
<span class="attribution"><a class="source" href="https://www.facebook.com/WHO/photos/a.167668209945237/3015563535155676/">World Health Organization</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>All three companies have offered free ads to appropriate public health and nonprofit organizations. Facebook has <a href="https://www.facebook.com/4/posts/10111615249124441/?d=n">offered unlimited ads to the WHO</a>, while Google has made a <a href="https://blog.google/inside-google/company-announcements/coronavirus-covid19-response/">similar but less open-ended offer</a> and Twitter offers <a href="https://blog.twitter.com/en_us/topics/company/2020/covid-19.html">Ads for Good credits</a> to fact-checking nonprofit organizations and health information disseminators.</p>
<p>There have been some <a href="https://www.axios.com/unintentional-coronavirus-misinformation-new-threat-71af67ec-1520-4283-b017-b094b45f84cb.html">policy reversals</a>. YouTube initially blocked ads meant to profit from content related to COVID-19, but then allowed some ads that follow the company’s guidelines.</p>
<p>Overall, the companies have responded to the crisis, but their policies on ads vary, have changed and have left loopholes: Users could <a href="https://www.businessinsider.com/face-masks-ads-being-served-by-google-after-ban-2020-3">still see ads for face masks</a> served by Google even after it had officially banned them. Clearer industry-wide principles and firm policies can help keep businesses and people from exploiting the outbreak for commercial gain. </p>
<h2>Data to track the outbreak</h2>
<p>Social media can be a source of essential data for mapping the spread of the disease and managing it. The key is that the companies protect user privacy, recognize the limits of data analysis and not oversell it. <a href="https://ij-healthgeographics.biomedcentral.com/articles/10.1186/s12942-020-00202-8">Geographic information systems that build on data</a> from social media and other sources have already become key to mapping the worldwide spread of COVID-19. Facebook is <a href="https://about.fb.com/news/2020/03/coronavirus/#empowering-partners">collaborating with researchers</a> at Harvard and National Tsing Hua University in Taiwan by sharing data about people’s movements – stripped of identifying information – and high-resolution population density maps. </p>
<p>Search and location data on YouTube and its parent, Google, are invaluable trend-trackers. Google hasn’t offered its trends analyses for COVID-19 in any systematic manner to date, perhaps because of the <a href="https://www.wired.com/2015/10/can-learn-epic-failure-google-flu-trends/">failure of an earlier Google Flu Trends program</a> that attempted to predict the paths of influenza transmission and completely missed the peak of the 2013 flu season. </p>
<p><a href="https://www.thinkwithgoogle.com/">Think with Google</a>, the company’s current data analytics service for marketers, offers a powerful example of insights that can be gleaned from Google’s data. It could help with <a href="https://www.nytimes.com/2020/03/19/us/coronavirus-location-tracking.html">projects for contact tracing and social distancing compliance</a>, provided it’s done in a way that respects user privacy. For example, as users’ locations are tagged along with their posts, the people they’ve met and the places they’ve been can help determine whether people on the whole or in a location are complying with public health safety orders and guidelines. </p>
<p>Moreover, data shared by companies – stripped of identifying information – could be used by independent researchers. For example, researchers could use Facebook-owned Instagram and CrowdTangle to <a href="https://apps.crowdtangle.com/public-hub/covid19">correlate travelers’ movements to COVID-19 hotspots</a> with user conversations to locate sources of transmission. <a href="https://sites.tufts.edu/digitalplanet/">Research teams I direct</a> have been analyzing coronavirus-related Twitter hashtags to identify the primary misinformation sources to detect patterns.</p>
<p>The expanding footprint of the pandemic and its consequences are evolving quickly. To their credit, the social media companies have attempted to respond quickly as well. Yet, they can do more. This could be their time to rebuild trust with the public and with regulators, but the window to make the right choices is narrow. Their own futures and the futures of millions may depend on it.</p>
<p>[<em>You need to understand the coronavirus pandemic, and we can help.</em> <a href="https://theconversation.com/us/newsletters?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=upper-coronavirus-help">Read our newsletter</a>.]</p>
<p class="fine-print"><em><span>Bhaskar Chakravorti has founded and directs the Institute for Business in the Global Context at Fletcher/Tufts that has received funding from Mastercard, Microsoft, the Gates Foundation, the Rockefeller Foundation and the Onassis Foundation. He is a Non-Resident Senior Fellow at Brookings India and a Senior Advisor on Digital Inclusion at the Mastercard Center for Inclusive Growth.</span></em></p>Facebook, Google and Twitter are stepping up to block misinformation and promote accurate information about the coronavirus. Their track records on self-policing are poor. The results so far are mixed.Bhaskar Chakravorti, Dean of Global Business, The Fletcher School, Tufts UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1263492019-11-05T21:07:02Z2019-11-05T21:07:02ZPersonal data isn’t the ‘new oil,’ it’s a way to manipulate capitalism<figure><img src="https://images.theconversation.com/files/300098/original/file-20191104-88394-pm2a4h.jpg?ixlib=rb-1.1.0&rect=686%2C166%2C3940%2C2400&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Manipulating our own personal data can allow us to manipulate capitalism.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>My <a href="https://www.academia.edu/39114905/Automated_Neoliberalism_Bureaucracy_and_the_Organization_of_Markets_in_Technoscientific_Capitalism">recent research</a> increasingly focuses on how individuals can and do manipulate, or “game,” contemporary capitalism. It involves what social scientists call <a href="http://www.qualres.org/HomeRefl-3703.html">reflexivity</a> and physicists call the <a href="https://ieeexplore.ieee.org/document/8423983">observer effect</a>. </p>
<p>Reflexivity can be summed up as the way our knowledge claims end up changing the world and the behaviours we seek to describe and explain.</p>
<p>Sometimes this is self-fulfilling. A knowledge claim — like “everyone is selfish,” for example — can change social institutions and social behaviours so that we actually end up acting <em>more</em> selfish, thereby enacting the original claim. </p>
<p>Sometimes it has the opposite effect. A knowledge claim can change social institutions and behaviours altogether so that the original claim is no longer correct — for example, on hearing the claim that people are selfish, we might strive to be more altruistic. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-art-and-science-of-analyzing-big-data-112933">The art and science of analyzing Big Data</a>
</strong>
</em>
</p>
<hr>
<p>Of particular interest to me is the political-economic understanding and treatment of our personal data in this reflexive context. We’re constantly changing as individuals as a result of learning about the world, so any data produced about us always changes us in some way or another, rendering that data inaccurate. So how can we trust personal data that, by definition, changes after it’s produced? </p>
<p>This ambiguity and fluidity of personal data is a central concern for data-driven tech firms and their business models. David Kirkpatrick’s 2010 book <em><a href="https://www.simonandschuster.com/books/The-Facebook-Effect/David-Kirkpatrick/9781439102121">The Facebook Effect</a></em> dedicates a whole chapter to exploring Mark Zuckerberg’s design philosophy that “you have one identity” — from now unto eternity — and anything else is evidence of a lack of personal integrity. </p>
<p>Facebook’s terms of service stipulate that users must do things like: “Use the same name that you use in everyday life” and “provide accurate information about yourself.” Why this emphasis? Well, it’s all about the monetization of our personal data. You cannot change or alter yourself in Facebook’s world view, largely because it would disrupt the data on which their algorithms are based. </p>
<h2>Drilling for data</h2>
<p>Treating personal data this way seems to underscore the oft-used metaphor that it is the “new oil.” Examples include a 2014 <a href="https://www.wired.com/insights/2014/07/data-new-oil-digital-economy/"><em>Wired</em> article</a> likening data to “an immensely, untapped valuable asset” and a 2017 cover of <em><a href="https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data">The Economist</a></em> showing various tech companies drilling in a sea of data. Even though people <a href="https://ftalphaville.ft.com/2019/05/08/1557318291000/Data-is-not-the-new-oil/">have criticized</a> this metaphor, it has come to define public debate about the future of personal data and the expectation that it’s the resource of our increasingly <a href="https://www.cigionline.org/articles/economics-data-implications-data-driven-economy">data-driven economies</a>. </p>
<p>Personal data are valued primarily because data can be turned into a <a href="https://www.theglobeandmail.com/business/commentary/article-we-must-consider-what-can-happen-if-our-personal-data-become-a-private/">private asset</a>. <a href="https://journals.sagepub.com/doi/full/10.1177/0162243916661633">This assetization</a> process, however, has significant implications for the political and societal choices and the future we get to make or even imagine. </p>
<h2>We don’t own our data</h2>
<p>Personal data reflect our web searches, emails, tweets, where we walk, videos we watch, etc. We don’t own our personal data though; whoever processes it ends up owning it, which means giant monopolies like Google, Facebook and Amazon. </p>
<p>But owning data is not enough because the value of data derives from its use and its flow. And this is how personal data are turned into assets. Your personal data are owned as property, and the revenues from their use and flow are captured and capitalized by that owner. </p>
<p>As noted above, the use of personal data is reflexive — its owners recognize how their own actions and claims affect the world, and then have the capacity and desire to act upon this knowledge to change the world. With personal data, its owners — Google, Facebook, Amazon, for example — can claim that they will use it in specific ways, creating self-reinforcing expectations that prioritize future revenues.</p>
<p>They know that investors — and others — will act <a href="https://www.theglobeandmail.com/business/commentary/article-five-reasons-canadas-digital-charter-will-be-a-bust-before-it-even/">on those expectations</a> (for example, by investing in them), and they know that they can produce self-reinforcing effects, like returns, if they can lock those investors, as well as governments and society, into pursuing those expectations. </p>
<p>In essence, they can try to game capitalism and lock us into the expectations that benefit them at the expense of everyone else.</p>
<h2>The scourge of click farms</h2>
<p>What are known as <a href="https://www.youtube.com/watch?v=IwjCAM0XxzE">click farms</a> are a good example of this gaming of capitalism. </p>
<p>A click farm is a room with shelves containing thousands of cellphones where workers are paid to imitate real internet users by clicking on promoted links, or viewing videos, or following social media accounts — basically, by producing “personal” data. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/IwjCAM0XxzE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A video on how click farms work by France24.</span></figcaption>
</figure>
<p>And while they might seem seedy, it’s worth remembering that blue-chip companies <a href="https://variety.com/2019/digital/news/facebook-settlement-video-advertising-lawsuit-40-million-1203361133/">like Facebook</a> have been sued by advertisers for inflating the video viewing figures on their platforms.</p>
<p>More significantly, <a href="http://nymag.com/intelligencer/2018/12/how-much-of-the-internet-is-fake.html">a 2018 article in <em>New York Magazine</em></a> pointed out that half of internet traffic is now made up of bots watching other bots clicking on adverts on bot-generated websites designed to convince yet more bots that all of this is creating some sort of value. And it does, weirdly, create value if you look at the capitalization of <a href="https://kenney.faculty.ucdavis.edu/wp-content/uploads/sites/332/2018/11/Unicorns-Chesire-cats-and-new-dilemmas-of-entrepreneurial-finance-1.pdf">technology “unicorns”</a>.</p>
<h2>Are we the asset?</h2>
<p>Here is the rub though: Is it the personal data that is the asset? Or is it actually us? </p>
<p>And this is where the really interesting consequences of treating personal data as a private asset arise for the future of capitalism. </p>
<p>If it’s us, the individuals, who are the assets, then <a href="https://www.academia.edu/39114905/Automated_Neoliberalism_Bureaucracy_and_the_Organization_of_Markets_in_Technoscientific_Capitalism">our reflexive</a> understanding of this and its implications — in other words, the awareness that everything we do can be mined to target us with adverts and exploit us through personalized pricing or <a href="http://www.superrewards.com/micro-transaction">micro-transactions</a> — means that we can, do and will knowingly alter the way we behave in a deliberate attempt to game capitalism too. </p>
<p>Just think of all those people who fake their social media selves. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/300109/original/file-20191104-88409-17p2jj7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/300109/original/file-20191104-88409-17p2jj7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=316&fit=crop&dpr=1 600w, https://images.theconversation.com/files/300109/original/file-20191104-88409-17p2jj7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=316&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/300109/original/file-20191104-88409-17p2jj7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=316&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/300109/original/file-20191104-88409-17p2jj7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=398&fit=crop&dpr=1 754w, https://images.theconversation.com/files/300109/original/file-20191104-88409-17p2jj7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=398&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/300109/original/file-20191104-88409-17p2jj7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=398&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">We have the ability to alter the way we behave online to game capitalism ourselves.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>On the one hand, we can see some of the consequences of our gaming of capitalism in the unfolding political scandals surrounding Facebook dubbed <a href="https://www.ft.com/content/76578fba-fca1-11e8-ac00-57a2a826423e">the “techlash.”</a> We know data can be gamed, leaving us with no idea about what data to trust anymore.</p>
<p>On the other hand, we have no idea what ultimate consequences will flow from all the small lies we tell and retell thousands of times across multiple platforms.</p>
<p>Personal data is nothing like oil — it’s far more interesting and far more likely to change our future in ways we cannot imagine at present. And whatever the future holds, we need to start thinking about ways to govern this reflexive quality of personal data as it’s increasingly turned into the private assets that are meant to drive our futures.</p>
<p class="fine-print"><em><span>Kean Birch receives funding from Social Sciences and Humanities Research Council of Canada. </span></em></p>Personal data is valued primarily because data can be turned into a private asset. That has significant implications for political and societal choices.Kean Birch, Associate Professor, Science and Technology Studies, York University, CanadaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1234302019-09-18T12:59:16Z2019-09-18T12:59:16ZMalicious bots and trolls spread vaccine misinformation – now social media companies are fighting back<figure><img src="https://images.theconversation.com/files/292866/original/file-20190917-19059-12esxxq.jpg?ixlib=rb-1.1.0&rect=1495%2C322%2C5681%2C4465&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">At least half of parents of young children report having encountered negative messages about vaccines on social media.</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/UH-xs-FizTk">Alexander Dummer/Unsplash</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>Social media have become one of the preeminent <a href="https://doi.org/10.1542/peds.2017-1117">ways of disseminating accurate information about vaccines</a>. However, a lot of the vaccine information propagated across social media in the United States has been <a href="https://www.infectiousdiseaseadvisor.com/home/topics/prevention/social-medicine-the-effect-of-social-media-on-the-anti-vaccine-movement/">inaccurate</a> or <a href="https://www.ama-assn.org/delivering-care/public-health/stopping-scourge-social-media-misinformation-vaccines">misleading</a>. At a time when <a href="https://www.nature.com/articles/s41390-019-0354-3">vaccine-preventable diseases</a> are <a href="https://www.ibms.org/resources/news/vaccine-preventable-diseases-on-the-rise/">on the rise</a>, vaccine misinformation has become a <a href="https://www.aappublications.org/news/2019/06/06/measles060619">cause of concern</a> to public health officials.</p>
<p>A 2018 study showed that <a href="https://doi.org/10.2105/AJPH.2018.304567">a lot of anti-vaccine information</a> is generated by <a href="https://www.washingtonpost.com/opinions/as-if-bots-werent-bad-enough-already-now-theyre-anti-vaccine/2018/08/28/a945efa0-aa2d-11e8-b1da-ff7faa680710_story.html?noredirect=on">malicious automated programs</a> – known as bots – and <a href="https://www.nytimes.com/2018/08/23/health/russian-trolls-vaccines.html">online trolls</a>. In a striking parallel with the <a href="https://www.cyberscoop.com/russian-twitter-bots-laid-dormant-for-months-before-impersonating-activists/">2016 presidential campaign</a> and the <a href="https://www.cnbc.com/2019/02/04/twitter-bots-were-more-active-than-previously-known-during-2018-midterms-study.html">2018 midterm elections</a>, some <a href="https://www.hsph.harvard.edu/ecpe/vaccines-social-media-spread-misinformation/">vaccine misinformation</a> on American social media has been <a href="https://www.historyofvaccines.org/content/blog/anti-vaccine-russian-trolls">traced back to Russia</a>.</p>
<p>At Saint Louis University’s <a href="https://www.slu.edu/law/health/index.php">Center for Health Law Studies</a>, I monitor <a href="https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=2667484">legal and policy responses to vaccine misinformation</a>. Now platforms like Twitter, Facebook and Pinterest are developing strategies to address anti-vaccine bots and to try to reduce their reach in the United States.</p>
<h2>Vaccine misinformation is all over social media</h2>
<p>“<a href="http://origin.who.int/immunization/research/forums_and_initiatives/1_RButler_VH_Threat_Child_Health_gvirf16.pdf">Vaccine hesitancy</a>” is what public health officials call the “delay in acceptance or refusal of vaccines” despite their availability. The World Health Organization has classified vaccine hesitancy as one of 10 big <a href="https://www.who.int/emergencies/ten-threats-to-global-health-in-2019">threats to global health</a> in 2019, on the list with air pollution, heart disease, cancer and pandemic outbreaks due to viruses like Ebola.</p>
<p>In recent years, social media platforms have become effective vehicles for conveying <a href="https://www.kff.org/news-summary/false-misleading-information-on-vaccines-must-be-removed-from-social-media-to-prevent-hesitancy-experts-at-wha-side-event-say/">inaccurate information</a> about vaccines, <a href="https://www.infectiousdiseaseadvisor.com/home/topics/prevention/social-medicine-the-effect-of-social-media-on-the-anti-vaccine-movement/">amplifying</a> anti-vaccine movements and giving <a href="https://www.infectiousdiseaseadvisor.com/home/topics/prevention/social-medicine-the-effect-of-social-media-on-the-anti-vaccine-movement/">greater visibility</a> to scientifically unsound data.</p>
<p>In a 2019 experiment, several journalists searched the term “vaccine” on Facebook. What came back was predominantly <a href="https://www.rsph.org.uk/uploads/assets/uploaded/f8cf580a-57b5-41f4-8e21de333af20f32.pdf">anti-vaccine content</a>, even though the vast majority of parents – <a href="https://www.rsph.org.uk/uploads/assets/uploaded/f8cf580a-57b5-41f4-8e21de333af20f32.pdf">91% in one survey</a> – are pro-vaccine.</p>
<p>One study by the Royal Society for Public Health in the U.K. found that 41% of parents using social media reported having encountered <a href="https://www.rsph.org.uk/uploads/assets/uploaded/f8cf580a-57b5-41f4-8e21de333af20f32.pdf">“negative messages” related to vaccination</a>. The number increased to 50% among parents of children younger than 5.</p>
<iframe src="https://me.me/embed/i/2204293" width="100%" height="425" frameborder="0" class="meme-embed" style="max-width:100%;margin:0 auto;" allowfullscreen=""></iframe>
<p>via <a href="https://me.me">MEME</a></p>
<p>Memes and other eye-catching visuals can also help <a href="https://www.historyofvaccines.org/content/blog/anti-vaccine-russian-trolls">propagate the idea</a> that vaccines are unnecessary or harmful, without any reference to scientific or medical data.</p>
<p>Anyone with access to a computer can easily spread inaccurate information about vaccines through social media.</p>
<p>But bots trolling social media can accomplish this goal at a massive level, as they have been doing in the United States <a href="https://doi.org/10.2105/AJPH.2018.304567">at least since 2014</a>. </p>
<h2>Malicious bots targeting vaccine info</h2>
<p>Bots account for a large percentage of online activity overall. Calculations suggest that <a href="https://thenextweb.com/security/2019/04/17/bots-drove-nearly-40-of-internet-traffic-last-year-and-the-naughty-ones-are-getting-smarter/">between 40%</a> and <a href="https://www.theatlantic.com/technology/archive/2017/01/bots-bots-bots/515043/">52% of all internet traffic</a> is automated. A study analyzing online bot activity in 2018 estimated that <a href="https://thenextweb.com/security/2019/04/17/bots-drove-nearly-40-of-internet-traffic-last-year-and-the-naughty-ones-are-getting-smarter/">20.4% of bots were malicious</a>. Researchers estimate that between 9% and 15% of active Twitter accounts, for instance, are <a href="https://arxiv.org/pdf/1703.03107.pdf">run by bots</a>, instead of people.</p>
<p>A 2018 <a href="https://doi.org/10.2105/AJPH.2018.304567">study analyzing Twitter data</a> examined the role of <a href="https://www.historyofvaccines.org/content/blog/anti-vaccine-russian-trolls">bots and Russian trolls</a> in spreading vaccine misinformation. Researchers looked at over 1.7 million vaccine-related tweets between July 2014 and September 2017. Accounts associated with these two categories tweeted at a higher rate about vaccines than average users. While there are no published studies about other social media, researchers have <a href="https://www.cnn.com/2018/08/23/health/russia-trolls-vaccine-debate-study/index.html">warned of similar activity</a> on Facebook and YouTube. </p>
<p>In the case of Twitter, there seem to be at least two separate goals behind spreading misleading news about vaccines. Most vaccine-focused bots are deployed with the direct goal of spreading vaccine misinformation, presumably with the purpose of amplifying anti-vaccine views.</p>
<p>But content originating in Russia conveys both pro- and anti-vaccine messages. This is part of a broader strategy aimed at <a href="https://foreignpolicy.com/2019/04/09/in-the-united-states-russian-trolls-are-peddling-measles-disinformation-on-twitter/">sowing discord</a> in the U.S. by stirring up conflict around divisive topics.</p>
<p>Some Russian tweets identified in the study used the Twitter #vaccinateUS hashtag. Of all the #vaccinateUS tweets that had Russian sources, <a href="https://doi.org/10.2105/AJPH.2018.304567">43% were pro-vaccine, 38% were anti-vaccine and 19% were neutral</a>. A pro-vaccine one <a href="https://www.reuters.com/article/us-health-vaccines-russia-trolls/russian-trolls-fan-flames-in-u-s-vaccine-debate-idUSKCN1LF2C4">asked</a>: “Do you still treat your kids with leaves? No? And why don’t you #vaccinate them? It’s medicine!” An <a href="https://www.reuters.com/article/us-health-vaccines-russia-trolls/russian-trolls-fan-flames-in-u-s-vaccine-debate-idUSKCN1LF2C4">example</a> of an anti-vaccine one read: “#vaccines are a parents choice. Choice of a color of a little coffin.”</p>
<p>The U.S. is not alone in facing increasing levels of vaccine misinformation on social media. <a href="https://www.cbc.ca/news/elections/false-vaccine-spread-by-bots-trolls-1.5113716">Canada has also reported</a> a rise in the number of online bots spreading vaccine misinformation. Moreover, as content from social media is <a href="https://www.unicef.org/eca/media/1556/file/Tracking%20anti-vaccination%20sentiment%20in%20Eastern%20European%20social%20media%20networks.pdf">consumed across borders</a>, these issues are now turning into a <a href="https://medicalxpress.com/news/2019-07-anti-vaccine-movement-man-made-health-crisis.html">global problem</a>.</p>
<h2>Social media platforms clear out misinformation</h2>
<p>A 2015 <a href="https://doi.org/10.1016/j.vaccine.2015.08.064">study analyzing vaccine pins</a> on Pinterest found that the majority were anti-vaccine. By early 2019, the company decided to <a href="https://www.washingtonpost.com/business/2019/02/21/pinterest-is-blocking-all-vaccine-related-searches-all-or-nothing-approach-policing-health-misinformation/">block all vaccine content</a> from the platform.</p>
<p>Initially, the <a href="https://fortune.com/2019/02/20/how-pinterest-is-going-further-than-facebook-and-google-to-quash-anti-vaccination-misinformation/">ban was absolute</a>, regardless of the accuracy or source of the information. In late August, <a href="https://newsroom.pinterest.com/en/post/bringing-authoritative-vaccine-results-to-pinterest-search">Pinterest announced</a> that it would start allowing content from <a href="https://www.webmd.com/children/vaccines/news/20190829/pinterest-limits-sources-of-vaccine-content">public health organizations</a>, including the U.S. Centers for Disease Control and Prevention, the American Academy of Pediatrics and World Health Organization.</p>
<p>In March 2019, <a href="https://www.wired.com/story/facebook-anti-vaccine-crack-down/">Facebook announced</a> that it would take steps to diminish anti-vaccine content. The company <a href="https://newsroom.fb.com/news/2019/03/combatting-vaccine-misinformation/">no longer allows anti-vaccine advertising</a> and says it is considering removing fundraising tools from anti-vaccination Facebook pages. It no longer “recommends” anti-vaccine content and reduced the rankings of groups and pages conveying vaccine misinformation. They’re less visible, but not banned – these groups and pages are still present on Facebook.</p>
<p>Also in 2019, YouTube <a href="https://www.buzzfeednews.com/article/carolineodonovan/youtube-just-demonetized-anti-vax-channels">prohibited advertising</a> on channels and videos that run anti-vaccination content. Until then, most YouTube searches for “vaccine” served up misinformation at the top of the results. Afterwards, <a href="https://www.youtube.com/watch?v=7VG_s2PCH_c">John Oliver’s HBO episode on vaccines</a> and similar content jumped to the top.</p>
<h2>Plenty of misinformation still online</h2>
<p>As I wrote this article, dozens of new tweets were added to the #vaccine hashtag on Twitter. Several were similar to <a href="https://twitter.com/ViraBurnayeva/status/1173418319157837824">this one</a>, tweeted from an account with over 11,000 followers, that conveys an anti-vaccine message under the guise of scientific information.</p>
<p>This account, which appears to be <a href="https://theconversation.com/anti-vaxxers-appear-to-be-losing-ground-in-the-online-vaccine-debate-114406">closely related</a> to a previously suspended one, tweeted multiple times per hour. Less than an hour before the tweet above, it had tweeted a <a href="https://twitter.com/ViraBurnayeva/status/1173409976221671424">visually more blunt message</a> asserting the false link between vaccines and autism.</p>
<p>Like most Twitter users, I have no idea whether this is a personal account or one operated by a bot. But for several hours the vast majority of the tweets on the vaccine hashtag were spreading content that is <a href="http://sites.nationalacademies.org/BasedOnScience/vaccines-are-safe/">not supported by</a> <a href="http://sites.nationalacademies.org/BasedOnScience/vaccines-do-not-cause-autism/">current scientific consensus</a>.</p>
<p>While the latest tweets were predominantly anti-vaccination, when I sorted results by “top tweets,” a <a href="https://twitter.com/HHSGov/status/1123335449249030145">tweet from the U.S. Department of Health and Human Services</a>, pointing readers toward its own vaccine information page, appeared first.</p>
<p>But tweets 4, 5 and 9 in the top 10 belonged to the same account with 11,000 followers I encountered repeatedly while monitoring the Twitter vaccine hashtag.</p>
<p>With outbreaks of <a href="https://theconversation.com/measles-why-its-so-deadly-and-why-vaccination-is-so-vital-110779">vaccine-preventable diseases on the rise</a>, public health institutions like the Centers for Disease Control and Prevention have been <a href="http://www.phf.org/programs/immunization/Documents/CDC_Webinar_National_Infant_Immunization_Week_2019Mar_Slides.pdf">increasing their social media presences</a>. Social media platforms can continue to help reduce misinformation that could further increase vaccine hesitancy in the United States and elsewhere. As suggested by Pinterest’s approach, these tech companies can increase the amount and visibility of vaccine content from reliable sources. While it’s virtually impossible to eliminate all inaccurate posts, I believe social media can and should be redesigned to facilitate the promotion of accurate vaccine information.</p>
<p class="fine-print"><em><span>Ana Santos Rutschman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Anti-vaccine info online might have foreign roots and political aims.Ana Santos Rutschman, Assistant Professor of Law, Saint Louis UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1230712019-09-10T22:26:13Z2019-09-10T22:26:13ZScant evidence of active Twitter bots as Canadian election kicks off<figure><img src="https://images.theconversation.com/files/291664/original/file-20190910-109952-1msizz4.jpg?ixlib=rb-1.1.0&rect=763%2C655%2C4652%2C2209&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">There's little evidence that Twitter is being overrun with partisan bots in the leadup to the Canadian election.</span> <span class="attribution"><span class="source">Waldemar Brandt/Unsplash</span></span></figcaption></figure><p>During the politically sensitive weeks before an election, public discussions often intensify. In Canada this summer, the buzz has included discussions about the spread of bots on social media. </p>
<p>Bots are automated social media accounts that are programmed by humans to send hundreds and sometimes thousands of messages a day. <a href="https://www.cbc.ca/news/technology/twitter-bots-canada-influence-unclear-1.5240778">News stories</a> are written to inform and possibly warn Canadians about the existence of these bots, which are currently at the centre of public and academic debates, sometimes reaching the level of moral panic. </p>
<p>Curious about this issue, I recently decided to examine a large dataset that I’ve been collecting from Twitter. My examination shows that the spectre of bots is being weaponized by all sides to discredit opponents and possibly silence them. </p>
<p>Using a digital media platform, I collected 1,733,852 tweets with the hashtag #CDNpoli that were posted by 163,241 unique users. This hashtag, an abbreviated version of “Canadian politics,” is among the most popular used by social media users when they’re tweeting about Canadian politics. </p>
<h2>A spike following SNC-Lavalin news</h2>
<p>The Twitter data was collected from May 23, 2019, to Sept. 4, 2019. The day of Aug. 14 had the highest number of tweets so far — 39,382.</p>
<p>The spike coincided with the breaking news that Prime Minister Justin Trudeau had broken ethics rules in the <a href="https://globalnews.ca/news/5764034/justin-trudeau-snc-lavalin-broke-ethics-rules/">SNC-Lavalin affair</a>. </p>
<p>In my view, this shows a healthy engagement in political issues on social media because democracy always needs to hold the powerful accountable. </p>
<p>To move forward in my project, I extracted the most popular stories in these tweets (the most retweeted) and the most active users (those who tweet more than others) using two scripts written <a href="https://www.python.org/">in Python</a>, a programming language well suited to processing large datasets. </p>
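<p>The article does not reproduce those scripts, but the two counting steps are simple enough to sketch. The minimal Python below is purely illustrative: the CSV file and its column names (“user”, “text”, “retweet_count”) are assumptions for the sake of the example, not the author’s actual code.</p>
<pre><code># Minimal sketch of the two counting steps described above.
# The CSV file and column names ("user", "text", "retweet_count") are
# hypothetical; this is not the author's actual script.
import pandas as pd

tweets = pd.read_csv("cdnpoli_tweets.csv")

# Most retweeted messages: group identical texts, keep the highest retweet count.
top_stories = (tweets.groupby("text")["retweet_count"]
                     .max()
                     .sort_values(ascending=False)
                     .head(10))

# Most active users: count how many tweets each account posted.
top_users = tweets["user"].value_counts().head(1000)

print(top_stories)
print(top_users.head(10))
</code></pre>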
<p>The following graph shows the most retweeted messages and their frequencies. They pertain to news reports and partisan messages mostly praising or attacking Prime Minister Justin Trudeau, Conservative leader Andrew Scheer and Maxime Bernier, head of the People’s Party of Canada. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/291181/original/file-20190905-175710-1n4qroe.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/291181/original/file-20190905-175710-1n4qroe.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=512&fit=crop&dpr=1 600w, https://images.theconversation.com/files/291181/original/file-20190905-175710-1n4qroe.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=512&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/291181/original/file-20190905-175710-1n4qroe.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=512&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/291181/original/file-20190905-175710-1n4qroe.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=644&fit=crop&dpr=1 754w, https://images.theconversation.com/files/291181/original/file-20190905-175710-1n4qroe.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=644&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/291181/original/file-20190905-175710-1n4qroe.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=644&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The most retweeted #CDNpoli tweets.</span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>But there are two other notable features. </p>
<p>First, there are many references to climate change in the discourse on the Canadian election (tweets No. 6 and 10). Using a machine learning technique, I identified the most important topics. Climate change action is at the top — something that requires further scholarly attention.</p>
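<p>The author does not say which machine learning technique was used to surface these topics. A common choice for this kind of task is latent Dirichlet allocation (LDA); the sketch below shows what that might look like with scikit-learn, with the input file and parameter values chosen purely for illustration.</p>
<pre><code># Illustrative topic-modelling sketch using LDA via scikit-learn. The article
# does not specify the technique used, so this is one plausible approach;
# the input file and parameter values are assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

texts = open("cdnpoli_tweet_texts.txt").read().splitlines()

vectorizer = CountVectorizer(stop_words="english", max_features=5000)
doc_term = vectorizer.fit_transform(texts)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
lda.fit(doc_term)

# Print the most heavily weighted words in each topic.
words = vectorizer.get_feature_names_out()
for topic_idx, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-8:][::-1]]
    print(topic_idx, ", ".join(top))
</code></pre>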
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/canadians-in-every-riding-support-climate-action-new-research-shows-122918">Canadians in every riding support climate action, new research shows</a>
</strong>
</em>
</p>
<hr>
<p>Second, there are references to bots (tweet No. 2). The tweet challenges some claims that the popular hashtag #TrudeauMustGo is generated by bots, while tweet No. 8 accuses an active supporter of the Conservative Party of Canada of being a bot. </p>
<p>From my preliminary evaluation, the term itself seems weaponized to discredit opponents and possibly silence them just for being active users. In fact, this technique is not unique to Canada: other weaponized terms, like “<a href="https://journals.sagepub.com/doi/abs/10.1177/0894439318795849">fake news</a>” in the United States and “<a href="https://ijoc.org/index.php/ijoc/article/view/8989/2600">electronic flies</a>” in the Middle East, are used in similar situations.</p>
<h2>No evidence</h2>
<p>The problem here is that these accusations are made without evidence. One of the users accused of being a bot scored 0.4 out of five on a digital tool called Botometer, which is often employed to assess accounts, individually or in bulk, against several criteria. It gives a score ranging from zero (likely human) to five (likely bot), and individual users can be checked manually on the <a href="https://botometer.iuni.iu.edu/#!/">Botometer website</a>.</p>
<p>To understand whether these users who tweet about Canadian politics are in fact bots, I examined the most active 1,000 users to determine their likelihood of being bots using a <a href="https://pypi.org/project/botometer/">Python code</a>. </p>
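<p>That code is not reproduced in the article, but the botometer-python package exposes a simple interface for this kind of bulk check. The sketch below uses placeholder API credentials, and the exact layout of the returned score fields can differ between Botometer versions, so treat the result handling as illustrative rather than definitive.</p>
<pre><code># Sketch of scoring the most active accounts with the botometer-python package.
# The credentials are placeholders, and the layout of the returned score
# fields can differ between Botometer versions, so treat this as illustrative.
import botometer

twitter_app_auth = {
    "consumer_key": "...",
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key="YOUR_RAPIDAPI_KEY",
                          **twitter_app_auth)

# In practice this would be the 1,000 most active #CDNpoli accounts.
active_users = ["@example_user_one", "@example_user_two"]

for screen_name, result in bom.check_accounts_in(active_users):
    # "display_scores" is reported on a 0 (human-like) to 5 (bot-like) scale;
    # suspended or private accounts come back without usable scores.
    print(screen_name, result.get("display_scores"))
</code></pre>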
<p>In total, those active users tweeted 633,105 times, constituting 36 per cent of the total dataset. I focused on these active users because my initial suspicion was that many of them could be bots, given their high activity. But I was wrong. </p>
<p>The top user, for instance, tweeted 5,345 times promoting anti-Conservative views using the hashtag #CDNpoli. A closer examination showed this user is human, not a bot. It scored 0.6 out of five on the Botometer.</p>
<h2>Private accounts</h2>
<p>Of the 1,000 users examined, 14 accounts could not be assessed, mostly because they had been suspended or set to private. The following graph shows the Botometer scores of the users I examined.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/291689/original/file-20190910-109939-9ydbel.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/291689/original/file-20190910-109939-9ydbel.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=320&fit=crop&dpr=1 600w, https://images.theconversation.com/files/291689/original/file-20190910-109939-9ydbel.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=320&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/291689/original/file-20190910-109939-9ydbel.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=320&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/291689/original/file-20190910-109939-9ydbel.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=402&fit=crop&dpr=1 754w, https://images.theconversation.com/files/291689/original/file-20190910-109939-9ydbel.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=402&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/291689/original/file-20190910-109939-9ydbel.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=402&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Botometer scores of active #CDNpoli Twitter users.</span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>As it turns out, only 16 of the 1,000 most active users (out of the 163,241 in the dataset) might be categorized as bots, a fairly insignificant number in comparison to other studies that showed a large number of bots in political discussions on social media, especially among <a href="https://www.emerald.com/insight/content/doi/10.1108/OIR-02-2018-0065/full/html">active users</a>. </p>
<p>The highest scoring user (4.4 out of 5 on the Botometer, indicating the user is a bot), for instance, tweeted more than 49,000 times from May to early September with messages supporting the Conservative Party. Another user, @EnvironTwitBot, tweeted more than 78,924 times and describes itself as follows: </p>
<blockquote>
<p>“I am a twitter bot created for a dissertation about online communities and predictive algorithms.”</p>
</blockquote>
<p>The remaining bot-like users have different political positions and affiliations.</p>
<p>Again, these accounts should not cause much concern because there are only a few of them, and this small-scale examination revealed that the most retweeted posts on Canadian political issues are from genuine and diverse Twitter users.</p>
<p>Claims that tweets on the Canadian election are the work of bot accounts, without empirical evidence or verification, need to be taken with a grain of salt. Political activists from many sides seem to be using the bot argument to attack and discredit their opponents, with the possible goal of silencing healthy political debates on social media. </p>
<p class="fine-print"><em><span>Ahmed Al-Rawi receives funding from SSHRC to study fake news on social media and mainstream media in Canada. </span></em></p>Claims that tweets on the Canadian election are the work of bot accounts, without empirical evidence or verification, need to be taken with a grain of salt.Ahmed Al-Rawi, Assistant Professor, Simon Fraser UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1151882019-04-10T13:46:11Z2019-04-10T13:46:11ZTrump supporters on Twitter during 2016 US election show little evidence of Russian infiltration – new research<figure><img src="https://images.theconversation.com/files/268416/original/file-20190409-2921-gcp0kq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The biggest little bird in the nest. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/donald-trump-social-media-communication-vector-656904574?src=idm7TVmH8D1Idr_BCQ0V9g-1-16">doamama</a></span></figcaption></figure><p>Donald Trump’s <a href="https://www.nytimes.com/elections/2016/results/president">meteoric rise</a> from political outsider to president of the United States surprised nearly everyone – not least political analysts and scientists. Many are hoping for an easy explanation from the <a href="https://theconversation.com/the-mueller-probe-kenneth-starr-sees-eerie-echoes-of-his-1990s-clinton-investigation-113509">Mueller report</a>, including evidence of heavy Russian interference in the campaign. Mueller <a href="http://nymag.com/intelligencer/article/robert-mueller-investigation-what-we-know.html">has indicted</a> numerous Russians in this regard, though more details will emerge when the report is published in the coming days. </p>
<p>In our <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0214854">new paper</a>, which has been published in the PLoS One journal, we have covered similar ground via Trump’s stronghold: Twitter. By sampling some 250,000 accounts, we found a powerful new group of Trump supporters emerged during the election and effectively usurped the Republican Party on the social network. But very much to our surprise, very few bots or Russian accounts were involved. This suggests that if the Russians were acting to influence the election, the effect at least on Twitter may have been much more limited than <a href="https://www.newyorker.com/magazine/2018/10/01/how-russia-helped-to-swing-the-election-for-trump">has been claimed</a>. </p>
<p>We identified three kinds of Twitter accounts that were particularly relevant to the election: a Republican Party group; a Trump group; and a group of more extreme <a href="https://www.theatlantic.com/politics/archive/2017/12/alt-right/549242/">alt-right</a> adherents. We classified accounts into these groups based on who they followed and the hashtags and other words that they used in posts. The Trump group often used #maga, for instance, as in “make America great again”, and also “Trump supporter”. Mainstream Republicans often used #tcot or #tgdn, respectively “top conservatives on Twitter” and “Twitter gulag defence network”; while the far right used the #altright hashtag and words like “white” and “nationalist”. </p>
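<p>The published study combined follower relationships with these hashtag and keyword signals. The toy Python sketch below shows only the keyword half of that idea: the marker lists paraphrase the description above, and the scoring rule and example account texts are deliberately naive and purely illustrative.</p>
<pre><code># Toy sketch of the keyword half of the classification described above.
# The marker lists paraphrase the article; the scoring rule and the example
# account texts are deliberately naive and purely illustrative.
GROUP_MARKERS = {
    "trump":      {"#maga", "trump supporter"},
    "republican": {"#tcot", "#tgdn"},
    "alt_right":  {"#altright", "white", "nationalist"},
}

def classify(account_text):
    """Assign an account to the group whose markers appear most often in its tweets."""
    text = account_text.lower()
    counts = {group: sum(text.count(marker) for marker in markers)
              for group, markers in GROUP_MARKERS.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "unclassified"

print(classify("Proud Trump supporter. #MAGA all the way!"))  # trump
print(classify("Top conservatives on Twitter #tcot #tgdn"))   # republican
</code></pre>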
<h2>What happened on Twitter</h2>
<p>When we looked at how these three groups had developed over time, we found the Republican accounts mainly dated from the <a href="https://abcnews.go.com/Politics/tea-party-protesters-march-washington/story?id=8557120">Tea Party marches</a> following Barack Obama’s first election victory in 2008, and also the 2012 Obama vs Romney <a href="https://www.bbc.co.uk/news/world-us-canada-20216038">campaign</a>. Conversely, the Trump and alt-right groups had largely emerged during the 2016 election campaign. </p>
<p>By late 2016, very few new accounts were being opened that fit the characteristics of our Republican Twitter group. We found a big shift in following behaviour as well, with existing Republican accounts becoming more likely to follow accounts in our Trump group rather than other mainstream Republicans. This reflects the way in which Trump suddenly jumped ahead of a crowded field in the Republican primaries. When you combine the followers of the three Twitter groups, they amount to some 57m unique users. This almost certainly made the difference in an election where the margin of victory was so tight: Trump <a href="https://www.bbc.co.uk/news/election/us2016/results">beat</a> Hillary Clinton in the electoral college without winning the popular vote, taking key marginal states by only a few tens of thousands of votes. </p>
<p>But what drove support for this shift? Staying with following behaviour, we found that members of all three groups tended to follow people within their own group, while those we identified in the Trump and Republican groups frequently followed one another. But while members of the alt-right group followed those in the Trump group, this was not reciprocated to the same degree. This suggests that the widely held idea that the far right were very influential in the growth of support for Trump may be an exaggeration. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/268418/original/file-20190409-2912-smj0ue.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/268418/original/file-20190409-2912-smj0ue.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/268418/original/file-20190409-2912-smj0ue.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/268418/original/file-20190409-2912-smj0ue.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/268418/original/file-20190409-2912-smj0ue.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/268418/original/file-20190409-2912-smj0ue.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/268418/original/file-20190409-2912-smj0ue.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/268418/original/file-20190409-2912-smj0ue.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Rarer than you’d think.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/donald-trump-social-media-communication-vector-656904574?src=idm7TVmH8D1Idr_BCQ0V9g-1-16">PP77LSK</a></span>
</figcaption>
</figure>
<p>To estimate Twitter bots, we used a US tool called <a href="https://botometer.iuni.iu.edu">Botometer</a>, which scores each account on the likelihood that it is automated. We concluded that Twitter bots and <a href="https://www.recode.net/2017/11/2/16598312/russia-twitter-trump-twitter-deactivated-handle-list">foreign accounts</a> were certainly part of Trump’s Twitter community, and <a href="http://blogs.oii.ox.ac.uk/politicalbots/wp-content/uploads/sites/89/2016/11/Data-Memo-US-Election.pdf">played a role</a> in spreading his message, but were vastly outnumbered by the massive groups of real-life supporters who suddenly started joining Twitter and following one another after Trump announced his election campaign. In fact, we found more automated accounts in the Republican Party’s group than in Trump’s group. Our findings match <a href="https://www.sciencemag.org/news/2019/01/majority-americans-were-not-exposed-fake-news-2016-us-election-twitter-study-suggests">other recent research</a>, which found that fake news was not nearly as pervasive on Twitter and Facebook as previously feared; in the case of Twitter, for instance, 80% of fake news appeared on only 1.1% of users’ newsfeeds.</p>
<p>Our point is not that foreign-owned bots generating fake news didn’t interfere with the election, but rather that they probably had less influence than various other factors – particularly Trump himself, his group of highly motivated supporters and the US media. Trump’s supporters did not coalesce around an army of bots – they appear to have been a grassroots movement of previously disengaged voters. Trump’s victory seems to have been driven more by his own particular style of campaigning, which galvanised his followers into a political backlash against “Washington elites”. </p>
<p>These kinds of movements certainly aren’t unknown. Political analysts are very familiar with the concept of the <a href="https://www.newstatesman.com/politics/2015/04/what-overton-window">Overton Window</a>, in which the political centre ground shifts in response to pressure from disenfranchised and frustrated groups on the fringes. In Trump’s case, the shift was surprisingly rapid. In only a few months, a relatively small group had grown to the point it was able to subsume the traditional Republican Party. </p>
<h2>Predicting the future</h2>
<p>Our era will long be remembered for the populist swings that took place in politics – not only Trump but elections in the likes of Hungary and Japan, and also the UK’s Brexit referendum. These results frequently surprised politicians and the media, prompting <a href="https://www.washingtonpost.com/news/worldviews/wp/2017/01/03/opinion-polls-missed-trump-and-brexit-this-french-newspaper-says-it-has-the-solution/">much</a> discussion <a href="https://fivethirtyeight.com/features/macron-won-but-the-french-polls-were-way-off/">about</a> problems with the tools with which we have tracked people’s voting intentions. </p>
<p>We think that our method, or one derived from it, can be a valuable addition to the toolbox in future. By following the development of political groups on Twitter, you can observe what is happening in real time. In future, this could help identify disenfranchised voter groups amenable to populist candidates and better understand their behaviour and the issues that motivate them. Our method might also make it easier to determine when extremist political minorities, massively amplified by the global reach of Twitter, might be exerting a disproportionate level of influence. </p>
<p>Studying Twitter allows you to observe these things at a speed that traditional polling and analysis can’t match. Hopefully by studying the world of online political discourse in a more rigorous and systematic way like this, we can finally start to catch up with the breakneck speed of modern political change.</p>
<p class="fine-print"><em><span>John Bryden receives funding from the Economic and Social Research Council. </span></em></p><p class="fine-print"><em><span>Eric Silverman receives funding from the Medical Research Council and the Chief Scientist Office as a member of the University of Glasgow's MRC/CSO Social and Public Health Sciences Unit. </span></em></p>On the back of the Mueller investigation’s apparent exoneration of the POTUS, here’s another surprise.John Bryden, Research Fellow, Royal Holloway University of LondonEric Silverman, Research Fellow, Social and Public Health Sciences Unit, University of GlasgowLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1043772018-11-05T11:41:53Z2018-11-05T11:41:53ZEven a few bots can shift public opinion in big ways<figure><img src="https://images.theconversation.com/files/242368/original/file-20181025-71020-1unqn6t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Adding bots into an online discussion can definitely affect the views of real people.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/robots-hands-typing-on-keyboard-3d-706565200">Tatiana Shepeleva/Shutterstock.com</a></span></figcaption></figure><p>Nearly <a href="https://arxiv.org/abs/1810.12398">two-thirds of the social media bots</a> with political activity on Twitter before the 2016 U.S. presidential election supported Donald Trump. But all those Trump bots were far less effective at shifting people’s opinions than the smaller proportion of bots backing Hillary Clinton. As my recent research shows, a <a href="https://arxiv.org/abs/1810.12398">small number of highly active bots</a> can significantly change people’s political opinions. The main factor was not how many bots there were – but rather, how many tweets each set of bots issued.</p>
<p>My work focuses on <a href="http://mitmgmtfaculty.mit.edu/zlisto/">military and national security aspects</a> of social networks, so naturally I was intrigued by concerns that bots might affect the outcome of the upcoming 2018 midterm elections. I began investigating what exactly bots did in 2016. There was <a href="https://www.washingtonpost.com/blogs/post-partisan/wp/2017/01/18/russias-radical-new-strategy-for-information-warfare/">plenty</a> of <a href="http://time.com/4783932/inside-russia-social-media-war-america/">rhetoric</a> – but only one basic factual principle: If <a href="https://theconversation.com/how-the-russian-government-used-disinformation-and-cyber-warfare-in-2016-election-an-ethical-hacker-explains-99989">information warfare efforts</a> using bots had succeeded, then voters’ opinions would have shifted. </p>
<p><a href="https://scholar.google.com/citations?user=SA6zXVIAAAAJ&hl=en">I</a> wanted to measure how much bots were – or weren’t – responsible for changes in humans’ political views. I had to find a way to identify social media bots and evaluate their activity. Then I needed to measure the opinions of social media users. Lastly, I had to find a way to estimate what those people’s opinions would have been if the bots had never existed.</p>
<h2>Finding tweeters and bots</h2>
<p>To narrow the research a bit, my students and I focused our analysis on the Twitter discussion around one event in the <a href="https://www.bbc.com/news/technology-37684418">lead-up to the election</a>: the <a href="https://www.nytimes.com/2016/10/10/us/politics/transcript-second-debate.html">second debate between Clinton and Trump</a>. We collected 2.3 million tweets that contained keywords and hashtags related to the debate. </p>
<p>Then we made a list of the roughly 78,000 Twitter users who posted those tweets and constructed the network of who followed whom among those users. To identify the bots among them, we used an <a href="https://arxiv.org/abs/1805.10244">algorithm based on our observation</a> that bots often retweeted humans but were not themselves frequently retweeted.</p>
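<p>The paper’s algorithm is more involved than this, but the stated observation can be sketched as a simple retweet-asymmetry filter: flag accounts that retweet others heavily while rarely being retweeted themselves. In the illustrative Python below, the input file, column names and thresholds are assumptions, not the study’s actual parameters.</p>
<pre><code># Sketch of the retweet-asymmetry heuristic: accounts that retweet others
# heavily but are rarely retweeted themselves look bot-like. The input file,
# column names and thresholds are illustrative assumptions, not the paper's.
import pandas as pd

# One row per retweet: "user" retweeted "retweeted_user".
retweet_rows = pd.read_csv("debate_retweets.csv")

made = retweet_rows.groupby("user").size()                # outgoing retweets per account
received = retweet_rows.groupby("retweeted_user").size()  # incoming retweets per account

stats = pd.DataFrame({"made": made, "received": received}).fillna(0)

# Flag accounts with many outgoing retweets and almost no incoming ones.
suspected_bots = stats[stats["made"].ge(50) & stats["received"].le(1)]
print(len(suspected_bots), "suspected bot accounts")
</code></pre>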
<p>This method found 396 bots – or less than 1 percent of the active Twitter users. And just 10 percent of the accounts followed them. I felt good about that: It seemed unlikely that such a small number of relatively disconnected bots could have a major effect on people’s opinions.</p>
<h2>A closer look at the people</h2>
<p>Next we set out to measure the opinions of the people in our data set. We did this with a type of machine learning algorithm called a <a href="https://theconversation.com/how-computers-help-biologists-crack-lifes-secrets-48416">neural network</a>, which in this case we set up to evaluate the content of each tweet, determining the extent to which it supported Clinton or Trump. Individuals’ opinions were calculated as the average of their tweets’ opinions. </p>
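<p>The neural network itself is beyond the scope of a short sketch, but the aggregation step it feeds into is straightforward. In the illustrative Python below, score_tweet is a hypothetical stand-in for the trained classifier, returning a pro-Clinton score between 0 and 1 for each tweet, and a user’s opinion is simply the mean score of their tweets.</p>
<pre><code># Sketch of the per-user aggregation step. score_tweet is a hypothetical
# stand-in for the study's trained neural network; it should return a
# pro-Clinton score between 0 and 1 for a single tweet.
def score_tweet(text):
    # Placeholder: a real implementation would run the trained classifier.
    return 0.5

tweets_by_user = {
    "user_a": ["debate recap thread", "I'm with her"],
    "user_b": ["make america great again"],
}  # hypothetical data

opinions = {}
for user, tweets in tweets_by_user.items():
    scores = [score_tweet(t) for t in tweets]
    opinions[user] = sum(scores) / len(scores)  # mean of tweet-level scores

print(opinions)
</code></pre>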
<p>Once we had assigned each human Twitter user in our data a score representing how strong a Clinton or Trump backer they were, the challenge was to measure how much the bots shifted people’s opinions – which meant calculating what their opinions would have been if the bots hadn’t existed.</p>
<p>Fortunately, a model from <a href="http://doi.org/10.2307/2285509">as far back as the 1970s</a> had established a way to gauge people’s sentiments in a social network based on connections between them. In this network-based model, individuals’ opinions tend to align with the people connected to them. After slightly modifying the model to apply it to Twitter, we used it to calculate people’s opinions based on who followed whom on Twitter – rather than looking at their tweets. We found that the opinions we calculated from the network model matched well with opinions measured from the content of their tweets.</p>
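<p>The 1970s model referred to here appears to be of the opinion-averaging type, in which each person’s opinion is pulled towards those of the accounts they follow. The sketch below is a generic averaging model of that kind on a toy follower graph; it illustrates the idea rather than reproducing the paper’s exact formulation, and the graph, scores and blending weight are all assumptions.</p>
<pre><code># Generic opinion-averaging sketch on a toy follower graph. Each user's
# opinion is repeatedly blended with the mean opinion of the accounts they
# follow, anchored to their initial tweet-based score. The graph, scores and
# blending weight are illustrative, not the paper's actual formulation.
import networkx as nx

follows = [("alice", "bob"), ("alice", "carol"), ("bob", "carol")]  # follower -> followed
initial = {"alice": 0.6, "bob": 0.4, "carol": 0.8}                  # tweet-derived opinions

G = nx.DiGraph(follows)

opinions = dict(initial)
for _ in range(100):  # iterate towards a stable set of opinions
    updated = {}
    for user in G.nodes:
        followed = list(G.successors(user))
        if followed:
            neighbour_mean = sum(opinions[v] for v in followed) / len(followed)
            updated[user] = 0.5 * initial[user] + 0.5 * neighbour_mean
        else:
            updated[user] = initial[user]
    opinions = updated

print(opinions)
</code></pre>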
<h2>Life without the bots</h2>
<p>So far we had shown that the follower network structure in Twitter could accurately predict people’s opinions. This now allowed us to ask questions such as: What would their opinions have been if the network were different? The different network we were interested in was one that contained no bots. So for our last step, we removed the bots from the network and recalculated the network model, to see what real people’s opinions would have been without bots. Sure enough, bots had shifted human users’ opinions – but in a surprising way. </p>
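<p>The counterfactual step can be expressed as: copy the follower graph, delete the bot accounts, rerun the same averaging model and compare the average human opinion with and without bots. The self-contained sketch below repeats the toy averaging scheme from the previous example on hypothetical data; it mirrors the logic of the analysis, not its actual implementation.</p>
<pre><code># Counterfactual sketch: rerun the same toy averaging model on a copy of the
# follower graph with the bot accounts removed, then compare the average
# human opinion with and without bots. All data here is hypothetical.
import networkx as nx

def propagate(G, initial, rounds=100, self_weight=0.5):
    opinions = dict(initial)
    for _ in range(rounds):
        updated = {}
        for user in G.nodes:
            followed = list(G.successors(user))
            if followed:
                mean = sum(opinions[v] for v in followed) / len(followed)
                updated[user] = self_weight * initial[user] + (1 - self_weight) * mean
            else:
                updated[user] = initial[user]
        opinions = updated
    return opinions

follows = [("alice", "bot1"), ("bob", "alice"), ("bot1", "alice")]
initial = {"alice": 0.5, "bob": 0.5, "bot1": 1.0}
bots = {"bot1"}
humans = [u for u in initial if u not in bots]

with_bots = propagate(nx.DiGraph(follows), initial)

G_no_bots = nx.DiGraph(follows)
G_no_bots.remove_nodes_from(bots)
without_bots = propagate(G_no_bots, {u: initial[u] for u in G_no_bots.nodes})

for label, result in [("with bots", with_bots), ("without bots", without_bots)]:
    average = sum(result[u] for u in humans) / len(humans)
    print(label, round(average, 3))
</code></pre>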
<p>Given <a href="https://www.npr.org/sections/alltechconsidered/2017/04/03/522503844/how-russian-twitter-bots-pumped-out-fake-news-during-the-2016-election">much of the news reporting</a>, we were expecting the bots to help Trump – but they didn’t. In a network without bots, the average human user had a pro-Clinton score of 42 out of 100. With the bots, though, we had found the average human had a pro-Clinton score of 58. That shift was a far larger effect than we had anticipated, given how few and unconnected the bots were. The network structure had amplified the bots’ power.</p>
<p><iframe id="UJpPF" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/UJpPF/2/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>We wondered what had made the Clinton bots more effective than the Trump bots. Closer inspection showed that the 260 bots supporting Trump posted a combined 113,498 tweets, or 437 tweets per bot. However, the 150 bots supporting Clinton posted 96,298 tweets, or 708 tweets per bot. It appeared that the power of the Clinton bots came not from their numbers, but from how often they tweeted. We found that most of what the bots posted were retweets of the candidates or other influential individuals. So they were not really crafting original tweets, but sharing existing ones.</p>
<p>It’s worth noting that our analysis looked at a relatively small number of users, especially when compared to the voting population. And it was only during a relatively short period of time around a specific event in the campaign. Therefore, they don’t suggest anything about the <a href="http://time.com/5286013/twitter-bots-donald-trump-votes/">overall election results</a>. But they do show the potential effect bots can have on people’s opinions.</p>
<p>A small number of very active bots can actually significantly shift public opinion – and despite <a href="https://www.washingtonpost.com/technology/2018/07/06/twitter-is-sweeping-out-fake-accounts-like-never-before-putting-user-growth-risk/">social media companies’ efforts</a>, there are <a href="https://theconversation.com/why-do-so-many-people-fall-for-fake-profiles-online-102754">still large numbers of bots out there</a>, constantly tweeting and retweeting, trying to influence real people who vote. </p>
<p>It’s a reminder to be careful about what you read – and what you believe – on social media. We recommend double-checking that you are following people you know and trust – and keeping an eye on who is tweeting what on your favorite hashtags.</p>
<p class="fine-print"><em><span>Tauhid Zaman is a registered Democrat.</span></em></p>Measuring Twitter bots’ effects on the opinions of real people can yield surprising results about what makes them influential.Tauhid Zaman, Associate Professor of Operations Management, MIT Sloan School of ManagementLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/994022018-07-24T10:00:45Z2018-07-24T10:00:45ZA vicious online propaganda war that includes fake news is being waged in Zimbabwe<figure><img src="https://images.theconversation.com/files/228219/original/file-20180718-142408-1pgb4gt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Protesters from the MDC-Alliance march in Harare demanding electoral reforms. </span> <span class="attribution"><span class="source">EPA-EFE/Aaron Ufumeli</span></span></figcaption></figure><p>Fake news is <a href="https://www.newsday.co.zw/2018/03/2018-elections-of-fake-news-social-media/">on the upsurge</a> as Zimbabwe gears up for its watershed elections on 30 July. Mobile internet and social media have become vehicles for spreading a mix of fake news, rumour, hatred, disinformation and misinformation. This has happened because there are no explicit official rules on the use of social media in an election.</p>
<p>Coming soon after the 2017 military coup that ended Robert Mugabe’s <a href="https://www.bbc.com/news/world-africa-42071488">37 years in power</a>, these are the first elections <a href="https://edition.cnn.com/2018/05/30/africa/zimbabwe-elections-july-intl/index.html">since independence</a> without his towering and domineering figure. They are also the first elections in many years without opposition leader Morgan Tsvangirai, who <a href="https://www.enca.com/africa/zimbabwean-opposition-leader-tsvangirai-dies">died in February</a>. </p>
<p>The polls therefore potentially mark the beginning of a new order in Zimbabwe. The stakes are extremely high. </p>
<p>For the ruling Zanu-PF, the elections are crucial for legitimising President Emmerson Mnangagwa (75)‘s reign, and restoring constitutionalism. The opposition, particularly the MDC-Alliance led by Tsvangirai’s youthful successor, <a href="https://www.bbc.com/news/world-africa-44741062">Nelson Chamisa (40)</a>, views the elections as a real chance to capture power after Mugabe’s departure.</p>
<p>The intensity of the fight has seen the two parties use desperate measures in a battle for the hearts and minds of voters. They have teams of spin-doctors and “online warriors” (a combination of bots, paid or volunteering youths) to manufacture and disseminate party propaganda on Twitter, Facebook and WhatsApp. </p>
<p>Known as <a href="https://www.zimbabwesituation.com/news/eds-office-speaks-on-sms-campaign/?PageSpeed=noscript">“<em>Varakashi</em>”</a>, (Shona for “destroyers”) Zanu-PF’s “online warriors” are pitted against the <a href="http://www.thegwerutimes.com/2018/05/15/of-zimbabwe-and-toxic-politics/">MDC’s “<em>Nerrorists</em>”</a> (after Chamisa’s nickname, “Nero”) in the unprecedented online propaganda war to discredit each other.</p>
<p>Besides the fundamental shifts in the Zimbabwean political field, the one thing that distinguishes this election from previous ones is the explosion in mobile internet and <a href="https://t3n9sm.c2.acecdn.net/wp-content/uploads/2018/03/Annual-Sector-Perfomance-Report-2017-abridged-rev15Mar2018-003.pdf">social media</a>. Information is generated far more easily. It also spreads much more rapidly and widely than before. </p>
<p>What’s happening in the run-up to the polls should be a warning for those responsible for ensuring the elections are credible. </p>
<h2>Seeing is believing</h2>
<p>Images shared on social media platforms have become a dominant feature in the spread of fake news ahead of the elections. Both political parties have used doctored images of rallies from the past, or from totally different contexts, to project the false impression of overwhelming support. </p>
<p>Supporters of the MDC-Alliance, which shares the colour red with South Africa’s Economic Freedom Fighters (<a href="https://www.effonline.org/">EFF</a>), have been sharing doctored images of EFF rallies – and claiming them as their own – to give the impression of large crowds, according to journalists I interviewed in Harare.</p>
<p>Doctored documents bearing logos of either government, political parties or the Zimbabwe Electoral Commission are being circulated on social media to drive particular agendas. Examples include:</p>
<ul>
<li><p>A purported official letter announcing the resignation of the president of the newly formed <a href="https://www.newzimbabwe.com/chaos-rock-mugabe-party-spokesman-denies-interim-leader-resignation/">National Patriotic Front</a>. </p></li>
<li><p>A fake sample ballot paper circulated to discredit the <a href="http://www.chronicle.co.zw/fake-ballot-paper-sample-in-circulation/">electoral commission</a>, and</p></li>
<li><p>A sensational claim that Chamisa had offered to make controversial former First Lady Grace Mugabe his <a href="https://www.news24.com/Africa/Zimbabwe/ill-never-appoint-grace-mugabe-as-my-deputy-says-mdc-leader-chamisa-20180710">vice president</a> if he wins. </p></li>
</ul>
<p>A number of these fake images and documents have gained credibility after being picked up as news by the mainstream media. This speaks to the diminishing capacity of newsrooms to <a href="https://www.sla.org/wp-content/uploads/2014/07/Information-Verification.pdf">verify information</a> from social media in the race to be first with the news.</p>
<p>And, contrary to electoral <a href="https://www.mediasupport.org/new-guidelines-prepare-zimbabwean-media-for-up-coming-elections/">guidelines for public media</a>, partisan reporting continues unabated. The state media houses are endorsing Mnangagwa, while the private media largely roots for the <a href="https://www.mediasupport.org/wp-content/uploads/2018/06/MONITORS-BASELINE-REPORT-3.pdf">MDC-Alliance</a>. </p>
<h2>Explosion of the internet</h2>
<p>These are the first elections in a significantly developed social media environment in Zimbabwe. Mobile internet and social media have been rapidly growing over the years. </p>
<p>Internet penetration has increased by 41.1 percentage points (from 11% of the population to 52.1%) <a href="https://t3n9sm.c2.acecdn.net/wp-content/uploads/2017/04/Mar-2014-Zimbabwe-telecoms-report-POTRAZ.pdf">between 2010 and 2018</a>, while mobile phone penetration has risen by 43.9 percentage points (from 58.8% to 102.7%) <a href="https://t3n9sm.c2.acecdn.net/wp-content/uploads/2018/07/Sector-Perfomance-report-First-Quarter-2018-Abridged-9-July-2018.pdf">over the same period</a>.</p>
<p>That means just over half the population now has internet access, compared with 11% in 2010. </p>
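<p>The distinction between percentage points and relative growth matters when reading figures like these. A minimal sketch in Python, using only the penetration figures quoted above, makes the difference explicit:</p>
<pre><code>
# Percentage-point change vs relative (%) change, using the figures quoted above.
internet_2010, internet_2018 = 11.0, 52.1   # % of the population with internet access
mobile_2010, mobile_2018 = 58.8, 102.7      # mobile penetration, % (SIMs per 100 people)

def point_change(old, new):
    """Absolute change, in percentage points."""
    return new - old

def relative_change(old, new):
    """Growth relative to the starting value, in per cent."""
    return (new - old) / old * 100.0

print(round(point_change(internet_2010, internet_2018), 1))   # 41.1 percentage points
print(round(relative_change(internet_2010, internet_2018)))   # 374, i.e. roughly 374% relative growth
print(round(point_change(mobile_2010, mobile_2018), 1))       # 43.9 percentage points
</code></pre>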
<p>Ideally, these technologies should be harnessed for the greater good – such as voter education. Instead, they are being used by different interest groups in a way that poses a great danger to the electoral process. This can potentially cloud the electoral field, and even jeopardise the entire process. </p>
<p>A good example is the series of attacks on the Zimbabwe Electoral Commission, which has become a major target of fake news. These attacks threaten to erode its <a href="https://www.newsday.co.zw/2017/03/african-agriculture-expresses-differences-men-women/">credibility as a neutral arbiter</a>. For example, an app bearing its logo, prompting users to “click to vote”, went viral on WhatsApp. But responding to the prompt led to a message congratulating the user on <a href="https://www.techzim.co.zw/2018/05/zimbabwe-electoral-commission-distances-itself-from-fake-whatsapp-message/">voting for Mnangagwa</a>, suggesting that the supposedly independent electoral body had endorsed the Zanu-PF leader.</p>
<p>Numerous other unverified stories have also been doing the rounds on social media, <a href="https://www.newsday.co.zw/2018/06/its-a-fake-voters-roll/">labelling the voters’ roll “shambolic”</a>. These stories, together with claims that the commission is biased, have forced it to repeatedly issue statements refuting what it dismisses as “fake news”.</p>
<p>Events in Zimbabwe and <a href="https://portland-communications.com/pdf/How-Africa-Tweets-2018.pdf">elsewhere on the continent</a> point to the need for measures to guard against the use of social media and bots to subvert democratic processes. There’s also a need for social media literacy, to ensure that citizens appreciate the power the internet gives them and use it responsibly.</p><img src="https://counter.theconversation.com/content/99402/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Dumisani Moyo does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Zimbabwe’s upcoming elections potentially marks the start of a new order in the country, where the stakes are extremely high.Dumisani Moyo, Associate Professor, Department of Journalism, Film and Television, and Vice Dean Faculty of Humanities, University of JohannesburgLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/994772018-07-12T14:04:37Z2018-07-12T14:04:37ZThree ‘living labs’ which show how autonomous robots are changing cities<figure><img src="https://images.theconversation.com/files/227434/original/file-20180712-27018-1k0hs6z.jpg?ixlib=rb-1.1.0&rect=44%2C29%2C4947%2C3300&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/young-asian-engineer-flying-drone-over-1131933119?src=tSSPiJ4FMuSoruX-NsvEUA-1-8">Shutterstock.</a></span></figcaption></figure><p>Ready or not, autonomous robots are leaving laboratories to be tested in real-world contexts. With more and more people <a href="https://esa.un.org/unpd/wup/publications/files/wup2014-highlights.pdf">living in cities</a>, these technologies offer ways to cope with ageing populations and poorly maintained infrastructures, while promoting safer transport, productive manufacturing and secure energy supplies. </p>
<p>Urban “living labs” are one way scientists are trying to understand how autonomous robots – or Robotics and Autonomous Systems (RAS), to give them their full title – will affect our everyday lives. Autonomous robots are interconnected, interactive, cognitive and physical tools, which can perceive their environments, reason about events, make or revise plans and control their own actions. These technologies are designed to draw on big data and connect with the <a href="https://theconversation.com/explainer-the-internet-of-things-16542">Internet of Things</a>, to make our lives easier by increasing accuracy and efficiency. </p>
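<p>That perceive-reason-plan-act cycle can be pictured as a simple control loop. The sketch below is illustrative Python only (the sensor and actuator functions and all numbers are assumptions for this article, not any real robot’s API), and it supposes the robot’s sole job is to move forward while keeping clear of obstacles:</p>
<pre><code>
# A minimal sense-plan-act loop. Every name here is illustrative, not a real robot API.
import random
import time

TOP_SPEED_MPS = 1.0   # cap on forward speed, metres per second
SAFETY_M = 0.5        # keep at least this much clearance to the nearest obstacle

def sense():
    """Perceive: pretend to read a range sensor (distance to the nearest obstacle, in metres)."""
    return random.uniform(0.0, 5.0)

def plan(clearance_m):
    """Reason and plan: pick a forward speed that shrinks to zero as clearance runs out."""
    return max(0.0, min(TOP_SPEED_MPS, clearance_m - SAFETY_M))

def act(speed_mps):
    """Act: a real robot would command its motors; here we just report the decision."""
    print(f"commanded speed: {speed_mps:.2f} m/s")

for _ in range(5):     # five ticks of the control loop
    act(plan(sense()))
    time.sleep(0.1)
</code></pre>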
<p>But the everyday dynamics of cities are complex, which makes them far less predictable than the usual test zones. City leaders recognise that <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/664563/industrial-strategy-white-paper-web-ready-version.pdf">real world experimentation</a> can support innovation and attract international investment. As a result, cities around the world are competing to become urban test beds. Yet as a new <a href="http://hamlyn.doc.ic.ac.uk/uk-ras/sites/default/files/UK_RAS_wp_Urban_single_1.4.pdf">white paper</a> by researchers from Sheffield University’s Urban Institute sets out, there are some big challenges when it comes to promoting RAS technologies and ensuring meaningful trials in cities.</p>
<h2>Last mile logistics</h2>
<p>Logistics companies are under pressure to meet growing customer expectations for quick delivery, while battling against traffic congestion. Companies aim to meet this challenge with last mile delivery robots. <a href="https://www.theverge.com/circuitbreaker/2018/5/31/17413836/alibaba-driverless-robot-deliver-packages-speed">Alibaba recently announced</a> that its bot, the G Plus, will go from being road-tested at the company’s headquarters in Hangzhou, eastern China, to commercial operations by the end of 2018. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/gqBro4sZGS4?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>In this trial, consumers download an app, place a grocery order and pinpoint where they want their goods to be delivered. Purchased items are placed into the driverless bot, which can carry several packages of different sizes. The robot has a built-in navigation system that relies on LIDAR – a technology that bounces pulses of laser light off nearby surfaces and times their return to build a 360-degree 3D map of the world around it. It drives autonomously, at speeds of up to 9.3 miles per hour, to the delivery location, where the customer enters a PIN code to retrieve their shopping. </p>
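<p>To give a flavour of what that mapping step involves, the sketch below is illustrative Python only (the scan data and all names are assumptions for this article, not Alibaba’s software). It converts a single ring of LIDAR range readings, taken at known bearings, into robot-centred x-y points and reports the clearance straight ahead; a real delivery bot fuses many such scans, in three dimensions, into a full map.</p>
<pre><code>
# Turning a LIDAR scan (ranges at known bearings) into 2D points and a clearance estimate.
import math

# A fake one-degree-resolution scan: distance (metres) to the nearest surface at each bearing.
scan = {angle_deg: 4.0 + 0.5 * math.sin(math.radians(angle_deg)) for angle_deg in range(360)}

def to_xy(angle_deg, range_m):
    """Convert one polar reading into robot-centred Cartesian coordinates."""
    a = math.radians(angle_deg)
    return (range_m * math.cos(a), range_m * math.sin(a))

# The robot's 2D "picture" of its surroundings for this one scan.
points = [to_xy(angle, dist) for angle, dist in scan.items()]

# Clearance straight ahead: the shortest return within 10 degrees either side of the heading.
ahead = [scan[angle % 360] for angle in range(-10, 11)]
print(f"nearest obstacle ahead: {min(ahead):.2f} m")
</code></pre>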
<p>Similar tests are taking place in <a href="https://www.bbc.co.uk/news/technology-43949554">Milton Keynes, in the UK</a>, and in the US city of San Francisco. But these trials have not been without problems. Some delivery bots have experienced navigation issues, such as <a href="http://www.abc.net.au/news/2017-11-23/delivery-robots-could-they-solve-australias-logistic-problem/9185794">getting stuck or crashing</a> into obstacles, including people. There has also been resistance from <a href="https://www.bbc.co.uk/news/technology-42265048">citizens and activists</a> concerned with protecting public space and pedestrian safety.</p>
<h2>Self-repairing cities</h2>
<p>Buried under city streets are millions of kilometres of pipe and cable networks that provide essential water, drainage and energy services. There is mounting pressure on cities and utility companies to maintain these ageing invisible infrastructures, while dealing with the challenges of growing urban populations, ecological turbulence and citizens’ expectations. </p>
<p>Autonomous robots can detect defects in infrastructure – such as cracks in the asphalt – and identify and eliminate their triggers, whether it’s a leaking pipe or physical overloading. For example, the University of Leeds, together with local councils and industry partners, is running a project on <a href="http://selfrepairingcities.com">self-repairing cities</a> to test a range of autonomous robotic technologies. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/vBmfMqTS31U?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>There are drones that can perform remote maintenance of street lights; swarms of flying vehicles for autonomous inspection and repair of potholes on motorways; and hybrid robots designed to inspect, repair, meter and report the condition of utility pipes. </p>
<p>These robots can go where human access is impossible (inside pipes) or undesirable (at height in the streetscape) and work systematically over long periods (during overnight closures). Such technologies could greatly extend the life of vital city infrastructures, reduce maintenance expenditure and lead to massive savings. </p>
<p>But questions remain about how city areas and residential populations are selected to benefit from these upgrades. Authorities will need to ensure that it’s not just the affluent and well-connected areas of cities that benefit from RAS trials. </p>
<h2>Robots that care</h2>
<p>Humanoid robots are touted as the solution to <a href="https://futurism.com/dubai-wants-robots-to-make-up-25-of-its-police-force-by-2030/">urban policing</a>, <a href="https://www.businessdestinations.com/relax/hotels/japans-hospitality-robots/">customer service</a> and <a href="https://www.bbc.co.uk/news/education-38770516">social care challenges</a>. Pepper – a white humanoid robot standing just over a metre tall – has already taken up employment meeting, greeting and advising customers in over 140 SoftBank mobile phone shops in Japan, and <a href="https://www.softbankrobotics.com/emea/en/robots/pepper">Nestle plans to install Pepper in 1,000 sales outlets</a>. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/zJHyaD1psMc?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>According to its developers, “<a href="https://www.softbankrobotics.com/emea/en/robots/pepper">Pepper</a> has been designed to identify your emotions and to select the behaviour best suited to the situation”. Programmed to meet the individual care needs of patients, social robots such as Pepper are now being trialled as personal companions, to augment the role of human carers. </p>
<p>In 2017, care homes in <a href="http://www.dailymail.co.uk/health/article-5014079/15-000-robot-look-elderly-Southend-care-home.html">Southend, Essex</a>, adopted the companion robot to interact with elderly residents, raising fears that robots could replace staff. Yet it’s forecast that the UK will need up to <a href="https://www.nao.org.uk/wp-content/uploads/2018/02/The-adult-social-care-workforce-in-England.pdf">700,000</a> more care workers by 2030. </p>
<p>Robots may help alleviate this pressure on care homes and hospitals, by allowing people to live independently in their own homes for longer, providing entertainment via memory games, and enabling better connection with loved ones through smart appliances. But while robots may be able to facilitate patient monitoring and help with physical tasks, arguably there can be no replacement for human emotional connection and sensitivities.</p>
<p>No longer simply fantasy or limited to niche applications, autonomous robots are slowly becoming a part of our everyday lives. While developers strive for RAS technologies to be neutral in design and to work seamlessly with the city and its citizens, there will always be challenges associated with this aspiration. That’s why urban “living labs” are crucial in demonstrating the opportunities and limits of autonomous robots, and ensuring that policies and standards are put in place to protect human rights, and guard against widening social inequalities.</p><img src="https://counter.theconversation.com/content/99477/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Rachel Macrorie does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Delivery bots, maintenance drones and care robots are all being tested in real world contexts – and that’s just the beginning.Rachel Macrorie, Research Associate in Urban Automation and Robotics, University of SheffieldLicensed as Creative Commons – attribution, no derivatives.