Social media misuse – The Conversation

<h1>Using AI to monitor the internet for terror content is inescapable – but also fraught with pitfalls</h1>
<figure><img src="https://images.theconversation.com/files/573450/original/file-20240205-17-4tssh6.jpg?ixlib=rb-1.1.0&rect=33%2C0%2C3693%2C2460&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/technology-security-concept-personal-authentication-system-709257292">metamorworks/Shutterstock</a></span></figcaption></figure>
<p>Every minute, millions of social media posts, photos and videos flood the internet. <a href="https://www.socialpilot.co/blog/social-media-statistics">On average</a>, Facebook users share 694,000 stories, X (formerly Twitter) users post 360,000 posts, Snapchat users send 2.7 million snaps and YouTube users upload more than 500 hours of video. </p>
<p>This vast ocean of online material must be constantly monitored for harmful or illegal content, such as posts that promote terrorism and violence. </p>
<p>The sheer volume of content means that it’s not possible for people to inspect and check all of it manually, which is why automated tools, including artificial intelligence (AI), are essential. But such tools also have their limitations. </p>
<p>The concerted effort in recent years to <a href="https://www.tandfonline.com/doi/full/10.1080/1057610X.2023.2222901">develop tools</a> for the identification and removal of online terrorist content has, in part, been fuelled by the emergence of new laws and regulations. This includes the EU’s terrorist content online <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX%3A32021R0784">regulation</a>, which requires hosting service providers to remove terrorist content from their platform within one hour of receiving a removal order from a competent national authority.</p>
<h2>Behaviour and content-based tools</h2>
<p>In broad terms, there are two types of tools used to root out terrorist content. The first looks at certain account and message behaviour. This includes how old the account is, the use of trending or unrelated hashtags and abnormal posting volume. </p>
<p>In many ways this is similar to spam detection: it pays no attention to the content itself, and is <a href="https://www.resolvenet.org/research/remove-impede-disrupt-redirect-understanding-combating-pro-islamic-state-use-file-sharing">valuable for detecting</a> the rapid, often bot-driven dissemination of large volumes of material. </p>
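<p>As a rough illustration, a behaviour-based filter might combine a few such signals into a simple score. The sketch below is hypothetical – the signals are those mentioned above, but the thresholds are made up for illustration and are not any platform’s actual rules:</p>
<pre><code>
# A minimal, hypothetical sketch of behaviour-based scoring.
# The thresholds are illustrative only; real systems combine many more signals.
from datetime import datetime, timezone

def behaviour_score(account_created, posts_last_hour, unrelated_hashtag_ratio):
    """Return a crude 0-3 score: higher means more spam- or bot-like behaviour."""
    age_days = (datetime.now(timezone.utc) - account_created).days
    score = 0
    if age_days < 7:                   # very new account
        score += 1
    if posts_last_hour > 60:           # abnormal posting volume
        score += 1
    if unrelated_hashtag_ratio > 0.5:  # riding trending but unrelated hashtags
        score += 1
    return score

# Example with made-up values (account_created must be timezone-aware):
# behaviour_score(datetime(2024, 1, 30, tzinfo=timezone.utc), 200, 0.8)  # -> 3
</code></pre>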
<p>The second type of tool is content-based. It focuses on linguistic characteristics, word use, images and web addresses. Automated content-based tools take <a href="https://tate.techagainstterrorism.org/news/tcoaireport">one of two approaches</a>. </p>
<p><strong>1. Matching</strong></p>
<p>The first approach is based on comparing new images or videos to an existing database of images and videos that have previously been identified as terrorist in nature. One challenge here is that terror groups are known to try and evade such methods by producing subtle variants of the same piece of content. </p>
<p>After the Christchurch terror attack in New Zealand in 2019, for example, hundreds of visually distinct versions of the livestream video of the atrocity <a href="https://about.fb.com/news/2019/03/technical-update-on-new-zealand/">were in circulation</a>. </p>
<p>So, to combat this, matching-based tools generally use <a href="https://about.fb.com/news/2019/08/open-source-photo-video-matching/">perceptual hashing</a> rather than cryptographic hashing. Hashes are a bit like digital fingerprints. A cryptographic hash acts as a secure, unique identity tag: changing even a single pixel drastically alters the fingerprint, which prevents false matches but also means that slightly altered copies go unrecognised. </p>
<p>Perceptual hashing, on the other hand, focuses on similarity. It overlooks minor changes such as pixel colour adjustments, but still identifies images with the same core content. This makes perceptual hashing more resilient to small alterations to a piece of content. But it also means the hashes are not entirely random, and so could potentially be used to try to <a href="https://towardsdatascience.com/black-box-attacks-on-perceptual-image-hashes-with-gans-cc1be11f277">recreate</a> the original image.</p>
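<p>To make the distinction concrete, here is a minimal sketch of one very simple perceptual hash (an “average hash”). It is an illustration only – production systems, such as the open-source tools described in the Meta post linked above, are considerably more robust, and the match threshold here is made up:</p>
<pre><code>
# A minimal "average hash" sketch, for illustration only.
# Requires Pillow: pip install pillow
from PIL import Image

def average_hash(path, hash_size=8):
    """Shrink to hash_size x hash_size, grey-scale, then set each bit by
    whether the pixel is brighter than the mean. Similar images give
    similar bit patterns, even after small edits."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests a near-duplicate."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Hypothetical usage: compare an upload against a hash from a shared database.
# if hamming_distance(average_hash("upload.jpg"), known_hash) <= 5:
#     flag_for_review()
</code></pre>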
<figure class="align-center ">
<img alt="A close up of a mobile phone screen displaying several social media apps." src="https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/573540/original/file-20240205-25-jovm4l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Millions of posts, images and videos are uploaded to social media platforms every minute.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/moscow-russia-29072023-new-elon-musks-2339442245">Viktollio/Shutterstock</a></span>
</figcaption>
</figure>
<p><strong>2. Classification</strong></p>
<p>The second approach relies on classifying content. It <a href="https://www.cambridgeconsultants.com/insights/whitepaper/ofcom-use-ai-online-content-moderation">uses</a> machine learning and other forms of AI, such as natural language processing. To achieve this, the AI needs a large number of examples – such as texts labelled as terrorist content or not by human content moderators. By analysing these examples, the AI learns which features distinguish different types of content, allowing it to categorise new content on its own. </p>
<p>Once trained, the algorithms are then able to predict whether a new item of content belongs to one of the specified categories. These items may then be removed or flagged for human review. </p>
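<p>In code, the classification approach can be sketched in a few lines. The example below is a simplified illustration using generic, off-the-shelf components – not any platform’s actual model – and it assumes a labelled training dataset, which in practice is the hardest part:</p>
<pre><code>
# A minimal classification sketch with scikit-learn (pip install scikit-learn).
# The training texts and the 0.9 threshold are placeholders for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["example benign post", "example extremist post"]  # placeholder data
labels = [0, 1]  # 0 = benign, 1 = terrorist content (labelled by moderators)

# Turn words into numerical features, then fit a simple classifier on them.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# New items get a score; high-scoring items can be removed or sent for review.
probability = model.predict_proba(["a new post to check"])[0][1]
if probability > 0.9:
    print("Flag for human review")
</code></pre>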
<p>This approach also <a href="https://tate.techagainstterrorism.org/news/tcoaireport">faces challenges</a>, however. Collecting and preparing a large dataset of terrorist content to train the algorithms is time-consuming and <a href="https://oro.open.ac.uk/69799/">resource-intensive</a>. </p>
<p>The training data may also become dated quickly, as terrorists make use of new terms and discuss new world events and current affairs. Algorithms also have difficulty understanding context, including <a href="https://doi.org/10.1177/2053951719897945">subtlety and irony</a>. They also <a href="https://cdt.org/wp-content/uploads/2017/11/Mixed-Messages-Paper.pdf">lack</a> cultural sensitivity, including variations in dialect and language use across different groups. </p>
<p>These limitations can have important offline effects. There have been documented failures to remove hate speech in countries such as <a href="https://restofworld.org/2021/why-facebook-keeps-failing-in-ethiopia/">Ethiopia</a> and <a href="https://www.newamerica.org/the-thread/facebooks-content-moderation-language-barrier/">Romania</a>, while free speech activists in countries such as <a href="https://www.middleeasteye.net/news/revealed-seven-years-later-how-facebook-shuts-down-free-speech-egypt">Egypt</a>, <a href="https://syrianobserver.com/news/58430/facebook-deletes-accounts-of-assad-opponents.html">Syria</a> and <a href="https://www.accessnow.org/transparency-required-is-facebooks-effort-to-clean-up-operation-carthage-damaging-free-expression-in-tunisia/">Tunisia</a> have reported having their content removed.</p>
<h2>We still need human moderators</h2>
<p>So, in spite of advances in AI, human input remains essential. It is important for maintaining databases and datasets, assessing content flagged for review and operating appeals processes for when decisions are challenged. </p>
<p>But this is demanding and draining work, and there have been <a href="https://www.wired.co.uk/article/facebook-content-moderators-ireland">damning reports</a> regarding the working conditions of moderators, with many tech companies such as Meta <a href="https://www.stern.nyu.edu/experience-stern/faculty-research/who-moderates-social-media-giants-call-end-outsourcing">outsourcing</a> this work to third-party vendors. </p>
<p>To address this, we <a href="https://tate.techagainstterrorism.org/news/tcoaireport">recommend</a> the development of a set of minimum standards for those employing content moderators, including mental health provision. There is also potential to develop AI tools to safeguard the wellbeing of moderators. This would work, for example, by blurring out areas of images so that moderators can reach a decision without viewing disturbing content directly. </p>
<p>But at the same time, few, if any, platforms have the resources needed to develop automated content moderation tools and employ a sufficient number of human reviewers with the required expertise. </p>
<p>Many platforms have turned to off-the-shelf products. It is estimated that the content moderation solutions market will be <a href="https://www.prnewswire.com/news-releases/content-moderation-solutions-market-to-cross-us-32-bn-by-2031-tmr-report-301514155.html">worth $32bn by 2031</a>. </p>
<p>But caution is needed here. Third-party providers are not currently subject to the same level of oversight as tech platforms themselves. They may rely disproportionately on automated tools, with insufficient human input and a lack of transparency regarding the datasets used to train their algorithms.</p>
<p>So, collaborative initiatives between governments and the private sector are essential. For example, the EU-funded <a href="https://tate.techagainstterrorism.org/">Tech Against Terrorism Europe</a> project has developed valuable resources for tech companies. There are also examples of automated content moderation tools being made openly available like Meta’s <a href="https://about.fb.com/news/2022/12/meta-launches-new-content-moderation-tool/">Hasher-Matcher-Actioner</a>, which companies can use to build their own database of hashed terrorist content. </p>
<p>International organisations, governments and tech platforms must prioritise the development of such collaborative resources. Without this, effectively addressing online terror content will remain elusive.</p>
<p class="fine-print"><em><span>Stuart Macdonald receives funding from the EU Internal Security Fund for the project Tech Against Terrorism Europe (ISF-2021-AG-TCO-101080101). </span></em></p><p class="fine-print"><em><span>Ashley A. Mattheis receives funding from the EU Internal Security Fund for the project Tech Against Terrorism Europe (ISF-2021-AG-TCO-101080101).</span></em></p><p class="fine-print"><em><span>David Wells receives funding from the Council of Europe to conduct an analysis of emerging patterns of misuse of technology by terrorist actors (ongoing)</span></em></p>The complex task of tackling online terror needs human eyes as well as artificial intelligence.Stuart Macdonald, Professor of Law, Swansea UniversityAshley A. Mattheis, Postdoctoral Researcher, School of Law and Government, Dublin City UniversityDavid Wells, Honorary Research Associate at the Cyber Threats Research Centre, Swansea UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2122902023-08-29T16:48:03Z2023-08-29T16:48:03ZX users will need protection after the ‘block’ feature is removed – here’s why businesses are better than people at moderating negative comments<figure><img src="https://images.theconversation.com/files/544766/original/file-20230825-26-4n04v1.jpg?ixlib=rb-1.1.0&rect=10%2C0%2C6966%2C4616&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/man-hand-holding-phone-social-networking-2336316901">Evolf / Shutterstock</a></span></figcaption></figure><p>In a <a href="https://twitter.com/elonmusk/status/1692558414105186796?t=EdyRxGsju67txX8rd9bEIg&s=19">recent post</a>, the owner of X, (formerly Twitter), Elon Musk, announced his plans for the social media platform to <a href="https://www.theguardian.com/technology/2023/aug/19/blocking-feature-to-be-removed-from-former-twitter-platform-x-says-musk">remove its blocking feature</a>, except for in direct messages. </p>
<p>Users are concerned that this change in the platform’s content moderation will lead to a rise in hostile and abusive content, leaving those on the platform unable to protect themselves from its consequences.</p>
<p>It is not only individual social media users who rely on X’s blocking feature to control the content they see and interact with. Companies and brands with official social media accounts also depend on built-in moderation features to help ensure their fans and followers engage in positive and civil interactions. </p>
<p>Businesses <a href="https://www.sciencedirect.com/science/article/abs/pii/S0007681317301362">need to be able to encourage constructive discussions</a> on their social media accounts. This helps them build relationships with customers, increase word-of-mouth referrals and improve sales. Hostile online content directed at a company is not helpful to these business goals.</p>
<p>With the “block” feature significantly limited, companies and individual users seeking to control the spread of hostile online content will be forced to resort to other forms of self-moderation. While companies sometimes rely on individual users to help moderate content, <a href="https://www.emerald.com/insight/content/doi/10.1108/EJM-03-2022-0227/full/html">our recent research</a> shows that official company accounts are much better placed to de-escalate hostile content.</p>
<h2>The importance of blocking</h2>
<p><a href="https://www.emerald.com/insight/content/doi/10.1108/EJM-03-2022-0227/full/html">Research</a> shows that, when social media users are exposed to offensive or abusive content online, it can lead to an array of negative consequences. They may experience mental distress and anxiety similar to that resulting from <a href="https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/">harassment that happens in person</a>. </p>
<p>According to the same work, when presented with hostile or offensive content on social media, users are also likely to experience negative emotions and refrain from interacting with others. For businesses, this can lead to <a href="https://www.sciencedirect.com/science/article/abs/pii/S0747563215300935">negative attitudes towards the company</a>, that could also spread, and loss of trust in the brand.</p>
<p>Mute, report and block are built-in features on most social media platforms. They enable users to restrict the content they are exposed to, as well as who can interact with their profile. These features allow users to enjoy the benefits of social media, such as <a href="https://www.tandfonline.com/doi/abs/10.1080/0267257X.2017.1302975">following trends, staying informed and interacting with others</a>, while avoiding being targeted by offensive or unwanted content.</p>
<p>Mute and report are two features that can still be used to moderate hostile content. But these only partly address the issue, since they do not stop harassers from interacting with social media users or stalking their profiles. Blocking is arguably the most effective platform moderation feature. It gives users full control over who and what content they interact with on social media.</p>
<p>Blocking is not only a desirable moderation feature; it is also a requirement of responsible business practice. To prevent abusive and offensive content, both the App Store and Google Play Store policies state that the “<a href="https://developer.apple.com/app-store/review/guidelines/">ability to block abusive users from the service</a>” and “<a href="https://support.google.com/googleplay/android-developer/answer/9876937?hl=en-GB">an in-app system for blocking UGC (user generated content) and users</a>” are necessary conditions for the applications they list.</p>
<p>In the UK, the current version of the <a href="https://bills.parliament.uk/bills/3137">Online Safety Bill</a> will require social media platforms to offer adults appropriate tools to stop offensive or abusive content from reaching them. This is typically enabled by the blocking feature.</p>
<h2>Content moderation going forward</h2>
<p>Reporting and hiding hostile content could be viable moderation options for X going forward. It is, however, unlikely that these will be sufficient on their own. Removing the “block” feature could mean both companies and individual users have to take on increased responsibilities for content moderation.</p>
<p><a href="https://journals.sagepub.com/doi/abs/10.1016/j.intmar.2020.05.002?journalCode=jnma">Some research</a> has already demonstrated that official business accounts employ diverse moderation communications – beyond just censorship – in the presence of hostile content. And this can be suitable for improving business followers’ attitudes and its image. </p>
<p>Another way forward would be to rely on individual social media users for moderation, particularly prominent accounts that distinguish their status from others through digital badges such as the “<a href="https://help.twitter.com/en/managing-your-account/about-twitter-verified-accounts">blue check</a>” on X and “<a href="https://www.facebook.com/gpa/blog/top-fan-badge">top fan</a>” on Facebook. </p>
<p>This is because <a href="https://doi.org/10.1016/j.ijresmar.2022.06.001">research</a> shows that digital badge accounts actively and positively participate in discussions and user-generated content on social media. As a result, these accounts could act as informal and occasional moderators.</p>
<p><a href="https://www.emerald.com/insight/content/doi/10.1108/EJM-03-2022-0227/full/html">Our research</a> looked at whether official business accounts or prominent individual accounts were best at moderating hostile content on social media. In our first experiment, we presented participants with two scenarios. In one, an official business account moderated hostile content. In another, a digital badge user account intervened in a hostile interaction.</p>
<p>We found that companies were rated as the more credible moderators. They were best suited to de-escalating hostile content without needing to hide or remove it, or block the accounts involved.</p>
<p>On social media, it is common for moderation interventions to receive reactions and responses from observing users who support or disagree. The presence of reactions likely influences how the moderator is perceived. To this end, in our second experiment, we studied the appropriateness of the moderator depending on whether the account received positive or negative reactions from other users who had observed the interaction. </p>
<p>Participants were given four scenarios: two in which the moderation intervention by the official business account received positive or negative emojis and two where a digital badge user account moderated the hostile interaction and this received either positive or negative emojis. </p>
<p>Our findings again confirmed that company accounts were seen as most appropriate for hostile content moderation by other social media users. This is the case even when the moderation receives negative reactions from the business account followers. Digital badge user accounts, in contrast, are only seen as credible when their moderation receives positive reactions from those following the company’s social media account.</p>
<p>Whether or not the “block” feature on X is removed, moderating offensive and abusive content should not be left to businesses and individual accounts alone. </p>
<p>Content moderation should be a collective effort between platforms, businesses and individual users. Social media companies have the responsibility to equip their users with design features and tools that allow them to enjoy their platforms.</p>
<p class="fine-print"><em><span>Denitsa Dineva does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Some users fear an uptick in hostile content following the removal of the feature.Denitsa Dineva, Lecturer in Marketing and Strategy, Cardiff UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1870792022-07-18T20:05:57Z2022-07-18T20:05:57ZCelebrity deepfakes are all over TikTok. Here’s why they’re becoming common – and how you can spot them<figure><img src="https://images.theconversation.com/files/474551/original/file-20220718-12-eww7qo.jpeg?ixlib=rb-1.1.0&rect=18%2C9%2C2026%2C1140&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> </figcaption></figure><p>One of the world’s most popular social media platforms, TikTok, is now host to a steady stream of deepfake videos. </p>
<p>Deepfakes are videos in which a subject’s face or body has been digitally altered to make them look like someone else – usually a famous person. </p>
<p>One notable <a href="https://www.tiktok.com/@deeptomcruise?lang=en">example is</a> the @deeptomcriuse TikTok account, which has posted dozens of deepfake videos impersonating Tom Cruise, and attracted some 3.6 million followers. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/iyiOVUbsPcM?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Deepfakes gained a lot of media attention last year, with videos impersonating Hollywood actor Tom Cruise going viral.</span></figcaption>
</figure>
<p>In another example, Meta CEO <a href="https://www.youtube.com/watch?v=cnUd0TpuoXI&ab_channel=TimesLIVEVideo">Mark Zuckerberg</a> seems to be confessing to conspiratorial data sharing. More recently there have been a number of silly videos featuring actors such as <a href="https://www.tiktok.com/@unreal_robert">Robert Pattinson</a> and <a href="https://www.tiktok.com/@unreal_keanu">Keanu Reeves</a>.</p>
<p>Although deepfakes are often used creatively or for fun, they’re increasingly being deployed in disinformation campaigns, for identity fraud and to discredit public figures and celebrities. </p>
<p>And while the technology needed to make them is sophisticated, it’s becoming increasingly accessible, leaving detection software and regulation lagging behind.</p>
<p>One thing is for sure – deepfakes are here to stay. So what can we do about them?</p>
<h2>Varying roles</h2>
<p>The manipulation of text, images and footage has long been a bedrock of interactivity. And deepfakes are no exception; they’re the outcome of a deep-seated desire to participate in culture, storytelling, art and <a href="https://journal.media-culture.org.au/index.php/mcjournal/article/view/686">remixing</a>.</p>
<p>The technology is used extensively in the digital arts and satire. It provides more refined (and cheaper) techniques for visual insertions, compared to green screens and computer-generated imagery.</p>
<p>Deepfake technology can also enable authentic-looking <a href="https://www.rollingstone.com/culture-council/articles/the-new-ticket-to-immortality-1324513/">resurrections of deceased actors</a> and historical re-enactments. Deepfakes may even play a role in helping people grieve their <a href="https://theface.com/society/deepfakes-dead-relatives-deep-nostalgia-ai-digital-resurrection-kim-kardashian-rob-kardashian-grief-privacy">deceased loved ones</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/cQ54GDm1eL0?wmode=transparent&start=56" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Comedian Jordan Peele provides a voiceover of a deepfake with former US President Barack Obama.</span></figcaption>
</figure>
<h2>But they’re also available for misuse</h2>
<p>At the same time, deepfake technology is thought to present several social problems such as:</p>
<ul>
<li><p>deepfakes being used as “proof” for other fake news and disinformation</p></li>
<li><p>deepfakes being used to discredit celebrities and others whose livelihood depends on sharing content while maintaining a reputation</p></li>
<li><p>difficulties providing verifiable footage for political communication, health messaging and electoral campaigns</p></li>
<li><p>people’s faces being used in deepfake pornography.</p></li>
</ul>
<p>The last point is of particular concern. In 2019, deepfake detection software firm Deeptrace found 96% of 14,000 deepfakes were <a href="https://regmedia.co.uk/2019/10/08/deepfake_report.pdf">pornographic</a> in nature. Free apps such as the now-defunct DeepNude 2.0 have been used to make clothed women appear nude in footage, often for revenge porn and blackmail. </p>
<p>In Australia, deepfake apps have even allowed perpetrators to circumvent “revenge porn” <a href="https://www.abc.net.au/news/2019-08-30/deepfake-revenge-porn-noelle-martin-story-of-image-based-abuse/11437774">laws</a> – an issue expected to soon become more severe. </p>
<p>Beyond this, deepfakes are also used in <a href="https://www.pandasecurity.com/en/mediacenter/technology/deepfake-fraud/">identity fraud and scams</a>, particularly in the form of video messages from a trusted “colleague” or “relative” requesting a money transfer. One study found identity fraud using digital manipulation cost US financial institutions <a href="https://www.pymnts.com/identity-theft/2022/synthetic-identity-fraud-costs-businesses-billions-each-year-data-shows/">US$20 billion in 2020</a>. </p>
<h2>A growing concern</h2>
<p>The creators of deepfakes stress the amount of time and effort it takes to make these videos look realistic. Take Chris Ume, the visual effects and AI artist behind the @deeptomcruise TikTok account. When this account <a href="https://www.abc.net.au/news/2021-06-24/tom-cruise-deepfake-chris-ume-security-washington-dc/100234772">made</a> <a href="https://edition.cnn.com/2021/08/06/tech/tom-cruise-deepfake-tiktok-company/index.html">headlines</a> last year, Ume <a href="https://www.theverge.com/2021/3/5/22314980/tom-cruise-deepfake-tiktok-videos-ai-impersonator-chris-ume-miles-fisher">told</a> The Verge “you can’t do it by just pressing a button”. </p>
<p>But there’s good evidence deepfakes are becoming easier to make. Researchers at the United Nations Global Pulse initiative have <a href="https://genevasolutions.news/science-tech/deepfakes-are-getting-more-real-and-so-are-the-security-threats-un-dialogue">demonstrated</a> how speeches can be realistically faked in just 13 minutes.</p>
<p>As more deepfake apps are developed, we can expect lesser-skilled people to increasingly produce authentic-looking deepfakes. Just think about how much photo editing has boomed in the past decade.</p>
<p>Legislation, regulation and detection software are struggling to keep up with advances in deepfake technology. </p>
<p>In 2019, Facebook <a href="https://apnews.com/article/artificial-intelligence-technology-business-ca-state-wire-international-news-fdc96134c2e4be6a4018d30eacab292d">came in for criticism</a> for failing to remove a doctored video of American politician Nancy Pelosi, after it fell short of its definition of a deepfake. </p>
<p>In 2020, <a href="https://help.twitter.com/en/rules-and-policies/manipulated-media">Twitter banned</a> the sharing of synthetic media that may deceive, confuse or harm people (except where a label is applied). <a href="https://techcrunch.com/2020/08/05/tiktok-updates-policies-to-ban-deepfakes-expand-fact-checks-and-flag-election-misinfo/">TikTok</a> did the same. And <a href="https://www.infosecurity-magazine.com/news/youtube-issues-deepfake-ban/">YouTube banned deepfakes</a> related to the 2020 US federal election. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/instead-of-showing-leadership-twitter-pays-lip-service-to-the-dangers-of-deep-fakes-127027">Instead of showing leadership, Twitter pays lip service to the dangers of deep fakes</a>
</strong>
</em>
</p>
<hr>
<p>But even if these are well-meaning policies, it’s unlikely <a href="https://theconversation.com/revenge-of-the-moderators-facebooks-online-workers-are-sick-of-being-treated-like-bots-125127">platform moderators</a> will be able to react to reports and remove deepfakes fast enough. </p>
<p>In Australia, <a href="http://www5.austlii.edu.au/au/journals/CommsLawB/2019/24.pdf">lawyers at the NSW firm Ashurst</a> have said existing copyright and defamation laws could fall short of protecting Australians against deepfakes.</p>
<p>And while attempts to develop laws have begun overseas, these are focused on political communication. For example, California <a href="https://openstates.org/ca/bills/20192020/AB730/">has</a> made it illegal to post or distribute digitally manipulated content of a candidate during an election – but has no protections for non-politicians or celebrities. </p>
<h2>How to detect a deepfake</h2>
<p>One of the best remedies against harmful deepfakes is for users to equip themselves with as many detection skills as they can. </p>
<p>Usually, the first sign of a deepfake is that something will feel “off”. If so, look more closely at the subject’s face and ask yourself:</p>
<ul>
<li><p>is the face too smooth, or are there unusual cheekbone shadows?</p></li>
<li><p>do the eyelid and mouth movements seem disjointed, forced or otherwise unnatural? </p></li>
<li><p>does the hair look fake? Current deepfake technology struggles to maintain the original look of hair (especially facial hair).</p></li>
</ul>
<p>Context is also important:</p>
<ul>
<li><p>ask yourself what the figure is saying or doing. Are they disavowing vaccines, or performing in a porn clip? Anything that seems out of character or contrary to public knowledge will be relevant here</p></li>
<li><p>search online for keywords about the video, or the person in it, as many suspicious deepfakes will have already been debunked</p></li>
<li><p>try to judge the reliability of the source – does it seem genuine? If you’re on a social media platform, is the poster’s account verified? </p></li>
</ul>
<p>A lot of the above is basic digital literacy and requires exercising good judgment. Where common sense fails, there are some more in-depth ways to try to spot deepfakes. You can:</p>
<ul>
<li><p>search for keywords used in the video to see if there’s a public transcript of what’s being said – outlets often cover quotes by high-profile politicians and celebrities within 72 hours</p></li>
<li><p>take a screenshot of the video playing and do a Google <a href="https://images.google.com/">reverse image search</a>. This can reveal whether an original version of the video exists, which you may then compare to the dubious one</p></li>
<li><p>run any suspicious videos featuring a “colleague” or “relative” by that individual directly.</p></li>
</ul>
<p>Finally, if you do manage to spot a deepfake, don’t keep it to yourself. Always hit the report button.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/fake-viral-footage-is-spreading-alongside-the-real-horror-in-ukraine-here-are-5-ways-to-spot-it-177921">Fake viral footage is spreading alongside the real horror in Ukraine. Here are 5 ways to spot it</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/187079/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Rob Cover does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Earlier this year, a deepfake impersonating Ukrainian President Volodymyr Zelenskyy spread on social media – with Zelenskyy supposedly asking Ukrainians to surrender to Russia.Rob Cover, Professor of Digital Communication, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1863972022-07-09T08:37:52Z2022-07-09T08:37:52ZSouth Africa’s deadly July 2021 riots may recur if there’s no change<figure><img src="https://images.theconversation.com/files/472813/original/file-20220706-26-8nkoxc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The aftermath of the looting and violence of July 2021 in Durban, KwaZulu-Natal.</span> <span class="attribution"><span class="source">Rajesh Jantilal/AFP via Getty Images</span></span></figcaption></figure><p>Last July South Africa was hit by a wave of devastating violence that left over 350 people dead and caused <a href="https://theconversation.com/what-lies-behind-social-unrest-in-south-africa-and-what-might-be-done-about-it-166130">massive economic damage</a>. Different people have used different terms to describe what happened: civil unrest, looting, food riots, uprising, rebellion, counter-revolution.</p>
<p>Even government ministers were initially divided about <a href="https://mg.co.za/news/2021-07-19-security-cluster-disagrees-over-describing-recent-unrest-as-an-insurrection/">what to call the events</a>. President Cyril Ramaphosa labelled them <a href="https://www.gov.za/speeches/president-cyril-ramaphosa-update-security-situation-country-16-jul-2021-0000">an insurrection</a>: a calculated, orchestrated effort to destabilise the country, sabotage the economy, and undermine constitutional democracy.</p>
<p>Whichever way the events are described, they can be attributed to:</p>
<ul>
<li><p>the pervasiveness of weak state institutions which failed at implementation,</p></li>
<li><p>ineffective security institutions which failed to uphold the law, and </p></li>
<li><p>poor oversight and consequence management at national, provincial, and local government levels.</p></li>
</ul>
<p>The picture pieced together by an <a href="https://www.thepresidency.gov.za/content/report-expert-panel-july-2021-civil-unrest">expert panel</a> appointed by Ramaphosa to probe the riots was of a build-up, over several months, of a deliberate and targeted campaign that set the stage for what was to come. This included violent rhetoric, social media mobilisation, and threats aimed at intimidating the courts and law enforcement agencies. There were other incendiary acts that fitted into a generalised pattern of public disorder. They included the burning of trucks, blockades of highways and sabotage of infrastructure.</p>
<p>These multi-layered currents fed off and reinforced each other. They <a href="https://www.thepresidency.gov.za/content/report-expert-panel-july-2021-civil-unrest">sometimes ran parallel to each other</a>. The jailing of former president Jacob Zuma <a href="https://www.dailymaverick.co.za/opinionista/2021-07-30-south-africas-july-riots-and-the-long-shadow-of-jacob-zuma-fall-over-party-and-state/">for contempt of court</a> was only a trigger. </p>
<p>The notion of an insurrection suggests that there were key politically motivated actors who exploited weaknesses in the state’s capacity to drive a general campaign of violence. The violence undermined the legitimacy of state institutions and left the nation psychologically traumatised.</p>
<p>It left a lingering sense that untouchable people could act with impunity. This perception has been reinforced by the <a href="https://www.timeslive.co.za/news/south-africa/2022-07-08-the-july-riots-a-year-later-but-no-justice-for-the-237-people-murdered/">slow trickle of prosecutions</a>, and unconvincing promises by the state to uncover the presumed masterminds.</p>
<p>A troubling question is whether a recurrence of the devastating events of July 2021 is possible. In my view, it is possible, if there is no meaningful change. </p>
<h2>Growing seeds of discontent</h2>
<p>The objective conditions which made the riots possible remain in place. These include the periodic disruptions and <a href="https://www.news24.com/fin24/economy/trucks-block-roads-in-mpumalanga-including-n4-to-mozambique-20220706">blockades on national roads</a>, calls for <a href="https://businesstech.co.za/news/government/604062/unions-plan-national-shutdown-in-south-africa-amid-worries-were-becoming-another-zimbabwe/">national shutdowns</a>, and deliberate <a href="https://www.enca.com/news/sabotage-and-syndicates-hampering-eskom-recovery-gordhan">damage to infrastructure</a>. </p>
<p>Social media continues to be used to stoke fears and spread rumours of unrest. Moreover, the governing African National Congress (ANC) is wracked by internal rivalry. It is failing to provide much-needed leadership.</p>
<p>South Africa has for years seen <a href="https://theconversation.com/what-lies-behind-social-unrest-in-south-africa-and-what-might-be-done-about-it-166130">almost daily protests</a> over a lack of decent municipal services such as water, sanitation, a lack of housing and land. A trigger event, or set of conditions, could easily ignite the flames.</p>
<p>After two years of hardship brought about by COVID-19, there have been other shocks. Earlier this year, <a href="https://reliefweb.int/report/south-africa/south-africa-kwazulu-natal-floods-emergency-appeal-no-mdrza012-operational-strategy">KwaZulu-Natal</a> and <a href="https://ewn.co.za/2022/04/28/more-than-1-000-people-homeless-as-mabuyane-reveals-extent-of-ec-flood-damage">other parts of the country</a> were hit hard by devastating <a href="https://ewn.co.za/2022/06/15/clean-up-operations-underway-in-waterlogged-western-cape-following-heavy-rains">floods</a>, evoking further trauma.</p>
<p>In other parts of the country, drought is creating <a href="https://theconversation.com/south-africa-has-had-lots-of-rain-and-most-dams-are-full-but-water-crisis-threat-persists-178788">serious water shortages</a>, bringing with it a new source of insecurity and instability. </p>
<p>Unemployment <a href="https://www.gcis.gov.za/content/resourcecentre/newsletters/insight/issue13">has risen</a>. Many of those with jobs are failing to make ends meet. The violent rhetoric that has been building up <a href="https://www.amnesty.org/en/latest/news/2022/04/south-africa-migrants-living-in-constant-fear-after-deadly-attacks/">against migrants</a> could almost be out of the July 2021 playbook. The rhetoric includes the circulating of untraceable videos designed to stoke tension and fear.</p>
<p>The Ukraine war has severely affected energy security and food security, with a knock-on effect on the cost of living in South Africa. </p>
<h2>Addressing the problem</h2>
<p>Ramaphosa has admitted to a lack of leadership on the part of government, adding that his <a href="https://www.gov.za/speeches/president-cyril-ramaphosa-2022-state-nation-address-10-feb-2022-0000">cabinet accepts responsibility for the violence</a>. He pledged to drive a national response plan to address the weaknesses that the expert panel identified. This included the filling of critical vacancies in the security services, and appointing new leadership. </p>
<p>A new national <a href="https://ewn.co.za/2022/03/31/sehlahle-fannie-masemola-announced-as-new-national-police-commissioner">police commissioner has been appointed</a>. Likewise, the State Security Agency has <a href="https://www.sowetanlive.co.za/news/2022-02-28-state-security-agency-finally-gets-a-permanent-boss/">a new head</a>. And Treasury has released funds to recruit and train <a href="https://businesstech.co.za/news/trending/583460/the-saps-is-hiring-thousands-of-officers-young-and-old/">more police officers</a> to bolster public order policing. </p>
<p>Since last year, the National Joint Operational and Intelligence Structure (<a href="https://mobile.twitter.com/natjoints">NatJOINTS</a>) has been responding regularly to unrest. This is welcome, but there is a risk of law enforcement agencies becoming stretched if they do not base their operational plans on reliable intelligence.</p>
<p>The recent findings of the <a href="https://www.statecapture.org.za/">judicial inquiry into state capture</a> <a href="https://www.gov.za/sites/default/files/gcis_document/202206/electronic-state-capture-commission-report-part-v-vol-i.pdf">point</a> to the hollowing out and abuse for political ends of intelligence services during the Zuma era. It is not surprising, therefore, that the security sector was so ill-prepared to preempt the violent unrest. </p>
<p>If there is an area in which all the security services need to improve their capabilities, it is in the most modern methods of technical surveillance and digital intelligence. The era of fake news and disinformation requires a new generation of personnel with digital skills. </p>
<p>The security services need to be better prepared in case there is a similar outbreak of violence.</p>
<p>They need to hone their skills and improve the coordination of the roles and resources of local, provincial and national government with those of the emergency services, civil society, business and private security providers. There is also a need to improve intelligence capacity, and to work closely with communities, business and civil society for more timely sharing of information. </p>
<p>But, the state cannot outsource its overall constitutional responsibility for guaranteeing public safety and security. Intelligence services must forewarn government and the country of threats to security, using lawful means. </p>
<p>Other countries provide lessons. When policing powers are not overseen in a well-regulated and lawful manner, the space created can be filled by militias, <a href="https://theconversation.com/rising-vigilantism-south-africa-is-reaping-the-fruits-of-misrule-179891">vigilantes</a> and others trading on the vulnerability of communities.</p>
<h2>What lies ahead</h2>
<p>On the anniversary of the July unrest, South Africans are demanding accountability and justice. Many feel let down by weak governance, political dysfunction, and economic inequality – mainly at the expense of the country’s poverty-stricken black majority. </p>
<p>The Minister in the Presidency, Mondli Gungubele, in presenting the <a href="https://www.gov.za/speeches/minister-mondli-gungubele-state-security-dept-budget-vote-202223-24-may-2022-0000">State Security budget vote</a> for 2022/23, pledged a doctrinal shift in approach, away from “state security” towards a people-centred notion of security.</p>
<p>The need for such a turn in approach had also been highlighted by the <a href="https://www.gov.za/sites/default/files/gcis_document/201903/high-level-review-panel-state-security-agency.pdf">report</a> of a panel appointed by Ramaphosa in June 2018, to review the workings of the country’s intelligence services.</p>
<p>The president has also promised an inclusive process of developing a national security strategy. Civil society bodies should use this opportunity to put their demands on the table. </p>
<p>South Africa needs a multi-pronged strategy to build peaceful, sustainable neighbourhoods, communities, and a nation where the rule of law prevails. </p>
<p>New notions of security that reflect a people-centred ethos, are needed. To face violent and destabilising crimes similar to July’s events, the country may need to review the mandates, capabilities and resourcing of the security services.</p>
<p>This does not imply the escalation of the use of deadly force. Methods aimed at deescalating conflict, engaging community leaders, and averting bloodshed are needed. This requires serious and dedicated security services and accountable political representatives to oversee the services to avoid abuses of power. </p>
<p>An engaged citizenry is also one that acts lawfully to save the country from civil conflict. South Africans would do well to consider carefully whether and how to institutionalise the many acts of heroism displayed last year. They include spontaneously formed community patrols protecting shopping centres and private security companies assisting the police with operational equipment. </p>
<p>South Africa can hopefully avoid a repeat of the events of July 2021. But that calls for a recalibrated security sector which is effective, responsive, accountable, serving the country’s democracy and not the interests of a few who manipulate them for personal or partisan gain. </p>
<p><em>This is an edited version of a speech delivered at the recent <a href="http://defendourdemocracy.co.za/">Defend our Democracy conference</a></em>.</p>
<p class="fine-print"><em><span>Sandy Africa was chairperson of the Expert Panel on the July 2021 civil unrest, appointed to assess the shortcomings of the South African security services' response to the violence. She writes in her personal capacity.</span></em></p>South Africa needs a multi-pronged strategy for building peaceful, sustainable neighbourhoods, communities, and a nation where the rule of law prevails.Sandy Africa, Associate Professor, Political Sciences, and Deputy Dean Teaching and Learning (Humanities), University of PretoriaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1824332022-05-18T04:05:27Z2022-05-18T04:05:27ZWrong, Elon Musk: the big problem with free speech on platforms isn’t censorship. It’s the algorithms<figure><img src="https://images.theconversation.com/files/463819/original/file-20220518-15-bgihmy.jpeg?ixlib=rb-1.1.0&rect=37%2C37%2C4955%2C3285&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Imagine there is a public speaking square in your city, much like the ancient Greek agora. Here you can freely share your ideas without censorship. </p>
<p>But there’s one key difference. Someone decides, for their own economic benefit, who gets to listen to what speech or which speaker. And this isn’t disclosed when you enter, either. You might only get a few listeners when you speak, while someone else with similar ideas has a large audience. </p>
<p>Would this truly be free speech? </p>
<p>This is an important question, because the modern agoras are social media platforms – and this is how they organise speech. Social media platforms don’t just present users with the posts of those they follow, in the order they’re posted. </p>
<p>Rather, algorithms decide what content is shown and in which order. In <a href="https://journals.sagepub.com/doi/abs/10.1177/02683962211013358">our research</a>, we’ve termed this “algorithmic audiencing”. And we believe it warrants a closer look in the debate about how free speech is practised online. </p>
<h2>Our understanding of free speech is too limited</h2>
<p>The free speech debate has once more been ignited by news of Elon Musk’s plans to <a href="https://independentaustralia.net/business/business-display/twitter-will-be-a-platform-of-free-speech--if-elon-musk-says-so,16322">take over Twitter</a>, his promise to reduce content moderation (including by <a href="https://www.theguardian.com/technology/2022/may/10/elon-musk-pledges-overturn-twitter-ban-donald-trump">restoring</a> Donald Trump’s account) and, more recently, speculation he might <a href="https://www.abc.net.au/news/2022-05-17/elon-musk-twitter-takeover-lower-bid-fake-accounts/101073656">pull out</a> of the deal if Twitter can’t prove the platform isn’t inundated with bots.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1507259709224632344"}"></div></p>
<p>Musk’s approach to free speech is typical of how this issue is often framed: in terms of <a href="https://www.newyorker.com/magazine/2020/10/19/why-facebook-cant-fix-itself">content moderation, censorship</a> and matters of deciding what speech can enter and stay on the platform. </p>
<p>But <a href="https://journals.sagepub.com/doi/abs/10.1177/02683962211013358">our research</a> reveals this focus misses how platforms systematically interfere with free speech on the audience’s side, rather than the speaker’s side. </p>
<p>Outside the social media debate, free speech is commonly understood as the “<a href="https://www.mtsu.edu/first-amendment/article/328/abrams-v-united-states">free trade of ideas</a>”. Speech is about discourse, not merely the right to speak. Algorithmic interference in who gets to hear which speech serves to directly undermine this free and fair exchange of ideas. </p>
<p>If social media platforms are <a href="https://www.facebook.com/notes/mark-zuckerberg/a-privacy-focused-vision-for-social-networking/10156700570096634/">“the digital equivalent of a town square”</a> committed to defending free speech, as both <a href="https://www.wired.com/story/zuckerberg-defends-free-speech-even-when-speech-false">Facebook’s Mark Zuckerberg</a> and <a href="https://www.economist.com/business/2022/04/30/elon-musk-is-taking-twitters-public-square-private">Musk argue</a>, then algorithmic audiencing must be considered for speech to be free.</p>
<h2>How it works</h2>
<p>Algorithmic audiencing happens through algorithms that either amplify or curb the reach of each message on a platform. This is done by design, based on a platform’s monetisation logic. </p>
<p>Newsfeed algorithms amplify content that keeps <a href="http://opentranscripts.org/transcript/algorithmic-spiral-silence/">users the most “engaged”</a>, because engagement leads to more user attention on <a href="https://www.technologyreview.com/2020/10/23/1011119/the-weirdly-specific-filters-campaigns-are-using-to-micro-target-you/">targeted advertising</a>, and more data collection opportunities. </p>
<p>This explains why some users have large audiences while others with similar ideas <a href="https://www.nytimes.com/2020/10/29/technology/dan-bongino-has-no-idea-why-facebook-loves-him.html">are barely noticed</a>. Those who speak to the algorithm achieve the widest circulation of their ideas. This is akin to <a href="https://firstmonday.org/article/view/4901/4097">large-scale social engineering</a>.</p>
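<p>The contrast with a purely chronological feed is easy to sketch. In the illustration below, <code>predict_engagement</code> stands in for a platform’s proprietary ranking model – it is an assumption made for illustration, not a real API:</p>
<pre><code>
# A hypothetical sketch contrasting a chronological feed with an
# engagement-ranked one. predict_engagement is a stand-in for a platform's
# proprietary model, assumed here purely for illustration.
def chronological_feed(posts):
    """Show followers' posts newest first - no algorithmic audiencing."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def engagement_ranked_feed(posts, predict_engagement):
    """Order posts by whatever the model predicts will keep users engaged,
    which in effect decides who gets to 'hear' each speaker."""
    return sorted(posts, key=predict_engagement, reverse=True)
</code></pre>
<p>Both feeds contain exactly the same speech; what differs is who is shown it, and in what order.</p>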
<p>At the same time, the workings of Facebook’s and Twitter’s <a href="https://www.scientificamerican.com/article/its-time-to-open-the-black-box-of-social-media/">algorithms remain largely opaque</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/is-your-phone-really-listening-to-your-conversations-well-turns-out-it-doesnt-have-to-162172">Is your phone really listening to your conversations? Well, turns out it doesn't have to</a>
</strong>
</em>
</p>
<hr>
<h2>How it interferes with free speech</h2>
<p>Algorithmic audiencing has a material effect on public discourse. While content moderation only applies to harmful content (which makes up a <a href="https://www.wired.com/story/facebooks-deceptive-math-when-it-comes-to-hate-speech/">tiny fraction of all speech</a> on these platforms), algorithmic audiencing systematically applies to all content.</p>
<p>So far, this kind of interference in free speech has been overlooked, because it’s unprecedented. It was not possible in traditional media.</p>
<p>And it is relatively recent for social media as well. In the early days messages would simply be sent to one’s follower network, rather than subjected to algorithmic distribution. Facebook, for example, only started filling newsfeeds with the <a href="https://www.washingtonpost.com/technology/interactive/2021/how-facebook-algorithm-works/">help of algorithms</a> that optimise for engagement in 2012, after it was publicly listed and faced increased pressure to monetise.</p>
<p>Only in the past five years has algorithmic audiencing really become a widespread issue. At the same time, the extent of the issue isn’t fully known because it’s almost impossible for researchers to gain access <a href="https://www.scientificamerican.com/article/its-time-to-open-the-black-box-of-social-media/">to platform data</a>.</p>
<p>But we do know addressing it is important, since it can drive the proliferation of harmful content such as <a href="https://www.nytimes.com/2017/04/25/magazine/can-facebook-fix-its-own-worst-bug.html?login=email&auth=login-email">misinformation and disinformation</a>. </p>
<p>We know such content <a href="https://medium.com/cybersecurity-for-democracy/far-right-news-sources-on-facebook-more-engaging-e04a01efae90">gets commented on and shared more</a>, attracting further amplification. <a href="https://techcrunch.com/2020/10/30/facebook-group-recommendations-election/">Facebook’s own research</a> has shown its algorithms can drive users to join extremist groups.</p>
<h2>What can be done?</h2>
<p>Individually, Twitter users should heed <a href="https://mashable.com/article/elon-musk-jack-dorsey-twitter-algorithm">Elon Musk’s recent advice</a> to re-organise their newsfeeds back to chronological order, which would curb the extent of algorithmic audiencing being applied.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1525612988115320838"}"></div></p>
<p>You can also do this <a href="https://www.businessinsider.com/facebook-social-media-switch-feed-chronological-timeline-2021-11">for Facebook</a>, but not as a default setting – so you’ll have to choose this option every time you use the platform. It’s the same case <a href="https://www.popsci.com/diy/how-to-make-instagram-feed-chronological/">with Instagram</a> (which is also owned by Facebook’s parent company, Meta).</p>
<p>What’s more, switching to chronological order will only go so far in curbing algorithmic audiencing – because you’ll still get other content (apart from what you directly opt-in to) which will target you based on the platform’s monetisation logic.</p>
<p>And we also know only a fraction of users ever change <a href="https://uxplanet.org/the-power-of-defaults-992d50b73968">their default settings</a>. In the end, regulation is required. </p>
<p>While social media platforms are private companies, they enjoy far-ranging privileges to moderate content on their platforms under <a href="https://www.nytimes.com/2021/03/25/technology/section-230-explainer.html">section 230 of the US’s Communications Decency Act</a>. </p>
<p>In return, the public expects platforms to facilitate a free and fair exchange of their ideas, as these platforms provide the space <a href="https://www.theguardian.com/commentisfree/2018/sep/15/facebook-twitter-social-media-public-discourse">where public discourse happens</a>. Algorithmic audiencing constitutes a breach of this privilege. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1525633876676136961"}"></div></p>
<p>As US legislators contemplate <a href="https://www.nytimes.com/live/2021/03/25/business/social-media-disinformation">social media regulation</a>, addressing algorithmic audiencing must be on the table. Yet so far it has hardly been part of the debate at all – the focus has been squarely on content moderation.</p>
<p>Any serious regulation will need to challenge platforms’ entire business model, since algorithmic audiencing is a direct outcome of <a href="https://www.nytimes.com/2020/01/24/opinion/sunday/surveillance-capitalism.html">surveillance capitalist logic</a> – wherein platforms capture and commodify our content and data to predict (and influence) our behaviour – all to turn a profit.</p>
<p>Until we are regulating this use of algorithms, and the monetisation logic that underpins it, speech on social media will never be free in any genuine sense of the word.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-is-tilting-the-political-playing-field-more-than-ever-and-its-no-accident-148314">Facebook is tilting the political playing field more than ever, and it's no accident</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/182433/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>There’s a tension between facilitating free and fair debate on social media, and businesses’ bottom line. And it must be resolved with the public interest in mind.Kai Riemer, Professor of Information Technology and Organisation, University of SydneySandra Peter, Director, Sydney Business Insights, University of SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1826732022-05-10T04:13:39Z2022-05-10T04:13:39ZStuff-up or conspiracy? Whistleblowers claim Facebook deliberately let important non-news pages go down in news blackout<figure><img src="https://images.theconversation.com/files/462154/original/file-20220510-14-89py3q.jpeg?ixlib=rb-1.1.0&rect=64%2C69%2C3030%2C2083&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>On Friday, the Wall Street Journal published information from Facebook whistleblowers, alleging Facebook (which is owned by Meta) deliberately caused havoc in Australia last year <a href="https://www.wsj.com/articles/facebook-deliberately-caused-havoc-in-australia-to-influence-new-law-whistleblowers-say-11651768302">to influence the News Media Bargaining Code</a> before it was passed as law. </p>
<p>During Facebook’s news blackout in February 2021, thousands of non-news pages were also blocked – including important emergency, health, charity and government pages.</p>
<p>Meta has continued to argue the takedown of not-for-profit and government pages was a technical error. It remains to be seen whether the whistleblower revelations will lead to Facebook being taken to court.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-conversations-submission-to-the-australian-senate-inquiry-into-the-news-media-bargaining-code-153532">The Conversation's submission to the Australian Senate Inquiry into the News Media Bargaining Code</a>
</strong>
</em>
</p>
<hr>
<h2>The effects of Facebook’s “error”</h2>
<p>The <a href="https://theconversation.com/in-a-world-first-australia-plans-to-force-facebook-and-google-to-pay-for-news-but-abc-and-sbs-miss-out-143740">News Media Bargaining Code</a> was first published in July 2020, with a goal to have Facebook and Google pay Australian news publishers for the content they provide to the platforms. </p>
<p>It was passed by the House of Representatives (Australia’s lower house) on February 17 2021. That same day, Facebook retaliated by issuing a <a href="https://about.fb.com/news/2021/02/changes-to-sharing-and-viewing-news-on-facebook-in-australia/">statement</a> saying it would remove access to news media business pages on its platform – a threat it had first made in August 2020.</p>
<p>It was arguably a reasonable threat of capital strike by a foreign direct investor, in respect to new regulation it regarded as “harmful” – and which it believed fundamentally “misunderstands the relationship between [its] platform and publishers who use it to share news content”.</p>
<p>However, the range of pages blocked was extensive. </p>
<p>Facebook has a label called the “News Page Index” which can be applied to its pages. News media pages, such as those of the ABC and SBS, are included in the index. All Australian pages on this index were taken down during Facebook’s news blackout. </p>
<p>But Facebook also blocked access to other pages, such as the page of the satirical website <a href="https://www.betootaadvocate.com">The Betoota Advocate</a>. The broadness of Facebook’s approach was also evidenced by the blocking of its own corporate page. </p>
<p>The <a href="https://www.theguardian.com/technology/2021/feb/18/time-to-reactivate-myspace-the-day-australia-woke-up-to-a-facebook-news-blackout">most major harm</a>, however, came from blocks to not-for-profit pages, including cancer charities, the Bureau of Meteorology and a variety of state health department pages – at a time when they were delivering crucial information about COVID-19 and vaccines.</p>
<h2>Whistleblowers emerge</h2>
<p>The whistleblower material published by the Wall Street Journal, which was also filed to the US Department of Justice and the Australian Competition and Consumer Commission (ACCC), includes several email chains that show Facebook decided to implement its blocking threat through a broad strategy. </p>
<p>The argument for its broad approach was based on an anti-avoidance clause in the News Media Bargaining Code. The effect of the clause was to ensure Facebook didn’t attempt to avoid the rules of the code by simply substituting Australian news with international news for Australian users. In other words, it would have to be all or nothing.</p>
<p>As a consequence, Facebook did not use its News Page Index. It instead classified a domain as “news” if “60% [or] more of a domain’s content shared on Facebook is classified as news”. One product manager wrote:</p>
<blockquote>
<p>Hey everyone – the [proposed Australian law] we are responding to is extremely broad, so guidance from the policy and legal team has been to be over-inclusive and refine as we get more information.</p>
</blockquote>
<p>The blocking approach was algorithmic and based on these rules. There were some exceptions, including not blocking “.gov” domains – but there was no such exclusion for “.gov.au”. As a result, many charity and government pages were taken down. </p>
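<p>To make the described logic concrete, here is a minimal, hypothetical sketch in Python of a rule with this shape – a 60% news-share threshold combined with an exact “.gov” exemption. The threshold and the exemption come from the whistleblower material quoted above; the function names, data structures and example domains are illustrative assumptions, not Facebook’s actual code.</p>
<pre><code># Hypothetical sketch only: illustrates the shape of the rule described in
# the whistleblower material (a 60% news-share threshold plus a ".gov"
# exemption). Names and data structures are invented for illustration.

def is_news_domain(shared_posts, news_share_threshold=0.6):
    """Treat a domain as 'news' if at least 60% of its shared posts are classified as news."""
    if not shared_posts:
        return False
    news_count = sum(1 for post in shared_posts if post["classified_as_news"])
    return news_count / len(shared_posts) >= news_share_threshold

def should_block(domain, shared_posts, exempt_suffixes=(".gov",)):
    # The exemption only spares domains that literally end in ".gov",
    # so ".gov.au" pages fall through and get blocked.
    if any(domain.endswith(suffix) for suffix in exempt_suffixes):
        return False
    return is_news_domain(shared_posts)

# A state health page whose shared posts are mostly (mis)classified as news:
posts = [{"classified_as_news": True}] * 7 + [{"classified_as_news": False}] * 3
print(should_block("health.nsw.gov.au", posts))  # True  -> blocked
print(should_block("cdc.gov", posts))            # False -> exempt
</code></pre>
<p>Under this kind of over-inclusive rule, an Australian government page could be blocked while an equivalent US “.gov” page was spared – consistent with the overreach described above.</p>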
<p>The whistleblower material makes it clear a number of Facebook employees offered solutions to the perceived overreach. This included one employee proposal that Facebook should “proactively find all the affected pages and restore them”. However, the documents show these calls were ignored. </p>
<p>According to the Wall Street Journal:</p>
<blockquote>
<p>The whistleblower documents show Facebook did attempt to exclude government and education pages. But people familiar with Facebook’s response said some of these lists malfunctioned at rollout, while other whitelists didn’t cover enough pages to avoid widespread improper blocking.</p>
</blockquote>
<h2>Amendments following the blackout</h2>
<p>Following Facebook’s news blackout, there were last-minute amendments to the draft legislation before it was passed through the Senate.</p>
<p>The main change was that the News Media Bargaining Code would only apply to Facebook if deals were not struck with a range of key news businesses (which so far has not included SBS or <a href="https://twitter.com/ConversationEDU/status/1440562209206128653?s=20&t=FsviAWBLX7mKumr80Qiwzg">The Conversation</a>). </p>
<p>It’s not clear whether the amendment was a result of Facebook’s actions, or whether it would have been introduced in the Senate anyway. Either way, Facebook said it was “<a href="https://about.fb.com/news/2021/02/changes-to-sharing-and-viewing-news-on-facebook-in-australia/">satisfied</a>” with the outcome, and ended its news blackout.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/this-weeks-changes-are-a-win-for-facebook-google-and-the-government-but-what-was-lost-along-the-way-155865">This week's changes are a win for Facebook, Google and the government — but what was lost along the way?</a>
</strong>
</em>
</p>
<hr>
<h2>Facebook denies the accusations</h2>
<p>The definitions of “core news content” and “news source” in the News Media Bargaining Code were reasonably narrow. So Facebook’s decision to block pages so broadly seems problematic – especially from the perspective of reputational risk. </p>
<p>But as soon as that risk crystallised, Facebook denied intent to cause any harm. A Meta spokesperson said the removal of non-news pages was a “mistake” and “any suggestion to the contrary is categorically and obviously false”. Referring to the whistleblower documents, the spokesperson said:</p>
<blockquote>
<p>The documents in question clearly show that we intended to exempt Australian government pages from restrictions in an effort to minimise the impact of this misguided and harmful legislation. When we were unable to do so as intended due to a technical error, we apologised and worked to correct it. </p>
</blockquote>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/publishers-take-on-facebook-and-google-for-failing-to-pay-up-under-the-news-media-bargaining-code-179838">Publishers take on Facebook and Google for failing to pay up under the News Media Bargaining Code</a>
</strong>
</em>
</p>
<hr>
<h2>Possible legal action</h2>
<p>In the immediate aftermath of Facebook’s broad news takedown, former ACCC chair Allan Fels <a href="https://www.news.com.au/technology/online/social/facebook-could-face-lawsuits-for-unconscionable-conduct-over-nonnews-wipe-out/news-story/b312cef33b8e2261e8b5743f9bf87ca6">suggested</a> there could be a series of class actions against Facebook.</p>
<p>His basis was that Facebook’s action was unconscionable under the <a href="http://classic.austlii.edu.au/au/legis/cth/consol_act/caca2010265/toc-sch2.html">Australian Consumer Law</a>. We have not seen these actions taken.</p>
<p>It’s not clear whether the whistleblower material changes the likelihood of legal action against Facebook. If legal action is taken, it’s more likely to be a civil case taken by an organisation that has been harmed, rather than a criminal case.</p>
<p>On the other hand, one reading of the material is that Facebook did indeed overreach out of caution, and then reduced the scope of its blocking over a short period. </p>
<p>Facebook suffered reputational harm as a result of its actions and apologised. However, if it were to engage in similar actions in other countries, the balance between stuff-up and conspiracy would shift. </p>
<p>The Wall Street Journal described Facebook’s approach as an “overly broad and sloppy process”. Such a process isn’t good practice, but done once, it’s unlikely to be criminal. On the other hand, repeating it would create a completely different set of potential liabilities and causes of action.</p>
<hr>
<p><em>Disclosure: Facebook has refused to negotiate a deal with The Conversation under the News Media Bargaining Code. In response, The Conversation has called for Facebook to be “designated” by the Treasurer under the Code. This means Facebook would be forced to pay for content published by The Conversation on its platform.</em></p><img src="https://counter.theconversation.com/content/182673/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Rob Nicholls is a member of the UNSW Allens Hub for Technology, Law and Innovation from which he receives research funding. He is also the faculty lead for the UNSW Institute for Cyber Security (IFCYBER), which provides support. UNSW has received an untied gift from Facebook, which is used to fund some of Rob's research.</span></em></p>A Meta spokesperson told The Conversation non-news pages had been taken down by mistake. Whistleblower allegations contradict this.Rob Nicholls, Associate professor in regulation and governance, UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1816262022-04-26T19:55:56Z2022-04-26T19:55:56ZWhat will Elon Musk’s ownership of Twitter mean for ‘free speech’ on the platform?<figure><img src="https://images.theconversation.com/files/459708/original/file-20220426-22-k6moqy.jpeg?ixlib=rb-1.1.0&rect=22%2C61%2C3683%2C2411&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Eric Risberg/AP</span></span></figcaption></figure><p>In a surprise capitulation, the board of Twitter has announced it will support a <a href="https://www.ft.com/content/79e3bc48-96ef-4e62-b30b-d3ddb45d7a2f">takeover bid</a> by Elon Musk, the world’s richest person. But is it in the public interest? </p>
<p>Musk is offering US$54.20 a share. This values the company at US$44 billion (or A$61 billion) – making it one of the largest leveraged buyouts on record. </p>
<p><a href="https://www.sec.gov/Archives/edgar/data/1418091/000110465922048128/tm2213229d1_ex99-c.htm">Morgan Stanley and other large financial institutions</a> will lend him US$25.5 billion. Musk himself will put in around US$20 billion. This is about the size of a single <a href="https://www.theguardian.com/business/2022/apr/21/elon-musk-stands-to-collect-23bn-bonus-as-tesla-surges-ahead#:%7E:text=Elon%20Musk%2C%20chief%20executive%20of,company's%20reported%20record%20quarterly%20profits">bonus</a> he is expected to receive from Tesla. </p>
<p>In a <a href="https://www.sec.gov/Archives/edgar/data/0001418091/000110465922045641/tm2212748d1_sc13da.htm">letter</a> to the chair of Twitter, Musk claimed he would “unlock” Twitter’s “extraordinary potential” to be “the platform for free speech around the globe”.</p>
<p>But the idea that social media has the potential to represent an unbridled mode of public discourse is underpinned by an idealistic understanding that has <a href="https://doi.org/10.1177%2F14614440222226244">surrounded social media</a> technologies for <a href="https://www.wired.com/1995/11/poster-if/">some time</a>. </p>
<p>In reality, Twitter being owned by one person, some of whose own tweets have been <a href="https://www.sec.gov/news/press-release/2018-226">false</a>, <a href="https://news.yahoo.com/one-tweet-elon-musk-captures-201842976.html">sexist</a>, <a href="https://www.vox.com/recode/2021/5/18/22441831/elon-musk-bitcoin-dogecoin-crypto-prices-tesla">market-moving</a> and <a href="https://www.abc.net.au/news/2019-10-28/elon-musk-saya-pedo-guy-is-a-common-insult-in-south-africa/11639090">arguably defamatory</a> poses a risk to the platform’s future.</p>
<h2>Can Twitter expect a total overhaul?</h2>
<p>We see Musk’s latest move in a less-than-benign light, as it gives him unprecedented power and influence over Twitter. He has mused about making several potential changes to the platform, including:</p>
<ul>
<li><a href="https://www.vox.com/recode/23041717/twitter-musk-business-plan-peter-kafka-column">reshuffling</a> the current <a href="https://www.theguardian.com/technology/2022/apr/25/five-things-in-elon-musks-in-tray-after-twitter-takeover">management</a>, in which he says he doesn’t have confidence </li>
<li>adding an <a href="https://theconversation.com/why-an-edit-button-for-twitter-is-not-as-simple-as-it-seems-181623">edit button</a> on tweets</li>
<li>weakening the current content moderation approach – including by favouring temporary suspensions of users rather than outright bans, and</li>
<li>potentially moving to a “freemium” model similar to Spotify’s, whereby users can <a href="https://www.theguardian.com/technology/2022/apr/25/five-things-in-elon-musks-in-tray-after-twitter-takeover">pay to avoid more intrusive advertisements</a>. </li>
</ul>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-an-edit-button-for-twitter-is-not-as-simple-as-it-seems-181623">Why an edit button for Twitter is not as simple as it seems</a>
</strong>
</em>
</p>
<hr>
<p>Shortly after becoming Twitter’s largest individual shareholder earlier this month, Musk <a href="https://www.thestreet.com/markets/elon-musk-ted-talk">said</a> “I don’t care about the economics at all”.</p>
<p>But the bankers who lent him US$25.5 billion to eventually acquire the platform probably do. Musk may come under pressure to lift Twitter’s profitability. He claims his top priority is free speech – but potential advertisers may not want their products featured next to an extremist rant.</p>
<p>In recent years, Twitter has implemented a range of <a href="https://help.twitter.com/en/rules-and-policies#platform-integrity-and-authenticity">governance and content moderation</a> policies. For example, in 2020 it broadened its “<a href="https://blog.twitter.com/en_us/topics/company/2020/An-update-on-our-continuity-strategy-during-COVID-19">definition of harm</a>” to address COVID-19 content contradicting guidance from authoritative sources. </p>
<p>Twitter claims developments in its content moderation approach have been to “<a href="https://about.twitter.com/en">serve the public conversation</a>” and address <a href="https://help.twitter.com/en/rules-and-policies/medical-misinformation-policy">disinformation and misinformation</a>. It also claims to respond to user experiences <a href="https://about.twitter.com/en/our-priorities/healthy-conversations">of abuse</a> and general <a href="https://journals.sagepub.com/doi/10.1177/13548565211036797">incivility users must navigate</a>. </p>
<p>Taking a longer-term view, however, it seems Twitter’s bolstering of content moderation could be seen as an effort to save its reputation following <a href="https://www.nytimes.com/2020/11/17/technology/lawmakers-drill-down-on-how-facebook-and-twitter-moderate-content.html">extensive backlash</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/instead-of-showing-leadership-twitter-pays-lip-service-to-the-dangers-of-deep-fakes-127027">Instead of showing leadership, Twitter pays lip service to the dangers of deep fakes</a>
</strong>
</em>
</p>
<hr>
<h2>Musk’s ‘town square’ idea doesn’t hold up</h2>
<p>Regardless of Twitter’s motivations, Musk has openly challenged the growing number of moderation tools employed by the platform. </p>
<p>He has even labelled Twitter a “de facto public square”. This statement appears naïve at best. As communications scholar and Microsoft researcher <a href="https://yalebooks.yale.edu/book/9780300261431/custodians-internet/">Tarleton Gillespie</a> argues, the notion that social media platforms can operate as truly open spaces is fantasy, given how platforms must moderate content while also disavowing this process. </p>
<p>Gillespie goes on to suggest platforms are obliged to moderate, to protect users from their antagonists, to remove offensive, vile, or illegal content and to ensure they can present their best face to new users, advertisers, partners, and the public more generally. He <a href="https://yalebooks.yale.edu/book/9780300261431/custodians-internet/">says</a> the critical challenge then “is exactly when, how, and why to intervene”. </p>
<p>Platforms such as Twitter can’t represent “town squares” – especially as, in Twitter’s case, only a small proportion of the town is using the service.</p>
<p>Public squares are <a href="https://www.google.com.au/books/edition/Behavior_in_Public_Places/HM1kAAAAIAAJ?hl=en">implicitly</a> and explicitly regulated through social behaviours associated with <a href="https://www.routledge.com/Relations-in-Public-Microstudies-of-the-Public-Order/Goffman/p/book/9781412810067">relations in public</a>, backed by the capacity to defer to an authority to restore public order should disorder arise. In the case of a private business, which Twitter now is, the final say will largely default to Musk. </p>
<p>Even if Musk were to implement his own town square ideal, it would presumably be a particularly free-wheeling version. </p>
<p>Providing users with more leeway in what they can say might contribute to increased polarity and further coarsen discourse on the platform. But this would again discourage advertisers – which would be an issue under Twitter’s current economic model (wherein <a href="https://www.theguardian.com/technology/2022/apr/25/five-things-in-elon-musks-in-tray-after-twitter-takeover">90% of revenue comes from advertising</a>).</p>
<h2>Free speech (but for all?)</h2>
<p>Twitter is considerably <a href="https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/">smaller than other</a> major social media networks. However, research has found it does have a disproportionate influence as tweets can proliferate with <a href="https://www.tandfonline.com/doi/full/10.1080/1369118X.2018.1449883">speed and virality, spilling over to traditional media</a>. </p>
<p>The viewpoints users are exposed to are determined by algorithms geared towards maximising exposure and clicks, rather than enriching users’ lives with <a href="https://theconversation.com/what-elon-musks-us-3-billion-twitter-deal-means-for-him-and-for-social-media-180742">thoughtful or interesting points of view</a>.</p>
<p>Musk has suggested he may make Twitter’s algorithms open source. This would be a welcome increase in transparency. But once Twitter becomes a private company, how transparent it is about operations will largely be up to Musk’s sole discretion. </p>
<p>Ironically, <a href="https://www.theguardian.com/technology/2022/apr/15/elon-musk-mark-zuckerberg-sun-king-louis-xiv">Musk has accused Meta</a> (previously Facebook) CEO Mark Zuckerberg of having too much control over public debate.</p>
<p>Yet Musk himself has a history of trying <a href="https://www.cnbc.com/2022/04/25/elon-musk-and-free-speech-track-record-not-encouraging.html">to stifle</a> <a href="https://www.theatlantic.com/technology/archive/2022/04/elon-musk-twitter-free-speech/629479/">his critics’</a> <a href="https://www.bloomberg.com/news/articles/2022-04-21/elon-musk-wants-free-speech-at-twitter-twtr-after-years-silencing-critics">points of view</a>. There’s little to suggest his actions are truly to create an open and inclusive town square through Twitter — and less yet to suggest it will be in the public interest.</p><img src="https://counter.theconversation.com/content/181626/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Musk has long touted Twitter’s potential as an open and inclusive ‘town square’ for public discourse – but the reality is social media platforms were never meant to fulfil this role.John Hawkins, Senior Lecturer, Canberra School of Politics, Economics and Society and NATSEM, University of CanberraMichael James Walsh, Associate Professor in Social Sciences, University of CanberraLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1819232022-04-25T21:07:54Z2022-04-25T21:07:54ZElon Musk’s plans for Twitter could make its misinformation problems worse<figure><img src="https://images.theconversation.com/files/459590/original/file-20220425-13-feqjsz.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C6000%2C4004&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Elon Musk's moment of triumph is a moment of uncertainty for the future of one of the world's leading social media platforms.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/USMuskTwitter/360b354555564c63931e87a4eee568c6/photo">AP Photo/John Raoux</a></span></figcaption></figure><p>Elon Musk, the world’s richest person, <a href="https://www.wsj.com/articles/twitter-and-elon-musk-strike-deal-for-takeover-11650912837">acquired Twitter</a> in a US$44 billion deal on April 25, 2022, 11 days after announcing his bid for the company. Twitter announced that the public company will become <a href="https://www.prnewswire.com/news-releases/elon-musk-to-acquire-twitter-301532245.html">privately held after the acquisition is complete</a>. </p>
<p>In a <a href="https://www.sec.gov/Archives/edgar/data/0001418091/000110465922045641/tm2212748d1_sc13da.htm">filing with the Securities and Exchange Commission</a> for his initial bid for the company, Musk stated, “I invested in Twitter as I believe in its potential to be the platform for free speech around the globe, and I believe free speech is a societal imperative for a functioning democracy.”</p>
<p>As a <a href="https://scholar.google.com/citations?hl=en&user=JpFHYKcAAAAJ">researcher of social media platforms</a>, I find that Musk’s ownership of Twitter and his stated reasons for buying the company raise important issues. Those issues stem from the nature of the social media platform and what sets it apart from others.</p>
<h2>What makes Twitter unique</h2>
<p>Twitter occupies a unique niche. Its short chunks of text and threading foster real-time conversations among thousands of people, which makes it popular with celebrities, media personalities and politicians alike.</p>
<p>Social media analysts talk about the half-life of content on a platform, meaning the time it takes for a piece of content to reach 50% of its total lifetime engagement, usually measured in number of views or popularity-based metrics. The average half-life of a tweet is <a href="https://www.business2community.com/social-media-articles/how-your-contents-half-life-should-drastically-impact-your-social-media-strategy-in-2020-02290478">about 20 minutes</a>, compared to five hours for Facebook posts, 20 hours for Instagram posts, 24 hours for LinkedIn posts and 20 days for YouTube videos. The much shorter half-life illustrates the central role Twitter has come to occupy in driving real-time conversations as events unfold.</p>
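<p>For readers who want to see the metric written out, the calculation below is a minimal sketch of the half-life definition given above – the time at which a post has accumulated half of its total lifetime engagement. The engagement numbers are invented purely for illustration.</p>
<pre><code># Minimal sketch of the half-life metric defined above. The engagement
# series is invented for illustration; real platforms measure views,
# clicks or other popularity-based signals.

def engagement_half_life(events):
    """events: (minutes_since_posting, engagement_count) pairs; returns the
    minute by which half of all lifetime engagement has accumulated."""
    events = sorted(events)
    total = sum(count for _, count in events)
    running = 0
    for minutes, count in events:
        running += count
        if running >= total / 2:
            return minutes
    return None

# Heavily front-loaded engagement, as the article describes for tweets:
tweet_engagement = [(5, 400), (10, 300), (20, 150), (60, 100), (240, 50)]
print(engagement_half_life(tweet_engagement))  # 10 -> half the engagement arrives within 10 minutes
</code></pre>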
<p>Twitter’s ability to shape real-time discourse, as well as the ease with which data, including geo-tagged data, can be gathered from Twitter has made it a gold mine for researchers to analyze a variety of societal phenomena, ranging from public health to politics. Twitter data has been used to predict <a href="https://ieeexplore.ieee.org/abstract/document/7045443">asthma-related emergency department visits</a>, measure <a href="https://www.cs.jhu.edu/%7Emdredze/publications/2016_ossm.pdf">public epidemic awareness</a>, and model <a href="https://doi.org/10.1080/1369118X.2016.1218528">wildfire smoke dispersion</a>. </p>
<p>Tweets that are part of a conversation are <a href="https://blog.twitter.com/en_us/a/2013/keep-up-with-conversations-on-twitter">shown in chronological order</a>, and, even though much of a tweet’s engagement is frontloaded, the Twitter archive <a href="https://blog.twitter.com/en_us/a/2015/full-archive-search-api">provides instant and complete access to every public Tweet</a>. This positions Twitter as a <a href="https://twitter.com/sarahkendzior/status/1514590065674047488">historical chronicler of record</a> and a de facto fact checker.</p>
<h2>Changes on Musk’s mind</h2>
<p>A crucial issue is how Musk’s ownership of Twitter, and private control of social media platforms generally, affect the broader public well-being. In a series of deleted tweets, Musk made several <a href="https://www.bloombergquint.com/business/twitter-shares-fall-after-musk-ditches-potential-board-role">suggestions about how to change Twitter</a>, including adding an edit button for tweets and granting automatic verification marks to premium users. </p>
<p>There is no experimental evidence about how an edit button would change information transmission on Twitter. However, it’s possible to extrapolate from previous research that analyzed deleted tweets. </p>
<p>There are numerous ways to <a href="https://www.tweettabs.com/find-deleted-tweets/">retrieve deleted tweets</a>, which allows researchers to study them. While some studies show <a href="https://www.aaai.org/ocs/index.php/ICWSM/ICWSM16/paper/viewPaper/13133">significant personality differences</a> between users who delete their tweets and those who don’t, these findings suggest that deleting tweets is a <a href="https://doi.org/10.1080/1369118X.2016.1257041">way for people to manage their online identities</a>.</p>
<p>Analyzing deleting behavior can also yield valuable clues about <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14874">online credibility and disinformation</a>. Similarly, if Twitter adds an edit button, analyzing the patterns of editing behavior could provide insights into Twitter users’ motivations and how they present themselves.</p>
<p>Studies of bot-generated activity on Twitter have concluded that <a href="https://www.npr.org/sections/coronavirus-live-updates/2020/05/20/859814085/researchers-nearly-half-of-accounts-tweeting-about-coronavirus-are-likely-bots">nearly half of accounts tweeting about COVID-19 are likely bots</a>. Given <a href="https://doi.org/10.1073/pnas.1804840115">partisanship and political polarization in online spaces</a>, allowing users – whether they are automated bots or actual people – the option to edit their tweets could become another weapon in the disinformation arsenal used by bots and propagandists. Editing tweets could allow users to selectively distort what they said, or deny making inflammatory remarks, which could complicate efforts to trace misinformation.</p>
<p>Musk has also indicated his intention to combat Twitter bots, or automated accounts that post rapidly and repeatedly in the guise of people. He has called for <a href="https://twitter.com/elonmusk/status/1517215736606957573">authenticating users as real human beings</a>. </p>
<p>Given <a href="https://doi.org/10.1145/3131365.3131385">challenges such as doxxing</a> and other malicious personal harms online, it’s important for user authentication methods to preserve privacy. This is particularly important for activists, dissidents and whistleblowers who face threats for their online activities. Mechanisms such as <a href="https://www.ijert.org/decentralized-access-control-technique-with-anonymous-authentication">decentralized protocols</a> can enable authentication without sacrificing anonymity. </p>
<h2>Twitter’s content moderation and revenue model</h2>
<p>To understand Musk’s motivations and what lies next for social media platforms such as Twitter, it’s important to consider the gargantuan – and opaque – <a href="https://warzel.substack.com/p/the-internets-original-sin?s=r">online advertising ecosystem</a> involving multiple technologies wielded by ad networks, social media companies and publishers. Advertising is the <a href="https://www.wsj.com/articles/social-media-may-have-to-embrace-the-musk-11649691208">primary revenue source for Twitter</a>. </p>
<p>Musk’s vision is to <a href="https://finance.yahoo.com/news/musk-proposes-twitter-blue-subscription-024424750.html">generate revenue for Twitter from subscriptions</a> rather than advertising. Without having to worry about attracting and retaining advertisers, Twitter would have less pressure to focus on content moderation. This could make Twitter a sort of freewheeling opinion site for paying subscribers. In contrast, until now Twitter has been <a href="https://www.techdirt.com/2021/02/10/content-moderation-case-study-twitter-attempts-to-tackle-covid-related-vaccine-misinformation-2020/">aggressive in using content moderation</a> in its attempts to address disinformation.</p>
<p>Musk’s description of a <a href="https://qz.com/2155098/elon-musks-twitter-bid-isnt-about-free-speech/">platform free from content moderation issues</a> is troubling in light of the algorithmic harms caused by social media platforms. Research has shown a host of these harms, such as <a href="https://doi.org/10.1145/3468507.3468512">algorithms that assign gender</a> to users, <a href="https://doi.org/10.1145/3287560.3287587">potential inaccuracies and biases in algorithms</a> used to glean information from these platforms, and the impact on those <a href="https://theconversation.com/biases-in-algorithms-hurt-those-looking-for-information-on-health-140616">looking for health information online</a>. </p>
<p>Testimony by Facebook whistleblower <a href="https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/">Frances Haugen</a> and recent regulatory efforts such as the <a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">online safety bill unveiled in the U.K.</a> show there is broad public concern about the role played by technology platforms in shaping popular discourse and public opinion. Musk’s acquisition of Twitter <a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">highlights a whole host of regulatory concerns</a>. </p>
<p>Because of Musk’s other businesses, Twitter’s <a href="https://www.nasdaq.com/articles/how-does-social-media-influence-financial-markets-2019-10-14">ability to influence public opinion</a> in the sensitive aviation and automobile industries automatically creates a conflict of interest, not to mention affects the disclosure of <a href="https://www.investopedia.com/terms/m/materialinsiderinformation.asp">material information</a> necessary for shareholders. Musk has already been accused of <a href="https://www.cbsnews.com/news/elon-musk-twitter-shareholder-lawsuit/">delaying disclosure of his ownership stake in Twitter</a>.</p>
<p>Twitter’s own <a href="https://blog.twitter.com/engineering/en_us/topics/insights/2021/learnings-from-the-first-algorithmic-bias-bounty-challenge">algorithmic bias bounty challenge</a> concluded that there needs to be a community-led approach to build better algorithms. A very creative exercise developed by the MIT Media Lab asks middle schoolers to <a href="https://www.media.mit.edu/galleries/youtube-redesign/">re-imagine the YouTube platform with ethics in mind</a>. Perhaps it’s time to ask Musk to do the same with Twitter.</p>
<p><em>This is an updated version of <a href="https://theconversation.com/elon-musks-bid-spotlights-twitters-unique-role-in-public-discourse-and-what-changes-might-be-in-store-181374">an article</a> originally published on April 15, 2022.</em></p><img src="https://counter.theconversation.com/content/181923/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Anjana Susarla receives funding from the National Institute of Health and from the Omura-Saxena Professorship in Responsible AI. </span></em></p>Twitter, more than other social media platforms, fosters real-time discussion about events as they unfold. That could change now that Musk has gained control of the company.Anjana Susarla, Professor of Information Systems, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1813742022-04-15T14:42:22Z2022-04-15T14:42:22ZElon Musk’s bid spotlights Twitter’s unique role in public discourse – and what changes might be in store<figure><img src="https://images.theconversation.com/files/458321/original/file-20220415-22-vd2ph3.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C5760%2C3828&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Twitter may not be a darling of Wall Street, but it occupies a unique place in the social media landscape.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/CapitolRiotInvestigationTech/d85dc445f8e84d0c9d08c8402a0d300a/photo">AP Photo/Richard Drew</a></span></figcaption></figure><p>Twitter has been in the news a lot lately, albeit for the wrong reasons. Its stock growth has languished and the platform itself has <a href="https://www.npr.org/2021/11/29/1059756077/jack-dorsey-steps-down-as-twitter-ceo">largely remained the same since its founding</a> in 2006. On April 14, 2022, Elon Musk, the world’s richest person, <a href="https://www.bloomberg.com/news/articles/2022-04-14/elon-musk-launches-43-billion-hostile-takeover-of-twitter">made an offer to buy Twitter</a> and take the public company private. </p>
<p>In a <a href="https://www.sec.gov/Archives/edgar/data/0001418091/000110465922045641/tm2212748d1_sc13da.htm">filing with the Securities and Exchange Commission</a>, Musk stated, “I invested in Twitter as I believe in its potential to be the platform for free speech around the globe, and I believe free speech is a societal imperative for a functioning democracy.”</p>
<p>As a <a href="https://scholar.google.com/citations?hl=en&user=JpFHYKcAAAAJ">researcher of social media platforms</a>, I find that Musk’s potential ownership of Twitter and his stated reasons for buying the company raise important issues. Those issues stem from the nature of the social media platform and what sets it apart from others.</p>
<h2>What makes Twitter unique</h2>
<p>Twitter occupies a unique niche. Its short chunks of text and threading foster real-time conversations among thousands of people, which makes it popular with celebrities, media personalities and politicians alike.</p>
<p>Social media analysts talk about the half-life of content on a platform, meaning the time it takes for a piece of content to reach 50% of its total lifetime engagement, usually measured in number of views or popularity-based metrics. The average half-life of a tweet is <a href="https://www.business2community.com/social-media-articles/how-your-contents-half-life-should-drastically-impact-your-social-media-strategy-in-2020-02290478">about 20 minutes</a>, compared to five hours for Facebook posts, 20 hours for Instagram posts, 24 hours for LinkedIn posts and 20 days for YouTube videos. The much shorter half-life illustrates the central role Twitter has come to occupy in driving real-time conversations as events unfold.</p>
<p>Twitter’s ability to shape real-time discourse, as well as the ease with which data, including geo-tagged data, can be gathered from Twitter has made it a gold mine for researchers to analyze a variety of societal phenomena, ranging from public health to politics. Twitter data has been used to predict <a href="https://ieeexplore.ieee.org/abstract/document/7045443">asthma-related emergency department visits</a>, measure <a href="https://www.cs.jhu.edu/%7Emdredze/publications/2016_ossm.pdf">public epidemic awareness</a>, and model <a href="https://doi.org/10.1080/1369118X.2016.1218528">wildfire smoke dispersion</a>. </p>
<p>Tweets that are part of a conversation are <a href="https://blog.twitter.com/en_us/a/2013/keep-up-with-conversations-on-twitter">shown in chronological order</a>, and, even though much of a tweet’s engagement is frontloaded, the Twitter archive <a href="https://blog.twitter.com/en_us/a/2015/full-archive-search-api">provides instant and complete access to every public Tweet</a>. This positions Twitter as a <a href="https://twitter.com/sarahkendzior/status/1514590065674047488">historical chronicler of record</a> and a de facto fact checker.</p>
<h2>Changes on Musk’s mind</h2>
<p>A crucial issue is how Musk’s ownership of Twitter, and private control of social media platforms generally, affect the broader public well-being. In a series of deleted tweets, Musk made several <a href="https://www.bloombergquint.com/business/twitter-shares-fall-after-musk-ditches-potential-board-role">suggestions about how to change Twitter</a>, including adding an edit button for tweets and granting automatic verification marks to premium users. </p>
<p>There is no experimental evidence about how an edit button would change information transmission on Twitter. However, it’s possible to extrapolate from previous research that analyzed deleted tweets. </p>
<p>There are numerous ways to <a href="https://www.tweettabs.com/find-deleted-tweets/">retrieve deleted tweets</a>, which allows researchers to study them. While some studies show <a href="https://www.aaai.org/ocs/index.php/ICWSM/ICWSM16/paper/viewPaper/13133">significant personality differences</a> between users who delete their tweets and those who don’t, these findings suggest that deleting tweets is a <a href="https://doi.org/10.1080/1369118X.2016.1257041">way for people to manage their online identities</a>.</p>
<p>Analyzing deleting behavior can also yield valuable clues about <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14874">online credibility and disinformation</a>. Similarly, if Twitter adds an edit button, analyzing the patterns of editing behavior could provide insights into Twitter users’ motivations and how they present themselves.</p>
<p>Studies of bot-generated activity on Twitter have concluded that <a href="https://www.npr.org/sections/coronavirus-live-updates/2020/05/20/859814085/researchers-nearly-half-of-accounts-tweeting-about-coronavirus-are-likely-bots">nearly half of accounts tweeting about COVID-19 are likely bots</a>. Given <a href="https://doi.org/10.1073/pnas.1804840115">partisanship and political polarization in online spaces</a>, allowing users – whether they are automated bots or actual people – the option to edit their tweets could become another weapon in the disinformation arsenal used by bots and propagandists. Editing tweets could allow users to selectively distort what they said, or deny making inflammatory remarks, which could complicate efforts to trace misinformation.</p>
<h2>Twitter’s content moderation and revenue model</h2>
<p>To understand Musk’s motivations and what lies next for social media platforms such as Twitter, it’s important to consider the gargantuan – and opaque – <a href="https://warzel.substack.com/p/the-internets-original-sin?s=r">online advertising ecosystem</a> involving multiple technologies wielded by ad networks, social media companies and publishers. Advertising is the <a href="https://www.wsj.com/articles/social-media-may-have-to-embrace-the-musk-11649691208">primary revenue source for Twitter</a>. </p>
<p>Musk’s vision is to generate revenue for Twitter from subscriptions rather than advertising. Without having to worry about attracting and retaining advertisers, Twitter would have less pressure to focus on content moderation. This would make Twitter a sort of freewheeling opinion site for paying subscribers. Twitter has been <a href="https://www.techdirt.com/2021/02/10/content-moderation-case-study-twitter-attempts-to-tackle-covid-related-vaccine-misinformation-2020/">aggressive in using content moderation</a> in its attempts to address disinformation.</p>
<p>Musk’s description of a <a href="https://qz.com/2155098/elon-musks-twitter-bid-isnt-about-free-speech/">platform free from content moderation issues</a> is troubling in light of the algorithmic harms caused by social media platforms. Research has shown a host of these harms, such as <a href="https://doi.org/10.1145/3468507.3468512">algorithms that assign gender</a> to users, <a href="https://doi.org/10.1145/3287560.3287587">potential inaccuracies and biases in algorithms</a> used to glean information from these platforms, and the impact on those <a href="https://theconversation.com/biases-in-algorithms-hurt-those-looking-for-information-on-health-140616">looking for health information online</a>. </p>
<p>Testimony by Facebook whistleblower <a href="https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/">Frances Haugen</a> and recent regulatory efforts such as the <a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">online safety bill unveiled in the U.K.</a> show there is broad public concern about the role played by technology platforms in shaping popular discourse and public opinion. Musk’s potential bid for Twitter <a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">highlights a whole host of regulatory concerns</a>. </p>
<p>Because of Musk’s other businesses, Twitter’s <a href="https://www.nasdaq.com/articles/how-does-social-media-influence-financial-markets-2019-10-14">ability to influence public opinion</a> in the sensitive aviation and automobile industries would automatically create a conflict of interest, not to mention affect the disclosure of <a href="https://www.investopedia.com/terms/m/materialinsiderinformation.asp">material information</a> necessary for shareholders. Musk has already been accused of <a href="https://www.cbsnews.com/news/elon-musk-twitter-shareholder-lawsuit/">delaying disclosure of his ownership stake in Twitter</a>.</p>
<p>Twitter’s own <a href="https://blog.twitter.com/engineering/en_us/topics/insights/2021/learnings-from-the-first-algorithmic-bias-bounty-challenge">algorithmic bias bounty challenge</a> concluded that there needs to be a community-led approach to build better algorithms. A very creative exercise developed by the MIT Media Lab asks middle schoolers to <a href="https://www.media.mit.edu/galleries/youtube-redesign/">re-imagine the YouTube platform with ethics in mind</a>. Perhaps it’s time to ask Twitter to do the same, whoever owns and manages the company.</p>
<p>[<em>Over 150,000 readers rely on The Conversation’s newsletters to understand the world.</em> <a href="https://memberservices.theconversation.com/newsletters/?source=inline-150ksignup">Sign up today</a>.]</p><img src="https://counter.theconversation.com/content/181374/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Anjana Susarla receives funding from the National Institute of Health and from the Omura-Saxena Professorship in Responsible AI. </span></em></p>Twitter, more than other social media platforms, fosters real-time discussion about events as they unfold. That could change if Musk gains control of the company.Anjana Susarla, Professor of Information Systems, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1692492021-10-05T07:58:21Z2021-10-05T07:58:21ZWhat caused the unprecedented Facebook outage? The few clues point to a problem from within<figure><img src="https://images.theconversation.com/files/424677/original/file-20211005-13-1vj7hmg.jpeg?ixlib=rb-1.1.0&rect=38%2C49%2C3611%2C2434&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Suddenly and inexplicably, Facebook, Instagram, WhatsApp, Messenger and Oculus services were gone. And it was no local disturbance. In a blog post, <a href="https://downdetector.com/">Downdetector.com</a>, a major monitoring service for online outages, <a href="https://www.theguardian.com/technology/2021/oct/04/facebook-instagram-and-whatsapp-hit-by-outage">called it</a> the largest global outage it had ever recorded — with 10.6 million reports from around the world.</p>
<p>The outage had an especially <a href="https://www.nytimes.com/2021/10/04/technology/facebook-down.html">massive knock-on effect</a> on individuals and businesses around the world that <a href="https://www.theguardian.com/technology/2021/oct/04/facebook-instagram-and-whatsapp-hit-by-outage">rely on WhatsApp</a> to communicate with friends, family, colleagues and customers. </p>
<p>It took Facebook nearly six hours to get services back online, albeit slowly at first. Ironically, the outage was so pervasive Facebook had to resort to using Twitter, its rival platform, to get updates out into the world.</p>
<p>The internet and its outwardly visible face (the World Wide Web) is a remarkably fault-tolerant machine. It was designed to be resilient — and the web has never gone down completely. As such, global outages like this one are <a href="https://qz.com/2069139/facebook-whatsapp-and-instagram-all-went-down-at-the-same-time/">quite rare</a>. </p>
<p>But they do happen. To Google’s embarrassment, several of its services including Gmail, YouTube, Hangouts, Google Calendar and Google Maps <a href="https://www.nytimes.com/2020/12/14/business/google-down-worldwide.html">went offline</a> for about an hour in December last year. </p>
<p>And in June this year, a cloud-computing company that services clients such as the Guardian, the New York Times, Reddit and The Conversation went offline too. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/fastly-global-internet-outage-why-did-so-many-sites-go-down-and-what-is-a-cdn-anyway-162371">Fastly global internet outage: why did so many sites go down — and what is a CDN, anyway?</a>
</strong>
</em>
</p>
<hr>
<h2>What caused it?</h2>
<p>While Facebook’s management was apologetic, they gave no hint as to what caused the outage. </p>
<p>With hacking issues becoming all too common in today’s cyber-security threat environment, the question arises whether Facebook’s outage might have been the result of a successful hack. But this seems unlikely. </p>
<p>According to a report from <a href="https://www.theverge.com/2021/10/4/22709575/facebook-outage-instagram-whatsapp">The Verge</a> referencing Facebook’s Chief Technology Officer and Vice President of Infrastructure, it seems the problem was probably Facebook’s internal infrastructure. </p>
<p>Facebook engineers were sent to one of the company’s data centres in California to work on the problem, which implies they were unable to log in remotely to the data centre. </p>
<p>Experts have <a href="https://www.theverge.com/2021/10/4/22709575/facebook-outage-instagram-whatsapp">said</a> the outage could only have come from inside the company. It’s likely Facebook engineers inadvertently made changes to how the network is set up, creating a cascading set of problems. </p>
<p>Such events have happened before, albeit not with such a catastrophic effect. </p>
<p>However, given the highly confidential way Facebook operates its network, it’s not possible to know exactly what happened with the network configuration. We will probably never be told. </p>
<h2>A Domain Name Server problem</h2>
<p>Supporting the network configuration explanation is the fact that the error messages that appeared when people tried to contact facebook.com and whatsapp.com indicated it was a DNS problem. So the websites still existed, but couldn’t be reached. </p>
<p>DNS stands for <a href="https://www.cloudflare.com/en-au/learning/dns/what-is-dns/">Domain Name System</a> and is described as the “phonebook of the internet”. It translates the domain names we read into the encoded internet addresses (IP addresses) read by computers. </p>
<p>When you enter a domain name such as “facebook.com” or “whatsapp.com” into your browser, the Domain Name Server is consulted and the corresponding <a href="https://www.investopedia.com/terms/i/ip-address.asp">encoded internet address</a>, the IP, is called. </p>
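<p>For the curious, the lookup described above can be reproduced in a few lines of Python – a minimal sketch of an ordinary DNS query, not anything specific to Facebook’s systems. When resolution fails, as it did during the outage, the request errors out before any connection is even attempted.</p>
<pre><code># Minimal sketch of a DNS lookup: ask the resolver for the IP address
# behind a human-readable domain name. Results vary by network and time.
import socket

for domain in ("facebook.com", "whatsapp.com"):
    try:
        ip_address = socket.gethostbyname(domain)
        print(f"{domain} -> {ip_address}")
    except socket.gaierror:
        # This is roughly what users experienced during the outage:
        # the name simply could not be resolved to an address.
        print(f"{domain} -> DNS resolution failed")
</code></pre>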
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-is-my-ip-address-explaining-one-of-the-worlds-most-googled-questions-167316">'What is my IP address?' Explaining one of the world's most Googled questions</a>
</strong>
</em>
</p>
<hr>
<p>When everything is working as it should, the user is then connected to the requested domain. On the strength of evidence gleaned from expert sources close to Facebook, it seems most unlikely the outage was caused by an external attack. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/424667/original/file-20211005-24-njgb7j.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/424667/original/file-20211005-24-njgb7j.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=298&fit=crop&dpr=1 600w, https://images.theconversation.com/files/424667/original/file-20211005-24-njgb7j.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=298&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/424667/original/file-20211005-24-njgb7j.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=298&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/424667/original/file-20211005-24-njgb7j.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=374&fit=crop&dpr=1 754w, https://images.theconversation.com/files/424667/original/file-20211005-24-njgb7j.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=374&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/424667/original/file-20211005-24-njgb7j.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=374&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">According to Statista, the country with the largest number of Facebook users is India, followed by the US, Indonesia, Brazil and Mexico (based on data from July, 2021).</span>
<span class="attribution"><span class="source">Simon / Pixabay</span></span>
</figcaption>
</figure>
<h2>A whistleblower speaks up</h2>
<p>The Facebook outage occurred only hours after the US-based 60 Minutes program aired an <a href="https://www.youtube.com/watch?v=_Lx5VmAdZSI">incendiary interview</a> with former Facebook employee and whistleblower, 37-year-old Harvard graduate Frances Haugen. </p>
<p>In a complaint to federal law enforcement, and in the interview, Haugen <a href="https://www.theguardian.com/technology/2021/oct/04/how-friend-lost-to-misinformation-drove-facebook-whistleblower-frances-haugen">alleges</a> Facebook’s Instagram app is harming teenage girls, and that Facebook’s own research indicates the company “amplifies hate, misinformation and political unrest, but the company hides what it knows”. </p>
<p>To support the allegations, Haugen shared more than 10,000 pages of internal documentation with the US Securities and Exchange Commission — all pretty damning stuff. She <a href="https://www.usatoday.com/story/tech/2021/10/04/facebook-whistleblower-frances-haugen-what-we-know/5993959001/">said</a>: </p>
<blockquote>
<p>The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook, and Facebook over and over again chose to optimise for its own interests, like making more money.</p>
</blockquote>
<p>Given the timing of the interview and Facebook’s global outage, it’s natural to wonder whether the two events are connected. However, in the absence of any definitive evidence, no causal link between them has been established. </p>
<p>But considering the seriousness of Haugen’s allegations, and the weight of objective evidence in the form of thousands of insider documents, it’s clear further investigation is warranted. </p>
<p>Facebook has around 2.89 billion monthly active users and a <a href="https://www.gobankingrates.com/money/business/how-much-is-facebook-worth/">market capitalisation</a> of US$1.21 trillion. By any standard, it’s a big and powerful company with a great deal of influence. Now is the time to shine a light on its ethics, or lack thereof. </p>
<p>Hopefully there won’t be any more outages to slow down this process.</p>
<img src="https://counter.theconversation.com/content/169249/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>David Tuffley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>It took Facebook nearly six hours to get its services back online. In the meantime, Twitter had a field day.David Tuffley, Senior Lecturer in Applied Ethics & CyberSecurity, Griffith UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1570992021-03-18T12:19:39Z2021-03-18T12:19:39Z7 ways to avoid becoming a misinformation superspreader<figure><img src="https://images.theconversation.com/files/389858/original/file-20210316-16-1ifjiq8.jpg?ixlib=rb-1.1.0&rect=14%2C14%2C4778%2C3671&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Identify and stop the lies.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/big-hand-with-cartoon-character-stop-sign-royalty-free-illustration/1292878719">NLshop/iStock via Getty Images Plus</a></span></figcaption></figure><p>The problem of misinformation isn’t going away. Internet platforms like Facebook and Twitter have <a href="https://www.reuters.com/article/us-usa-election-twitter-idUSKCN2590FU">taken some steps to curb its spread</a> and say they are working on doing more. But no method yet introduced has been completely successful at removing all misleading content from social media. The best defense, then, is self-defense. </p>
<p>Misleading or outright false information – broadly called “misinformation” – can come from websites pretending to be news outlets, political propaganda or “<a href="http://source.sheridancollege.ca/fhass_huma_publ/1">pseudo-profound</a>” reports that seem meaningful but are not. Disinformation is a type of misinformation that is deliberately generated to maliciously mislead people. Disinformation is intentionally shared, knowing it is false, but misinformation can be <a href="https://doi.org/10.1371/journal.pone.0239666">shared by people who don’t know it’s not true</a>, especially because people often share links online <a href="https://www.wired.com/story/dont-want-to-fall-for-fake-news-dont-be-lazy/">without thinking</a>.</p>
<p>Emerging psychology research has revealed some tactics that can help protect our society from misinformation. Here are seven strategies you can use to avoid being misled, and to prevent yourself – and others – from spreading inaccuracies.</p>
<h2>1. Educate yourself</h2>
<p>The best inoculation against what the World Health Organization is calling the “<a href="https://www.who.int/news-room/feature-stories/detail/immunizing-the-public-against-misinformation">infodemic</a>” is to understand the <a href="https://theconversation.com/10-ways-to-spot-online-misinformation-132246">tricks that agents of disinformation are using</a> to try to manipulate you.</p>
<p>One strategy is called “<a href="https://www.spsp.org/news-center/blog/roozenbeek-van-der-linden-resisting-digital-misinformation">prebunking</a>” – a type of debunking that happens before you hear myths and lies. Research has shown that <a href="https://doi.org/10.5334/joc.91">familiarizing yourself with the tricks of the disinformation trade</a> can help you <a href="https://doi.apa.org/doi/10.1037/xap0000315">recognize false stories</a> when you encounter them, making you less susceptible to those tricks.</p>
<p>Researchers at the University of Cambridge have developed an online game called “<a href="https://www.getbadnews.com/">Bad News</a>,” which their studies have shown can <a href="https://doi.org/10.1057/s41599-019-027">improve players’ identification of falsehoods</a>.</p>
<p>In addition to the game, you can also learn more about how <a href="https://doi.org/10.1073/pnas.1920498117">internet and social media platforms work</a>, so you better understand the tools available to people seeking to manipulate you. You can also learn more about <a href="https://doi.org/10.1002/sce.21581">scientific research and standards of evidence</a>, which can help you be <a href="https://doi.org/10.1098/rsos.201199">less susceptible to lies and misleading statements</a> about health-related and scientific topics. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/389863/original/file-20210316-13-j493c8.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Badges identify ways misinformation exploits people's minds" src="https://images.theconversation.com/files/389863/original/file-20210316-13-j493c8.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/389863/original/file-20210316-13-j493c8.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=504&fit=crop&dpr=1 600w, https://images.theconversation.com/files/389863/original/file-20210316-13-j493c8.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=504&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/389863/original/file-20210316-13-j493c8.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=504&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/389863/original/file-20210316-13-j493c8.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=633&fit=crop&dpr=1 754w, https://images.theconversation.com/files/389863/original/file-20210316-13-j493c8.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=633&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/389863/original/file-20210316-13-j493c8.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=633&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Playing the ‘Bad News’ online game illustrates different ways information warriors can prey on people’s psychological vulnerabilities.</span>
<span class="attribution"><a class="source" href="https://www.getbadnews.com/">Screenshot of Get Bad News</a></span>
</figcaption>
</figure>
<h2>2. Recognize your vulnerabilities</h2>
<p>The prebunking approach works for people across the political spectrum, but it turns out that people who underestimate their biases are actually more vulnerable to being misled than people who acknowledge their biases. </p>
<p>Research has found people are more <a href="https://www.scientificamerican.com/article/biases-make-people-vulnerable-to-misinformation-spread-by-social-media/">susceptible to misinformation</a> that aligns with their preexisting views. This is called “<a href="https://www.usatoday.com/story/money/columnist/2018/05/15/fake-news-social-media-confirmation-bias-echo-chambers/533857002/">confirmation bias</a>,” because a person is biased toward believing information that confirms what they already believe.</p>
<p>The lesson is to be particularly critical of information from groups or people with whom you agree or find yourself aligned – whether politically, religiously, or by ethnicity or nationality. Remind yourself to <a href="https://doi.org/10.3389/fdata.2019.00011">look for other points of view</a>, and other sources with information on the same topic. </p>
<p>It is especially important to be honest with yourself about <a href="https://www.allsides.com/rate-your-bias">what your biases are</a>. Many people assume others are biased, but <a href="https://www.cmu.edu/news/stories/archives/2015/june/bias-blind-spot.html">believe they themselves are not</a> – and imagine that <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/poi3.214">others are more likely to share misinformation</a> than they themselves are.</p>
<h2>3. Consider the source</h2>
<p>Media outlets have a range of biases. The <a href="https://www.adfontesmedia.com/">Media Bias Chart</a> describes which outlets are <a href="https://observer.com/2018/06/media-bias-can-readers-trust-media-pew-research-center-knight-foundation/">most and least partisan</a> as well as how reliable they are at <a href="https://www.poynter.org/fact-checking/media-literacy/2021/should-you-trust-media-bias-charts/">reporting facts</a>.</p>
<p>You can play an online game called “<a href="https://fakey.osome.iu.edu/">Fakey</a>” to see how susceptible you are to different ways news is presented online.</p>
<p>When consuming news, make sure you know how trustworthy the source is – or whether it’s <a href="https://www.cjr.org/fake-beta">not trustworthy at all</a>. Double-check stories from other sources with low biases and high fact ratings to find out who – and what – you can actually trust, rather than just <a href="https://doi.org/10.1111/pops.12586">what your gut tells you</a>. </p>
<p>Also, be aware that some disinformation agents <a href="https://www.forbes.com/sites/christopherelliott/2019/02/21/these-are-the-real-fake-news-sites/">make fake sites</a> that look like real news sources – so make sure you’re conscious of which site you are actually visiting. Engaging in this level of <a href="http://dx.doi.org/10.1073/pnas.1806781116">thinking about your own thinking</a> has been shown to improve your ability to tell fact from fiction.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/389867/original/file-20210316-17-1xajml4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man leans back from his desk" src="https://images.theconversation.com/files/389867/original/file-20210316-17-1xajml4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/389867/original/file-20210316-17-1xajml4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/389867/original/file-20210316-17-1xajml4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/389867/original/file-20210316-17-1xajml4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/389867/original/file-20210316-17-1xajml4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/389867/original/file-20210316-17-1xajml4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/389867/original/file-20210316-17-1xajml4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Take a moment to think before you decide to share something online.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/man-in-office-royalty-free-image/641199968">10'000 Hours/Digital Vision via Getty Images</a></span>
</figcaption>
</figure>
<h2>4. Take a pause</h2>
<p>When most people go online, especially on social media, they’re there for <a href="https://www.searchenginejournal.com/seo-101/why-do-people-visit-websites-today/">entertainment, connection or even distraction</a>. Accuracy isn’t always high on the priority list. Yet <a href="https://doi.org/10.1177/1461444820969893">few want to be a liar</a>, and the <a href="https://www.hbo.com/documentaries/after-truth-disinformation-and-the-cost-of-fake-news">costs of sharing misinformation</a> can be high – to individuals, their relationships and society as a whole. Before you decide to share something, take a moment to remind yourself of the <a href="https://doi.org/10.1177%2F0956797620939054">value you place on truth and accuracy</a>. </p>
<p>Thinking “is what I am sharing true?” can help you stop the spread of misinformation and will encourage you to <a href="https://www.patheos.com/blogs/nosacredcows/2018/09/study-confirms-most-people-share-articles-based-only-on-headlines/">look beyond the headline</a> and potentially fact-check before sharing. </p>
<p>Even if you don’t think specifically about accuracy, <a href="http://dx.doi.org/10.1037/xge0000729">just taking a pause before sharing</a> can give you a chance for your mind to catch up with your emotions. Ask yourself whether you really want to share it, and if so, <a href="https://doi.org/10.37016/mr-2020-009">why</a>. Think about what the potential consequences of sharing it might be. </p>
<p>Research shows that most misinformation is shared quickly and <a href="https://doi.org/10.1016/j.cognition.2018.06.011">without much thought</a>. The impulse to share without thinking can <a href="https://www.apa.org/news/apa/2020/02/fake-news">even be more powerful</a> than partisan sharing tendencies. Take your time. There is no hurry. You are not a <a href="https://www.niemanlab.org/2013/11/sharing-fast-and-slow-the-psychological-connection-between-how-we-think-and-how-we-spread-news-on-social-media/">breaking-news</a> organization upon whom thousands depend for immediate information. </p>
<h2>5. Be aware of your emotions</h2>
<p>People often share things because of their gut reactions, rather than the conclusions of critical thinking. In a <a href="https://www.spsp.org/news-center/blog/martel-emotion-misinformation-social-media">recent study</a>, researchers found that people who viewed their social media feed while in an emotional mindset were <a href="https://doi.org/10.1186/s41235-020-00252-3">significantly more likely to share misinformation</a> than those who went in with a more rational state of mind. </p>
<p><a href="https://doi.org/10.1111/jcom.12164">Anger and anxiety</a>, in particular, make people more vulnerable to falling for misinformation.</p>
<h2>6. If you see something, say something</h2>
<p>Stand up to misinformation publicly. It may feel uncomfortable to challenge your friends online, especially if you fear conflict. The person to whom you respond with a link to a <a href="https://snopes.com">Snopes post</a> or other fact-checking site may not appreciate being called out. </p>
<p>But evidence shows that <a href="https://doi.org/10.1037/xge0000635">explicitly critiquing the specific reasoning</a> in the post and <a href="http://dx.doi.org/10.1080/1369118X.2017.1313883">providing counterevidence like a link</a> about how it is fake is <a href="https://doi.org/10.1080/10810730.2020.1838671">an effective technique</a>.</p>
<p>Even <a href="https://doi.org/10.1111/bjop.12383">short-format refutations</a> – like “this isn’t true” – are more effective than saying nothing. <a href="https://doi.org/10.1177%2F1077699017710453">Humor – though not ridicule of the person</a> – can work, too. When <a href="http://dx.doi.org/10.1016/j.chb.2019.03.032">actual people correct misinformation online</a>, it can be <a href="http://dx.doi.org/10.1080/10410236.2017.1331312">as effective</a>, if not <a href="https://doi.org/10.1080/10410236.2020.1794553">more so</a>, as when a social media company labels something as questionable. </p>
<p>People <a href="https://doi.org/10.1177%2F2056305120935102">trust other humans</a> more than algorithms and bots, especially those in our own social circles. That’s particularly true if you have <a href="https://doi.org/10.1177%2F1075547017731776">expertise in the subject</a> or are a <a href="http://dx.doi.org/10.1080/10584609.2017.1334018">close connection</a> with the person who shared it. </p>
<p>An additional benefit is that public debunking signals to other viewers that they may want to look more closely before choosing to share the post themselves. So even if you don’t discourage the original poster, you are discouraging others.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/389871/original/file-20210316-22-1ehva9y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A child raises a finger" src="https://images.theconversation.com/files/389871/original/file-20210316-22-1ehva9y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/389871/original/file-20210316-22-1ehva9y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/389871/original/file-20210316-22-1ehva9y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/389871/original/file-20210316-22-1ehva9y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/389871/original/file-20210316-22-1ehva9y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/389871/original/file-20210316-22-1ehva9y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/389871/original/file-20210316-22-1ehva9y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Even kids know to speak up when they see something wrong.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/latinx-toddler-points-his-index-finger-while-royalty-free-image/1198721866">Mireya Acierto/DigitalVision via Getty Images</a></span>
</figcaption>
</figure>
<h2>7. If you see someone else stand up, stand with them</h2>
<p>If you see someone else has posted that a story is false, don’t say “well, they beat me to it so I don’t need to.” When more people chime in on a post as being false, it signals that sharing misinformation is <a href="https://doi.org/10.1016/j.chb.2019.03.032">frowned upon by the group more generally</a>.</p>
<p>Stand with those who stand up. If you don’t and something gets shared over and over, that <a href="https://www.sciencedaily.com/releases/2019/12/191203094813.htm">reinforces people’s beliefs that it is OK</a> to share misinformation – because everyone else is doing it, and only a few, if any, are objecting.</p>
<p>Allowing misinformation to spread also makes it more likely that even more people will start to believe it – because people come to <a href="http://dx.doi.org/10.1037/xge0000098">believe things they hear repeatedly</a>, even if they know at first <a href="https://theconversation.com/unbelievable-news-read-it-again-and-you-might-think-its-true-69602">they’re not true</a>.</p>
<p>There is no perfect solution. Some misinformation is <a href="https://doi.org/10.1080/03637751.2018.1467564">harder to counter than others</a>, and some countering tactics are more effective at different times or for different people. But you can go a long way toward protecting yourself and those in your social networks from confusion, deception and falsehood.</p>
<p>[<em>Over 100,000 readers rely on The Conversation’s newsletter to understand the world.</em> <a href="https://theconversation.com/us/newsletters/the-daily-3?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=100Ksignup">Sign up today</a>.]</p>
<p class="fine-print"><em><span>H. Colleen Sinclair receives funding from the Department of Defense.</span></em></p>A social psychologist explains how to avoid being misled, and how to prevent yourself – and others – from spreading inaccurate information.H. Colleen Sinclair, Associate Professor of Social Psychology, Mississippi State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1483142020-10-27T05:25:05Z2020-10-27T05:25:05ZFacebook is tilting the political playing field more than ever, and it’s no accident<p>As the US presidential election polling day draws close, it’s worth recapping what we know about how Facebook has been used to <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3834737/">influence election results</a>.</p>
<p>The platform is optimised for boosting politically conservative voices calling for <a href="https://www.washingtonpost.com/opinions/2020/10/26/facebook-algorithm-conservative-liberal-extremes/">fascism, separatism and xenophobia</a>. It’s also these voices that tend to generate <a href="https://www.pbs.org/wgbh/nova/article/radical-ideas-social-media-algorithms/">the most clicks</a>. </p>
<p>In recent years, Facebook has on several occasions been made to choose between keeping to its <a href="https://www.facebook.com/communitystandards/introduction">community standards</a> or taking a path that avoids the ire of conservatives. Too many times, it has chosen the latter.</p>
<p>The result has been an onslaught of divisive rhetoric that continues to flood the platform and drive political polarisation in society.</p>
<h2>How democracy can be subverted online</h2>
<p>According to <a href="https://www.nytimes.com/2020/02/20/us/politics/russian-interference-trump-democrats.html">The New York Times</a>, earlier this year US intelligence officials warned Russia was interfering in the 2020 presidential campaign, with the goal of seeing President Donald Trump re-elected.</p>
<p>This was corroborated by <a href="https://www.brennancenter.org/our-work/analysis-opinion/new-evidence-shows-how-russias-election-interference-has-gotten-more">findings</a> from the US-based Brennan Center for Justice. A research team led by journalism and communications professor Young Mie Kim identified a range of Facebook troll accounts deliberately sowing division “by targeting both the left and right, with posts to foment outrage, fear and hostility”.</p>
<p>Most were linked to Russia’s Internet Research Agency (IRA), <a href="https://www.reuters.com/article/us-usa-russia-sanctions/u-s-blacklists-individuals-entities-linked-to-leader-of-russias-ira-idUSKCN26E2HO">the company</a> also behind a 2016 US election influence campaign. Kim <a href="https://www.brennancenter.org/our-work/analysis-opinion/new-evidence-shows-how-russias-election-interference-has-gotten-more">wrote</a> the troll accounts seemed to discourage certain people from voting, with a focus on swing states.</p>
<p>This month, Facebook <a href="https://www.nytimes.com/2020/10/06/technology/facebook-qanon-crackdown.html">announced</a> a ban (across both Facebook and Instagram, which Facebook owns) on groups and pages devoted to the far-right conspiracy group QAnon. It also <a href="https://www.wsj.com/articles/facebook-takes-down-network-tied-to-conservative-group-citing-fake-accounts-11602174088">removed</a> a network of fake accounts linked to a conservative US political youth group, for violating rules against “coordinated inauthentic behavior”.</p>
<p>However, despite Facebook’s <a href="https://www.wired.com/story/facebooks-latest-fix-for-fake-news-ask-users-what-they-trust/">repeated promises</a> to clamp down harder on such behaviour — and <a href="https://theconversation.com/facebook-is-removing-qanon-pages-and-groups-from-its-sites-but-critical-thinking-is-still-the-best-way-to-fight-conspiracy-theories-147668">occasional</a> efforts to actually do so — the company has been <a href="https://www.washingtonpost.com/opinions/samantha-power-facebook-reduce-spread-misinformation/2020/10/23/d54c1bda-1496-11eb-bc10-40b25382f1be_story.html">widely</a> <a href="https://www.theguardian.com/technology/2020/oct/14/facebook-greatest-source-of-covid-19-disinformation-journalists-say">criticised</a> for doing far too little to curb the spread of disinformation, misinformation and election meddling.</p>
<p>According to a <a href="https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf">University of Oxford study</a>, 70 countries (including Australia) practised either foreign or domestic election meddling in 2019. This was up from 48 in 2018 and 28 in 2017. The study said Facebook was “the platform of choice” for this.</p>
<p>The Conversation approached Facebook for comment regarding the platform’s use by political actors to influence elections, including past US elections. A Facebook spokesperson said:</p>
<blockquote>
<p>We’ve hired experts, built teams with experience across different areas, and created new products, policies and partnerships to ensure we’re ready for the unique challenges of the US election. </p>
</blockquote>
<h2>When Facebook favoured one side</h2>
<p>Facebook has drawn widespread criticism for its failure to remove posts that clearly violate its policies on hate speech, including <a href="https://www.washingtonpost.com/technology/2020/06/28/facebook-zuckerberg-trump-hate/">posts</a> by Trump himself.</p>
<p>The company openly <a href="https://about.fb.com/news/2019/09/elections-and-political-speech/">exempts</a> politicians from its fact-checking program and knowingly hosts misleading content from politicians, under its “newsworthiness exception”. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1131728912835383300"}"></div></p>
<p>When Facebook tried to clamp down on misinformation in the aftermath of the 2016 presidential elections, <a href="https://www.bushcenter.org/people/joel-kaplan.html">ex-Republican staffer</a> turned Facebook executive Joel Kaplan argued doing so would disproportionately target conservatives, the Washington Post <a href="https://www.washingtonpost.com/technology/2020/02/20/facebook-republican-shift/">reported</a>.</p>
<p>The Conversation asked Facebook whether Kaplan’s past political affiliations indicated a potential for conservative bias in his current role. The question wasn’t answered.</p>
<p>Facebook’s board also now features a <a href="https://www.nytimes.com/2016/10/16/technology/peter-thiel-donald-j-trump.html">major Trump donor</a> and vocal supporter, Peter Thiel. Facebook’s chief executive Mark Zuckerberg has himself been accused of <a href="https://www.nytimes.com/2020/06/21/business/media/facebook-donald-trump-mark-zuckerberg.html">getting “too close”</a> to <a href="https://www.theguardian.com/commentisfree/2019/nov/22/surprised-about-mark-zuckerbergs-secret-meeting-with-trump-dont-be">Trump</a>.</p>
<p>Moreover, when the US Federal Trade Commission investigated Facebook’s role in the Cambridge Analytica scandal, it was <a href="https://www.theguardian.com/technology/2019/jul/12/facebook-fine-ftc-privacy-violations">Republican votes</a> that saved the company from facing antitrust litigation.</p>
<p>Overall, Facebook’s model has shifted <a href="https://www.theverge.com/interface/2019/4/11/18305407/social-network-conservative-bias-twitter-facebook-ted-cruz">towards increasing polarisation</a>. Incendiary and misinformation-laden posts tend to generate clicks.</p>
<p>As Zuckerberg himself <a href="https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/">notes</a>, “when left unchecked, people on the platform engage disproportionately” with such content.</p>
<p>Over the years, conservatives have accused Facebook of <a href="https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-over-alleged-bias/3423095002/">anti-conservative bias</a>, for which the company has faced <a href="https://www.thewrap.com/trump-campaign-halts-twitter-spending-over-disgusting-bias-against-mitch-mcconnell/">financial penalties from the Republican Party</a>. This is despite research indicating <a href="https://www.mediamatters.org/facebook/study-analysis-top-facebook-pages-covering-american-political-news">no such bias exists</a> on the platform.</p>
<h2>Fanning the flames</h2>
<p>Facebook’s <a href="https://www.livescience.com/49585-facebook-addiction-viewed-brain.html">addictive</a> news feed rewards us for simply skimming headlines, conditioning us to react viscerally.</p>
<p>Its sharing features have been found to <a href="https://science.sciencemag.org/content/359/6380/1146">promote falsehoods</a>. They can <a href="https://www.motherjones.com/politics/2014/10/can-voting-facebook-button-improve-voter-turnout/">trick users</a> into attributing news to their friends, causing them to assign trust to unreliable news sources. This provides a breeding ground for <a href="https://www.abc.net.au/news/science/2020-10-05/conspiracy-theories-coronavirus-5g-conspiratorial-psychology/12722320">conspiracies</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/netflixs-the-social-dilemma-highlights-the-problem-with-social-media-but-whats-the-solution-147351">Netflix's The Social Dilemma highlights the problem with social media, but what's the solution?</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0207383">Studies</a> have also shown social media to be an ideal environment for campaigns aimed at creating mistrust, which explains the increasing <a href="https://thehill.com/policy/healthcare/516412-polls-show-trust-in-scientific-political-institutions-eroding">erosion of trust in science and expertise</a>.</p>
<p>Worst of all are Facebook’s “echo chambers”, which convince people that only their own opinions are mainstream. This encourages hostile “us versus them” dialogue, which leads to polarisation. This pattern <a href="https://www.pewresearch.org/internet/2020/02/21/concerns-about-democracy-in-the-digital-age/">suppresses valuable democratic debate</a> and has been described as an <a href="https://www.amazon.com/Age-Surveillance-Capitalism-Future-Frontier/dp/1610395697">existential threat to democracy itself</a>.</p>
<p>Meanwhile, Facebook’s staff hasn’t been shy about skewing liberal, even suggesting in 2016 that Facebook work to <a href="https://www.gizmodo.com.au/2016/04/facebook-employees-asked-mark-zuckerberg-if-they-should-try-to-stop-a-donald-trump-presidency/">prevent Trump’s election</a>. Around 2017, they proposed a feature called “<a href="https://www.theverge.com/2018/12/23/18154111/facebook-common-grounds-feature-conservative-bias-concerns-shelved-joel-kaplan">Common Ground</a>”, which would have encouraged users with different political beliefs to interact in less hostile ways.</p>
<p>Kaplan opposed the proposition, according to <a href="https://www.wsj.com/articles/facebooks-lonely-conservative-takes-on-a-power-position-11545570000">The Wall Street Journal</a>, due to fears it could trigger claims of bias against conservatives. The project was eventually shelved in 2018.</p>
<p>Facebook’s track record isn’t good news for those who want to live in a healthy democratic state. Polarisation certainly doesn’t lead to effective political discourse. </p>
<p>While several <a href="https://about.fb.com/news/2020/10/preparing-for-election-day/">blog</a> <a href="https://about.fb.com/news/2020/08/preparing-for-myanmars-2020-election/">posts</a> from the company outline measures being taken to supposedly protect the integrity of the 2020 US presidential elections, it remains to be seen what this means in reality.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/seeing-is-believing-how-media-mythbusting-can-actually-make-false-beliefs-stronger-138515">Seeing is believing: how media mythbusting can actually make false beliefs stronger</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/148314/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Michael Brand does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Facebook benefits financially from misinformation spreading on its platform. As long as it puts profits ahead of public good, the tilting of the political landscape will persist.Michael Brand, Adjunct A/Prof of Data Science and Artificial Intelligence, Monash UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1475092020-10-21T15:45:32Z2020-10-21T15:45:32ZWe must make moral choices about how we relate to social media apps<figure><img src="https://images.theconversation.com/files/364456/original/file-20201020-14-nybccz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">The Social Dilemma/Netflix</span></span></figcaption></figure><p>Recently a South African <a href="https://www.kfm.co.za/Show/kfm-breakfast">radio show</a> asked, “If you had to choose between your mobile phone and your pet, which would choose?” Think about that for a moment. Many callers responded they would choose their phone. I was shocked… But to be honest, I give more attention to my phone than to my beloved dogs!</p>
<p>Throughout history there have been discoveries that have changed society in unimaginable ways. Written language made it possible to communicate over space and time. The printing press, say historians, helped shape societies <a href="https://www.jstor.org/stable/24357082">through</a> the mass dissemination of ideas. New modes of transport <a href="https://hrcak.srce.hr/index.php?id_clanak_jezik=237992&show=clanak">radically transformed</a> social norms by bringing people into contact with new cultures.</p>
<p>Yet these pale in comparison to how the internet is shaping, and misshaping, our individual and social <a href="https://www.counterpointknowledge.org/social-media-as-religion-unexamined-desire-and-mis-information/">identities</a>. I remember the first time I heard a teenager speaking with an American accent and discovered she’d never been out of South Africa but picked up her accent from watching YouTube. We shape our technologies, but they also shape us. </p>
<p>The potentially negative impacts of social media have again been highlighted by <a href="https://www.imdb.com/title/tt11464826/"><em>The Social Dilemma</em></a> on Netflix. The documentary, which Facebook has <a href="https://www.indiewire.com/2020/10/facebook-response-the-social-dilemma-1234590361/">slammed</a> as sensational and unfair, shows how dominant and largely unregulated social media companies manipulate users by harvesting personal data, while using <a href="https://theconversation.com/do-social-media-algorithms-erode-our-ability-to-make-decisions-freely-the-jury-is-out-140729">algorithms</a> to push information and ads that can lead to social media addiction – and dangerous anti-social behaviour. Among others, the show makes an example of the conspiracy theory <a href="https://theconversation.com/how-qanon-uses-satanic-rhetoric-to-set-up-a-narrative-of-good-vs-evil-146281">QAnon</a>, which is <a href="https://www.dailymaverick.co.za/article/2020-09-26-qanon-originated-in-south-africa-now-that-the-global-cult-is-back-here-we-should-all-be-afraid/">increasingly</a> <a href="https://www.thedailybeast.com/qanon-targets-africa-with-new-conspiracy-that-democrats-are-stealing-local-children">targeting</a> Africans.</p>
<p>Despite its flaws, the doccie got me wondering what our relationship to social media should be. As an ethics professor, I’ve come to realise that we must make moral choices about how we relate to our technologies. This requires an honest evaluation of our needs and weaknesses, and a clear understanding of the intentions of these platforms. </p>
<h2>Tug-of-war with technology</h2>
<p><a href="https://www.ynharari.com">Yuval Noah Harari</a>, author of <a href="https://www.theguardian.com/books/2014/sep/11/sapiens-brief-history-humankind-yuval-noah-harari-review"><em>Sapiens</em></a>, contends it’s our ability to inhabit “fiction” that differentiates humans. <a href="https://www.harpercollins.com/products/sapiens-yuval-noah-harari?variant=32207215656994">He claims</a> you “could never convince a monkey to give you a banana by promising him limitless bananas after death in monkey heaven”. Humans have a capacity to believe in things we cannot see – which changes things that do exist. Ideas like prejudice and hatred, for example, are powerful enough to cause wars that displace thousands. </p>
<p>The wall between Israel and Palestine was conceived in people’s minds before being transformed into bricks and barbed wire. Philosopher Olivier Razac’s book <a href="https://thenewpress.com/books/barbed-wire"><em>Barbed Wire: A Political History</em></a> traces how this razor-sharp technology has been deployed from the farms that displaced indigenous peoples to the trenches of World War I and the prisons of contemporary democracies. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&rect=344%2C2%2C1572%2C778&q=45&auto=format&w=1000&fit=clip"><img alt="A young woman in a bathroom is engaged with her mobile phone, reflected in a mirror." src="https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&rect=344%2C2%2C1572%2C778&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=245&fit=crop&dpr=1 600w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=245&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=245&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=308&fit=crop&dpr=1 754w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=308&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/364455/original/file-20201020-22-1v3ttyf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=308&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Sophia Hammons as Isla in <em>The Social Dilemma</em>.</span>
<span class="attribution"><span class="source">The Social Dilemma/Netflix</span></span>
</figcaption>
</figure>
<p>Technology is in a constant psychological, political and economic tug-of-war with humanity. Yet, some of today’s technologies are much more subtle than barbed wire. They are deeply <a href="https://books.google.co.za/books?hl=en&lr=&id=9wq9DwAAQBAJ&oi=fnd&pg=PA85&dq=info:gxEdWsbuE_0J:scholar.google.com&ots=5b6P23i9n9&sig=oonwZAiBsas7XNjTpP7e8pXq2XM&redir_esc=y#v=onepage&q&f=false">integrated into</a> our lives – they know us better than we know ourselves.</p>
<p>I have thousands of ‘friends’ on social media – far too many to relate to meaningfully. Yet, at times I can be more present to people that I have never met than I am to my family. This is not by chance – social media platforms are <a href="https://www.counterpointknowledge.org/social-media-as-religion-unexamined-desire-and-mis-information/">designed</a> to seek and hold our attention. They are businesses, intent on making money. Harvard University professor <a href="https://www.theguardian.com/books/2019/oct/04/shoshana-zuboff-surveillance-capitalism-assault-human-automomy-digital-privacy">Shoshana Zuboff</a>, who features in the documentary, explains in <a href="https://profilebooks.com/the-age-of-surveillance-capitalism.html"><em>The Age of Surveillance Capitalism</em></a> that social media “trades exclusively in human futures”.</p>
<h2>We are the product</h2>
<p>Zuboff says that social media platforms exploit our emotions and pre-cognitive needs – like belonging, recognition, acceptance and pleasure – that are ‘hard-wired’ into us to secure our survival. </p>
<p>Recognition relates to two of the primary <a href="https://books.google.co.za/books/about/The_Primal_Feast.html?id=TJF_xQAuLOYC&redir_esc=y">functions of the brain</a>, avoiding danger and finding ways to meet our basic survival needs (such as food or a mate to perpetuate our gene pool). These corporations, she says, are hiring the smartest engineers, social psychologists, behavioural economists and artists to hold our attention, while interspersing adverts between our videos, photos and status updates. They make money by offering a future that their advertisers will sell you. </p>
<p>Or, as former Google and Facebook employee Justin Rosenstein, says in <em>The Social Dilemma</em>:</p>
<blockquote>
<p>Our attention is the product being sold to advertisers. </p>
</blockquote>
<p>If our adult brains are so susceptible to this kind of manipulation, what effects are they having on the developing minds of children?</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/uaaC57tcci0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Trailer for <em>The Social Dilemma</em>.</span></figcaption>
</figure>
<p>The documentary also reminds the viewer that social media has a more subtle and powerful influence on our lives – shaping our social and political realities. </p>
<h2>Fake news and hate speech</h2>
<p>The documentary uses an example from 2017 in which Facebook use is linked to <a href="https://www.reuters.com/article/us-facebook-india-content-idUSKBN1X929F">violence</a> that led to the displacement of close to 700,000 Rohingya persons in Myanmar. Something that doesn’t really exist (a social media platform) violently changed something that does exist (the safety of people). Facebook was a primary means of communication in Myanmar. New phones came with Facebook pre-installed. What users were unaware of was a ‘third person’ – Facebook’s algorithms – feeding information that included hate speech and fake news into their conversations. In Africa, similar reports have emerged from <a href="https://www.buzzfeednews.com/article/jasonpatinkin/how-to-get-people-to-murder-each-other-through-fake-news-and#.cfxZRym4z">South Sudan</a> and <a href="https://theconversation.com/a-vicious-online-propaganda-war-that-includes-fake-news-is-being-waged-in-zimbabwe-99402">Zimbabwe</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/netflixs-the-social-dilemma-highlights-the-problem-with-social-media-but-whats-the-solution-147351">Netflix's The Social Dilemma highlights the problem with social media, but what's the solution?</a>
</strong>
</em>
</p>
<hr>
<p>Another example used is the <a href="https://www.theguardian.com/technology/2019/mar/17/the-cambridge-analytica-scandal-changed-the-world-but-it-didnt-change-facebook">Cambridge Analytica</a> <a href="https://theconversation.com/why-facebook-is-the-reason-fake-news-is-here-to-stay-94308">scandal</a>, which also played out in <a href="https://qz.com/africa/1089911/bell-pottinger-and-cambridge-analyticas-work-in-south-africa-kenya-is-raising-questions/">Africa</a>, most notably in <a href="https://theconversation.com/how-the-nigerian-and-kenyan-media-handled-cambridge-analytica-128473">Nigeria and Kenya</a>. Facebook user information was mined and sold to nefarious political actors. This information (like what people feared and what upset them) was used to spread misinformation and manipulate their voting decisions on important elections.</p>
<h2>What to do about it?</h2>
<p>So, what do we do? We can’t very well give up on social media completely, and I don’t think it is necessary. These technologies are already deeply intertwined with our daily lives. We cannot deny they have some value. </p>
<p>However, just like humans had to adapt to the responsible use of the printing press or long distance travel, we will need to be more intentional about how we relate to these new technologies. We can begin by cultivating healthier social media <a href="https://books.google.co.za/books?hl=en&lr=&id=9wq9DwAAQBAJ&oi=fnd&pg=PA85&dq=info:gxEdWsbuE_0J:scholar.google.com&ots=5b6P23i9n9&sig=oonwZAiBsas7XNjTpP7e8pXq2XM&redir_esc=y#v=onepage&q&f=false">habits</a>.</p>
<p>We should also develop a greater awareness of the aims of these companies and how they achieve them, while understanding how our information is being used. This will allow us to make some simple commitments that align our social media usage to our better values.</p>
<p class="fine-print"><em><span>Dion Forster receives funding from the South African National Research Foundation. </span></em></p>As more comes to light about the money-making tactics of social media platforms we need to reevaluate our relationship with them.Dion Forster, Head of Department, Systematic Theology and Ecclesiology, Stellenbosch UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1472612020-10-01T20:05:45Z2020-10-01T20:05:45ZFacebook is merging Messenger and Instagram chat features. It’s for Zuckerberg’s benefit, not yours<p>Facebook Messenger and Instagram’s direct messaging services will be integrated into one system, Facebook has <a href="https://about.instagram.com/blog/announcements/say-hi-to-messenger-introducing-new-messaging-features-for-instagram">announced</a>. </p>
<p>The merge will allow shared messaging across both platforms, as well as video calls and access to a range of tools drawn from each. It’s currently being rolled out across countries on an opt-in basis, but hasn’t yet reached Australia.</p>
<p>Facebook CEO Mark Zuckerberg <a href="https://www.facebook.com/notes/mark-zuckerberg/a-privacy-focused-vision-for-social-networking/10156700570096634/">announced</a> plans in March last year to integrate Messenger, Instagram Direct and WhatsApp into a unified messaging experience. </p>
<p>At the crux of this was the goal to administer end-to-end encryption across the whole messaging “ecosystem”. </p>
<p>Ostensibly, this was part of Facebook’s renewed focus on privacy, in the wake of several highly publicised scandals. Most notable was its poor data protection that allowed political consulting firm <a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election">Cambridge Analytica</a> to steal data from 87 million Facebook accounts and use it to target users with political ads ahead of the 2016 US presidential election.</p>
<p>In a <a href="https://about.fb.com/news/2020/09/new-messaging-features-for-instagram/">statement</a> released yesterday on the new merge, Instagram CEO Adam Mosseri and Messenger vice president Stan Chudnovsky wrote:</p>
<blockquote>
<p>… one out of three people sometimes find it difficult to remember where to find a certain conversation thread. With this update, it will be even easier to stay connected without thinking about which app to use to reach your friends and family.</p>
</blockquote>
<p>While that may seem harmless, it’s likely Facebook is actually attempting to make its apps inseparable, ahead of a <a href="https://www.bloomberg.com/news/articles/2020-09-15/ftc-said-to-prepare-possible-antitrust-lawsuit-against-facebook">potential anti-trust lawsuit</a> in the US that may try to see the company sell Instagram and WhatsApp. </p>
<p><div data-react-class="InstagramEmbed" data-react-props="{"url":"https://www.instagram.com/p/CFxRG23pZXV","accessToken":"127105130696839|b4b75090c9688d81dfd245afe6052f20"}"></div></p>
<h2>Together, with Facebook, 24/7</h2>
<p>The Messenger/Instagram Direct merge will <a href="https://mashable.com/article/facebook-messenger-instagram/">extend to</a> features rolled out during the pandemic, such as the “<a href="https://about.fb.com/news/2020/09/introducing-watch-together-on-messenger/">Watch Together</a>” tool for Messenger. As the name suggests, this lets users watch videos together in real time. Now, both Messenger and Instagram users will be able to use it, regardless of which app they’re on.</p>
<p>With the integration, new privacy challenges emerge. Facebook has <a href="https://about.fb.com/news/2020/09/privacy-matters-cross-app-communication/">already acknowledged</a> this. And these challenges will present despite Facebook’s overarching privacy policy applying to every app in its app “family”. </p>
<p>For example, in the new merged messaging ecosystem, a user you previously blocked on Messenger won’t automatically be blocked on Instagram. Thus, the blocked person will be able to <a href="https://about.fb.com/news/2020/09/privacy-matters-cross-app-communication/">once again contact you</a>. This could open the door to a wave of unexpected online abuse.</p>
<h2>Why this is good for Mark Zuckerberg</h2>
<p>This first step – and Facebook’s <a href="https://www.facebook.com/notes/mark-zuckerberg/a-privacy-focused-vision-for-social-networking/10156700570096634/">full roadmap</a> for the encrypted integration of WhatsApp, Instagram Direct and Messenger – has three clear outcomes.</p>
<p>Firstly, end-to-end encryption means Facebook will have <a href="https://www.justice.gov/opa/press-release/file/1207081/download">complete deniability</a> for anything that travels across its messaging tools. </p>
<p>It won’t be able to “see” the messages. While this might be good from a user privacy perspective, it also means anything from bullying, to <a href="https://milwaukeenns.org/2014/05/21/special-report-diploma-mill-scams-continue-to-plague-milwaukees-adult-students/">scams</a>, to illegal drug sales, to <a href="https://www.justice.gov/usao-ednc/pr/jacksonville-man-sentenced-child-pornography-case">paedophilia</a> can’t be policed if it happens via these tools. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebooks-push-for-end-to-end-encryption-is-good-news-for-user-privacy-as-well-as-terrorists-and-paedophiles-128782">Facebook's push for end-to-end encryption is good news for user privacy, as well as terrorists and paedophiles</a>
</strong>
</em>
</p>
<hr>
<p>This would stop Facebook being blamed for hurtful or illegal uses of its services. As far as moderating the platform goes, Facebook would effectively become “invisible” (not to mention moderation is <a href="https://journals.sagepub.com/doi/10.1177/2056305120948186">expensive and complicated</a>). </p>
<p>This is all great news for Mark Zuckerberg, especially as Facebook stares down the barrel of <a href="https://www.theverge.com/2020/7/29/21335706/antitrust-hearing-highlights-facebook-google-amazon-apple-congress-testimony">potential anti-trust litigation</a>.</p>
<p>Secondly, once the apps are merged, functionally they will no longer be separate platforms. They will still <em>exist</em> as separate apps with some separate features, but the vast amount of personal data underpinning them will live in one giant, shared database. </p>
<p>Deeper data integration will let Facebook know users more intimately. Moreover, it will be able to leverage this new insight to target users with more advertising and expand further.</p>
<p>Finally, and perhaps most concerning, is that by integrating its apps Facebook could legitimately respond to <a href="https://www.wsj.com/articles/ftc-preparing-possible-antitrust-suit-against-facebook-11600211840">anti-trust lawsuits</a> by saying it can’t separate Instagram or WhatsApp from the main Facebook platform – because they’re the same thing now. </p>
<p>And if they can’t be separated, there’s no way Facebook could sell Instagram or WhatsApp, even if it wanted to. </p>
<h2>100 billion messages a day</h2>
<p>The messaging traffic across Facebook’s platforms <a href="https://about.fb.com/news/2020/09/new-messaging-features-for-instagram/">is vast</a>, with more than 100 billion messages sent daily. And this has <a href="https://www.warc.com/newsandopinion/news/pandemic-lifts-social-media-use-but-for-how-long/43552">only</a> <a href="https://www.nytimes.com/interactive/2020/04/07/technology/coronavirus-internet-use.html">increased</a> during the COVID-19 pandemic.</p>
<p>With the sheer size of its user database, Facebook continues to either purchase or squash its competition. Concerns about the company being a monopoly aren’t without merit. </p>
<p><a href="https://www.theverge.com/2018/9/4/17816572/tim-wu-facebook-regulation-interview-curse-of-bigness-antitrust">Researchers</a> and <a href="https://www.theverge.com/2019/5/9/18538106/facebook-co-founder-chris-hughes-breakup-regulation-ftc-us-government">founding Facebook employees</a> have called to have the company split up – and for Instagram and Whatsapp to become separate again.</p>
<p>Just a few months ago, Facebook released its Instagram-housed tool <a href="https://about.instagram.com/blog/announcements/introducing-instagram-reels-announcement">Reels</a> which bears a striking resemblance to TikTok, another social app sweeping the globe. </p>
<p>It seems this is just another example of Facebook trying to use the sheer size of its network to stifle growing competition, aided (perhaps unwittingly) by Donald Trump’s anti-China sentiment.</p>
<p>If competition is important to encouraging innovation and diversity, then the newest development from Facebook discourages both these things. It further entrenches Facebook and its services into the lives of consumers, making it harder to pull away. And this certainly isn’t far from monopolistic behaviour.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/trumps-tiktok-deal-explained-who-is-oracle-why-walmart-and-what-does-it-mean-for-our-data-146566">Trump's TikTok deal explained: who is Oracle? Why Walmart? And what does it mean for our data?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/147261/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Tama Leaver receives funding from Australian Research Council (ARC); he is currently a Chief Investigator in the ARC Centre of Excellence for the Digital Child.</span></em></p>Having an end-to-end encrypted messaging ‘ecosystem’ is a great way for Facebook to evade the full wrath of the law. It has come at a convenient time, too.Tama Leaver, Associate Professor in Internet Studies, Curtin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1417032020-07-02T01:39:31Z2020-07-02T01:39:31ZReddit removes millions of pro-Trump posts. But advertisers, not values, rule the day<p>On Monday, online discussion platform Reddit <a href="https://www.theguardian.com/technology/2020/jun/29/reddit-the-donald-twitch-social-media-hate-speech">permanently took down</a> its largest community of Donald Trump supporters, r/The_Donald.</p>
<p>The community had more than 7,000 active users per day (although this figure had previously been much higher). The ban was <a href="https://www.reddit.com/r/announcements/comments/hi3oht/update_to_our_content_policy/">on the grounds</a> that some posts incited violence, and that the community had engaged in harassment on other subreddits. It removed hundreds of thousands of posts and millions of comments going back many years. </p>
<p>The “r/The_Donald” subreddit is a themed, online message board where users can submit, comment and vote on posts. The <a href="https://www.nytimes.com/2020/06/29/technology/reddit-hate-speech.html">decision to ban</a> it comes as several other platforms censure racist and violent material from Trump and his supporters.</p>
<p>Twitter recently <a href="https://www.reuters.com/article/us-twitter-factcheck/with-fact-checks-twitter-takes-on-a-new-kind-of-task-idUSKBN2360U0">fact-checked</a> some of Trump’s posts, video live-streaming service Twitch has temporarily <a href="https://www.theverge.com/2020/6/29/21307145/twitch-donald-trump-ban-campaign-account">banned</a> the president’s account, and Facebook is now <a href="https://www.nytimes.com/2020/06/29/business/dealbook/facebook-boycott-ads.html">losing advertisers</a> over its unwillingness to moderate hateful material and disinformation, including from the president.</p>
<p>According to the <a href="https://www.nytimes.com/2020/06/29/technology/reddit-hate-speech.html">New York Times</a>, Reddit <a href="https://thenextweb.com/apps/2020/06/29/reddit-bans-r-thedonald-and-2000-other-hateful-subreddits-because-it-was-about-time/">also banned</a> another 2,000 communities across the political spectrum alongside the pro-Trump community, including left-leaning groups. </p>
<p>But while some may celebrate these actions, the moves should be understood within the context of a largely deregulated information economy, in which “doing good” is mostly about “doing well”. In other words: making money.</p>
<p>On closer inspection, the removal of r/The_Donald exposes the inadequacies of market-based information governance. Even in cases where individual governance decisions benefit society, the information economy remains primarily motivated by profit.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-vs-news-australia-wants-to-level-the-playing-field-facebook-politely-disagrees-141043">Facebook vs news: Australia wants to level the playing field, Facebook politely disagrees</a>
</strong>
</em>
</p>
<hr>
<h2>Reddit’s changing approach</h2>
<p>Started in 2015, r/The_Donald was the largest and most controversial subreddit dedicated to supporting Trump. Before the ban, it had more than 790,000 subscribers and was at times one of the most popular subreddits on the platform.</p>
<p>In June last year, Reddit “quarantined” <a href="https://www.theverge.com/2019/6/26/18759967/reddit-quarantines-the-donald-trump-subreddit-misbehavior-violence-police-oregon">the subreddit over posts inciting violence</a>. Several months later it purged most of the community’s volunteer moderators, arguing they weren’t upholding the platform’s policies, particularly through allowing banned content to stay up.</p>
<p>These shifts mirror changes in Reddit’s overall governance approach.</p>
<p>Historically, the platform has sold itself as a democratic space for free speech, with administrators resisting censorship in <a href="https://www.dailydot.com/unclick/reddit-beatingwomen-misogyny-images/">favour of a hands-off philosophy</a>. However, like other platforms, Reddit now faces pressure from advertisers that don’t want their brands associated with political extremism.</p>
<p>Advertising is a <a href="https://www.cnbc.com/2018/06/29/how-reddit-plans-to-make-money-through-advertising.html">growing part of Reddit’s economic model</a>. And with major partners such as <a href="https://www.redditinc.com/assets/case-studies/LOreal_Case_Study.pdf">L'Oréal</a> and <a href="https://www.redditinc.com/assets/case-studies/Audi_Case_Study.pdf">Audi</a>, advertisers’ preferences undoubtedly hold sway in how the website is regulated. </p>
<p>But as digital marketing agency iCrossing’s chief media officer <a href="https://www.cnbc.com/2018/06/29/how-reddit-plans-to-make-money-through-advertising.html">has previously argued</a>:</p>
<blockquote>
<p>What makes it (Reddit) attractive to consumers, which is the free and open ability to post, makes them scary to advertisers.</p>
</blockquote>
<h2>Walking a tightrope</h2>
<p>For major social media platforms, content regulation is a delicate balancing act between value and liability. </p>
<p>Reddit’s laissez-faire approach and community-led model invites broad participation and has helped its user base grow. However, this also fosters content that’s distasteful, unseemly and potentially dangerous – creating brand associations many advertisers would rather avoid. </p>
<p>The r/The_Donald subreddit embodies this tension. Reddit’s gradual regulation of it, and eventual banning, indicates the value-liability balance has tipped towards the latter.</p>
<p>While there is reason to laud these regulatory shifts, they are products of political-economic realities, rather than social priorities. And they speak to a much broader issue of information policy in contemporary society. </p>
<p>Although social media platforms are central to civic discourse, they’re also products in a competitive market economy. As long as that market economy remains deregulated by governments, individual companies will have outsized power. </p>
<p>They <em>may</em> use their power for social good, but this decision will be market-based, and thus can change with the winds of financial promise. </p>
<h2>Risks for Reddit, risks for the internet</h2>
<p>Much of Reddit’s popularity has come from its status as the “wild west” of the internet. </p>
<p>The platform’s new approach may alienate its more dedicated user base. In trying to balance the ethos of free speech with increasing pressure to regulate, Reddit finds itself stuck <a href="https://thesocietypages.org/cyborgology/2018/10/29/reddit-quarantined/">between a rock and a hard place</a>.</p>
<p>And as Reddit moves to moderate and ban hateful content, more extreme users are going elsewhere. Prior to the r/The_Donald subreddit’s banning, participants had already established their own <a href="https://thedonald.win/">external site</a> and were encouraging others to move there. </p>
<p>Similarly, moderators on the quarantined r/MGTOW (an anti-feminist men’s rights subreddit) are now directing subscribers to a <a href="https://discord.com/login?redirect_to=%2Fchannels%2F%40me">Discord</a> channel – a community-based discussion app for private and public interaction.</p>
<p>Moderators of the quarantined r/TheRedPill (another anti-feminist men’s rights group) have been directing users to an external site for over a year.</p>
<p>Users leaving for external sites will reduce hateful content on Reddit, but will concentrate this hate elsewhere. And such sites are often far less regulated than larger platforms.</p>
<p>Conservatives increasingly complain <a href="https://www.theatlantic.com/ideas/archive/2019/07/conservatives-pretend-big-tech-biased-against-them/594916/">digital platforms are anti-conservative</a>. Reddit’s actions against r/The_Donald will likely increase calls for new, conservative-founded platforms.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dont-just-blame-echo-chambers-conspiracy-theorists-actively-seek-out-their-online-communities-127119">Don't (just) blame echo chambers. Conspiracy theorists actively seek out their online communities</a>
</strong>
</em>
</p>
<hr>
<h2>How to prevent distilled anger</h2>
<p>Reddit’s move highlights the influence of economics in platform governance – and the vulnerabilities that arise from this. </p>
<p>Rather than individual moderation decisions, what’s needed is a broad regulatory framework that holds corporate bodies to account. We need to reconsider “<a href="https://www.reuters.com/article/us-twitter-trump-executive-order-explain/explainer-whats-in-the-law-protecting-internet-companies-and-can-trump-change-it-idUSKBN23434V">safe harbour</a>” laws that protect social media companies from legal liability. </p>
<p>More broadly, we need to recognise social media are entangled with civic society, and enact social policies that coincide with the weight of that responsibility. </p>
<img src="https://counter.theconversation.com/content/141703/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The platform also took down another 2,000 communities, including left-leaning groups. The move comes just months ahead of the 2020 US presidential election.Simon Copland, PhD Student -- Sociology, Australian National UniversityJenny L. Davis, Lecturer in the School of Sociology, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1270272019-11-19T19:20:19Z2019-11-19T19:20:19ZInstead of showing leadership, Twitter pays lip service to the dangers of deep fakes<figure><img src="https://images.theconversation.com/files/302366/original/file-20191119-12535-1ibjq98.jpg?ixlib=rb-1.1.0&rect=33%2C22%2C3648%2C2047&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Neural networks can generate artificial representations of human faces, as well as realistic renderings of actual people.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/download/success?u=http%3A%2F%2Fdownload.shutterstock.com%2Fgatekeeper%2FW3siZSI6MTU3NDE2OTY2MiwiYyI6Il9waG90b19zZXNzaW9uX2lkIiwiZGMiOiJpZGxfMTQzMDU3MTg2OSIsImsiOiJwaG90by8xNDMwNTcxODY5L21lZGl1bS5qcGciLCJtIjoxLCJkIjoic2h1dHRlcnN0b2NrLW1lZGlhIn0sIjJXSFNVZFhvUDRWVnFjUHdSZE9VSis3MVFGOCJd%2Fshutterstock_1430571869.jpg&ir=true&pi=41133566&m=1430571869&src=1958ce5f-79dc-4c00-a7d9-a80249171913-1-0">Shutterstock</a></span></figcaption></figure><p>Fake videos and doctored photographs, often based on events such as the <a href="https://www.space.com/apollo-11-moon-landing-hoax-believers.html">Moon landing</a> and supposed UFO appearances, have been the subject of fascination for decades.</p>
<p>Such imagery is often <a href="https://www.forbes.com/sites/chenxiwang/2019/11/01/deepfakes-revenge-porn-and-the-impact-on-women/#4f721c5e1f53">deep fake content</a>, so called because it is produced using deep learning techniques associated with neural networks and digital image processing. </p>
<p>Last week, Twitter <a href="https://www.reuters.com/article/us-twitter-deepfakes/twitter-wants-your-feedback-on-its-deepfake-policy-plans-idUSKBN1XL2C6">revealed</a> plans to introduce a <a href="https://blog.twitter.com/en_us/topics/company/2019/synthetic_manipulated_media_policy_feedback.html">new policy</a> governing deep fake videos on its platform. </p>
<p>The company proposed it would warn users about deep fake content by flagging tweets with “synthetic or manipulated media”. Twitter says media may be removed in cases where it could lead to serious harm, but has stopped short of enforcing a strict removal stance. Users have until November 27 to provide feedback. </p>
<p>In adopting this warning-only approach towards deep fakes, the social media giant has shown poor judgement. </p>
<h2>Why deep fakes are dangerous</h2>
<p>With advances in computer science, deep fakes are becoming an increasingly powerful tool to deceive people using social media.</p>
<p>Deep fake clips of celebrities and politicians are realistic enough to trick users into making financial, political and personal decisions based on the fake testimony of others. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/VWrhRBb-1Ig?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">This YouTube clip featuring actor Bill Hader shows how realistic deep fake content can be.</span></figcaption>
</figure>
<p>Whether it’s a David Koch <a href="https://www.dailymail.co.uk/tvshowbiz/article-6204111/David-Koch-unwillingly-face-erectile-dysfunction-advertising-scam.html">erectile dysfunction cream</a> scam, an announcement by Donald Trump that <a href="https://shots.net/news/view/has-donald-trump-eradicated-aids">AIDs has been eradicated</a>, or a fake interview with Andrew Forrest leading to a <a href="https://www.commerce.wa.gov.au/announcements/scammers-use-fake-twiggy-forrest-investment-fleece-woman-out-670000">finance scam</a>, deep fakes present a serious risk to our ability to trust what we view online. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/people-who-spread-deepfakes-think-their-lies-reveal-a-deeper-truth-119156">People who spread deepfakes think their lies reveal a deeper truth</a>
</strong>
</em>
</p>
<hr>
<p>Social media companies have so far taken a sloppy approach to this threat. They have even promoted the use of photo algorithms letting users experiment with animated face masks, and provided tutorials on how to use editing programs. </p>
<p>Deep fake production is the <a href="https://www.sciencealert.com/deepfake-ai-algorithms-can-now-take-text-and-turn-it-into-words-spoken-in-a-video">professional version</a> of this practice. At its worst, it can even <a href="https://intelligence.house.gov/news/documentsingle.aspx?DocumentID=657">threaten democracy</a>.</p>
<p>Twitter’s latest draft policy on deep fakes sets a dangerous precedent. It allows social media platforms to handball away their responsibility to protect customers from manipulated videos and imagery. </p>
<h2>Twitter should be just as accountable as television</h2>
<p>It’s time social media giants such as Twitter started seeing themselves as the 21st century version of free-to-air television. With TV, there are clear guidelines about what cannot be broadcast. </p>
<p>Since 1992, Australians have been protected by the <a href="https://www.legislation.gov.au/Details/C2018C00060">Broadcasting Services Act</a>, which ensures that what is broadcast constitutes “fair and accurate coverage”. The act <a href="http://www5.austlii.edu.au/au/legis/cth/consol_act/cca1995115/sch1.html">protects</a> viewers with regard to the origin and authenticity of television content.</p>
<p>The same principles should apply to social media. Americans now spend <a href="https://www.socialmediatoday.com/news/people-are-now-spending-more-time-on-smartphones-than-they-are-watching-tv/556405/">more time on social media</a> than they do watching television, and Australia isn’t far behind.</p>
<p>By suggesting they only need to flag tweets with deep fake content, Twitter’s proposed policy downplays the seriousness of the threat. </p>
<h2>Sending the wrong message</h2>
<p>Twitter’s draft policy is dangerous on two fronts. </p>
<p>Firstly, it suggests the company is somehow doing its part in protecting its users. In reality, Twitter’s decision is akin to watching a child struggle to swim in heavy surf while nearby authorities wave a sign saying “some waves may be hard to judge”, instead of actually helping.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/lies-fake-news-and-cover-ups-how-has-it-come-to-this-in-western-democracies-102041">Lies, 'fake news' and cover-ups: how has it come to this in Western democracies?</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://www.theguardian.com/technology/2019/jun/23/what-do-we-do-about-deepfake-video-ai-facebook">Senior citizens</a> and inexperienced social media users are particularly vulnerable to deep fakes. This is because they’re predisposed to <a href="https://ro.ecu.edu.au/ecuworkspost2013/5709/">trust online content</a> that looks authentic.</p>
<p>The second reason Twitter’s proposition is dangerous is that social media trolls and <a href="https://ro.ecu.edu.au/ecuworkspost2013/665/">sock puppet armies</a> enjoy surprising online audiences. Sock puppets specialise in deceiving users by posing as one fake person (or several fake people) through false posts and fabricated online identities.</p>
<p>Basically, content that has been signposted as deep fake will be exploited by people wanting to amplify its spread. It’s unrealistic to suppose this won’t happen. </p>
<p>If Twitter flags posts that are fake, yet leaves them up, the likely outcome will be a popularity surge in this content. As per social media algorithms, this means a greater number of fake videos and images will be “<a href="https://business.twitter.com/en/help/overview/what-are-promoted-tweets.html">promoted</a>” rather than retracted. </p>
<p>Twitter has an opportunity to take a leadership role in preventing the spread of deep fake content, by identifying and removing deep fakes from its platform. All major social media platforms have the responsibility to present a unified approach to the prevention and removal of manipulated and fake imagery.</p>
<p>The circulation of a <a href="https://fortune.com/2019/06/12/deepfake-mark-zuckerberg/">Nancy Pelosi deep fake</a> video earlier this year revealed social media’s inconsistency in the handling of deceitful imagery. YouTube removed the clip from its platform, Facebook flagged it as false, and Twitter let it remain. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-can-now-create-fake-porn-making-revenge-porn-even-more-complicated-92267">AI can now create fake porn, making revenge porn even more complicated</a>
</strong>
</em>
</p>
<hr>
<p>Twitter is in the business of helping users repost links and content as many times as possible. It creates profit by generating repeated referrals, commentary, and the acceptance of its content through <a href="https://fourweekmba.com/how-does-twitter-make-money/">promoted trends</a>. </p>
<p>If deep fakes aren’t removed from Twitter, their growth will be exponential. </p>
<h2>A looming threat</h2>
<p><a href="https://www.schneier.com/blog/archives/2018/10/detecting_fake_.html">Early versions</a> of such spurious content were relatively easy to spot. People in the first deep fake clips appeared unrealistic. Their eyes wouldn’t blink and their facial gestures wouldn’t sync with the words being spoken. </p>
<p>There are also examples of harmless image manipulation. These include web apps on <a href="https://www.pocket-lint.com/apps/news/facebook/139756-facebook-messenger-here-s-how-to-use-those-new-snapchat-like-lenses">Snapchat and Facebook</a> that let users alter their photos (usually selfies) to add backgrounds, or resemble characters such as cute animals.</p>
<p>However, this new generation of altered imagery is often hard to distinguish from reality. And as criminals and pranksters improve their production of deep fakes, the other side of this double-edged sword could swing at any time.</p><img src="https://counter.theconversation.com/content/127027/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Dr David Cook is affiliated with Edith Cowan University as a lecturer in the School of Science, and is a Fellow of the Australian Computer Society </span></em></p>Twitter’s proposed policy would result in the prolific spread of fabricated, but highly realistic images and videos. This could allow widespread misinformation on the platform.David Cook, Lecturer, Computer and Security Science,Edith Cowan University, Edith Cowan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1237672019-09-22T20:16:16Z2019-09-22T20:16:16ZUsers (and their bias) are key to fighting fake news on Facebook – AI isn’t smart enough yet<figure><img src="https://images.theconversation.com/files/293332/original/file-20190920-16165-xsg2z1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">On its own, human judgement can be subjective and skewed towards personal biases.</span> </figcaption></figure><p>The information we encounter online everyday can be misleading, incomplete or fabricated. </p>
<p>Being exposed to “fake news” on social media platforms such as Facebook and Twitter can influence our thoughts and decisions. We’ve already seen misinformation <a href="https://www.vox.com/2017/4/28/15476142/facebook-report-trump-clinton-russia-us-presidential-election">interfere with elections</a> in the United States.</p>
<p>Facebook founder Mark Zuckerberg has repeatedly <a href="https://www.vox.com/2017/2/16/14640460/mark-zuckerberg-facebook-manifesto-letter">proposed artificial intelligence</a> (AI) as the <a href="https://techcrunch.com/2016/11/14/facebook-fake-news/">solution</a> to the fake news dilemma. </p>
<p>However, the issue likely requires high levels of human involvement, as many experts agree that AI technologies <a href="https://www.forbes.com/sites/charlestowersclark/2018/10/04/can-ai-put-an-end-to-fake-news-dont-be-so-sure/#352bc4532f84">need further advancement</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/we-made-deceptive-robots-to-see-why-fake-news-spreads-and-found-a-weakness-104776">We made deceptive robots to see why fake news spreads, and found a weakness</a>
</strong>
</em>
</p>
<hr>
<p>I and two colleagues have <a href="https://research.fb.com/programs/research-awards/proposals/the-online-safety-benchmark-request-for-proposals/">received funding</a> from Facebook to independently carry out research on a “human-in-the-loop” AI approach that might help bridge the gap. </p>
<p>Human-in-the-loop refers to the involvement of humans (users or moderators) to support AI in doing its job. For example, by creating training data or manually validating the decisions made by AI.</p>
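<p>As a rough illustration only, the sketch below shows one way such a pipeline can be wired up. It is written in Python with hypothetical names (the classifier, labels and threshold are assumptions made for the example, not the system used in our project): the model keeps its confident decisions, uncertain posts are routed to a human moderator, and the human judgements are collected as training data for the next model iteration.</p>
<pre><code>
# Minimal human-in-the-loop sketch (illustrative only; all names are hypothetical).
from typing import Callable, List, Tuple

Post = str
Label = str

def human_in_the_loop(
    scored_posts: List[Tuple[Post, Label, float]],   # (text, model label, model confidence)
    ask_moderator: Callable[[Post], Label],          # a person makes the call
    threshold: float = 0.8,
):
    """Keep the model's confident calls; route uncertain posts to a human.
    Human judgements are also gathered as training data for the next model."""
    decisions, new_training_data = [], []
    for text, model_label, confidence in scored_posts:
        if confidence >= threshold:
            decisions.append((text, model_label, "model"))
        else:
            human_label = ask_moderator(text)
            decisions.append((text, human_label, "moderator"))
            new_training_data.append((text, human_label))
    return decisions, new_training_data
</code></pre>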
<p>Our approach combines AI’s ability to process large amounts of data with humans’ ability to understand digital content. This is a targeted solution to fake news on Facebook, given the platform’s massive scale and the subjective nature of interpreting content.</p>
<p>The dataset we’re compiling can be used to train AI. But we also want all social media users to be more aware of their own biases, when it comes to what they dub fake news. </p>
<h2>Humans have biases, but also unique knowledge</h2>
<p>Asking Facebook employees to make controversial editorial decisions in order to eradicate fake news is problematic, as <a href="https://doi.org/10.1145/3209978.3210094">our research found</a>. This is because the way people perceive content depends on their cultural background, political ideas, biases, and stereotypes.</p>
<p>Facebook has employed <a href="https://www.facebook.com/zuck/posts/10103695315624661">thousands</a> of people for content moderation. These moderators spend eight to ten hours a day looking at explicit and violent material such as pornography, terrorism, and beheadings, to decide which content is acceptable for users to see. </p>
<p>Consider them cyber janitors who clean our social media by removing inappropriate content. They play an integral role in shaping what we interact with.</p>
<p>A similar approach could be adapted to fake news, by asking Facebook’s moderators which articles should be removed and which should be allowed.</p>
<p>AI systems could do this automatically at a large scale by learning what fake news is from manually annotated examples. But even when AI can detect “forbidden” content, human moderators are needed to flag content that is controversial or subjective.</p>
<p>A famous example is the Napalm Girl image.</p>
<p>The Pulitzer Prize-winning photograph shows children and soldiers escaping from a napalm bomb explosion during the Vietnam War. The image was posted on Facebook in 2016 and <a href="https://www.theguardian.com/technology/2016/sep/09/facebook-reinstates-napalm-girl-photo">removed</a> because it showed a naked nine-year-old girl, contravening Facebook’s official <a href="https://www.facebook.com/communitystandards/">community standards</a>. </p>
<p>Significant community protest followed, as the iconic image had obvious historical value, and Facebook allowed the photo back on its platform.</p>
<h2>Using the best of brains and bots</h2>
<p>In the context of verifying information, human judgement can be subjective and skewed based on a person’s background and implicit bias. </p>
<p>In our <a href="https://www.damianospina.com/wp-content/uploads/2018/08/roitero2018how.pdf">research</a> we aim to collect multiple “truth labels” for the same news item from a few thousand moderators. These labels indicate the “fakeness” level of a news article.</p>
<p>Rather than simply collect the most popular labels, we also want to record moderators’ backgrounds and their specific judgements to <a href="https://doi.org/10.1145/3308560.3317307">track and explain ambiguity and controversy</a> in the responses.</p>
<p>We’ll compile results to generate a high-quality dataset, which may help us explain cases with high levels of disagreement among moderators. </p>
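<p>As a sketch only (the labels, data and scoring below are illustrative assumptions, not the scheme used in our study), disagreement between moderators can be quantified by measuring how spread out the labels for each item are, for example with a normalised entropy score:</p>
<pre><code>
# Illustrative disagreement score for news items labelled by several moderators.
import math
from collections import Counter
from typing import Dict, List

def disagreement(labels: List[str]) -> float:
    """Normalised Shannon entropy of the label distribution:
    0.0 means every moderator agreed, 1.0 means labels are split evenly."""
    counts = Counter(labels)
    total = sum(counts.values())
    if total == 0 or len(counts) == 1:
        return 0.0
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy / math.log2(len(counts))

# Hypothetical labels from four moderators per article.
item_labels: Dict[str, List[str]] = {
    "article_a": ["fake", "fake", "fake", "fake"],
    "article_b": ["fake", "misleading", "genuine", "misleading"],
}
for item, labels in item_labels.items():
    print(item, round(disagreement(labels), 2))
# article_a scores 0.0 (consensus); article_b scores about 0.95 (controversial).
</code></pre>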
<p>Currently, Facebook content is treated as binary: it either complies with the standards or it doesn’t.</p>
<p>The dataset we compile can be used to train AI to better identify fake news by teaching it which news is controversial and which news is plain fake. The data can also help evaluate how effective current AI is in fake news detection.</p>
<h2>Power to the people</h2>
<p>While benchmarks to evaluate AI systems that can detect fake news are important, we want to go a step further.</p>
<p>Instead of only asking AI or experts to make decisions about what news is fake, we should teach social media users how to identify such items for themselves. We think an approach aimed at fostering information credibility literacy is possible.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/most-young-australians-cant-identify-fake-news-online-87100">Most young Australians can’t identify fake news online</a>
</strong>
</em>
</p>
<hr>
<p>In our ongoing <a href="https://www.uq.edu.au/news/article/2019/09/figuring-out-fake-news">research</a>, we’re collecting a vast range of user responses to identify credible news content. </p>
<p>While this can help us build AI training programs, it also lets us study the development of human moderator skills in recognising credible content, as they perform fake news identification tasks. </p>
<p>Thus, our research can help design online tasks or games aimed at training social media users to recognise trustworthy information.</p>
<h2>Other avenues</h2>
<p>The issue of fake news is being tackled in different ways across online platforms. </p>
<p>It’s quite often removed through a bottom-up approach, where users report inappropriate content, which is then reviewed and removed by the <a href="https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona">platform’s employees</a>.</p>
<p>The approach Facebook is taking is to <a href="https://www.theguardian.com/technology/2019/jul/31/facebook-says-it-was-not-our-role-to-remove-fake-news-during-australian-election">demote unreliable content</a> rather than remove it. </p>
<p>In each case, the need for people to make decisions on content suitability remains. The work of both users and moderators is crucial, as humans are needed to interpret guidelines and decide on the value of digital content, especially if it’s controversial. </p>
<p>In doing so, they must try to look beyond cultural differences, biases and borders.</p><img src="https://counter.theconversation.com/content/123767/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Gianluca Demartini receives funding from the Australian Research Council and Facebook. </span></em></p>Sometimes it feels like everybody on social media is fighting about what’s “right” and what’s “wrong”. Well, figuring out why we all have such unique opinions is now helping experts tackle fake news.Gianluca Demartini, Associate professor, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1198362019-07-11T13:14:41Z2019-07-11T13:14:41ZAnonymous apps risk fuelling cyberbullying but they also fill a vital role<figure><img src="https://images.theconversation.com/files/283688/original/file-20190711-173334-p8y2rr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/woman-covering-her-face-blank-tablet-314384858?src=g8ZKOeS6RfQ1Vj7CeDN4qA-1-43&studio=1">Anotnio Guillem/Shutterstock</a></span></figcaption></figure><p>When the anonymous social media app YOLO was launched in May 2019, it <a href="https://www.bbc.co.uk/news/technology-48214413">topped the iTunes downloads chart</a> after just one week, despite the lack of a major marketing campaign. Designed to be used with social network Snapchat, YOLO lets users invite people to send them anonymous messages. Its viral popularity followed that of other apps, such as the now infamously defunct <a href="https://www.theverge.com/2017/4/28/15480052/yik-yak-shut-down-anonymous-messaging-app-square">Yik Yak</a> as well as Whisper, Secret, Spout, Swiflie and Sarahah. All these cater to a desire for anonymous interaction online. </p>
<p>The explosive popularity of YOLO has led <a href="https://www.independent.co.uk/life-style/gadgets-and-tech/news/yolo-app-most-popular-snapchat-danger-online-abuse-security-a8907836.html">to warnings</a> of the same problem that led to Yik Yak’s shutdown, namely that its anonymity could lead to cyberbullying <a href="https://www.afr.com/technology/social-media/time-to-end-the-age-of-anonymity-online-20190503-p51jyd">and hate speech</a>. </p>
<p>But in an age of online surveillance and <a href="https://www.independent.co.uk/life-style/gadgets-and-tech/the-spiral-of-silence-how-social-media-encourages-self-censorship-online-9693044.html">self-censorship</a>, proponents view anonymity as an essential component of <a href="https://privacyinternational.org/blog/1111/two-sides-same-coin-right-privacy-and-freedom-expression">privacy and free-speech</a>. And our <a href="https://www.emeraldinsight.com/doi/abs/10.1108/EJM-01-2017-0016">own research</a> on anonymous online interactions among teenagers in the UK and Ireland has revealed a wider range of interactions that extend beyond the toxic to the benign and even beneficial.</p>
<p>The problem with anonymous apps is the torrent of reports of <a href="https://www.washingtontimes.com/news/2019/jun/2/lizzie-pettinato-bullied-teen-pushes-ban-anonymous/">cyberbullying</a>, <a href="https://www.washingtonpost.com/local/education/millions-of-teens-are-using-a-new-app-to-post-anonymous-thoughts-and-most-parents-have-no-idea/2015/12/08/1532a98c-9907-11e5-8917-653b65c809eb_story.html?utm_term=.4acfdae1e7fe">harassment and threats</a> that appear to be even more of a feature than in regular social networks. Psychologist John Suler, who specialises in online behaviour, describes this phenomenon as the “<a href="https://pdfs.semanticscholar.org/c70a/ae3be9d370ca1520db5edb2b326e3c2f91b0.pdf">online disinhibition effect</a>”. This means people feel less accountable for their actions when they feel removed from their real identities.</p>
<p>The veil provided by anonymity enables people to become rude, critical, angry, hateful and threatening towards one another, without fear of repercussion. But this opportunity for uninhibited expression is also what makes anonymous apps both attractive to and beneficial for people who want to use them in a positive way. </p>
<h2>Freedom from social media’s tyranny</h2>
<p>Recent studies highlight that young people are becoming increasingly <a href="https://www.sciencedirect.com/science/article/pii/S0191886910004654">dissatisfied with the narcissistic culture</a> that dominates networks such as Facebook, Instagram and Snapchat. By design, these platforms encourage people to present idealised versions of themselves. Not only is this emotionally taxing, but the camera filters and other image augmentation tools involved in crafting these idealised presentations can also demand significant work.</p>
<p>Young people <a href="http://time.com/4793331/instagram-social-media-mental-health">increasingly feel</a> that social media can lead to anxiety and feelings of inadequacy that stem from constantly comparing themselves to unrealistic images of other people. In light of these pressures, it’s less surprising that young people are increasingly turning to various forms of anonymous interaction that free them from the need to present a perfect avatar.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/283689/original/file-20190711-173360-ywrei0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/283689/original/file-20190711-173360-ywrei0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/283689/original/file-20190711-173360-ywrei0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/283689/original/file-20190711-173360-ywrei0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/283689/original/file-20190711-173360-ywrei0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/283689/original/file-20190711-173360-ywrei0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/283689/original/file-20190711-173360-ywrei0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">shutterstock.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/teenage-girl-being-bullied-by-text-268226264?src=FVmQaQjcLTiV7g6kuM0sVg-1-10&studio=1">SpeedKingz/Shutterstock</a></span>
</figcaption>
</figure>
<p>Instead, anonymous apps provide a forum for young people to engage in what they consider to be more authentic modes of interaction, expression and connection. This can take various forms. For some, anonymity opens up space to be honest about the problems they suffer and seek support for issues that carry stigma – such as anxiety, depression, self-harm, addiction and body dysphoria. It can provide an important <a href="https://www.denverpost.com/2014/08/01/anonymous-app-secret-has-defied-critics-to-make-catharsis-social">outlet for catharsis</a> and, at times, comfort.</p>
<p>For others, anonymity gives them a way to pronounce their harsh “truths” on important social issues without fear of retribution for going against popular opinions of their peers. One aspect of the idealised self-presentation of social media is supporting certain views because they are seen to be fashionable among a certain group of people, rather than because they are truly held beliefs. </p>
<p>This so-called “<a href="https://www.nytimes.com/2019/03/30/opinion/sunday/virtue-signaling.html">virtue signalling</a>” is part of the debate about the authenticity of interactions online. While anonymity doesn’t necessarily create more intellectual discussion, it does provide a more open forum where people can represent their true opinions without fear of being ostracised or harassed for saying the wrong thing.</p>
<h2>A ban would be shortsighted</h2>
<p>Anonymity is not perfect: it is not always good, but equally it is not always bad. Cyberbullying is undoubtedly a serious issue that needs to be tackled. Yet content moderation and the determination of what can, and cannot, be said or shared online is subjective. It is an imperfect system, but calls for an outright ban on anonymity may be <a href="https://www.wired.co.uk/article/real-name-policies-anonymity-online-harassment">short-sighted</a>. They tend to underline the negative associations of anonymity without showing awareness of its positive potential. </p>
<p>What is truly needed is education. Certainly more needs to be done to educate young people about the perils of social media consumption. Updated curricula in schools, colleges and universities can, and should, do much more in this respect. </p>
<p>But equally, app designers and service providers need to become more aware of the negative effects that their offerings can have. Safeguarding should top the agendas of Silicon Valley companies, especially when they are targeting young people and freeing people to say whatever they like without fear of repercussions.</p><img src="https://counter.theconversation.com/content/119836/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Warning about anonymous messaging app YOLO miss the potential benefits it could have.Killian O'Leary, Lecturer in Consumer Behaviour, Lancaster UniversityStephen Murphy, Lecturer in Marketing, University of EssexLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1176512019-05-28T20:00:58Z2019-05-28T20:00:58Z6 ways to protect your mental health from social media’s dangers<figure><img src="https://images.theconversation.com/files/276838/original/file-20190528-42600-a887yc.jpg?ixlib=rb-1.1.0&rect=27%2C0%2C5990%2C2272&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Is social media helping you feel good?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/multicultural-group-young-people-men-women-651146170">pathdoc/Shutterstock.com</a></span></figcaption></figure><p>More than one-third of American adults view <a href="https://www.psychiatry.org/newsroom/news-releases/americans-are-concerned-about-potential-negative-impacts-of-social-media-on-mental-health-and-well-being">social media as harmful to their mental health</a>, according to a new survey from the American Psychiatric Association. Just 5% view social media as being positive for their mental health, the survey found. Another 45% say it has both positive and negative effects.</p>
<p>Two-thirds of the survey’s respondents believe that social media usage is related to social isolation and loneliness. There is a strong body of research linking social media use with <a href="https://doi.org/10.1016/j.jad.2019.01.026">depression</a>. Other studies have linked it to <a href="https://doi.org/10.1016/j.copsyc.2015.10.006">envy</a>, <a href="https://doi.org/10.1016/j.copsyc.2015.10.006">lower self-esteem</a> and <a href="https://doi.org/10.1089/cyber.2012.0291">social anxiety</a>. </p>
<p><iframe id="Rj65b" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/Rj65b/2/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>As a psychologist who has studied the perils of online interactions and has observed the effects of social media (mis)use on <a href="http://arlingtonbehaviortherapy.com/staff/drkpsychologist/">my clients’ lives</a>, I have six suggestions of ways people can reduce the harm social media can do to their mental health.</p>
<h2>1. Limit when and where you use social media</h2>
<p>Using social media can <a href="https://theconversation.com/to-improve-digital-well-being-put-your-phone-down-and-talk-to-people-82057">interrupt and interfere with in-person communications</a>. You’ll connect better with people in your life if you have certain times each day when your social media notifications are off – or your phone is even in airplane mode. Commit to not checking social media during meals with family and friends, and when playing with children or talking with a partner. Make sure social media doesn’t interfere with work, distracting you from demanding projects and conversations with colleagues. In particular, don’t keep your phone or computer in the bedroom – it <a href="https://doi.org/10.1016/j.ypmed.2016.01.001">disrupts your sleep</a>.</p>
<h2>2. Have ‘detox’ periods</h2>
<p>Schedule regular multi-day breaks from social media. Several studies have shown that even a five-day or weeklong break from Facebook can lead to <a href="https://doi.org/10.1080/00224545.2018.1453467">lower stress</a> and <a href="https://doi.org/10.1089/cyber.2016.0259">higher life satisfaction</a>. You can also cut back without going cold turkey: Using Facebook, Instagram and Snapchat just 10 minutes a day for three weeks resulted in <a href="https://doi.org/10.1521/jscp.2018.37.10.751">lower loneliness and depression</a>. It may be difficult at first, but seek help from family and friends by publicly declaring you are on a break. And delete the apps for your favorite social media services. </p>
<h2>3. Pay attention to what you do and how you feel</h2>
<p>Experiment with using your favorite online platforms at different times of day and for varying lengths of time, to see how you feel during and after each session. You may find that a few short spurts <a href="https://doi.org/10.1002/smi.2637">help you feel better</a> than spending 45 minutes exhaustively scrolling through a site’s feed. And if you find that going down a Facebook rabbit hole at midnight routinely leaves you depleted and feeling bad about yourself, eliminate Facebook after 10 p.m. Also note that people who use social media passively, just browsing and consuming others’ posts, <a href="https://doi.org/10.1089/cyber.2017.0668">feel worse than people who participate actively</a>, posting their own material and engaging with others online. Whenever possible, focus your online interactions on people you also know offline. </p>
<h2>4. Approach social media mindfully; ask ‘why?’</h2>
<p>If you look at Twitter first thing in the morning, think about whether it’s to get informed about breaking news you’ll have to deal with – or if it’s a mindless habit that <a href="https://doi.org/10.1002/jclp.20400">serves as an escape</a> from facing the day ahead. Do you notice that you get a craving to look at Instagram whenever you’re confronted with a difficult task at work? Be brave and brutally honest with yourself. Each time you reach for your phone (or computer) to check social media, answer the hard question: Why am I doing this now? Decide whether that’s what you want your life to be about.</p>
<h2>5. Prune</h2>
<p>Over time, you have likely accumulated many online friends and contacts, as well as people and organizations you follow. Some content is still interesting to you, but much of it might be boring, annoying, infuriating or worse. Now is the time to unfollow, mute or hide contacts; the vast majority won’t notice. And your life will be better for it. A recent study found that information about the lives of Facebook friends <a href="https://doi.org/10.1016/j.paid.2019.04.032">affects people more negatively</a> than other content on Facebook. People whose social media included inspirational stories <a href="http://thejsms.org/tsmri/index.php/TSMRI/article/view/381">experienced gratitude, vitality and awe</a>. Pruning some “friends” and adding a few motivational or funny sites is likely to decrease the negative effects of social media.</p>
<h2>6. Stop social media from replacing real life</h2>
<p>Using Facebook to keep abreast of your cousin’s life as a new mother is fine, as long as you don’t neglect to visit as months pass by. Tweeting with a colleague can be engaging and fun, but make sure those interactions don’t become a substitute for talking face to face. When used thoughtfully and deliberately, social media can be a useful addition to your social life, but only a flesh-and-blood person sitting across from you <a href="https://doi.org/10.1515/jcc-2013-0003">can fulfill the basic human need</a> for connection and belonging.</p>
<p>[ <em>Deep knowledge, daily.</em> <a href="https://theconversation.com/us/newsletters?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=deepknowledge">Sign up for The Conversation’s newsletter</a>. ]</p><img src="https://counter.theconversation.com/content/117651/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jelena Kecmanovic does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Just 5% of US adults say using social media is good for their mental health. A psychologist offers some tips to help the other 95%.Jelena Kecmanovic, Adjunct Professor of Psychology, Georgetown UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1172592019-05-16T08:04:41Z2019-05-16T08:04:41ZThe ‘Christchurch Call’ is just a start. Now we need to push for systemic change<figure><img src="https://images.theconversation.com/files/274826/original/file-20190516-69186-15si7o9.jpg?ixlib=rb-1.1.0&rect=73%2C172%2C3421%2C2389&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron at the "Christchurch Call" summit, which delivered an agreement signed by tech companies and world leaders.</span> <span class="attribution"><span class="source">EPA/Charles Platiau</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>The “Christchurch Call” <a href="https://www.stuff.co.nz/national/christchurch-shooting/112693273/live-jacinda-arderns-christchurch-call-summit-in-paris">summit</a> has made <a href="https://www.noted.co.nz/tech/tech-companies-and-17-govts-sign-up-to-christchurch-call/">specific progress</a>, with tech companies and world leaders signing an <a href="https://www.documentcloud.org/documents/6004545-Christchurch-Call.html">agreement</a> to eliminate terrorist and violent extremist content online. The question now is how we collectively follow up on its promise.</p>
<p>The summit in Paris began with the statement that the white supremacist terrorist attack in Christchurch two months ago was “unprecedented”. But one of the benefits of this conversation happening in such a prominent fashion is that it draws attention to the fact that this was not the first time social media platforms have been implicated in terrorism. </p>
<p>It was merely the first time that a terrorist attack in a western country was broadcast via the internet. Facebook played a significant role in the genocide of Rohingya Muslims in Myanmar, as covered in the Frontline documentary “<a href="https://www.youtube.com/watch?v=T48KFiHwexM">The Facebook Dilemma</a>”. And this <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3082972">study</a> demonstrated a link between <a href="https://www.nytimes.com/2018/08/21/world/europe/facebook-refugee-attacks-germany.html">Facebook use and violence against refugees</a> in Germany. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/its-vital-we-clamp-down-on-online-terrorism-but-is-arderns-christchurch-call-the-answer-117169">It's vital we clamp down on online terrorism. But is Ardern's 'Christchurch Call' the answer?</a>
</strong>
</em>
</p>
<hr>
<h2>Better than expected outcome</h2>
<p>I hope attention now turns to the fact that social media platforms profit from both an indifference to harassment and from harassment itself. It falls within the realms of corporate responsibility to deal with these problems, but they have done nothing to remedy their contributions to harassment campaigns in the past. </p>
<p>Online communities whose primary purpose is to terrorise the people they target have existed for many years, and social media companies have ignored them. <a href="https://feministfrequency.com/author/femfreq/">Anita Sarkeesian</a> was targeted by a harassment campaign in 2012 after drawing attention to the problems of how women are represented in videogames. She <a href="https://feministfrequency.com/2015/01/27/one-week-of-harassment-on-twitter/">chronicled the amount of abuse</a> she received on Twitter in just one week during 2015 (content warning, this includes threats of murder and rape). Twitter did nothing.</p>
<p>When the summit began, I hoped that pressure from governments and the threat of regulation would prompt some movement from social media companies, but I wasn’t optimistic. I expected that social media companies would claim that technological solutions based on algorithms would magically fix everything without human oversight, despite the fact that they can be and are gamed by bad actors. </p>
<p>I also thought the discussion might turn to removing anonymity from social media services or the internet, despite the evidence that many people involved in online abuse are <a href="https://www.wired.co.uk/article/real-name-policies-anonymity-online-harassment">comfortable doing so under their own names</a>. Mainly, I thought that there would be some general, positive-sounding statements from tech companies about how seriously they were taking the summit, without many concrete details to their plans.</p>
<p>I’m pleased to be wrong. The discussion has already raised <a href="https://www.documentcloud.org/documents/6004545-Christchurch-Call.html">specific and vital elements</a>. The New Zealand Herald <a href="https://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=12231337">reports</a> that:</p>
<blockquote>
<p>… tech companies have pledged to review their business models and take action to stop users being funnelled into extremist online rabbit holes that could lead to radicalisation. That includes sharing the effects of their commercially sensitive algorithms to develop effective ways to redirect users away from dark, single narratives.</p>
</blockquote>
<h2>Algorithms for profit</h2>
<p>The underlying business model of social media platforms has been part of the problem with abuse and harassment on their services. A great deal of evidence suggests that algorithms designed in pursuit of profit are also fuelling radicalisation towards white supremacy. Rebecca Lewis highlights that <a href="https://datasociety.net/output/alternative-influence/">YouTube’s business model</a> is fundamental to the ways the platform pushes people towards more extreme content.</p>
<p>I never expected the discussions to get so specific that tech companies would explicitly put their business models on the table. That is promising, but the issue will be what happens next. Super Fund chief executive Matt Whineray has said that an international investor group of 55 funds, worth US$3.3 trillion, will put its <a href="https://www.rnz.co.nz/news/national/389297/tech-companies-and-17-govts-sign-up-to-christchurch-call">financial muscle to the task of following up these initiatives</a> and ensuring accountability. My question is how solutions and progress are going to be defined.</p>
<p>Social media companies have committed to greater public transparency about their setting of community standards, particularly around how people uploading terrorist content will be handled. But this commitment in the <a href="https://www.documentcloud.org/documents/6004545-Christchurch-Call.html">Christchurch Call</a> agreement doesn’t carry through to discussions of algorithms and business models. </p>
<p>Are social media companies going to make their recommendation algorithms open source and allow scrutiny of their behaviour? That seems very unlikely, given how fundamental they are to their individual business models. They are likely to be seen as vital corporate property. Without that kind of openness it’s not clear how the investor group will judge whether any progress towards accountability is being made. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/as-responsible-digital-citizens-heres-how-we-can-all-reduce-racism-online-114619">As responsible digital citizens, here's how we can all reduce racism online</a>
</strong>
</em>
</p>
<hr>
<p>While the Christchurch Call has made concrete progress, it is important to make sure that we collectively keep up the pressure. We need to make sure this rare opportunity for important systemic changes doesn’t fall by the wayside. That means pursuing transparent accountability through whatever means we can, and not losing sight of fundamental problems like the underlying business model of social media companies.</p>
<p>One example of a specific step would be more widespread adoption of best ethical practice for <a href="https://datasociety.net/output/oxygen-of-amplification/">covering extremist content in the news</a>. There is evidence that <a href="https://www.rnz.co.nz/national/programmes/mediawatch/audio/2018687741/to-name-or-not-to-name-the-evidence">not naming the perpetrator</a> makes a difference, and the <a href="https://www.stuff.co.nz/national/christchurch-shooting/112352367/christchurch-terror-attack-how-new-zealand-media-will-report-the-trial">guidelines New Zealand media adopted</a> for the coverage of the trial are another step in the right direction. A recent <a href="https://www.digitaldemocracy.nz/">article from authors</a> investigating the impact of digital media on democracy in New Zealand also points out <a href="https://thespinoff.co.nz/politics/16-05-2019/the-christchurch-call-is-a-small-welcome-step-heres-what-needs-to-come-next/">concrete steps</a>.</p>
<p>The Christchurch Call has made excellent progress as a first step to change, but we need to take this opportunity to push for systemic change in what has been a serious, long-term problem.</p><img src="https://counter.theconversation.com/content/117259/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Kevin Veale does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>While the “Christchurch Call” summit has made concrete progress, we need to keep up the pressure on social media companies to become more transparent and accountable.Kevin Veale, Lecturer in Media Studies, Massey UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1113812019-02-07T16:21:59Z2019-02-07T16:21:59ZSelf-harm and social media: a knee-jerk ban on content could actually harm young people<figure><img src="https://images.theconversation.com/files/257755/original/file-20190207-174894-1az2n7i.jpg?ixlib=rb-1.1.0&rect=104%2C121%2C3761%2C2451&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">In search of support?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/teen-girl-excessively-sitting-phone-home-530481523?src=ucPqk8DXXr6mTcfRTLMi1w-1-7">Shutterstock.</a></span></figcaption></figure><p>Instagram has <a href="https://www.bbc.co.uk/news/uk-47160460">announced it will ban</a> graphic self-harm images from the platform. The social media company has been under pressure from the UK government and health professionals – including Dame Sally Davies, England’s chief medical officer, who <a href="https://www.bbc.co.uk/news/health-47150658">recently argued</a> that social media companies have a duty of care to keep children safe.</p>
<p>There is <a href="https://www.bbc.co.uk/news/uk-47127208">growing concern that</a> such content “has the effect of grooming people to take their own lives”, <a href="https://inews.co.uk/news/politics/jackie-doyle-price-refuses-to-rule-out-charging-social-media-execs-over-harmful-content/">according to Jackie Doyle-Price</a>, the government’s undersecretary for mental health, inequalities and suicide prevention.</p>
<p>Much of this concern hinges on the assumption that self-harm content causes, encourages or glorifies acts such as self-cutting and burning. But participants in <a href="https://www.birmingham.ac.uk/research/activity/applied-health/research/ethics-of-care-on-social-media/exploring-the-ethics-of-care-on-social-media-an-interdisciplinary-network.aspx">our ongoing Wellcome Trust study</a> at the University of Birmingham have challenged us to reconsider this. Ignoring their words could lead to politicians or platforms introducing safeguarding measures that unintentionally cause harm.</p>
<p>It’s key to recognise that young people who search for self-harm discussions and imagery online are likely to be harming themselves already. As this research participant suggests:</p>
<blockquote>
<p>I started to self-harm at the age of 14, it began as a nervous habit of mine, when I was stressed. Someone noticed me scratching my hand up, which I didn’t know was self-harm at the time, and this person told me that a pencil sharpener blade worked better. I couldn’t get it out of my head, so I googled it and it was a spiral from there.</p>
</blockquote>
<p>Seeking understanding and support for their self-harm is a key reason why a young person may turn to social media, as another participant told us:</p>
<blockquote>
<p>As a teenager I spent all my free time searching for help and support online because I just didn’t have a healthy outlet or anyone to talk to. I was desperate to find people who could explain what was going on and tell me what I needed to do because I felt so lost and had no idea.</p>
</blockquote>
<p>For a young person in distress, posting a seemingly “graphic” photograph of their cut or burnt arm can be a way of reaching out for help and understanding. Friends and strangers respond with offers to talk or keep the person company, and with requests for updates on how they are.</p>
<p>When young people express a need to harm themselves on social media, this is often met with advice on how to resist such urges, and offers of alternative coping strategies alongside “hugs and love”. Participants may also congratulate one another for not self-harming and encourage each other to “remain clean”. </p>
<p>There is <a href="https://link.springer.com/article/10.1007%2Fs40894-018-0080-9">evidence which shows</a> that self-harm functions as a way to cope with ongoing distress. In this case, social media <a href="https://www.sciencedirect.com/science/article/pii/S0165032717315227">can offer</a> non-judgemental understanding. </p>
<h2>The darker places</h2>
<p>Research into mental health has <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0155813">long acknowledged</a> the internet’s potential to offer support, including in relation to self-harm. For example, analyses of pro-anorexia websites have explored <a href="https://sheu.org.uk/sheux/EH/eh342al.pdf">the complex ways</a> that the internet <a href="https://vc.bridgew.edu/jiws/vol4/iss2/4/">can be a “sanctuary”</a> for those in distress – but it also has the potential to normalise such behaviours, in a way that can stop people from seeking help offline. </p>
<p>While this is a clear danger of content about self-harm, it’s crucial to distinguish between continuation and causation, as our participant suggests: </p>
<blockquote>
<p>I think there’s a misconception that people will get into self-harm because they see these pictures… But I think it can make it worse… There’s a lot of things about social media that are really helpful and it’s often the only place that people can go to talk about what they’re experiencing and to get support for. But there’s also some pretty dark places online.</p>
</blockquote>
<p>It is important to note that both supportive content and “graphic” self-harm content are attached to the same hashtags, or found within the same online spaces. So Instagram’s jump to ban such content could be dangerous. For example, Instagram has recently altered its search engine so that it is no longer possible to search for hashtags relating to self-harm. This means that currently searching for #selfharmsupport also returns no results.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/257773/original/file-20190207-174890-1a34xpq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/257773/original/file-20190207-174890-1a34xpq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/257773/original/file-20190207-174890-1a34xpq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/257773/original/file-20190207-174890-1a34xpq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/257773/original/file-20190207-174890-1a34xpq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/257773/original/file-20190207-174890-1a34xpq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/257773/original/file-20190207-174890-1a34xpq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Searching for support.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/lonely-young-boy-university-sport-place-513377491?src=PEm5xB19AaPVi1LpRh7DQQ-1-0">Shutterstock.</a></span>
</figcaption>
</figure>
<p>It’s crucial to consider how to improve young people’s online safety, without pushing them further towards “darker places” in their search for support. Self-harm content on social media must be considered in context – and decision makers need to recognise how it relates to what’s going on in society. </p>
<h2>Seeking support</h2>
<p>Although our research prompts us to question the assumption that engaging with self-harm imagery or discussions can cause a young person to harm themselves, it does echo <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0077555">other research</a> which suggests that cyberbullying and trolling <a href="https://academic.oup.com/eurpub/article/27/suppl_3/ckx187.581/4556547">contribute to the development</a> of mental health problems, including self-harm. </p>
<p>If politicians and health professionals adopt a narrow focus on self-harm content, they could lose sight of the more pernicious but widespread dangers of social media. What’s more, they risk overlooking the fact that young people can be propelled towards online spaces by encountering stigma and misunderstanding when they attempt to talk about self-harm with people in their lives offline.</p>
<blockquote>
<p>I reject the idea that it [social media] holds the blame for self-harm or related behaviour or illnesses. The real responsibility lies with what attitudes, decisions and opinions we put out into society, which we have been doing long before social media.</p>
</blockquote>
<p>Rather than focusing on shutting down discussion on social media, politicians and health professionals should – as current and previous research indicates – question what is happening at a societal level to lead young people to self-harm. It’s also crucial that, across society, we respond appropriately and with compassion, so that a young person struggling with self-harm is not compelled to seek support on social media. </p>
<p><em>In the UK, <a href="https://www.samaritans.org/how-we-can-help-you/contact-us?gclid=EAIaIQobChMIj7S40uLL3AIV7bftCh2DMw35EAAYASAAEgLI-_D_BwE">Samaritans</a> can be contacted on 116 123 or by email – jo@samaritans.org. Other similar international helplines can be found <a href="https://www.befrienders.org/">here</a>.</em></p><img src="https://counter.theconversation.com/content/111381/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Anna Lavis received funding from the Wellcome Trust to undertake the research on which this article draws. </span></em></p><p class="fine-print"><em><span>Rachel Winter contributed to the project funded by the Wellcome Trust </span></em></p>Young people turn to social media for support and encouragement, when society fails to help.Anna Lavis, Lecturer in Medical Sociology, University of BirminghamRachel Winter, Research Fellow in Applied Health, University of BirminghamLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1112132019-02-06T15:30:24Z2019-02-06T15:30:24ZFacebook ten year challenge: how our need to belong trumps our distrust of social media<figure><img src="https://images.theconversation.com/files/257239/original/file-20190205-86205-vljz8c.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">ra2studio via Shutterstock</span></span></figcaption></figure><p>When the ten year challenge began doing the rounds on social media, people rushed to post a profile picture of themselves from 2009 side by side with one from 2019, to highlight how much they had changed (or not) in the meantime. It is estimated that more than 5.2m social media users participated in this challenge.</p>
<p>It started on Facebook towards the end of January 2019, and it didn’t take long for experts like <a href="https://www.amazon.com/dp/B07GBHTX9K/ref=cm_sw_r_cp_ep_dp_gjcGBbB9YVX43">tech author Kate O’Neil</a> to suggest that the trend could have harmful consequences. Specifically, by posting the now-and-then photos with the #10yearchallenge hashtag, social media users were, possibly, helping to train facial recognition software to recognise – or predict – age-related changes. Facebook <a href="https://www.wired.com/story/facebook-10-year-meme-challenge/">has denied</a> that it is behind the viral trend or that it had anything to gain from it. The company highlighted that, in most cases, the photos used were already available on Facebook. </p>
<p>As O’Neil and other experts noted, the meme provides a quick way of finding and pairing profile photos of the same person, taken exactly ten years apart. And with trust in Facebook at a low following a spate of negative press, it’s probably no surprise that the company has had to deny its involvement. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/257233/original/file-20190205-86202-e7g2yc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/257233/original/file-20190205-86202-e7g2yc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/257233/original/file-20190205-86202-e7g2yc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=551&fit=crop&dpr=1 600w, https://images.theconversation.com/files/257233/original/file-20190205-86202-e7g2yc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=551&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/257233/original/file-20190205-86202-e7g2yc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=551&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/257233/original/file-20190205-86202-e7g2yc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=693&fit=crop&dpr=1 754w, https://images.theconversation.com/files/257233/original/file-20190205-86202-e7g2yc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=693&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/257233/original/file-20190205-86202-e7g2yc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=693&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Nothing sinister, honestly.</span>
<span class="attribution"><span class="source">sFwFun</span></span>
</figcaption>
</figure>
<p>The 2019 release of <a href="https://www.edelman.com/trust-barometer">Edelman’s Trust Barometer</a> reveals that many people do not trust social media, particularly in Europe and North America. In fact, globally, many people <a href="https://www.edelman.com/sites/g/files/aatuss191/files/2019-01/2019_Edelman_Trust_Barometer_Global_Report.pdf?utm_source=website&utm_medium=global_report&utm_campaign=downloads">do not trust institutions in general</a>, not just media (including social media) but also governments, NGOs and businesses.</p>
<h2>Taken on trust</h2>
<p>The Oxford dictionary <a href="https://en.oxforddictionaries.com/definition/trust">defines trust</a> as the “firm belief in the reliability, truth, or ability of someone or something”. Not trusting an institution, such as social media, means that we no longer rely on it to tell us the truth, look after us, or work properly. So if we distrust social media in general – and are so <a href="https://www.forbes.com/sites/ryanerskine/2019/01/31/in-an-era-of-social-media-distrust-some-brands-are-finding-ways-to-get-intimate/">suspicious of Facebook</a> in particular — why did <a href="https://www.forbes.com/sites/nicolemartin1/2019/01/17/was-the-facebook-10-year-challenge-a-way-to-mine-data-for-facial-recognition-ai/">more than 5.2m</a> people jump on the #10YearChallenge bandwagon?</p>
<p>Helen Kennedy, professor of digital society at the University of Sheffield, <a href="http://blogs.lse.ac.uk/impactofsocialsciences/2019/02/04/what-does-facebooks-tenyearchallenge-tell-us-about-the-public-awareness-of-data-and-algorithms/">argues that</a> by and large, the public does not understand how data collection systems and algorithms work and <a href="https://attitudes.doteveryone.org.uk/">cites research</a> suggesting that many people are unaware of the extent to which data is shared and used beyond the initial purpose for data collection. </p>
<p>People are also confused about what is covered by the term “personal data” and have a cavalier attitude to the role of algorithms in determining choices in their lives. This suggests that the problem might be solved, or at least ameliorated, if social media users were better educated on these matters.</p>
<p>However, research suggests that increasing social media users’ knowledge of invasive data collection practices, and of the consequences of algorithmic decision making in daily life, might not be enough. For instance, <a href="https://sloanreview.mit.edu/article/your-customers-may-be-the-weakest-link-in-your-data-privacy-defenses/?utm_source=twitter&utm_medium=social&utm_campaign=sm-direct">a 2018 study</a> by academics Bernadette Kamleitner, Vincent W. Mitchell, Andrew Stephen and Ardi Kolah showed that mobile app users would still sign up for an app that accessed their list of contacts (names, addresses, phone numbers and so on) – even after users had been made explicitly aware that, by doing so, they were sharing other people’s personal data without their consent and so infringing on their privacy. </p>
<p>Likewise, a <a href="https://www.nominet.uk/parents-oversharing-family-photos-online-lack-basic-privacy-know/">study by Nominet</a> – the UK domain name manager – revealed that many parents had uploaded a photo of someone else’s child to social media without asking the parents’ permission, even though they themselves expected others to ask their permission before posting a photo of their child.</p>
<h2>Why people use Facebook</h2>
<p>To understand lax attitudes and behaviour of social media users, it might help to go back to the reasons why people use Facebook. As <a href="https://trove.nla.gov.au/work/6008649">has been demonstrated</a> in relation to other media – television, the press – the mode of engagement with a medium is shaped by the specific need driving the use of that medium. You may need to access a site to find out some specific information (“instrumental” use) or you might browse a website for entertainment purposes, or view content because it has been presented to you, often by someone you trust (“hedonistic” use). </p>
<p>Various <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3335399/">authors have proposed</a> that Facebook use is largely motivated by people’s desire to belong – for example to family groups, social groups and the like. Users are also often motivated by a desire to present themselves in positive ways to shape a deliberate public persona. Not only do these factors lead us to use social media, but they predispose us to join memes such as the ten year challenge. Specifically, by joining our friends in taking part in something like this, we strengthen our social bonds with the group, enhance our image and feed our narcissism – all the while helping the ten year challenge spread quickly and wildly.</p>
<p>So this is where the problem lies: a lot of people say they don’t trust social media – but when we use these tools it’s all too easy to forget about the company collecting and monetising our data through ever more predatory and questionable methods. Instead, we focus on the social side – our friends, relatives and peer groups who we trust and whose approval we seek. But if you look behind all the people who “like” your posts, you might catch a glimpse of the calculating minds who don’t care how well you have aged, except as data to feed into their algorithms. We can’t say we haven’t been warned.</p><img src="https://counter.theconversation.com/content/111213/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ana Isabel Domingos Canhoto does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Research shows that the sense of belonging provided by platforms like Facebook trumps our distrust of social media.Ana Isabel Domingos Canhoto, Reader in Marketing, Brunel University LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1101532019-01-31T11:43:33Z2019-01-31T11:43:33ZFacebook at 15: It’s not all bad, but now it must be good<figure><img src="https://images.theconversation.com/files/256152/original/file-20190129-108364-1ljvmw1.jpg?ixlib=rb-1.1.0&rect=343%2C17%2C5535%2C3895&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Doth the CEO protest too much?</span> <span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Facebook-Privacy-Scandal-Congress/5122dc257cb64d2198e92691210420d9/54/0">AP Photo/Andrew Harnik</a></span></figcaption></figure><p>It is almost too easy to bash Facebook these days. Nearly <a href="https://www.theverge.com/2018/2/6/16976328/facebook-mark-zuckerberg-pollster-tavis-mcginn-honest-data">a third of Americans</a> feel the country’s <a href="https://www.statista.com/statistics/248074/most-popular-us-social-networking-apps-ranked-by-audience/">most popular social media platform</a> is bad for society. As the company approaches its 15th birthday, <a href="https://www.theverge.com/2018/2/6/16976328/facebook-mark-zuckerberg-pollster-tavis-mcginn-honest-data">Americans rate its social benefit</a> as <a href="https://twitter.com/benioff/status/1062578525377425408?lang=en">better than Marlboro cigarettes</a>, but worse than McDonald’s. </p>
<p>Yet as a <a href="https://fletcher.tufts.edu/people/bhaskar-chakravorti">scholar of digital technologies</a> and their effects on society – and even though I am not on Facebook – I worry that public perception has become overly critical of Facebook. It’s true that the company has been behaving like many 15-year-old adolescents, acting <a href="https://www.nbcnews.com/tech/tech-news/facebook-s-2018-timeline-scandals-hearings-security-bugs-n952796">irresponsibly and selfishly</a>, and making <a href="http://fortune.com/2019/01/20/sheryl-sandberg-facebook-five-step-plan/">endless promises</a> to do better, at least until the next mess is uncovered. However, as talk grows of <a href="https://www.nytimes.com/2019/01/18/technology/facebook-ftc-fines.html">fines</a> and <a href="https://investorplace.com/2019/01/facebook-stock-is-immune-to-regulation/">regulations</a>, it’s worth remembering there is such a thing as overregulation, which would respond to the urgency and charged political climate of the current moment but hurt the public interest in the long run.</p>
<p>Official action to rein in Facebook’s power should reflect on the bad and ugly things the company has done and allowed to happen. But the debate shouldn’t forget some things about Facebook that would qualify as “great,” which may have been missed in the avalanche of negative sentiment toward the company and its leaders.</p>
<h2>The bad stuff</h2>
<p>The individual and social harms due to Facebook are many, including contributing to <a href="https://www.ft.com/content/02b6d334-8c2d-11e8-b18d-0181731a0340">concentration in the online advertising market</a>, <a href="https://blocnotesdeleco.banque-france.fr/billet-de-blog/les-monopoles-un-danger-pour-les-etats-unis">with negative impact on productivity and wage growth</a>, <a href="https://doi.org/10.1016/j.chb.2011.08.026">distracting</a> <a href="https://www.sciencedirect.com/science/article/pii/S0747563210000646">students</a> and <a href="http://doi.org/10.1093/aje/kww189">potentially causing users</a> <a href="https://munews.missouri.edu/news-releases/2015/0203-if-facebook-use-causes-envy-depression-could-follow/">mental distress</a> and <a href="https://www.bloomberg.com/news/articles/2019-01-10/facebook-junkies-are-similar-to-drug-addicts-study-finds">giving rise to symptoms akin to substance abuse</a>.</p>
<p>The bottom line is clear: Spending too much time on Facebook may be bad for you. </p>
<h2>Things get ugly</h2>
<p>All technology companies have been experiencing some <a href="https://theconversation.com/us/topics/technology-backlash-47393">heightened skepticism</a>. However, more <a href="https://www.cbinsights.com/research/facebook-fares-very-poorly-in-this-survey">Americans felt negatively toward Facebook</a> than those who felt similarly about Amazon, Google, Microsoft and Apple combined, according to a 2017 poll. Facebook’s place in the public perception has only deteriorated since then. </p>
<p>The <a href="https://www.nbcnews.com/tech/tech-news/facebook-s-2018-timeline-scandals-hearings-security-bugs-n952796">company’s violations of user trust</a> are legion, including <a href="https://www.cnn.com/2018/12/19/tech/facebook-user-data-big-tech-companies/index.html">ignoring its own privacy policies</a>, <a href="https://www.nytimes.com/2018/12/19/technology/facebook-data-sharing.html">sharing data without permission</a>, <a href="https://www.usatoday.com/story/tech/2019/01/25/facebook-duped-kids-into-spending-games-without-parents-permission/2679250002/">tricking children into spending their parents’ money</a>, <a href="https://www.nytimes.com/2018/11/04/us/politics/election-misinformation-facebook.html">allowing disinformation campaigns</a> that affect elections in the U.S. and elsewhere, and – perhaps worst of all – magnifying propaganda that has <a href="https://www.nytimes.com/2018/04/21/world/asia/facebook-sri-lanka-riots.html">sparked violence</a> around the world.</p>
<p>In the U.S., the company’s services have allowed bias and discrimination to take root. In early 2018, the National Fair Housing Alliance and affiliated groups sued Facebook, alleging that its advertising platform let <a href="https://www.nytimes.com/2018/03/27/nyregion/facebook-housing-ads-discrimination-lawsuit.html">landlords and real-estate brokers discriminate</a> against women, disabled veterans and single mothers, among other groups. The company’s own civil-rights audit found it <a href="https://www.cnbc.com/2018/12/18/facebooks-sheryl-sandberg-on-civil-right-abuses.html">contributed to voter suppression</a> and targeted manipulative advertising to impressionable groups. That report came on the heels of two comprehensive reports compiled for the U.S. Senate detailing how <a href="https://comprop.oii.ox.ac.uk/research/ira-political-polarization/">Russian government agents used Facebook</a> and other social media sites to <a href="https://www.newknowledge.com/articles/the-disinformation-report/">influence Americans’ thinking</a>.</p>
<p>The company’s rap sheet is long and growing. Its <a href="https://techcrunch.com/2019/01/20/stung-by-criticism-facebooks-sandberg-outlines-new-plans-to-tackle-misinformation/">repeated assurances that it will fix</a> the problems are now roundly assumed to be empty promises.</p>
<h2>But wait, there is great stuff, too</h2>
<p>With this much going wrong, it is easy to forget that the company has shown great technological and business sophistication in connecting people like never before. Facebook combined innovative <a href="https://medium.com/s/a-brief-history-of-attention/how-likes-went-bad-b094ddd07d4">social-networking ideas</a> from others and <a href="https://moneyinc.com/10-largest-facebook-acquisitions-record/">bought up potential competitors</a> like Instagram and WhatsApp. This itself constitutes an innovation in creating a connectivity platform like no other.</p>
<p>In terms of contribution to the economy, the company is right – if a tad self-serving – to note that it has <a href="https://www.wsj.com/articles/the-facts-about-facebook-11548374613">helped small businesses</a> reach new customers and build relationships with both existing and prospective clients. The value of those connections is unclear – a single “like” could be worth <a href="https://www.businessinsider.com/what-is-a-facebook-like-actually-worth-in-dollars-2013-3">anywhere between nothing and US$214.81</a>, depending on the type of business and what it’s looking for Facebook users to do. An independent study from the U.S. Bureau of Economic Analysis found that from 2005 to 2015, U.S. <a href="https://www.philadelphiafed.org/-/media/research-and-data/publications/working-papers/2017/wp17-37.pdf?la=en">gross domestic product grew one-tenth of 1 percent faster</a> than it would have if Facebook hadn’t existed.</p>
<p>In terms of how connectivity helps advance other innovations, Facebook is a key contributor to <a href="https://thenewstack.io/a-reason-to-not-hate-facebook-open-source-contributions/">leading-edge open-source coding projects</a> in a range of applications, such as machine learning, gaming, 3D printing, home automation, scientific programming and data analysis, among others. The company has also leveraged its huge network of users to help <a href="https://www.fastcompany.com/40546380/facebooks-disaster-maps-helps-rescuers-know-where-theyre-needed-most">authorities</a>, <a href="https://mashable.com/2017/11/29/facebook-community-help-api-fundraising/">communities</a> and <a href="http://fortune.com/2015/11/16/facebook-safety-check/">families</a> respond efficiently to natural and human-caused disasters.</p>
<p>Particular groups of Facebook users may also see distinct benefits from being connected. Elderly people may get a <a href="https://uanews.arizona.edu/story/should-grandma-join-facebook-it-may-give-her-a-cognitive-boost-study-finds">cognitive boost</a>; people who <a href="https://doi.org/10.1089/cyber.2009.0411">seek a self-esteem boost</a> from viewing their own profiles, <a href="https://doi.org/10.1089/cpb.2008.0214">shy people</a>, <a href="https://doi.org/10.1007/s11606-010-1526-3">people with diabetes</a> and <a href="https://doi.org/10.1352/1934-9556-52.6.456">people on the autism spectrum</a> have all felt more support and improved well-being from using the site. </p>
<h2>Can Facebook turn great to good?</h2>
<p>As Facebook turns 15, the company faces a critical set of challenges. U.S. officials will be scrutinizing its activities and seeking ways to curb its power in society. Regulating Facebook itself will <a href="https://www.vox.com/technology/2018/4/12/17224096/regulating-facebook-problems">not be easy</a>, and will generate endless debate. The company will also have to contend with covert online agents <a href="https://thehill.com/policy/cybersecurity/427430-intel-leaders-warn-of-russian-influence-threat-ahead-of-2020-election">seeking to undermine democracy</a> by using Facebook to influence elections in India, Europe, Nigeria and Poland, among other places – not to mention the 2020 U.S. presidential election.</p>
<p>The company’s management will have to take bold steps, not only to defend Facebook’s positive features, but to eliminate – or at least reduce – the harm the company’s products and services do to people and society. Most companies aspire to go from “<a href="https://www.harpercollins.com/9780066620992/good-to-great/">good to great</a>”; Facebook’s challenge at 15 is a bit more complicated: It must convince a skeptical public and regulators chomping at the bit that it can mitigate the effects of its bad and the ugly sides – and go from being great to being a <a href="https://www.newyorker.com/magazine/2018/09/17/can-mark-zuckerberg-fix-facebook-before-it-breaks-democracy">force for good in the world</a>.</p><img src="https://counter.theconversation.com/content/110153/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Bhaskar Chakravorti has founded and directs the Institute for Business in the Global Context at Fletcher/Tufts that has received funding from Mastercard, Microsoft, the Gates Foundation and the Onassis Foundation. He is a Non-Resident Senior Fellow at Brookings India and a Senior Advisor on Digital Inclusion at the Mastercard Center for Inclusive Growth.</span></em></p>Facebook has been acting irresponsibly and selfishly, and promising to do better without actually improving. But that’s not the whole story: The company has some positive qualities, too.Bhaskar Chakravorti, Dean of Global Business, The Fletcher School, Tufts UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1012922018-08-10T10:41:35Z2018-08-10T10:41:35ZProfit, not free speech, governs media companies’ decisions on controversy<figure><img src="https://images.theconversation.com/files/231184/original/file-20180808-191013-13uar8y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What causes a media business to bar the door?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/blocked-door-abandoned-house-background-597346598">yanin kongurai/Shutterstock.com</a></span></figcaption></figure><p>For decades, U.S. media companies have limited the content they’ve offered based on what’s good for business. The decisions by <a href="https://www.vox.com/policy-and-politics/2018/8/6/17655516/infowars-ban-apple-youtube-facebook-spotify">Apple, Spotify, Facebook and YouTube</a> to <a href="https://theconversation.com/audiences-love-the-anger-alex-jones-or-someone-like-him-will-be-back-101168">remove content from commentator Alex Jones and his InfoWars platform</a> follow this same pattern.</p>
<p>My <a href="https://global.oup.com/ushe/product/understanding-media-industries-9780190215323?cc=us&lang=en&">research on media industries</a> makes clear that government rules and regulations do little to limit what television shows, films, music albums, video games and social media content are available to the public. Business concerns about profitability are much stronger restrictions. Movies are given ratings based on their content not by government officials but by the <a href="https://www.mpaa.org/film-ratings/">Motion Picture Association of America</a>, an industry group. Television companies, for their part, often have departments handling what are called “<a href="http://www.museum.tv/eotv/standardsand.htm">standards and practices</a>” – reviewing content and suggesting or demanding changes to avoid offending audiences or advertisers.</p>
<p>The self-policing by movie studios and TV networks is very similar to YouTube’s and Facebook’s actions: Distributing extremely controversial content is bad for business. Offended viewers will turn away from the program and may choose to <a href="https://global.oup.com/academic/product/target-prime-time-9780195063202?cc=us&lang=en&">boycott the network or service</a> – reducing the size of audiences that can be sold to advertisers. Some alarmed viewers may even urge boycotts of the advertisers <a href="https://www.nytimes.com/2017/04/04/business/media/sexual-harassment-bill-oreilly-fox.html">whose messages air during controversial programming</a>. </p>
<p>Over the decades, television networks have internalized feedback from advertisers and unintended controversies to try to steer clear of negative attention. <a href="https://slate.com/technology/2018/08/facebook-and-apple-moved-the-goal-posts-to-ban-alex-jones-thats-encouraging.html">Social media companies</a> <a href="https://arstechnica.com/tech-policy/2018/08/youtube-bans-alex-jones-following-facebook-and-apples-lead/">are just beginning</a> to understand <a href="https://apnews.com/6d0a9467a997409cafd70a86d01e7093">these forces are at work</a> in their own industries as well. </p>
<h2>Self-regulation to avoid government intrusion</h2>
<p>The practices of media industries to police themselves arose over many years, as companies tried to appease public concern without triggering formal government supervision. This pleased all sides: Elected and appointed officials avoided having to do much of anything that might look like squashing free speech, companies avoided formal restrictions that might be quite severe, and concerned citizens had their objections heard and acted upon.</p>
<p>When concerns about the amount of sex and violence on broadcast television developed in the 1970s, the networks agreed – with strong encouragement from the federal government – to establish a “<a href="http://www.museum.tv/eotv/familyviewin.htm">Family Hour</a>” during the first hour of prime-time programming that was monitored by the National Association of Broadcasters. Music labels agreed to place “Parental Advisory” labels on <a href="https://www.npr.org/sections/therecord/2010/10/29/130905176/you-ask-we-answer-parental-advisory---why-when-how">albums with explicit lyrics</a>. Inspired by moviemakers, video game developers adopted ratings based on evaluations by an industry group, the <a href="https://www.esrb.org/about/">Entertainment Software Ratings Board</a>.</p>
<p>There is, though, a key difference between those industries and the situation of YouTube and Facebook. Movie studios, record labels and TV companies are responsible for making their content as well as distributing it – and are legally liable for any problems that might arise. </p>
<p>Online media companies, though, typically don’t create most of what appears on their platforms, and are <a href="https://theconversation.com/the-law-that-made-facebook-what-it-is-today-93931">expressly protected from legal responsibility</a> for the content of the messages others post. But hosting information publicly viewed as hateful can damage a business, even if it doesn’t run afoul of government rules.</p>
<h2>Challenges of social media content regulation</h2>
<p>Social media companies have achieved their <a href="https://techcrunch.com/2018/07/25/facebook-2-5-billion-people/">ubiquity</a> and <a href="https://www.thestreet.com/technology/sun-may-be-setting-on-social-media-stocks-14676996">high profits</a> because they do not have to pay for creating the content that attracts attention to their services. They reap the financial rewards of a technological advantage in which billions of users can create, share and look at different messages and pieces of content every day.</p>
<p>They are just beginning to understand the downside to that technological advantage, which is that the public – even if not the law – considers them at least somewhat responsible for what is said on their sites. And it’s <a href="https://www.cnbc.com/2018/03/23/facebook-privacy-scandal-has-a-plus-thousands-of-new-jobs-ai-cant-do.html">extremely difficult to sort through</a>, classify and police all those billions of posts – much less to figure out how to <a href="https://theconversation.com/can-facebook-use-ai-to-fight-online-abuse-95203">automate some of those tasks</a>. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/231180/original/file-20180808-191025-1ic26j7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/231180/original/file-20180808-191025-1ic26j7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/231180/original/file-20180808-191025-1ic26j7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/231180/original/file-20180808-191025-1ic26j7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/231180/original/file-20180808-191025-1ic26j7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/231180/original/file-20180808-191025-1ic26j7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/231180/original/file-20180808-191025-1ic26j7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/231180/original/file-20180808-191025-1ic26j7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Alex Jones, banned from many social media platforms.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Alex_Jones_Portrait.jpg">Michael Zimmermann</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>So far, social media sites have avoided limiting content except in the most extreme cases, because it is difficult to draw lines of acceptability that don’t produce more controversy themselves. Their decision likely included weighing the effects of the objections that would erupt if they did ban Jones against what might happen to their brands <a href="https://apnews.com/6d0a9467a997409cafd70a86d01e7093">if they didn’t</a>. </p>
<p>In the past, self-regulation often allowed media companies to evade governmental action. It is unclear whether these latest moves by social media companies are the start of lasting self-regulation or a one-off effort to quell current concern. Either way, their decisions are all about what is good for business. </p>
<p>Their response to outcry may be craven, but it might suggest these companies are recognizing the cultural power of their products. Ultimately, social media companies – like other media companies – are showing that they will respond to pressure from their audiences and the marketplace. In the absence of regulation, consumers will encourage companies to change policies by opting out of social media that enable cesspools of trolling and hate.</p>
<p>Users who want changes made should take note of how audiences have pressured other media industries to make changes in the past. Consumers who want greater privacy controls, environments free of hate speech, and different kinds of algorithms could demand them by leaving flawed services or boycotting the advertisers that support them. As demand for alternatives becomes clearer, services will change or a competitor will arise.</p><img src="https://counter.theconversation.com/content/101292/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Amanda Lotz does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>While they may talk about ‘free speech,’ businesses make decisions about their content based on a very different set of principles.Amanda Lotz, Fellow, Peabody Media Center; Professor of Media Studies, University of MichiganLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/991962018-07-02T20:25:23Z2018-07-02T20:25:23ZWhat happens when people lose trust in the Internet?<figure><img src="https://images.theconversation.com/files/225595/original/file-20180701-117389-1ezmw1u.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C1500%2C819&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Does the Internet bring people together or isolate them? </span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/bAPQgfthcrI">Rawpixel/Unsplash</a></span></figcaption></figure><p>An <a href="http://www.pewinternet.org/2018/04/30/declining-majority-of-online-adults-say-the-Internet-has-been-good-for-society/">April 2018 survey</a> by the Pew Research Centre has found that fewer people believe that “the Internet has been mostly a good thing for society” as compared to four years ago. This worsening perspective on the social benefits of the Internet contrasts with the view that these same respondents believed that the Internet continued to be a good thing for them individually.</p>
<p>Experts are even more <a href="http://www.pewinternet.org/2018/04/17/the-future-of-well-being-in-a-tech-saturated-world/">pessimistic</a> when it comes to the Internet. Not only do they see its social benefits declining, but 32% of them also felt that people’s well-being would be more harmed than helped by it.</p>
<h2>The dark side of the network</h2>
<p>Those who held the view that the Internet was overall not a good thing for society cited the fact that people spent too much time on their devices and were being isolated by the Internet rather than brought together. The growth of fake news and false information was another concern expressed by those with a negative view of the social benefits of the Internet. Interestingly, diminishing privacy was only a concern for a small number of those surveyed.</p>
<p>The association between the use of the Internet and a negative social impact has been documented for some time. A <a href="http://paedpsych.jk.uni-linz.ac.at/PAEDPSYCH/NETSCHULE/NETSCHULELITERATUR/KRAUTetal98/Krautetal98.html">1998 study</a> by researchers from Carnegie Mellon University found that increased use of the Internet was associated with statistically significant declines in social involvement and increases in loneliness. A <a href="http://www.uh.edu/news-events/stories/2015/April/040415FaceookStudy">2015 study</a> found a strong association between the amount of time spent on Facebook and symptoms of depression.</p>
<p>It is not necessarily the case that social media and the Internet in general cause loneliness or decrease social interaction. However, people who are lonely can potentially <a href="http://journals.sagepub.com/doi/abs/10.1177/1745691617713052">escape from social interactions</a> by spending more time on the Internet.</p>
<p>However, many of these studies were done prior to the rise in <a href="https://theconversation.com/advertising-is-driving-social-media-fuelled-fake-news-and-it-is-here-to-stay-68458">fake news</a> and the pervasive problems of trolls and bad behaviour on social networks, especially Twitter. It is also clear that a broad question about “the Internet” masks the global public’s concerns about specific issues with online life and with particular platforms.</p>
<p>In a large <a href="https://www.cigionline.org/Internet-survey-2018">2018 survey</a> of Internet users in 25 countries conducted by CIGI-Ipsos, over 30% reported that social media made their lives worse and 63% said that social media companies “have too much power”.</p>
<p>As a consequence, people reported changing their behaviour online: being more careful with email and the sites they visited, increasing security measures and engaging in less online activity overall, including sharing less personal information. Some 12% of those surveyed said they were making fewer online purchases. This in turn has a large impact on the digital economy as a whole, because the entire system depends on a base level of trust among the online public.</p>
<h2>The economic impact of the loss of confidence</h2>
<p>The responsibility for the overall level of trust in the online environment lies in large part with cybercriminals and the larger Internet companies, and less so with governments. In the CIGI-Ipsos survey, cybercriminals and Internet companies were the two leading sources of concern with respect to online privacy, and more than 80% of those surveyed were concerned about cybercrime in general.</p>
<p>For governments, it is clear that maintaining trust in the digital economy is a prime concern. For starters, the digital economy is often the fastest-growing part of a country’s economy. In the United States, the digital economy has been <a href="https://www.bea.gov/digital-economy/_pdf/defining-and-measuring-the-digital-economy.pdf">growing</a> on average at three times the rate of the overall economy over the last 10 years and now contributes over 6% of total GDP. In <a href="https://www.huffingtonpost.fr/ingrid-nappi-choulet/leconomie-numerique-en-fr_b_9467274.html">France</a> that figure is just 5%, while China’s digital economy – an illustration of how important it can become – already represents <a href="http://french.xinhuanet.com/2017-12/04/c_136800444.htm">30% of its GDP</a>.</p>
<p>Clearly, the importance of a country’s digital economy is not the focus or overriding concern of individual Internet firms, especially if they are based elsewhere. Ironically, companies such as Facebook and Google can negatively impact a country’s digital economy by eroding trust in the Internet, even as they avoid paying taxes outside the countries in which they’re based – and sometimes even <a href="https://www.theguardian.com/business/2017/sep/21/tech-firms-tax-eu-turnover-google-amazon-apple">within the countries they’re based</a>. </p>
<p>At present, there are few incentives for such companies to act in the interests of the digital economy as a whole. The European Union has started creating regulation around privacy through the <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2016.119.01.0001.01.ENG">General Data Protection Regulation</a> (GDPR), and has <a href="https://www.reuters.com/article/us-eu-tax-digital/eu-proposes-online-turnover-tax-for-big-tech-firms-idUSKBN1GX00J">proposed taxes</a> on the turnover of online services.</p>
<p>The potential for large fines under the GDPR has had some impact on the behaviour of companies like Facebook with regard to privacy. The ongoing threat of increased taxes has recently led a slew of tech companies to propose various <a href="https://information.tv5monde.com/info/les-geants-du-numerique-font-des-promesses-emmanuel-macron-240125">projects and concessions</a> to French President Emmanuel Macron. These concessions, however, are aimed more at lobbying the government than at doing what such companies should be doing: rebuilding public trust.</p><img src="https://counter.theconversation.com/content/99196/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>David Glance ne travaille pas, ne conseille pas, ne possède pas de parts, ne reçoit pas de fonds d'une organisation qui pourrait tirer profit de cet article, et n'a déclaré aucune autre affiliation que son organisme de recherche.</span></em></p>Trust is the keystone of the entire Internet system: without it more connection and therefore more commerce. How to restore it?David Glance, Director of UWA Centre for Software Practice, The University of Western AustraliaLicensed as Creative Commons – attribution, no derivatives.