tag:theconversation.com,2011:/nz/topics/content-moderation-38415/articlesContent moderation – The Conversation2024-03-22T02:10:41Ztag:theconversation.com,2011:article/2261182024-03-22T02:10:41Z2024-03-22T02:10:41ZConspiracy theorist tactics show it’s too easy to get around Facebook’s content policies<figure><img src="https://images.theconversation.com/files/583342/original/file-20240321-26-joql1y.jpg?ixlib=rb-1.1.0&rect=40%2C148%2C4257%2C2849&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/kuala-lumpur-malaysia-august-25-2013-1168328122">MavardiBahar/Shutterstock</a></span></figcaption></figure><p>During the COVID pandemic, social media platforms were swarmed by far-right and anti-vaccination communities that spread dangerous conspiracy theories.</p>
<p>These included the false claims that <a href="https://www.bbc.com/news/54893437">vaccines are a form of population control</a>, and that the virus was a <a href="https://theconversation.com/qanon-conspiracy-theories-about-the-coronavirus-pandemic-are-a-public-health-threat-135515">“deep state” plot</a>. Governments and the World Health Organization redirected precious resources from vaccination campaigns to debunk these falsehoods. </p>
<p>As the tide of misinformation grew, platforms were accused of not doing enough to stop the spread. To address these concerns, Meta, the parent company of Facebook, made several policy announcements in 2020–21. However, it hesitated to remove “<a href="https://www.facebook.com/notes/751449002072082/?hc_location=ufi">borderline</a>” content, or content that didn’t cause direct physical harm, save for one <a href="https://about.fb.com/news/2020/04/covid-19-misinfo-update/">policy change</a> in February 2021 that expanded the content removal lists.</p>
<p>To stem the tide, Meta relied more heavily on algorithmic moderation techniques that reduce the visibility of misinformation in users’ feeds, search and recommendations – a practice known as shadowbanning. It also used fact-checkers to label misinformation.</p>
<p>While shadowbanning is widely seen as a <a href="https://theconversation.com/what-is-shadowbanning-how-do-i-know-if-it-has-happened-to-me-and-what-can-i-do-about-it-192735">concerningly opaque technique</a>, our <a href="https://journals.sagepub.com/doi/10.1177/1329878X241236984">new research</a>, published in the journal Media International Australia, instead asks: was it effective?</p>
<h2>What did we investigate?</h2>
<p>We used two measures to answer this question. First, after identifying 18 Australian far-right and anti-vaccination accounts that consistently shared misinformation between January 2019 and July 2021, we analysed the performance of these accounts using key metrics.</p>
<p>Second, we mapped this performance against five content moderation policy announcements for Meta’s flagship platform, Facebook.</p>
<p>The findings revealed two divergent trends. After March 2020, the <em>overall</em> performance of the accounts – that is, their <em>median</em> performance – declined. Yet their <em>mean</em> performance increased after October 2020.</p>
<p>This is because, while the majority of the monitored accounts underperformed, a few overperformed, and strongly so. In fact, these accounts continued to overperform and attract new followers even after the policy change in February 2021.</p>
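<p>The divergence is easy to see with toy numbers. The following sketch uses entirely hypothetical engagement figures (not data from the study) to show how a handful of strongly overperforming accounts can pull the mean up even while the median falls.</p>
<pre><code># Illustrative only: hypothetical engagement figures (not study data) showing
# how a few overperforming accounts raise the mean while the median falls.
import statistics

before = [120, 110, 105, 100, 95, 90, 90, 85, 80, 80, 75, 70, 70, 65, 60, 55, 50, 45]
after = [60, 55, 50, 50, 45, 45, 40, 40, 35, 35, 30, 30, 25, 25, 20, 900, 1200, 1500]

for label, series in (("before", before), ("after", after)):
    print(label,
          "median:", statistics.median(series),
          "mean:", round(statistics.mean(series), 1))

# Most accounts do worse in the "after" series (the median drops from 80 to
# 42.5), yet three outliers push the mean up from about 80 to over 230,
# reproducing the divergence between median and mean described above.
</code></pre>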
<hr>
<p><iframe id="85UaE" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/85UaE/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<hr>
<h2>Shadowbanning as a badge of pride</h2>
<p>To examine why, we scraped and thematically analysed comments and user reactions from posts on these accounts. We found users were highly motivated to stay engaged with problematic content, and viewed labelling and shadowbanning as challenges to work around rather than deterrents.</p>
<p>Specifically, users frequently relied on “<a href="https://doi.org/10.1177/01634437221111923">social steganography</a>” – deliberate typos or code words for key terms – to evade algorithmic detection. We also saw <a href="https://www.tandfonline.com/doi/full/10.1080/21670811.2021.1938165">conspiracy “seeding”</a>, where users added links to archiving sites or less moderated sites in comments to redistribute content Facebook had labelled as misinformation while avoiding detection.</p>
<p>In one example, a user added a link to a <a href="https://www.pewresearch.org/short-reads/2023/02/17/key-facts-about-bitchute/">BitChute</a> video with keywords that dog-whistled support for QAnon style conspiracies. As terms such as “vaccine” were believed to trigger algorithmic detection, emoji or other code names were used in their place:</p>
<blockquote>
<p>A friend sent me this link, it’s [sic.] refers to over 4000 deaths of individuals after getting 💉 The true number will not come out, it’s not in the public’s interest to disclose the amount of people that have died within day’s [sic.] of jab.</p>
</blockquote>
<p>While many conspiracy theories were targeted at government and public health authorities, platform suppression of content fuelled further conspiracies regarding big tech and their complicity with “Big Pharma” and governments.</p>
<p>This was evident in the use of keywords such as MSM (“mainstream media”) to reference QAnon style agendas: </p>
<blockquote>
<p>MSM are in on this whole thing, only report on what the elites tell them to. Clearly you are not doing any research but listening to msm […] This is a completely experimental ‘vaccine’.</p>
</blockquote>
<p>Another comment thread showed reactions to Meta’s <a href="https://about.fb.com/news/2020/08/addressing-movements-and-organizations-tied-to-violence/">dangerous organisations policy update</a>, where accounts that regularly shared QAnon content were labelled “extremist”. In the reactions, MSM and “the agenda” appeared frequently. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/qanon-is-spreading-outside-the-us-a-conspiracy-theory-expert-explains-what-that-could-mean-198272">QAnon is spreading outside the US – a conspiracy theory expert explains what that could mean</a>
</strong>
</em>
</p>
<hr>
<p>Some users recommended that sensitive content be moved to alternative platforms. We observed one anti-vaccination influencer complaining that their page was being shadowbanned by Facebook, and calling on their followers to recommend a “good, censorship free, livestreaming platform”.</p>
<p>The replies suggested moderation-lite sites such as <a href="https://rumble.com/">Rumble</a>. Similar recommendations were made for Twitch, a livestreaming site popular with gamers, which has since attracted <a href="https://www.nytimes.com/2021/04/27/us/politics/twitch-trump-extremism.html">far-right political influencers</a>.</p>
<p>As one user said:</p>
<blockquote>
<p>I know so many people who get censored on so many apps especially Facebook and Twitch seems to work for them. </p>
</blockquote>
<h2>How can content moderation fix the problem?</h2>
<p>These tactics of coordination to detect shadowbans, resist labelling and fight the algorithm provide some insight into why engagement didn’t dim on some of these “overperforming” accounts despite all the policies Meta put in place. </p>
<p>This shows that Meta’s suppression techniques, while partially effective in containing the spread, do nothing to prevent those invested in sharing (and finding) misinformation from doing so.</p>
<p>Firmer policies on content removal and user banning would help address the problem. However, <a href="https://about.fb.com/news/2022/07/oversight-board-advise-covid-19-misinformation-measures/">Meta’s announcement last year suggests</a> the company has little appetite for this. Any loosening of its policies will all but ensure this misinformation playground continues to thrive.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-researcher-asked-covid-anti-vaxxers-how-they-avoid-facebook-moderation-heres-what-they-found-186406">A researcher asked COVID anti-vaxxers how they avoid Facebook moderation. Here's what they found</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/226118/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Amelia Johns has received funding from Meta content policy award for some of the research presented in this article. She has also received funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Emily Booth is supported by funding from the Australian Department of Home Affairs and the Defence Innovation Network.</span></em></p><p class="fine-print"><em><span>Francesco Bailo has received funding from Meta content policy award for some of the research presented in this article. He receives funding from the Defence Innovation Network. </span></em></p><p class="fine-print"><em><span>Marian-Andrei Rizoiu receives funding from the Australian Department of Home Affairs, the Defence Science and Technology Group, the Defence Innovation Network and the Australian Academy of Science.</span></em></p>New research shows that even after Facebook made changes to stem the tide of dangerous pandemic misinformation, some accounts continued to thrive.Amelia Johns, Associate Professor, Digital and Social Media, School of Communication, University of Technology SydneyEmily Booth, Research assistant, University of Technology SydneyFrancesco Bailo, Lecturer, Digital and Social Media, University of SydneyMarian-Andrei Rizoiu, Associate Professor in Behavioral Data Science, University of Technology SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2177602023-11-27T13:41:34Z2023-11-27T13:41:34ZSupreme Court to consider giving First Amendment protections to social media posts<figure><img src="https://images.theconversation.com/files/560784/original/file-20231121-4426-i5zrwh.jpg?ixlib=rb-1.1.0&rect=0%2C22%2C3706%2C3084&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Citizens have sometimes been surprised to find public officials blocking people from viewing their social media feeds.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/businessman-standing-in-front-of-a-big-smart-royalty-free-illustration/1025098142">alashi/DigitalVision Vectors via Getty Images</a></span></figcaption></figure><p>The First Amendment does not protect messages posted on social media platforms. </p>
<p>The companies that own the platforms can – and do – remove, promote or limit the distribution of any posts <a href="https://www.freedomforum.org/free-speech-on-social-media/">according to corporate policies</a>. But all that might soon change.</p>
<p>The Supreme Court has agreed to <a href="https://www.nytimes.com/2023/10/31/opinion/social-media-supreme-court-democracy.html">hear five cases</a> during this current term, which ends in June 2024, that collectively give the court the opportunity to reexamine the nature of content moderation – the rules governing discussions on social media platforms such as Facebook and X, formerly known as Twitter – and the constitutional limitations on the government to affect speech on the platforms.</p>
<p>Content moderation, whether done manually by company employees or automatically by a platform’s software and algorithms, affects what viewers can see on a digital media page. Messages that are promoted garner greater viewership and greater interaction; those that are deprioritized or removed will obviously receive less attention. Content moderation policies reflect decisions by digital platforms about the relative value of posted messages.</p>
<p>As an attorney, <a href="https://lynngreenky.com/">professor</a> and author of a book about the <a href="https://press.uchicago.edu/ucp/books/book/distributed/W/bo156864042.html">boundaries of the First Amendment</a>, I believe that the constitutional challenges presented by these cases will give the court the occasion to advise government, corporations and users of interactive technologies what their rights and responsibilities are as communications technologies continue to evolve.</p>
<h2>Public forums</h2>
<p>In late October 2023, the Supreme Court heard oral arguments on two related cases in which both sets of plaintiffs argued that elected officials who use their social media accounts either exclusively or partially to promote their politics and policies <a href="https://www.nytimes.com/2023/04/24/us/elected-officials-social-media-supreme-court.html">cannot constitutionally block constituents</a> from posting comments on the officials’ pages.</p>
<p>In one of those cases, <a href="https://www.oyez.org/cases/2023/22-324">O’Connor-Radcliff v. Garnier</a>, two school board members from the Poway Unified School District in California blocked a set of parents – who frequently posted repetitive and critical comments on the board members’ Facebook and Twitter accounts – from viewing the board members’ accounts. </p>
<p>In the other case heard in October, <a href="https://www.oyez.org/cases/2023/22-611">Lindke v. Freed</a>, the city manager of Port Huron, Michigan, apparently angered by critical comments about a posted picture, blocked a constituent from viewing or posting on the manager’s Facebook page. </p>
<p>Courts have long held that public spaces, like parks and sidewalks, are public forums, which must <a href="https://www.oyez.org/cases/1900-1940/307us496">remain open to free and robust conversation and debate</a>, subject only to neutral rules <a href="https://firstamendment.mtsu.edu/article/time-place-and-manner-restrictions/">unrelated to the content of the speech expressed</a>. The silenced constituents in the current cases insisted that in a world where a lot of public discussion is conducted in interactive social media, digital spaces used by government representatives for <a href="https://www.nytimes.com/2023/04/24/us/elected-officials-social-media-supreme-court.html">communicating with their constituents</a> are also public forums and should be subject to the same First Amendment rules as their physical counterparts.</p>
<p>If the Supreme Court rules that public forums can be both physical and virtual, government officials will not be able to arbitrarily block users from viewing and responding to their content or remove constituent comments with which they disagree. On the other hand, if the Supreme Court rejects the plaintiffs’ argument, the only recourse for frustrated constituents will be to create competing social media spaces where they can criticize and argue at will.</p>
<h2>Content moderation as editorial choices</h2>
<p>Two other cases – <a href="https://www.oyez.org/cases/2023/22-555">NetChoice LLC v. Paxton</a> and <a href="https://www.oyez.org/cases/2023/22-277">Moody v. NetChoice LLC</a> – also relate to the question of how the government should regulate online discussions. <a href="https://perma.cc/YHK2-WVWS">Florida</a> and <a href="https://perma.cc/B2WU-M3CK">Texas</a> have both passed laws that modify the internal policies and algorithms of large social media platforms by regulating how the platforms can promote, demote or remove posts.</p>
<p>NetChoice, a tech industry trade group representing a <a href="https://netchoice.org/about/#association-members">wide range of social media platforms</a> and online businesses, including Meta, Amazon, Airbnb and TikTok, contends that the platforms are not public forums. The group says that the Florida and Texas legislation unconstitutionally restricts the social media companies’ First Amendment right to make their own <a href="https://www.oyez.org/cases/1973/73-797">editorial choices</a> about what appears on their sites.</p>
<p>In addition, NetChoice alleges that by limiting Facebook’s or X’s ability to rank, repress or even remove speech – whether manually or with algorithms – the Texas and Florida laws amount to government requirements that the <a href="https://www.oyez.org/cases/1994/94-749">platforms host speech they don’t want to host</a>, which is also unconstitutional. </p>
<p>NetChoice is asking the Supreme Court to rule the laws unconstitutional so that the platforms remain free to make their own independent choices regarding when, how and whether posts will remain available for view and comment.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A man in a military uniform stands at a lectern looking out at a group of people sitting in chairs." src="https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/560786/original/file-20231121-15-1e40j1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">In 2021, U.S. Surgeon General Vivek Murthy declared misinformation on social media, especially about COVID-19 and vaccines, to be a public health threat.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/surgeon-general-vivek-murthy-and-white-house-press-news-photo/1328901388">Chip Somodevilla/Getty Images</a></span>
</figcaption>
</figure>
<h2>Censorship</h2>
<p>In an effort to reduce harmful speech that proliferates across the internet – speech that supports criminal and terrorist activity as well as misinformation and disinformation – the federal government has engaged in wide-ranging discussions with internet companies about their <a href="https://www.nytimes.com/2023/07/04/business/federal-judge-biden-social-media.html">content moderation policies</a>.</p>
<p>To that end, the Biden administration has regularly advised – <a href="https://www.nytimes.com/2023/07/04/business/federal-judge-biden-social-media.html">some say strong-armed</a> – social media platforms to deprioritize or remove posts the government had flagged as misleading, false or harmful. Some of the posts <a href="https://www.nytimes.com/2023/07/04/business/federal-judge-biden-social-media.html">related to misinformation</a> about COVID-19 vaccines or promoted human trafficking. On several occasions, the officials would suggest that platform companies ban a user who posted the material from making further posts. Sometimes, the corporate representatives themselves would ask the government what to do with a particular post.</p>
<p>While the public might be generally aware that content moderation policies exist, people are not always aware of how those policies affect the information to which they are exposed. Specifically, audiences have no way to measure how content moderation policies affect the marketplace of ideas or influence debate and discussion about public issues.</p>
<p>In <a href="https://www.scotusblog.com/case-files/cases/missouri-v-biden/">Missouri v. Biden</a>, the plaintiffs argue that government efforts to persuade social media platforms to publish or remove posts were so relentless and invasive that the moderation policies no longer reflected the companies’ own editorial choices. Rather, they argue, the policies were in reality government directives that effectively silenced – <a href="https://www.oyez.org/cases/1970/1873">and unconstitutionally censored</a> – speakers with whom the government disagreed. </p>
<p>The court’s decision in this case could have wide-ranging effects on the manner and methods of government efforts to influence the information that guides the public’s debates and decisions.</p><img src="https://counter.theconversation.com/content/217760/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Lynn Greenky does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The Supreme Court will hear five cases this term that will examine the nature of online discussion spaces run by social media platforms.Lynn Greenky, Professor Emeritus of Communication and Rhetorical Studies, Syracuse UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2161142023-11-19T18:59:58Z2023-11-19T18:59:58ZTikTok has a startling amount of sexual content – and it’s way too easy for children to access<p>Explicit content has long been a feature of the internet and social media, and young people’s exposure to it has been a persistent concern.</p>
<p>This issue has taken centre stage again with the meteoric rise of TikTok. Despite efforts to moderate content, it seems TikTok’s primary focus remains <a href="https://www.theverge.com/2021/12/6/22820305/tiktok-algorithm-explained-leak-how-it-works">on maximising user engagement and traffic</a>, rather than creating a safe environment for users.</p>
<p>As the top <a href="https://www.pewresearch.org/internet/2022/08/10/teens-social-media-and-technology-2022/">social media app used by teens</a>, the presence of explicit content on TikTok can put young users in harm’s way. And while TikTok and regulators scramble to catch up with moderation needs, it’s ultimately up to parents and users to navigate these harms online.</p>
<h2>TikTok’s content moderation maze</h2>
<p>TikTok relies on both <a href="https://www.tiktok.com/transparency/en-us/content-moderation/">automated and human moderation</a> to identify and remove content violating its community guidelines. <a href="https://www.tiktok.com/community-guidelines/en/safety-civility/#5">This includes</a> nudity, pornography, sexually explicit content, non-consensual sexual acts, the sharing of non-consensual intimate imagery and sexual solicitation. TikTok’s community guidelines say:</p>
<blockquote>
<p>We do not allow seductive performances or allusions to sexual activity by young people, or the use of sexually explicit narratives by anyone.</p>
</blockquote>
<p>However, TikTok’s automated moderation system isn’t always precise. This means beneficial material such as LGBTQ+ content and healthy <a href="https://mashable.com/article/tiktok-sex-education-content-removal">sex education content may be incorrectly removed</a> while explicit, harmful content <a href="https://www.instagram.com/p/Cwr4gvXIncI/?hl=en">slips through the cracks</a>.</p>
<p>Although TikTok has a human review process to compensate for algorithmic shortcomings, this process is slow, and the resulting delays mean young people may be exposed to <a href="https://www.theguardian.com/australia-news/2022/feb/08/most-australian-teens-have-viewed-harmful-content-online-but-parents-in-dark-safer-internet-day">explicit and harmful content</a> before it is removed. </p>
<p>Content moderation is further complicated by user tactics such as “<a href="https://www.9news.com.au/national/tiktok-explained-algospeak-shadowbanning-everything-to-know/24ea8123-7e4b-4d3c-9efc-48ea375d048b">algospeak</a>”, which is used to avoid triggering algorithmic filters put in place to detect inappropriate content. In this case, algospeak may involve using internet slang, codes, euphemisms or emojis to replace words and phrases commonly associated with explicit content. </p>
<p>Many users also resort to algospeak because they feel TikTok’s algorithmic moderation is biased and unfair to marginalised communities. Users have reported on <a href="https://journals.sagepub.com/doi/10.1177/20563051231194586#bibr25-20563051231194586">a double standard</a>, wherein TikTok has suppressed educational content related to the LGBTQ+ community, while allowing harmful content to remain visible. </p>
<h2>Harmful content slips through the cracks</h2>
<p><a href="https://support.tiktok.com/community-guidelines#30">TikTok’s guidelines</a> on sexually explicit stories and sexualised posing are ambiguous. And its age-verification process relies on self-reported age, which users can easily bypass. </p>
<p>Many TikTok creators, including creators of pornography, use the platform to promote themselves and their content on other platforms such as PornHub or OnlyFans. For example, creator @jennyxrated posts suggestive and hypersexual content. She calls herself a “daddy’s girl” and presents as younger than she is.</p>
<p>Such content is popular on TikTok. It promotes unhealthy attitudes to sex and consent and perpetuates harmful gender stereotypes, such as suggesting women should be submissive to men.</p>
<p>Young boys struggling with mental health issues and loneliness are particularly vulnerable to <a href="https://www.theguardian.com/commentisfree/2021/aug/17/incel-movement-extremism-internet-community-misogyny">“incel” rhetoric and misogynistic views</a> amplified through TikTok. Controversial figures such as Andrew Tate and <a href="https://www.intheknow.com/post/problematic-tiktok-dating-coach-branded-as-misogynist-of-the-year/">Russell Hartley</a> continue to be promoted by algorithms, driving traffic and supporting TikTok’s commercial interests. </p>
<p><a href="https://www.insider.com/andrew-tate-tiktok-ban-fanpages-misogynistic-content-circulating-2022-8">According to Business Insider</a>, videos featuring Tate had been viewed more than 13 billion times as of August 2022. This content <a href="https://www.theguardian.com/technology/2022/nov/06/tiktok-still-hosting-toxic-posts-of-banned-influencer-andrew-tate">continues to circulate</a> even though Tate has been banned. </p>
<p>Self-proclaimed men’s rights advocates centre their content on anti-feminist discourse, hyper-masculinity and hierarchical gender roles. What may seem like memes and “entertainment” can <a href="https://link.springer.com/article/10.1007/s10610-023-09559-5">desensitise young boys</a> to rape culture, domestic violence and toxic masculinity. </p>
<p>TikTok’s promotion of idealistic and sexualised content is also harmful for the self-perception of young women and queer youth. This content portrays unrealistic body standards, which leads to comparison, <a href="https://newsroom.unsw.edu.au/news/health/tiktok-and-body-image-idealistic-content-may-be-detrimental-mental-health">increased body dissatisfaction</a> and a higher risk of developing eating disorders.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/anorexia-coach-sexual-predators-online-are-targeting-teens-wanting-to-lose-weight-platforms-are-looking-the-other-way-162938">'Anorexia coach': sexual predators online are targeting teens wanting to lose weight. Platforms are looking the other way</a>
</strong>
</em>
</p>
<hr>
<h2>Empowering sex education</h2>
<p>Due to its popularity, TikTok offers a unique opportunity to <a href="https://www.them.us/story/tiktok-sex-education-lgbtq-sexuality-online">help spread educational</a> content about sex. Doctors and gynaecologists use hashtags such as #obgyn to share content about sexual health, including topics such as consent, contraception and stigmas around sex. </p>
<p><a href="https://www.tiktok.com/@alirodmd">Dr Ali</a>, for instance, educates young women about periods and birth control, and is an advocate for women of colour. <a href="https://www.tiktok.com/@sexedu">Sriha Srinivasan</a> promotes sex education for high-school students and discusses sex myths, consent, STIs, periods and reproductive justice. </p>
<p><em>[Embedded TikTok video by <a href="https://www.tiktok.com/@sexedu/video/7013923045874126086">@sexedu</a>]</em></p>
<p><a href="https://www.tiktok.com/@itsmillyevans">Milly Evans</a> is a queer, non-binary, autistic sex-ed content creator who uses TikTok to advocate for inclusive sex education. They cover topics such as domestic abuse, consent in queer relationships, gender and sexual identities, body-safe sex toys and trans and non-binary rights.</p>
<p>These are just some examples of how TikTok can be a space for informative, inclusive and sex-positive content. However, such content may not receive the same engagement as more lewd and attention-grabbing videos since, like most social media apps, TikTok is optimised for engagement.</p>
<p><em>[Embedded TikTok video by <a href="https://www.tiktok.com/@itsmillyevans/video/7231574247104138523">@itsmillyevans</a>]</em></p>
<h2>A bird’s eye view</h2>
<p>Social media platforms face significant challenges in moderating harmful content effectively. Relying on platforms to self-regulate isn’t enough, so regulatory bodies need to step in.</p>
<p>Australia’s eSafety Commissioner has taken an active role by providing guidelines and resources for parents and users, and by pressuring platforms such as <a href="https://www.smh.com.au/technology/cyber-bullying-content-targeting-children-pulled-from-tiktok-20200713-p55bjt.html">TikTok to remove harmful content</a>. They’re also leading the way in addressing <a href="https://www.reuters.com/technology/tiktok-snapchat-others-sign-pledge-tackle-ai-generated-child-sex-abuse-images-2023-10-30/">AI-generated child sex abuse material</a> on social media.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/australia-has-fined-x-australia-over-child-sex-abuse-material-concerns-how-severe-is-the-issue-and-what-happens-now-215696">Australia has fined X Australia over child sex abuse material concerns. How severe is the issue – and what happens now?</a>
</strong>
</em>
</p>
<hr>
<p>When it comes to TikTok, our efforts should be poured into equipping young users with media literacy skills that can help keep them safe.</p>
<p>For children under 13, it’s up to parents to decide whether they allow access. It’s worth noting TikTok itself has an age limit of 13 years, and Common Sense Media <a href="https://www.commonsensemedia.org/articles/parents-ultimate-guide-to-tiktok">doesn’t encourage</a> use by children under 15. If parents do decide to allow access for a child under 13, they should actively monitor the child’s activity.</p>
<p>While restricting apps’ use might seem like a quick fix, <a href="https://journals.sagepub.com/doi/abs/10.1177/1329878X211046396">our research</a> has found social media restrictions can strain parent-child relationships. Parents are better off taking proactive steps such as having open discussions, building trust, and educating themselves and their children about online risk.</p>
<hr>
<p><em>The Conversation reached out to TikTok for comment but did not receive a response before the deadline.</em></p><img src="https://counter.theconversation.com/content/216114/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Milovan Savic receives funding from the Australian Research Council. </span></em></p><p class="fine-print"><em><span>Sonja Petrovic does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Many TikTok creators, including creators of pornography, use the platform to promote themselves and their explicit content on other platforms.Sonja Petrovic, Assistant Lecturer in Media and Communications, The University of MelbourneMilovan Savic, Research Fellow, ARC Centre of Excellence for Automated Decision-Making and Society, Swinburne University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2168092023-11-03T21:14:23Z2023-11-03T21:14:23ZIt’s not just about facts: Democrats and Republicans have sharply different attitudes about removing misinformation from social media<figure><img src="https://images.theconversation.com/files/557492/original/file-20231103-28-dk0wt0.jpg?ixlib=rb-1.1.0&rect=0%2C10%2C7217%2C4808&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Your political leanings go a long way to determine whether you think it's a good or bad idea to take down misinformation.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/man-reading-fake-news-on-laptop-royalty-free-image/1441611425">Johner Images via Getty Images</a></span></figcaption></figure><p>Misinformation is a key <a href="https://www3.weforum.org/docs/WEF_Global_Risks_Report_2023.pdf">global threat</a>, but Democrats and Republicans disagree about how to address the problem. In particular, Democrats and Republicans diverge sharply on removing misinformation from social media.</p>
<p>Only three weeks after the Biden administration announced the Disinformation Governance Board in April 2022, the <a href="https://www.washingtonpost.com/technology/2022/05/18/disinformation-board-dhs-nina-jankowicz/">effort to develop best practices for countering disinformation was halted</a> because of Republican concerns about its mission. Why do Democrats and Republicans have such different attitudes about content moderation?</p>
<p>My colleagues <a href="https://scholar.google.com/citations?hl=en&user=5EIL7zMAAAAJ&view_op=list_works&sortby=pubdate">Jennifer Pan</a> and <a href="https://scholar.google.com/citations?hl=en&user=KfipOeoAAAAJ&view_op=list_works&sortby=pubdate">Margaret E. Roberts</a> and <a href="https://scholar.google.com/citations?hl=en&user=3flEE1wAAAAJ&view_op=list_works&sortby=pubdate">I</a> found in a study published in the journal Science Advances that Democrats and Republicans not only disagree about what is true or false, they also <a href="https://doi.org/10.1126/sciadv.adg6799">differ in their internalized preferences</a> for content moderation. Internalized preferences may be related to people’s moral values, identities or other psychological factors, or people internalizing the preferences of party elites. </p>
<p>And though people are sometimes strategic about wanting misinformation that counters their political views removed, internalized preferences are a much larger factor in the differing attitudes toward content moderation. </p>
<h2>Internalized preferences or partisan bias?</h2>
<p>In our study, we found that Democrats are about twice as likely as Republicans to want to remove misinformation, while Republicans are about twice as likely as Democrats to consider removal of misinformation as censorship. Democrats’ attitudes might depend somewhat on whether the content aligns with their own political views, but this seems to be due, at least in part, to different perceptions of accuracy.</p>
<p>Previous research showed that Democrats and Republicans <a href="https://doi.org/10.1073/pnas.2210666120">have different views</a> about content moderation of misinformation. One of the most prominent explanations is the “fact gap”: the difference in what Democrats and Republicans believe is true or false. For example, a study found that both Democrats and Republicans were more likely to believe news headlines <a href="https://doi.org/10.1257/jep.31.2.211">that were aligned with their own political views</a>.</p>
<p>But it is unlikely that the fact gap alone can explain the huge differences in content moderation attitudes. That’s why we set out to study two other factors that might lead Democrats and Republicans to have different attitudes: preference gap and party promotion. A preference gap is a difference in internalized preferences about whether, and what, content should be removed. Party promotion is a person making content moderation decisions based on whether the content aligns with their partisan views. </p>
<p>We asked 1,120 U.S. survey respondents who identified as either Democrat or Republican about their opinions on a set of political headlines that we identified as misinformation based on a bipartisan fact check. Each respondent saw one headline that was aligned with their own political views and one headline that was misaligned. After each headline, the respondent answered whether they would want the social media company to remove the headline, whether they would consider it censorship if the social media platform removed the headline, whether they would report the headline as harmful, and how accurate the headline was.</p>
<h2>Deep-seated differences</h2>
<p>When we compared how Democrats and Republicans would deal with headlines overall, we found strong evidence for a preference gap. Overall, 69% of Democrats said misinformation headlines in our study should be removed, but only 34% of Republicans said the same; 49% of Democrats considered the misinformation headlines harmful, but only 27% of Republicans said the same; and 65% of Republicans considered headline removal to be censorship, but only 29% of Democrats said the same.</p>
<p>Even in cases where Democrats and Republicans agreed that the same headlines were inaccurate, Democrats were nearly twice as likely as Republicans to want to remove the content, while Republicans were nearly twice as likely as Democrats to consider removal censorship. </p>
<p><iframe id="GJnyn" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/GJnyn/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>We didn’t test explicitly why Democrats and Republicans have such different internalized preferences, but there are at least two possible reasons. First, Democrats and Republicans might differ in factors like their <a href="https://doi.org/10.1037/a0015141">moral values</a> or <a href="https://doi.org/10.1016/j.jesp.2020.104031">identities</a>. Second, Democrats and Republicans might internalize what the elites in their parties signal. For example, Republican elites have recently framed content moderation as a <a href="https://www.rubio.senate.gov/rubio-introduces-sec-230-legislation-to-crack-down-on-big-tech-algorithms-and-protect-free-speech/">free speech</a> and <a href="https://www.flgov.com/2021/05/24/governor-ron-desantis-signs-bill-to-stop-the-censorship-of-floridians-by-big-tech/">censorship</a> issue. Republicans might use these elites’ preferences to inform their own.</p>
<p>When we zoomed in on headlines that are either aligned or misaligned for Democrats, we found a party promotion effect: Democrats were less favorable to content moderation when misinformation aligned with their own views. Democrats were 11% less likely to want the social media company to remove headlines that aligned with their own political views. They were 13% less likely to report headlines that aligned with their own views as harmful. We didn’t find a similar effect for Republicans. </p>
<p>Our study shows that party promotion may be partly due to different perceptions of accuracy of the headlines. When we looked only at Democrats who agreed with our statement that the headlines were false, the party promotion effect was reduced to 7%.</p>
<h2>Implications for social media platforms</h2>
<p>We find it encouraging that the effect of party promotion is much smaller than the effect of internalized preferences, especially when accounting for accuracy perceptions. However, given the huge partisan differences in content moderation preferences, we believe that social media companies should look beyond the fact gap when designing content moderation policies that aim for bipartisan support.</p>
<p>Future research could explore whether getting Democrats and Republicans to agree on <a href="https://dx.doi.org/10.2139/ssrn.4005326">moderation processes</a> – rather than moderation of individual pieces of content – could reduce disagreement. Also, other types of content moderation such as downweighting, which involves platforms reducing the virality of certain content, might prove to be less contentious. Finally, if the preference gap – the differences in deep-seated preferences between Democrats and Republicans – is rooted in value differences, platforms could try to use <a href="https://doi.org/10.1111/spc3.12501">different moral framings</a> to appeal to people on both sides of the partisan divide.</p>
<p>For now, Democrats and Republicans are likely to continue to disagree over whether removing misinformation from social media improves public discourse or amounts to censorship.</p><img src="https://counter.theconversation.com/content/216809/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ruth Elisabeth Appel has been supported by an SAP Stanford Graduate Fellowship in Science and Engineering, a Stanford Center on Philanthropy and Civil Society PhD Research Fellowship and a Stanford Impact Labs Summer Collaborative Research Fellowship. She has interned at Google in 2020 and attended an event where food was paid for by Meta.</span></em></p>One person’s content moderation is another’s censorship when it comes to Democrats’ and Republicans’ views on handling misinformation.Ruth Elisabeth Appel, Ph.D. Candidate in Communication, Stanford UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2132092023-10-24T12:25:09Z2023-10-24T12:25:09ZLet the community work it out: Throwback to early internet days could fix social media’s crisis of legitimacy<figure><img src="https://images.theconversation.com/files/555410/original/file-20231023-15-otewua.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C3489%2C2331&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Content moderators like these workers make decisions about online communities based on company dictates.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/content-moderators-work-at-a-facebook-office-in-austin-news-photo/1142321813">Ilana Panich-Linsman for The Washington Post via Getty Images</a></span></figcaption></figure><p>In the 2018 documentary “<a href="https://gebrueder-beetz.de/en/productions/the-cleaners/">The Cleaners</a>,” a young man in Manila, Philippines, explains his work as a content moderator: “We see the pictures on the screen. You then go through the pictures and delete those that don’t meet the guidelines. The daily quota of pictures is 25,000.” As he speaks, his mouse clicks, deleting offending images while allowing others to remain online.</p>
<p>The man in Manila is one of thousands of content moderators hired as contractors by social media platforms – <a href="https://www.npr.org/2023/03/31/1167246714/googles-ghost-workers-are-demanding-to-be-seen-by-the-tech-giant">10,000 at Google alone</a>. Content moderation on an industrial scale like this is part of the everyday experience for users of social media. Occasionally a post someone makes is removed, or a post someone thinks is offensive is allowed to go viral. </p>
<p>Similarly, platforms add and remove features without input from the people who are most affected by those decisions. Whether you are outraged or unperturbed, most people don’t think much about the history of a system in which people in conference rooms in Silicon Valley and Manila determine your experiences online.</p>
<p>But why should a few companies – or a few billionaire owners – have the power to decide everything about online spaces that billions of people use? This unaccountable model of governance has led stakeholders of all stripes to criticize platforms’ decisions as <a href="https://www.brennancenter.org/sites/default/files/2021-08/Double_Standards_Content_Moderation.pdf">arbitrary</a>, <a href="https://nymag.com/intelligencer/2022/12/twitter-files-explained-elon-musk-taibbi-weiss-hunter-biden-laptop.html">corrupt</a> or <a href="https://www.oxfordstrategyreview.com/content/social-irresponsibility-how-social-media-works-for-the-west-but-fails-the-rest">irresponsible</a>. In the early, pre-web days of the social internet, decisions about the spaces people gathered in online were often made by members of the community. Our <a href="https://doi.org/10.1177/20563051231196864">examination of the early history of online governance</a> suggests that social media platforms could return – at least in part – to models of community governance in order to address their crisis of legitimacy.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/iGCGhD8i-o4?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">The documentary ‘The Cleaners’ shows some of the hidden costs of Big Tech’s customer service approach to content moderation.</span></figcaption>
</figure>
<h2>Online governance – a history</h2>
<p>In many early online spaces, governance was handled by community members, not by professionals. One early online space, <a href="https://thenewstack.io/a-look-back-in-time-the-forgotten-fame-of-lambdamoo/">LambdaMOO</a>, invited users to build their own governance system, which devolved power from the hands of those who technically controlled the space – administrators known as “wizards” – to members of the community. This was accomplished via a <a href="https://doi.org/10.1111/j.1083-6101.1996.tb00185.x">formal petitioning process and a set of appointed mediators</a> who resolved conflicts between users.</p>
<p>Other spaces had more informal processes for incorporating community input. For example, on bulletin board systems, users <a href="https://yalebooks.yale.edu/book/9780300248142/the-modem-world/">voted with their wallets</a>, removing critical financial support if they disagreed with the decisions made by the system’s administrators. Other spaces, like text-based Usenet newsgroups, gave users substantial power to shape their experiences. The newsgroups left obvious spam in place, but gave users tools to block it if they chose to. Usenet’s administrators argued that it was fairer to allow each user <a href="https://fishbowl.pastiche.org/2021/01/12/usenet_spam">to make decisions that reflected their individual preferences</a> rather than taking a one-size-fits-all approach.</p>
<p>The graphical web expanded use of the internet from <a href="https://www.internetworldstats.com/emarketing.htm">a few million users to hundreds of millions within a decade</a> from 1995 to 2005. During this rapid expansion, community governance was replaced with governance models inspired by customer service, which focused on scale and cost. </p>
<p>This switch from community governance to customer service made sense to the fast-growing companies that made up the late 1990s internet boom. Promising their investors that they could grow rapidly and make changes quickly, companies looked for approaches to the complex work of governing online spaces <a href="https://doi.org/10.1177/20563051231196864">that centralized power and increased efficiency</a>. </p>
<p>While this customer service model of governance allowed early user-generated content sites like Craigslist and GeoCities <a href="https://datasociety.net/library/origins-of-trust-and-safety/">to grow rapidly</a>, it set the stage for the crisis of legitimacy facing social media platforms today. Contemporary battles over social media are rooted in the sense that the people and processes governing online spaces are unaccountable to the communities that gather in them. </p>
<h2>Paths to community control</h2>
<p>Implementing community governance in today’s platforms could take a number of different forms, some of which are already being experimented with.</p>
<p>Advisory boards like Meta’s <a href="https://about.meta.com/actions/oversight-board-facts/">Oversight Board</a> are one way to involve outside stakeholders in platform governance, providing independent — albeit limited — review of platform decisions. X (formerly Twitter) is taking a more democratic approach with its <a href="https://help.twitter.com/en/using-x/community-notes">Community Notes</a> initiative, which allows users to contextualize information on the platform by crowdsourcing notes and ratings.</p>
<p>Some may question whether community governance can be implemented successfully in platforms that serve billions of users. In response, we point to Wikipedia. It is entirely community-governed, and its contributors have created an open encyclopedia that’s become the foremost information resource in many languages. Wikipedia is surprisingly resilient to vandalism and abuse, with robust procedures that ensure a resource used by billions remains accessible, accurate and reasonably civil.</p>
<p>On a smaller scale, total self-governance – echoing early online spaces – could be key for communities that serve specific subsets of users. For example, <a href="https://archiveofourown.org/">Archive of Our Own</a> was created after fan-fiction authors – people who write original stories using characters and worlds from published books, television shows and movies – found existing platforms unwelcoming. Many fan-fiction authors were <a href="https://www.theverge.com/2022/8/15/23200176/history-of-ao3-archive-of-our-own-fanfiction">kicked off social media platforms</a> due to overzealous copyright enforcement or concerns about sexual content.</p>
<p>Fed up with platforms that didn’t understand their work or their culture, a group of authors designed and built their own platform specifically to meet the needs of their community. AO3, as it is colloquially known, serves millions of people a month, includes tools specific to the needs of fan-fiction authors, and is governed by the same people it serves.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="text above and below a photo of two people in lab coats standing in a hallway" src="https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=817&fit=crop&dpr=1 600w, https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=817&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=817&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1027&fit=crop&dpr=1 754w, https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1027&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/552396/original/file-20231005-25-mahqjw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1027&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">X, formerly Twitter, allows people to use Community Notes to append relevant information to posts that contain inaccuracies.</span>
<span class="attribution"><a class="source" href="https://twitter.com/kareem_carr/status/1709198073174311207/photo/1">Screen capture by The Conversation U.S.</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Hybrid models, like on Reddit, <a href="https://www.redditinc.com/policies/content-policy">mix centralized and self-governance</a>. Reddit hosts a collection of interest-based communities called subreddits that have their own rules, norms and teams of moderators. Underlying a subreddit’s governance structure is a set of rules, processes and features that apply to everyone. Not every subreddit is a sterling example of a healthy online community, but more are than are not.</p>
<p>There are also technical approaches to community governance. One approach would enable users to choose the algorithms that curate their social media feeds. Imagine that instead of only being able to use Facebook’s algorithm, you could choose from a suite of algorithms provided by third parties – for example, from The New York Times or Fox News.</p>
<p>More radically decentralized platforms like Mastodon devolve control to a network of servers that are similar in structure to email. This makes it easier to choose an experience that matches your preferences. You can choose which Mastodon server to use, and can switch easily – just like you can choose whether to use Gmail or Outlook for email – and can change your mind, all while maintaining access to the wider email network. </p>
<p>Additionally, advancements in generative AI – which shows <a href="https://doi.org/10.1109/MS.2023.3265877">early promise in producing computer code</a> – could make it easier for people, even those without a technical background, to build custom online spaces when they find existing spaces unsuitable. This would relieve pressure on online spaces to be everything for everyone and support a sense of agency in the digital public sphere.</p>
<p>There are also more indirect ways to support community governance. Increasing transparency – for example, by providing access to data about the impact of platforms’ decisions – can help researchers, policymakers and the public hold online platforms accountable. Further, encouraging ethical professional norms among engineers and product designers can make online spaces more respectful of the communities they serve.</p>
<h2>Going forward by going back</h2>
<p>Between now and the end of 2024, national elections are scheduled in many countries, including Argentina, Australia, India, Indonesia, Mexico, South Africa, Taiwan, the U.K. and the U.S. This is all but certain to lead to conflicts over online spaces. </p>
<p>We believe it is time to consider not just how online spaces can be governed efficiently and in service to corporate bottom lines, but how they can be governed fairly and legitimately. Giving communities more control over the spaces they participate in is a proven way to do just that.</p><img src="https://counter.theconversation.com/content/213209/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ethan Zuckerman receives funding from the MacArthur Foundation, the Ford Foundation, the Knight Foundation and the (US) National Science Foundation.</span></em></p><p class="fine-print"><em><span>Chand Rajendra-Nicolucci receives funding from the MacArthur Foundation and the Ford Foundation. </span></em></p>In the days of online bulletin board systems, community members decided what was acceptable. Reviving that approach to content moderation offers Big Tech a path to legitimacy as public spaces.Ethan Zuckerman, Associate Professor of Public Policy, Communication, and Information, UMass AmherstChand Rajendra-Nicolucci, Research Fellow, Initiative for Digital Public Infrastructure, UMass AmherstLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2098822023-08-14T19:59:42Z2023-08-14T19:59:42ZCan human moderators ever really rein in harmful online content? New research says yes<figure><img src="https://images.theconversation.com/files/542136/original/file-20230810-28-dmbb1y.jpg?ixlib=rb-1.1.0&rect=24%2C49%2C4065%2C4040&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/rhNff6hB41s">Bert B / Unsplash</a></span></figcaption></figure><p>Social media platforms have become the “digital town squares” of our time, enabling communication and the exchange of ideas on a global scale. However, the unregulated nature of these platforms has allowed the proliferation of harmful content such as misinformation, disinformation and hate speech. </p>
<p>Regulating the online world has proven difficult, but one promising avenue is suggested by the European Union’s Digital Services Act, passed in November 2022. Under this legislation, designated “trusted flaggers” identify certain kinds of problematic content to platforms, which must then remove it within 24 hours.</p>
<p>Will it work, given the fast pace and complex viral dynamics of social media environments? To find out, we modelled the effect of the new rule, in <a href="https://doi.org/10.1073/pnas.2307360120">research</a> published in the Proceedings of the National Academy of Sciences.</p>
<p>Our results show this approach can indeed reduce the spread of harmful content. We also offer insights into how the rules can be implemented most effectively.</p>
<h2>Understanding the spread of harmful content</h2>
<p>We used a mathematical model of information spread to analyse how harmful content is disseminated through social networks. </p>
<p>In the model, each harmful post is treated as a “<a href="https://www.jstor.org/stable/2334319">self-exciting point process</a>”. This means it draws more people into the discussion over time and generates further harmful posts, similar to a word-of-mouth process. </p>
<p>The intensity of a post’s self-propagation decreases over time. However, if left unchecked, its “offspring” can generate more offspring, leading to exponential growth.</p>
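<p>As a rough illustration of this kind of process (a simplified sketch with invented parameters, not the fitted model from the study), the following Python snippet simulates one harmful post as a branching cascade in which every post can trigger further posts at a rate that decays over time.</p>
<pre><code>
import math
import random

def poisson(rng, lam):
    """Draw from a Poisson distribution (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, rng.random()
    while p > threshold:
        k += 1
        p *= rng.random()
    return k

def simulate_cascade(branching_ratio=0.8, decay_rate=0.1, max_posts=10_000, seed=1):
    """Simulate one harmful post as a self-exciting branching cascade.

    Each post draws a Poisson number of direct offspring (mean =
    branching_ratio); offspring appear after exponentially distributed
    delays (decay_rate per hour) and can themselves spawn offspring.
    A branching ratio below 1 means the cascade eventually dies out.
    Returns the arrival times (in hours) of all offspring posts.
    """
    rng = random.Random(seed)
    frontier = [0.0]              # the original harmful post, at time zero
    offspring_times = []
    while frontier:
        if len(offspring_times) >= max_posts:
            break                 # safety cap for runaway cascades
        parent_time = frontier.pop()
        for _ in range(poisson(rng, branching_ratio)):
            child_time = parent_time + rng.expovariate(decay_rate)
            offspring_times.append(child_time)
            frontier.append(child_time)   # offspring can generate offspring
    return sorted(offspring_times)

times = simulate_cascade()
print(f"{len(times)} harmful offspring posts in this cascade")
</code></pre>
<p>Pushing the branching ratio above 1 in this toy sketch reproduces the unchecked exponential growth described above.</p>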
<figure class="align-center ">
<img alt="A constellation of lights in a dark room, with a group of people silhouetted against the light." src="https://images.theconversation.com/files/542139/original/file-20230810-29-i09amm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/542139/original/file-20230810-29-i09amm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/542139/original/file-20230810-29-i09amm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/542139/original/file-20230810-29-i09amm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/542139/original/file-20230810-29-i09amm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/542139/original/file-20230810-29-i09amm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/542139/original/file-20230810-29-i09amm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Social media posts spread online through a process much like word of mouth.</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/HOrhCnQsxnQ">Robynne Hu / Unsplash</a></span>
</figcaption>
</figure>
<h2>The potential for harm reduction</h2>
<p>In our study, we used two key measures to assess the effectiveness of the kind of moderation set out in the Digital Services Act: potential harm and content half-life.</p>
<p>A post’s <em>potential harm</em> represents the number of harmful offspring it generates. <em>Content half-life</em> denotes the amount of time required for half of all the post’s offspring to be generated. </p>
<p>We found moderation by the rules of the Digital Services Act can effectively reduce harm, even on platforms with short content half-lives, such as X (formerly known as Twitter). While faster moderation is always more effective, we found that moderating even after 24 hours could still reduce the number of harmful offspring by up to 50%.</p>
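<p>As a toy illustration of these two measures (with invented numbers, not data from the study), the snippet below computes a post’s potential harm and content half-life from a hypothetical list of offspring arrival times, along with the share of those offspring that a moderator acting at the 24-hour deadline would have prevented.</p>
<pre><code>
# Toy example: offspring arrival times (in hours) for one harmful post.
# These numbers are invented for illustration only.
offspring_hours = sorted([0.5, 1.2, 2.0, 3.5, 5.0, 9.0, 14.0, 20.0, 30.0, 48.0])

potential_harm = len(offspring_hours)                  # number of harmful offspring
half_life = offspring_hours[potential_harm // 2 - 1]   # time by which half had appeared

moderation_delay = 24.0                                # hours: the Digital Services Act deadline
prevented = sum(1 for t in offspring_hours if t > moderation_delay)

print(f"potential harm: {potential_harm} offspring")
print(f"content half-life: about {half_life} hours")
print(f"removing the post after {moderation_delay:.0f} hours prevents "
      f"{prevented / potential_harm:.0%} of these offspring, "
      "plus every post those offspring would have triggered in turn")
</code></pre>
<p>In the full model, removing a post also prevents the descendants of its later offspring, which is why moderating even after a whole day can still make a substantial dent.</p>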
<h2>The role of reaction time and harm reduction</h2>
<p>The reaction time required for effective content moderation increases with both the content half-life and potential harm. To put it another way, for content that is longer-lived and generates large numbers of harmful offspring, intervening later can still prevent many harmful subsequent posts.</p>
<p>This suggests the approach of the Digital Services Act can effectively combat harmful content, even on fast-paced platforms like X. </p>
<p>We also found the amount of harm reduction <em>increases</em> for content with greater potential harm. While apparently counterintuitive, this indicates moderation is effective when it curbs the generation of offspring of offspring – that is, when it breaks the word-of-mouth cycle.</p>
<h2>Making the most of moderation efforts</h2>
<p>Prior research has shown tools based on artificial intelligence struggle to detect harmful online content. The authors of such content are aware of the detection tools and adapt their language to avoid detection.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/can-ideology-detecting-algorithms-catch-online-extremism-before-it-takes-hold-200629">Can ideology-detecting algorithms catch online extremism before it takes hold?</a>
</strong>
</em>
</p>
<hr>
<p>The Digital Services Act moderation approach relies on manual tagging of posts by “trusted flaggers”, who will have limited time and resources. </p>
<p>To make the most of their efforts, flaggers should focus on content with high potential harm – the content for which our research shows moderation is most effective. We estimate the potential harm of a post at its creation by extrapolating its expected number of offspring from previously observed discussions.</p>
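<p>In practice, that kind of triage might resemble the hypothetical sketch below, which ranks newly flagged posts by a crude estimate of their expected offspring. The features and coefficients here are invented for illustration; they stand in for whatever signals from previously observed discussions a platform or flagger actually has.</p>
<pre><code>
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    early_reshares: int       # reshares observed in the first hour
    author_followers: int

# Invented historical averages, standing in for estimates learned
# from previously observed discussions.
OFFSPRING_PER_EARLY_RESHARE = 3.2
OFFSPRING_PER_10K_FOLLOWERS = 0.8

def estimated_potential_harm(post):
    """Crude extrapolation of a post's expected number of harmful offspring."""
    return (post.early_reshares * OFFSPRING_PER_EARLY_RESHARE
            + post.author_followers / 10_000 * OFFSPRING_PER_10K_FOLLOWERS)

def triage(posts, capacity):
    """Return the posts a flagger should review first, highest estimated harm first."""
    return sorted(posts, key=estimated_potential_harm, reverse=True)[:capacity]

queue = [
    Post("a", early_reshares=40, author_followers=250_000),
    Post("b", early_reshares=3, author_followers=1_200),
    Post("c", early_reshares=15, author_followers=90_000),
]
for post in triage(queue, capacity=2):
    print(post.post_id, round(estimated_potential_harm(post), 1))
</code></pre>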
<h2>Implementing the Digital Services Act</h2>
<p>Social media platforms already employ content moderation teams, and our research suggests the major platforms at least already have enough staff to enforce the Digital Services Act. There are, however, questions about the cultural awareness of existing staff, as some of these teams are based in different countries from the majority of the users whose content they moderate.</p>
<p>The success of the legislation will lie in appointing trusted flaggers with sufficient cultural and language knowledge, developing practical reporting tools for harmful content, and ensuring timely moderation. </p>
<p>Our study’s framework will provide policymakers with valuable guidance in drafting mechanisms for content moderation that prioritise efforts and reaction times effectively.</p>
<h2>A healthier and safer digital public square</h2>
<p>As social media platforms continue to shape public discourse, addressing the challenges posed by harmful content is crucial. Our research on the effectiveness of moderating harmful online content offers valuable insights for policymakers. </p>
<p>By understanding the dynamics of content spread, optimising moderation efforts, and implementing regulations like the Digital Services Act, we can strive for a healthier and safer digital public square where harmful content is mitigated, and constructive dialogue thrives.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-digital-town-square-what-does-it-mean-when-billionaires-own-the-online-spaces-where-we-gather-182047">The 'digital town square'? What does it mean when billionaires own the online spaces where we gather?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/209882/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Marian-Andrei Rizoiu receives funding from Australian Department of Home Affairs, the Defence Science and Technology Group and the Defence Innovation Network</span></em></p><p class="fine-print"><em><span>Philipp Schneider does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>New EU rules require social media platforms to take down flagged posts within 24 hours – and modelling shows that’s fast enough to have a dramatic effect on the spread of harmful content.Marian-Andrei Rizoiu, Senior Lecturer in Behavioral Data Science, University of Technology SydneyPhilipp Schneider, Doctoral Student, EPFL – École Polytechnique Fédérale de Lausanne – Swiss Federal Institute of Technology in LausanneLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1993242023-02-08T10:58:09Z2023-02-08T10:58:09ZHow tech companies are failing women workers and social media users – and what to do about it<figure><img src="https://images.theconversation.com/files/508328/original/file-20230206-25-p6o6gc.jpg?ixlib=rb-1.1.0&rect=0%2C386%2C2928%2C1302&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Lerbank-bbk22/Shutterstock</span></span></figcaption></figure><p>From Elon Musk’s <a href="https://theconversation.com/why-elon-musks-first-week-as-twitter-owner-has-users-flocking-elsewhere-193857">erratic start</a> as Twitter’s new owner to <a href="https://www.nytimes.com/2022/11/09/technology/meta-layoffs-facebook.html">Meta’s recent decision</a> to layoff more than 11,000 employees, and an ongoing <a href="https://www.ft.com/content/28f7e49f-09b3-407f-82f8-56683f5d0663">downturn for tech stocks</a>, the social media sector is once again in turmoil. </p>
<p>But while these latest shockwaves have attracted a great deal of public attention, we talk considerably less of their repercussions on women. Big tech companies are failing women on both sides of the screen: their employees and the users of their services. This is why recent moves to <a href="https://www.theguardian.com/technology/2023/feb/04/online-safety-bill-needs-tougher-rules-on-misogyny-say-peers">regulate social media firms</a> should include specific protections for women.</p>
<p>Online abuse, as has been <a href="https://arxiv.org/abs/1902.03093">repeatedly confirmed by academic research</a> and <a href="https://www.amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-1-1/">civil rights groups</a>, often targets women users. One of Musk’s first acts after buying Twitter was to introduce verification to reduce the number of fake accounts. Such accounts are <a href="https://www.compassioninpolitics.com/three_quarters_of_those_experiencing_online_abuse_say_it_comes_from_anonymous_accounts">often cited</a> among the main causes of social media violence. But the <a href="https://www.cleanuptheinternet.org.uk/post/what-do-elon-musk-s-blue-tick-experiments-mean-for-the-uk-s-online-safety-bill">authentication process</a> (since withdrawn after protests from the Twitter community) simply relied on “certified” profiles paying a monthly fee. </p>
<p>As such, the move seemed more like a way to raise revenues than an effective online safety strategy. To make things worse, and more or less simultaneously, Musk also controversially <a href="https://www.euronews.com/next/2023/01/18/which-controversial-figures-has-elon-musk-reinstated-on-twitter">restored the accounts</a> of several high-profile figures previously banned for misogynistic discourse. This included self-defined “sexist” influencer <a href="https://www.theguardian.com/technology/2022/aug/06/andrew-tate-violent-misogynistic-world-of-tiktok-new-star">Andrew Tate</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/elon-musks-twitter-blue-gives-verification-for-a-fee-this-could-make-twitter-even-less-safe-for-women-193967">Elon Musk's 'Twitter Blue' gives verification for a fee – this could make Twitter even less safe for women</a>
</strong>
</em>
</p>
<hr>
<p>Beyond the tycoon’s chaotic approach to leadership, these decisions indicate wider trends within the social media industry with far-reaching ramifications for women. </p>
<p>Over the last few years, in fact, platforms such as Twitter, Facebook, YouTube and TikTok have all responded to mounting public pressure by adopting more stringent guidelines against <a href="https://www.epe.admin.cam.ac.uk/five-things-you-should-know-about-digital-gender-based-violence-dgbv-and-ways-curb-it">gender-based hate speech</a>. These changes, however, have been mostly achieved through <a href="https://mckinneylaw.iu.edu/iiclr/pdf/vol32p97.pdf">self-regulation</a> and voluntary partnerships with the public sector. This approach leaves companies free to reverse previous decisions in the way Musk has.</p>
<p>Besides, censoring individual internet personalities and promoting account verification doesn’t actually address the core causes of social media violence. The actual design of these platforms and the business models these companies employ play a more central role. </p>
<p>Social media platforms want to keep us all online to produce profitable data and maintain audiences for advertisements. They do this with algorithms that create an echo chamber. This means we keep seeing content similar to whatever attracted our clicks in the first place. But research shows this also facilitates the <a href="https://intpolicydigest.org/how-social-media-is-fueling-divisiveness/">circulation of “divisive” messages</a>. It also supports the <a href="https://research-information.bris.ac.uk/en/publications/from-individual-perpetrators-to-global-mobilisation-strategies-th">spread of online sexism</a>, and pushes users that view problematic materials into a “<a href="https://www.theguardian.com/society/2022/oct/30/global-incel-culture-terrorism-misogyny-violent-action-forums">black hole</a>” of related updates.</p>
<p>While the platforms themselves have become problematic for women that use them, many of the companies behind them are also failing the women workers that build and manage online social media networks.</p>
<h2>Tech company redundancies</h2>
<p>Social media companies’ treatment of employees should also be examined through a gender lens, particularly more recently as they <a href="https://www.businessinsider.com/economic-downturn-tech-industry-layoffs-stock-plunge-funding-slowdown-2022-6?r=US&IR=T">react to a market downturn</a> with mass layoffs and other cost-cutting strategies.</p>
<p>A particularly at-risk category (which I have examined, among others, in my <a href="https://septemberpublishing.org/product/the-threat-why-digital-capitalism-is-sexist-and-how-to-resist/">recently published book</a>) is that of <a href="https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona">social media moderators</a>. These employees are tasked with ridding platforms of content that violates community standards. They are constantly exposed to misogynistic hate speech, images of sexual violence and non-consensual pornography. <a href="https://www.washingtonpost.com/technology/2020/05/12/facebook-content-moderator-ptsd/">Female staff</a> tend to feel especially triggered and many <a href="https://www.theverge.com/2020/5/12/21255870/facebook-content-moderator-settlement-scola-ptsd-mental-health">develop mental health issues</a>, including depression, anxiety and post-traumatic stress disorder as a result.</p>
<p>Social media firms and their international subcontractors (to which a large part of moderation operations is outsourced) make other choices that also infringe on employees’ rights, particularly those of female moderators. One of the latest has been <a href="https://www.theguardian.com/business/2021/mar/26/teleperformance-call-centre-staff-monitored-via-webcam-home-working-infractions">placing AI-powered cameras</a> in the homes of moderators who work remotely. This is a particularly brutal intrusion for women since they already often face harassment or safety issues in more public spaces.</p>
<p>Online abuse and workers’ treatment concern people of all genders. Women, however, pay a unique price for social media violence. Recent <a href="https://onlineviolencewomen.eiu.com/">research from The Economist</a> shows fear of new aggressions pushed nine out of ten female victims surveyed to alter their digital habits – 7% even quit their jobs.</p>
<figure class="align-center ">
<img alt="Woman looks at laptop; home in background; remote working." src="https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Moderators delete posts that violate community standards on social media and so are regularly exposed to disturbing content.</span>
<span class="attribution"><span class="source">fizkes/Shutterstock</span></span>
</figcaption>
</figure>
<h2>Specific solutions to online hate</h2>
<p>Just as women workers and users encounter specific issues as a result of social media policies – or lack thereof – the interventions designed to improve their safety and wellbeing should also be specific.</p>
<p>My book looks at how digital capitalists – including but not limited to social media corporations – fail female users and workers, and <a href="https://gen-pol.org/2019/11/when-technology-meets-misogyny-multi-level-intersectional-solutions-to-digital-gender-based-violence/">how to remedy this</a>. Among the reforms I suggest are interventions to make platforms more accountable. </p>
<p>The <a href="https://bills.parliament.uk/bills/3137">UK Online Safety Bill</a> is set to give regulators the power to fine or prosecute companies that neglect to remove harmful materials, for example. It is important, though, that policy change in this area specifically identifies women as a protected category, which this bill <a href="https://demos.co.uk/blog/the-online-safety-bill-will-it-protect-women-online/">currently fails to do</a>. Transparency commitments for platforms’ algorithms and regulations around data-mining business models could also help but are so far not yet – or not fully – integrated into most national and international legislation.</p>
<p>And since workers must be protected as much as technology users, it is vital that <a href="https://www.wired.co.uk/article/facebook-content-moderators-ireland">they can organise via trade unions</a>, and that there is a push to ensure employers respect their duty of care towards the workforce. This might involve prohibiting invasive workplace surveillance, for example.</p>
<p>There is one solution to both issues: it is time for social media giants to implement specific strategies to safeguard women on both sides of the screen.</p><img src="https://counter.theconversation.com/content/199324/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Lilia Giugni is affiliated with GenPol - Gender & Policy Insights, a UK-based feminist think tank, and with the Royal Society of Arts.</span></em></p>Women need better protection from online hate and misogyny, both while using social media and when working for technology companies.Lilia Giugni, Research Associate, University of BristolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1951712023-01-04T13:28:55Z2023-01-04T13:28:55ZBeyond Section 230: A pair of social media experts describes how to bring transparency and accountability to the industry<figure><img src="https://images.theconversation.com/files/502488/original/file-20221221-22-xxcdb2.jpg?ixlib=rb-1.1.0&rect=0%2C4%2C3100%2C2023&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Social media regulation – and the future of Section 230 – are top of mind for many in Congress.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/in-this-photo-illustration-facebook-ceo-mark-zuckerberg-news-photo/1229337652">Pavlo Conchar/SOPA Images/LightRocket via Getty Images</a></span></figcaption></figure><p>One of Elon Musk’s stated reasons for purchasing Twitter was to use the social media platform to <a href="https://www.washingtonpost.com/technology/2022/04/14/elon-musk-twitter/">defend the right to free speech</a>. The ability to defend that right, or to abuse it, lies in a specific piece of legislation passed in 1996, at the pre-dawn of the modern age of social media. </p>
<p>The legislation, Section 230 of the <a href="https://www.mtsu.edu/first-amendment/article/1070/communications-decency-act-of-1996">Communications Decency Act</a>, gives social media platforms some truly astounding protections under American law. Section 230 has also been called <a href="https://theconversation.com/what-is-section-230-an-expert-on-internet-law-and-regulation-explains-the-legislation-that-paved-the-way-for-facebook-google-and-twitter-164993">the most important 26 words in tech</a>: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”</p>
<p>But the more that platforms like Twitter <a href="https://theconversation.com/twitter-in-2022-5-essential-reads-about-the-consequences-of-elon-musks-takeover-of-the-microblogging-platform-196550">test the limits of their protection</a>, the more American politicians on both sides of the aisle <a href="https://theconversation.com/trump-cant-beat-facebook-twitter-and-youtube-in-court-but-the-fight-might-be-worth-more-than-a-win-164146">have been motivated to modify or repeal Section 230</a>. As a <a href="https://scholar.google.com/citations?user=_TUaYW4AAAAJ&hl=en&oi=ao">social media professor</a> and a <a href="https://seaver.pepperdine.edu/academics/faculty/jon-pfeiffer/">social media lawyer</a> with a long history in this field, we think change in Section 230 is coming – and we believe that it is long overdue.</p>
<h2>Born of porn</h2>
<p>Section 230 had its origins in the <a href="https://theconversation.com/what-is-section-230-an-expert-on-internet-law-and-regulation-explains-the-legislation-that-paved-the-way-for-facebook-google-and-twitter-164993">attempt to regulate online porn</a>. One way to think of it is as a kind of “restaurant graffiti” law. If someone draws offensive graffiti, or exposes someone else’s private information and secret life, in the bathroom stall of a restaurant, the restaurant owner can’t be held responsible for it. There are no consequences for the owner. Roughly speaking, Section 230 extends the same lack of responsibility to the Yelps and YouTubes of the world. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/FHTc6s5YTbU?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Section 230 explained.</span></figcaption>
</figure>
<p>But in a world where social media platforms stand to monetize and profit from the graffiti on their digital walls – which contains not just porn but also misinformation and hate speech – <a href="https://digiday.com/media/meet-the-absolutist-with-the-section-230-tattoo-on-googles-new-misinformation-policy-team/">the absolutist stance</a> that they have total protection and total legal “immunity” is untenable. </p>
<p><a href="https://www.eff.org/deeplinks/2020/12/section-230-good-actually">A lot of good has come from Section 230</a>. But the history of social media also makes it clear that it is far from perfect at balancing corporate profit with civic responsibility. </p>
<p>We were curious about how current thinking in legal circles and digital research could give a clearer picture about how Section 230 might realistically be modified or replaced, and what the consequences might be. We envision three possible scenarios to amend Section 230, which we call verification triggers, transparent liability caps and Twitter court.</p>
<h2>Verification triggers</h2>
<p>We support free speech, and we believe that everyone should have a right to share information. When people who oppose vaccines share their concerns about the rapid development of RNA-based COVID-19 vaccines, for example, <a href="https://www.bbc.com/future/article/20210720-the-complexities-of-vaccine-hesitancy">they open up a space for meaningful conversation and dialogue</a>. They have a right to share such concerns, and others have a right to counter them.</p>
<p>What we call a “verification trigger” should kick in when the platform begins to monetize content related to misinformation. Most platforms <a href="https://www.washingtonpost.com/business/twitter-musk-and-why-online-speech-gets-moderated/2022/10/03/0cb0ae68-434f-11ed-be17-89cbe6b8c0a5_story.html">try to detect misinformation</a>, and many label, moderate or remove some of it. But many monetize it as well through <a href="https://theconversation.com/if-big-tech-has-the-will-here-are-ways-research-shows-self-regulation-can-work-154248">algorithms that promote popular – and often extreme or controversial – content</a>. When a company monetizes content with misinformation, false claims, extremism or hate speech, it is not like the innocent owner of the bathroom wall. It is more like an artist who photographs the graffiti and then sells it at an art show. </p>
<p>Twitter began <a href="https://www.cnn.com/2022/12/12/tech/twitter-verification-relaunch/index.html">selling verification check marks</a> for user accounts in November 2022. By verifying a user account is a real person or company and charging for it, Twitter is both <a href="https://www.reuters.com/legal/transactional/would-twitter-get-online-publisher-immunity-fake-blue-check-suits-2022-11-14/">vouching for it and monetizing that connection</a>. Reaching a certain dollar value from questionable content should trigger the ability to sue Twitter, or any platform, in court. Once a platform begins earning money from users and content, including verification, it steps outside the bounds of Section 230 and into the bright light of responsibility – and into the world of tort, defamation and privacy rights laws.</p>
<h2>Transparent caps</h2>
<p>Social media platforms currently make their own rules about hate speech and misinformation. They also keep secret a lot of information about how much money the platform makes off of content, like a given tweet. This makes what isn’t allowed and what is valued opaque.</p>
<p>One sensible change to Section 230 would be to expand its 26 words to clearly spell out what is expected of social media platforms. The added language would specify what constitutes misinformation, how social media platforms need to act, and the limits on how they can profit from it. We acknowledge that <a href="https://carnegieendowment.org/2022/11/10/problem-with-defining-disinformation-pub-88385">this definition isn’t easy</a>, that it’s dynamic, and that <a href="https://doi.org/10.1038/s41598-021-01487-w">researchers and companies are already struggling with it</a>. </p>
<p>But government can raise the bar by setting some coherent standards. If a company can show that it’s met those standards, the amount of liability it has could be limited. It wouldn’t have complete protection as it does now. But it would have a lot more transparency and public responsibility. We call this a “transparent liability cap.”</p>
<h2>Twitter court</h2>
<p>Our final proposed amendment to Section 230 already exists in a rudimentary form. Like Facebook and other social platforms, Twitter has content moderation panels that determine standards for users on the platform, and thus standards for the public that shares and is exposed to content through the platform. You can think of this as “Twitter court.”</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/uF0v_v4G-Vk?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Effective content moderation involves the difficult balance of restricting harmful content while preserving free speech.</span></figcaption>
</figure>
<p>Though Twitter’s content moderation <a href="https://www.wired.com/story/twitters-moderation-system-is-in-tatters/">appears to be suffering</a> from changes and staff reductions at the company, we believe that panels are a good idea. But keeping panels hidden behind the closed doors of profit-making companies is not. If companies like <a href="https://www.barrons.com/articles/elon-musk-twitter-takeover-transparency-51668465141">Twitter want to be more transparent</a>, we believe that should also extend to their own inner operations and deliberations. </p>
<p>We envision extending the jurisdiction of “Twitter court” to neutral arbitrators who would adjudicate claims involving individuals, public officials, private companies and the platform. Rather than going to actual court for cases of defamation or privacy violation, Twitter court would suffice under many conditions. Again, this is a way to pull back some of Section 230’s absolutist protections without removing them entirely.</p>
<h2>How would it work – and would it work?</h2>
<p>Since 2018, platforms have had <a href="https://www.theverge.com/2021/6/24/22546984/fosta-sesta-section-230-carveout-gao-report-prosecutions">limited Section 230 protection in cases of sex trafficking</a>. A recent academic proposal suggests <a href="https://www.hks.harvard.edu/sites/default/files/centers/mrcbg/files/FWP_2021-02.pdf">extending these limitations</a> to incitement to violence, hate speech and disinformation. House Republicans have also suggested a <a href="https://republicans-energycommerce.house.gov/news/press-release/ec-republicans-announce-next-phase-of-their-effort-to-hold-big-tech-accountable/">number of Section 230 carve-outs</a>, including those for content relating to terrorism, child exploitation or cyberbullying.</p>
<p>Our three ideas of verification triggers, transparent liability caps and Twitter court may be an easy place to start the reform. They could be implemented individually, but they would have even greater authority if they were implemented together. The increased clarity of verification triggers and transparent liability caps would help set meaningful standards balancing public benefit with corporate responsibility in a way that <a href="https://theconversation.com/if-big-tech-has-the-will-here-are-ways-research-shows-self-regulation-can-work-154248">self-regulation</a> has not been able to achieve. Twitter court would provide a real option for people to arbitrate rather than to simply watch misinformation and hate speech bloom and platforms profit from it. </p>
<p>Adding a few meaningful options and amendments to Section 230 will be difficult because defining hate speech and misinformation in context, and setting limits and measures for monetization of content, will not be easy. But we believe these definitions and measures are achievable and worthwhile. Once enacted, these strategies promise to make online discourse stronger and platforms fairer.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A key piece of federal law, Section 230, has been credited with fostering the internet and allowing misinformation and hate speech to flourish. Here’s how it could be reformed.Robert Kozinets, Professor of Journalism, USC Annenberg School for Communication and JournalismJon Pfeiffer, Adjunct Professor of Law, Pepperdine UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1943292022-11-11T13:12:09Z2022-11-11T13:12:09ZWhat is Mastodon? A computational social scientist explains how the ‘federated’ network works and why it won’t be a new Twitter<figure><img src="https://images.theconversation.com/files/494737/original/file-20221110-21-34eit8.jpg?ixlib=rb-1.1.0&rect=0%2C7%2C5000%2C3323&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Twitter users who are fleeing to the social media platform Mastodon are finding it to be a different animal.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/in-this-photo-illustration-mastodon-homepage-is-seen-news-photo/1244585198">Davide Bonaldo/SOPA Images/LightRocket via Getty Images</a></span></figcaption></figure><p>In the wake of Elon Musk’s noisy takeover of Twitter, people have been looking for alternatives to the <a href="https://abcnews.go.com/Business/hate-speech-increased-twitter-elon-musk-takeover-study/story?id=92445797">increasingly toxic</a> microblogging social media platform. Many of those fleeing or hedging their bets have turned to <a href="https://joinmastodon.org/">Mastodon</a>, which has attracted <a href="https://www.bloomberg.com/news/articles/2022-11-07/mastodon-struggles-to-keep-up-with-surge-of-new-users-fleeing-twitter">hundreds of thousands of new users</a> since Twitter’s acquisition.</p>
<p>Like Twitter, Mastodon allows users to post, follow people and organizations, and like and repost others’ posts.</p>
<p>But while Mastodon supports many of the same social networking features as Twitter, it is not a single platform. Instead, it’s a federation of independently operated, interconnected <a href="https://instances.social/">servers</a>. Mastodon servers are based on <a href="https://opensource.com/resources/what-open-source">open-source software</a> developed by German nonprofit <a href="https://joinmastodon.org/de/about">Mastodon gGmbH</a>. The interconnected Mastodon servers, along with other servers that can “talk” to Mastodon servers, are collectively dubbed the “<a href="https://www.fediverse.to/">fediverse</a>.”</p>
<h2>Mastodon U.</h2>
<p>A key aspect of the fediverse is that each server is governed by rules set by the people who operate it. If you think of the fediverse as a university, each Mastodon server is like a dorm.</p>
<p>Which dorm you’re initially assigned to can be somewhat random but still profoundly shapes the kind of conversations you overhear and the relationships you form. You can still interact with people who live in other dorms, but the leaders and rules in your dorm shape what you can do. </p>
<p>If you’re particularly unhappy with your dorm, you can move to a new housing situation – another dorm, a sorority, an apartment – that is a better fit, and you bring your relationships with you. But you are then subject to the rules of the new place where you live. There are hundreds of Mastodon servers, called instances, where you can set up your account, and these instances have different rules and norms for who can join and what content is permitted. </p>
<p>In contrast, social media platforms like Twitter and Facebook put everyone in a single, gigantic dorm. As millions or billions of people joined, the companies running these platforms added more floors and bedrooms. Everyone could communicate with each other and theoretically join each other’s conversations within the dorm, but everyone also has to live under the same rules. </p>
<p>If you didn’t like or didn’t follow the rules, you had to leave the megadorm, but you were not able to bring your relationships with you to your new housing – a different social media platform – or talk to people who stayed in your original megadorm. These platforms tapped into the resulting fear of missing out to lock people into a <a href="https://apnews.com/article/media-data-privacy-social-media-6a66d543701f0c792eaf8907e34a7a52">highly surveilled</a> dorm where their otherwise private behavior was mined <a href="https://consumerfed.org/consumer_info/factsheet-surveillance-advertising-what-is-it/">to sell ads</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/494738/original/file-20221110-19-xykzp.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Screenshot of a microblogging app" src="https://images.theconversation.com/files/494738/original/file-20221110-19-xykzp.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/494738/original/file-20221110-19-xykzp.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=474&fit=crop&dpr=1 600w, https://images.theconversation.com/files/494738/original/file-20221110-19-xykzp.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=474&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/494738/original/file-20221110-19-xykzp.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=474&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/494738/original/file-20221110-19-xykzp.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=596&fit=crop&dpr=1 754w, https://images.theconversation.com/files/494738/original/file-20221110-19-xykzp.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=596&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/494738/original/file-20221110-19-xykzp.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=596&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Mastodon supports all the familiar social media functions: posting, liking, reposting and following.</span>
<span class="attribution"><a class="source" href="https://upload.wikimedia.org/wikipedia/commons/1/13/Mastodon_Single-column-layout.png">Eugen Rochko via Wikimedia Commons</a></span>
</figcaption>
</figure>
<h2>Incentives for good behavior</h2>
<p>The big social media companies sell ads to pay for two primary services: the technical infrastructure of hardware and software that lets users access the platform, and the social infrastructure of usability, policy and content moderation that keeps the platform in line with users’ expectations and rules. </p>
<p>In the Mastodon collection of servers, if you don’t like what someone is doing, you can cut ties and move to another server but keep the relationships you already made. This removes the fear of missing out that could otherwise lock users into a server with other people’s bad behavior. </p>
<p>There are a few factors that should put Mastodon servers under strong pressure to actively and responsibly moderate the behavior of their members. First, most servers don’t want other servers cutting ties entirely, so there is strong reputational pressure to police members’ behavior and not tolerate trolls and harassers. </p>
<p>Second, people can migrate between servers relatively easily, so the server administrators can compete to provide the best moderation experience that attracts and keeps people around. </p>
<p>Third, the technical and financial costs of creating a new server are much greater than the costs of moderating a server. This should limit the number of new servers cropping up to evade bans, which would avoid the endless “whack-a-mole” challenge of new spam and troll accounts that the big social media platforms have to deal with.</p>
<h2>Not all milk and honey</h2>
<p>The federated server model on Mastodon also has potential drawbacks. First, finding a server to join on Mastodon can be hard, especially when a flood of people trying to find servers leads to the creation of waitlists, and the rules and values of the people running a server aren’t always easy to find. </p>
<p>Second, there are significant financial and technical challenges with maintaining servers that grow with the number of members and their activity. After the honeymoon is over, Mastodon users should be prepared for membership fees, NPR-style fundraising campaigns or podcast-style promotional ads to cover server hosting costs that can go into the hundreds of dollars per month per server.</p>
<p>Third, despite calls for newspapers, universities and governments to host their own servers, there are complicated legal and professional questions that could severely limit public institutions’ abilities to moderate their “dorms” effectively. Professional societies with their own methods of verification and established codes of conduct and ethics may be better equipped to host and moderate Mastodon servers than other types of institutions. </p>
<p>Fourth, the current “nuclear option” of servers entirely cutting ties with other servers leaves little room for repairing relations and reengagement. Once the tie between two servers is severed, it would be difficult to renew it. This situation could drive destabilizing user migrations and reinforce polarizing echo chambers. </p>
<p>Finally, there are tensions between longtime Mastodon users and newcomers around content warnings, hashtags, post visibility, accessibility and tone that are different from what was popular on Twitter.</p>
<p>Still, with Twitter melting down and the long-standing issues with the major social media platforms, the new land of Mastodon and the fediverse doesn’t have to be all milk and honey to appeal to many people.</p>
<p><em><a href="https://newsie.social/@TheConversationUS">The Conversation U.S. is on Mastodon now</a> - please follow us if you’re trying it out.</em></p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/9ceO6N8OX_w?wmode=transparent&start=188" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Step-by-step instructions for joining Mastodon.</span></figcaption>
</figure><img src="https://counter.theconversation.com/content/194329/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Brian C. Keegan receives funding from the National Science Foundation.</span></em></p>The turmoil at Twitter has many people turning to an alternative, Mastodon. The social media platform does a lot of what Twitter and Facebook do, but there are key differences.Brian C. Keegan, Assistant Professor of Information Science, University of Colorado BoulderLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1938772022-11-07T13:59:10Z2022-11-07T13:59:10ZTwitter and Elon Musk: why free speech absolutism threatens human rights<p>For a man who made a fortune from electric cars, the Twitter takeover has turned into a fairly bumpy ride so far. Soon after <a href="https://www.theguardian.com/technology/2022/oct/27/elon-musk-completes-twitter-takeover">buying the social media company</a> for US$44 billion (£38 billion), Elon Musk said he had “<a href="https://www.bbc.co.uk/news/business-63524219">no choice</a>” about laying off a large proportion of the company’s staff. </p>
<p>He has already faced a backlash over his move to charge Twitter users a monthly fee for their “blue tick” verified status. And those users should also be concerned about plans from the self-proclaimed “<a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">free speech absolutist</a>” to <a href="https://time.com/6227031/twitter-misinformation-midterm-elections/">reduce content moderation</a>. </p>
<p>Moderation, the screening and blocking of <a href="https://yalebooks.yale.edu/book/9780300261431/custodians-internet/">unacceptable online content</a>, has been in place for as long as the internet has existed. And after becoming an increasingly <a href="https://www.siliconrepublic.com/business/facebook-content-moderation-automated">important and sophisticated feature</a> against a rising tide of hate speech, misinformation and illegal content, it should not be undone lightly.</p>
<p>Anything which weakens filters, allowing more harmful content to reach our screens, could have serious implications for human rights, both online and offline. </p>
<p>For it is not just governments which are responsible for upholding human rights – <a href="https://www.ohchr.org/sites/default/files/Documents/Publications/GuidingPrinciplesBusinessHR_EN.pdf">businesses are too</a>. And when different human rights clash, as they sometimes do, that clash needs to be managed responsibly.</p>
<p>Social media has proved itself to be an extremely powerful way for people around the world to assert their human right to freedom of expression – the freedom to seek, receive and impart all kinds of information and ideas.</p>
<p>But freedom of expression is not without limits. International human rights law prohibits propaganda for war, as well as advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence. It also allows for restrictions necessary to ensure that rights or reputations <a href="https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights">are respected</a>.</p>
<p>So Twitter, in common with other online platforms, has a responsibility to respect freedom of expression. But equally, it has a responsibility not to allow freedom of expression to <a href="https://documents-dds-ny.un.org/doc/UNDOC/GEN/G18/096/72/PDF/G1809672.pdf?OpenElement">override other human rights</a> completely. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1588591603622772736"}"></div></p>
<p>After all, harmful online content is often used to restrict the freedom of expression of others. Sometimes, online threats spill over to the offline world and cause irreparable <a href="https://www.ohchr.org/en/statements/2022/02/statement-irene-khan-special-rapporteur-promotion-and-protection-freedom-opinion">physical and emotional harm</a>. </p>
<p>Any moves to remove content moderation therefore risk breaching corporate human rights obligations. Unlimited freedom of expression for some almost inevitably results in the restriction elsewhere of that exact same freedom. And the harm is unlikely even to stop there.</p>
<p>Musk claims that Twitter will now become a more democratic “town square”. But without content moderation, his privately owned version of a town square could become dysfunctional and <a href="https://www.theguardian.com/commentisfree/2022/oct/01/molly-russell-was-trapped-by-the-cruel-algorithms-of-pinterest-and-instagram">dangerous</a>. </p>
<p>Twitter – again, like most other social media platforms – has long been linked to overt expressions of <a href="https://journals.sagepub.com/doi/full/10.1177/0170840621994501">racism and misogyny</a>, with a flood of racist tweets even surfacing after Musk <a href="https://www.washingtonpost.com/technology/2022/10/28/musk-twitter-racist-posts/">closed his deal</a>.</p>
<p>And while Musk reassures us that Twitter will not <a href="https://www.irishtimes.com/business/2022/11/03/twitter-wont-be-a-hellscape-musk-promises-advertisers-theyre-not-so-sure/">become a “hellscape”</a>, it is important to remember that content moderation is not the same as censorship. In fact, moderation may <a href="https://www.techdirt.com/2022/03/30/why-moderating-content-actually-does-more-to-support-the-principles-of-free-speech/">facilitate genuine dialogue</a> by cracking down on the spam and toxic talk which often disrupt communication on social media. </p>
<h2>User friendly?</h2>
<p>Moderation also offers reassurance. Without it, Twitter risks losing users who may leave for alternative platforms considered safer and a better <a href="https://www.forbes.com/sites/jemimamcevoy/2021/06/17/last-years-advertising-boycott-of-facebook-led-to-change-but-not-where-you-think-report-finds/?sh=34321b754522">ideological fit</a>. </p>
<p>Valuable advertisers are also quick to <a href="https://theconversation.com/tech-giants-need-to-take-more-responsibility-for-the-advertising-that-makes-them-billions-107025">move away</a> from online spaces they consider divisive and risky. General Motors was one of the first big brands to announce a <a href="https://www.cnbc.com/2022/10/28/gm-temporarily-suspends-advertising-on-twitter-following-elon-musk-takeover.html">temporary halt</a> on paid advertising on Twitter after Musk took over.</p>
<p>Of course, we do not yet know exactly what Musk’s version of Twitter will eventually look like. But there have <a href="https://www.nytimes.com/2022/10/28/technology/twitter-elon-musk-content-moderation.html">been suggestions</a> that content moderation teams may be disbanded in favour of a “moderation council”. </p>
<p>If it is similar to the “<a href="https://about.meta.com/actions/oversight-board-facts/">oversight board</a>” at Meta (formerly Facebook), content decisions are set to be outsourced to an external party representing diverse views. But if Twitter has less internal control and accountability, harmful content may become a harder beast to tame. </p>
<p>Such abdication of responsibility risks breaching Twitter’s human rights obligations, and having a negative impact both on individuals affected by harmful content, and on the overall approach to human rights adopted by other online platforms.</p>
<p>So as one (extremely) wealthy businessman claims to “free” the blue Twitter bird for the sake of humanity, he also gains commercial control of what has, until now, been conceived as a relatively democratic social space. What he does next will have serious ramifications for our human rights <a href="https://profilebooks.com/work/the-age-of-surveillance-capitalism/">in a digital age</a>. </p>
<p>Content moderation is by no means a panacea and the claim that social media platforms are <a href="https://www.theguardian.com/technology/2020/may/28/zuckerberg-facebook-police-online-speech-trump">“arbiters of the truth”</a> is problematic for many reasons. We must also not forget the emotional and psychological toll on human content moderators who have to view “<a href="https://yalebooks.yale.edu/book/9780300261479/behind-the-screen/">the worst of humanity</a>” to protect our screens. Yet sanitisation of social platforms is not the answer either. The internet is a better place when the most successful platforms engage in human rights-focused screening – for everyone’s benefit.</p>
<p><a href="https://theconversation.com/au/topics/social-media-and-society-125586" target="_blank"><img src="https://images.theconversation.com/files/479539/original/file-20220817-20-g5jxhm.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=144&fit=crop&dpr=1" width="100%"></a></p><img src="https://counter.theconversation.com/content/193877/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sarah Glozer has received funding from the Economic and Social Research Council.</span></em></p><p class="fine-print"><em><span>Emily Jane Godwin receives funding from the Engineering and Physical Sciences Research Council (EPSRC) for her position as a PhD candidate.</span></em></p><p class="fine-print"><em><span>Rita Mota does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Moderating content is a box that still needs to be ticked.Sarah Glozer, Senior Lecturer in Marketing & Society, University of BathEmily Jane Godwin, PhD Candidate in Cyber Security, University of BathRita Mota, Assistant Professor, Department of Society, Politics and Sustainability, ESADELicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1927352022-11-03T05:35:15Z2022-11-03T05:35:15ZWhat is shadowbanning? How do I know if it has happened to me, and what can I do about it?<figure><img src="https://images.theconversation.com/files/493217/original/file-20221103-21-erx0q.jpeg?ixlib=rb-1.1.0&rect=8%2C14%2C1988%2C1982&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Tech platforms use recommender algorithms to control society’s key resource: <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/poi3.287">attention</a>. With these algorithms they can quietly demote or hide certain content instead of just <a href="https://ieeexplore.ieee.org/abstract/document/9488792/">blocking or deleting it</a>. This opaque practice is called “shadowbanning”. </p>
<p>While platforms will often deny they engage in shadowbanning, there’s plenty of evidence it’s well and truly present. And it’s a problematic form of content <a href="https://law.yale.edu/sites/default/files/area/center/isp/documents/reduction_ispessayseries_jul2022.pdf">moderation</a> that desperately needs oversight.</p>
<h2>What is shadowbanning?</h2>
<p>Simply put, shadowbanning is when a platform reduces the visibility of content <a href="https://journals.sagepub.com/doi/full/10.1177/20563051221117552">without alerting the user</a>. The content may still be accessible, but with conditions on how it circulates. </p>
<p>It may no longer appear as a recommendation, in a search result, in a news feed, or in other users’ <a href="https://law.yale.edu/sites/default/files/area/center/isp/documents/reduction_ispessayseries_jul2022.pdf">content queues</a>. One example would be burying a comment underneath <a href="https://yahootechpulse.easychair.org/publications/preprint_download/z4jt">many</a> <a href="https://apo.org.au/node/308132">others</a>.</p>
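<p>One way to picture how such demotion can work is the hypothetical ranking sketch below. Nothing here reflects any platform’s actual code: the post is still served, but its ranking score is multiplied by a small visibility factor, so it sinks in feeds and search results without the author being told.</p>
<pre><code>
def rank_feed(posts, visibility_factor=0.05):
    """Order posts for a user's feed, quietly demoting flagged ones.

    Each post is a dict with an engagement 'score' and a 'demoted' flag
    set by some upstream classifier. Demoted posts are never removed;
    their scores are simply scaled down so they rarely surface.
    """
    def effective_score(post):
        if post.get("demoted"):
            return post["score"] * visibility_factor
        return post["score"]
    return sorted(posts, key=effective_score, reverse=True)

feed = rank_feed([
    {"id": "cat-video", "score": 40, "demoted": False},
    {"id": "borderline-claim", "score": 90, "demoted": True},   # would top the feed otherwise
    {"id": "news-update", "score": 25, "demoted": False},
])
print([post["id"] for post in feed])   # the demoted post sinks to the bottom
</code></pre>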
<p>The term “shadowbanning” first appeared in 2001, when it referred to making posts invisible to everyone except the poster <a href="https://journals.sagepub.com/doi/full/10.1177/01634437221077174">in an online forum</a>. Today’s version of it (where content is demoted through algorithms) is <a href="https://law.yale.edu/sites/default/files/area/center/isp/documents/reduction_ispessayseries_jul2022.pdf">much more nuanced</a>.</p>
<p>Shadowbans are distinct from other moderation approaches in a number of ways. They are:</p>
<ul>
<li>usually algorithmically enforced</li>
<li>informal, in that they are <a href="https://journals.sagepub.com/doi/full/10.1177/01634437221111923">not explicitly communicated</a></li>
<li>ambiguous, since they don’t decisively punish users who violate platform policies.</li>
</ul>
<h2>Which platforms shadowban content?</h2>
<p>Platforms such as <a href="https://www.businessinsider.com/mark-zuckerberg-no-shadow-ban-facebook-but-mistakes-are-made-2022-8#:%7E:text=Rogan%20then%20asked%20Zuckerberg%20to,we're%20talking%20about.%22">Instagram, Facebook</a> and <a href="https://blog.twitter.com/en_us/topics/company/2018/Setting-the-record-straight-on-shadow-banning">Twitter</a> generally deny performing shadowbans, but typically do so by referring to the original <a href="https://law.yale.edu/sites/default/files/area/center/isp/documents/reduction_ispessayseries_jul2022.pdf">2001 understanding of it</a>.</p>
<p>When shadowbanning has been reported, platforms have explained it away by citing technical glitches, users’ failure to create engaging content, or mere chance <a href="https://www.tandfonline.com/doi/abs/10.1080/1369118X.2021.1994624">through black-box algorithms</a>.</p>
<p>That said, most platforms will admit to <a href="https://ieeexplore.ieee.org/abstract/document/9488792/">visibility reduction</a> or “demotion” of content. And that’s still shadowbanning as the term is now used.</p>
<p>In 2018, Facebook and Instagram became the first major <a href="https://www.facebook.com/notes/751449002072082/">platforms to admit</a> they algorithmically reduced user engagement with “<a href="https://journals.sagepub.com/doi/full/10.1177/20563051221117552">borderline” content</a> – which in Meta CEO Mark Zuckerberg’s words included “sensationalist and provocative content”.</p>
<p>YouTube, Twitter, LinkedIn and <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/poi3.287">TikTok</a> have since announced similar strategies to deal with <a href="https://journals.sagepub.com/doi/full/10.1177/01634437221111923">sensitive content</a>. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1037399156460081152"}"></div></p>
<p><a href="https://cdt.org/insights/shedding-light-on-shadowbanning/">In one survey</a> of 1,006 social media users, 9.2% reported they had been shadowbanned. Of these 8.1% were on Facebook, 4.1% on Twitter, 3.8% on Instagram, 3.2% on TikTok, 1.3% on Discord, 1% on Tumblr and less than 1% on YouTube, Twitch, Reddit, NextDoor, Pinterest, Snapchat and LinkedIn.</p>
<p>Further evidence for shadowbanning comes from <a href="https://files.osf.io/v1/resources/xcz2t/providers/osfstorage/628662af52d1723f1080bc21?action=download&direct&version=1">surveys</a>, <a href="https://journals.sagepub.com/doi/full/10.1177/01634437221111923">interviews</a>, internal <a href="https://www.technologyreview.com/2019/11/25/102440/tiktok-content-moderation-politics-protest-netzpolitik/">whistle-blowers</a>, information <a href="https://journals.sagepub.com/doi/full/10.1177/20563051221117552">leaks</a>, <a href="https://www.vice.com/en/article/a3q744/where-did-shadow-banning-come-from-trump-republicans-shadowbanned">investigative</a> <a href="https://www.vice.com/en/article/v7gq4x/how-shadowbanning-went-from-a-conspiracy-theory-to-a-selling-point-v27n3">journalism</a> and empirical <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4087843">analyses</a> by <a href="https://ieeexplore.ieee.org/abstract/document/9488792/">researchers</a>.</p>
<h2>Why do platforms shadowban?</h2>
<p>Experts think shadowbanning by platforms likely increased in response to criticism of big tech’s <a href="https://journals.sagepub.com/doi/full/10.1177/20563051221117552">inadequate handling of misinformation</a>. Over time moderation has become an increasingly politicised issue, and shadowbanning offers an easy way out.</p>
<p>The goal is to mitigate content that’s “lawful but awful”. This content trades under different names across platforms, whether <a href="https://www.tandfonline.com/doi/abs/10.1080/1369118X.2021.1994624">it’s dubbed</a> “borderline”, “sensitive”, “harmful”, “undesirable” or “objectionable”.</p>
<p>Through shadowbanning, platforms can dodge accountability and avoid outcries over “censorship”. At the same time, they still benefit financially from shadowbanned content that’s perpetually <a href="https://journals.sagepub.com/doi/full/10.1177/20563051221117552">sought out</a>.</p>
<h2>Who gets shadowbanned?</h2>
<p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4087843">Recent</a> <a href="https://ieeexplore.ieee.org/abstract/document/9488792/">studies</a> have found between 3% and 6.2% of sampled Twitter accounts had been shadowbanned at least once.</p>
<p>The research identified specific characteristics that increased the likelihood of posts or accounts being shadowbanned: </p>
<ul>
<li>new accounts (less than two weeks old) with fewer followers (below 200)</li>
<li>uncivil language being used, such as negative or offensive terms</li>
<li>pictures being posted without text</li>
<li>accounts displaying bot-like behaviour. </li>
</ul>
<p>On Twitter, having a verified account (a blue checkmark) reduced the <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4087843">chances of being</a> <a href="https://www.washingtonpost.com/technology/2022/10/28/tiktok-suppression/">shadowbanned</a>.</p>
<p>Of particular concern is evidence that shadowbanning disproportionately targets people in marginalised groups. In 2020 TikTok had to apologise for marginalising the black community through its “Black Lives Matter” <a href="https://newsroom.tiktok.com/en-us/a-message-to-our-black-community">filter</a>. In 2021, TikTok users reported that using the word “Black” in their bio page would lead to their content being flagged as “<a href="https://www.nbcnews.com/news/us-news/tiktok-algorithm-prevents-user-declaring-support-black-lives-matter-n1273413">inappropriate</a>”. And in February 2022, keywords related <a href="https://www.dw.com/en/tiktok-censoring-lgbtq-nazi-terms-in-germany-report/a-61237610">to the LGBTQ+ movement</a> were found to be shadowbanned. </p>
<p>Overall, Black, LGBTQ+ and Republican users report more frequent and harsher content moderation across Facebook, Twitter, Instagram and <a href="https://files.osf.io/v1/resources/xcz2t/providers/osfstorage/628662af52d1723f1080bc21?action=download&direct&version=1">TikTok</a>.</p>
<h2>How can you know if you’ve been shadowbanned?</h2>
<p>Detecting shadowbanning is difficult. However, there are some ways you can try to figure out if it has happened to you:</p>
<ul>
<li><p>rank the performance of the content in question against your “normal” <a href="https://files.osf.io/v1/resources/xcz2t/providers/osfstorage/628662af52d1723f1080bc21?action=download&direct&version=1">engagement</a> <a href="https://law.yale.edu/sites/default/files/area/center/isp/documents/reduction_ispessayseries_jul2022.pdf">levels</a> – if a certain post has greatly under-performed for no obvious reason, it may have been shadowbanned (a rough illustration of this check follows the list)</p></li>
<li><p>ask others to use their accounts to search for your content – but keep in mind if they’re a “friend” or “follower” they may still be able to see your shadowbanned content, whereas other users may not </p></li>
<li><p>benchmark your content’s reach against content from others who have comparable engagement – for instance, a black content creator can compare their TikTok views to those of a white creator with a similar following</p></li>
<li><p>refer to shadowban detection tools available for different platforms such as <a href="https://www.online-tech-tips.com/computer-tips/3-ways-to-find-out-if-youre-shadowbanned-on-reddit/">Reddit</a> (r/CommentRemovalChecker) or Twitter (<a href="https://files.osf.io/v1/resources/xcz2t/providers/osfstorage/628662af52d1723f1080bc21?action=download&direct&version=1">hisubway</a>). </p></li>
</ul>
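<p>To make the first of these checks concrete, here is a minimal, purely illustrative Python sketch. The engagement figures and the 30% cut-off are assumptions invented for the example, not platform thresholds, and a low result is a prompt to run the other checks rather than proof of a shadowban.</p>
<pre><code>from statistics import median

# Hypothetical illustration: compare one post's engagement (likes + comments
# + shares, say) with the account's recent baseline.
def possibly_shadowbanned(post_engagement, recent_engagements, threshold=0.3):
    typical = median(recent_engagements)
    if typical == 0:
        return False
    # Flag the post if it reached no more than roughly 30% of the usual engagement.
    return threshold * typical >= post_engagement

recent = [420, 380, 510, 455, 390, 470]   # engagement on recent "normal" posts
print(possibly_shadowbanned(35, recent))  # True: far below the usual range
print(possibly_shadowbanned(400, recent)) # False: in line with the baseline
</code></pre>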
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/deplatforming-online-extremists-reduces-their-followers-but-theres-a-price-188674">Deplatforming online extremists reduces their followers – but there's a price</a>
</strong>
</em>
</p>
<hr>
<h2>What can users do about shadowbanning?</h2>
<p>Shadowbans last for varying amounts of time depending on the demoted content and platform. On TikTok, they’re <a href="https://blog.hootsuite.com/tiktok-shadowban/">said to</a> last about two weeks. If your account or content is shadowbanned, there aren’t many options to immediately reverse this.</p>
<p>But some strategies can help reduce the chance of it happening, <a href="https://journals.sagepub.com/doi/full/10.1177/01634437221111923">as researchers have found</a>. One is to self-censor. For instance, users may avoid ethnic identification labels such as “AsianWomen”. </p>
<p>Users can also experiment with external tools that estimate the likelihood of content being flagged, and then manipulate the content so it’s less likely to be picked up by algorithms. If certain terms are likely to be flagged, they’ll use phonetically similar alternatives, like “S-E-G-G-S” instead of “sex”.</p>
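<p>As a rough illustration of that kind of manipulation, the short sketch below applies simple “algospeak” substitutions to a piece of text. The particular mappings are examples of commonly reported workarounds, not a vetted list of terms any platform actually flags.</p>
<pre><code># Illustrative only: crude word substitutions of the kind users report making.
ALGOSPEAK = {
    "sex": "seggs",
    "kill": "unalive",
}

def soften(text):
    for term, substitute in ALGOSPEAK.items():
        text = text.replace(term, substitute)
    return text

print(soften("a thread about sex education"))  # "a thread about seggs education"
</code></pre>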
<p>Shadowbanning impairs the free exchange of ideas and excludes minorities. It can be exploited by trolls falsely flagging content. It can cause financial harm to users trying to monetise content. It can even <a href="https://files.osf.io/v1/resources/xcz2t/providers/osfstorage/628662af52d1723f1080bc21?action=download&direct&version=1">trigger emotional distress</a> through isolation. </p>
<p>As a first step, we need to demand transparency from platforms on their shadowbanning policies and enforcement. This practice has potentially severe ramifications for individuals and society. To fix it, we’ll need to scrutinise it with the thoroughness it deserves. </p>
<p><a href="https://theconversation.com/au/topics/social-media-and-society-125586" target="_blank"><img src="https://images.theconversation.com/files/479539/original/file-20220817-20-g5jxhm.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=144&fit=crop&dpr=1" width="100%"></a></p><img src="https://counter.theconversation.com/content/192735/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Marten Risius is the recipient of an Australian Research Council Australian Discovery Early Career Award (project number DE220101597) funded by the Australian Government.
Financial support for Marten Risius and Kevin M. Blasiak from The University of Queensland School of Business in the Research Start-up Support Funding is gratefully acknowledged.
Marten Risius work is performed with support from the Algorand Centres of Excellence (ACE) Programme <a href="https://www.algorand.foundation/ace-university/monash-university">https://www.algorand.foundation/ace-university/monash-university</a>.</span></em></p><p class="fine-print"><em><span>Kevin M. Blasiak is a PhD candidate at The University of Queensland School of Business. His PhD research scholarship funding from The University of Queensland School of Business is greatly acknowledged. </span></em></p>Platforms have started a silent censorship war through this opaque (and often harmful) approach to content moderation.Marten Risius, Senior Lecturer in Business Information Systems, The University of QueenslandKevin Marc Blasiak, PhD Candidate in Information Systems, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1823172022-05-09T12:05:17Z2022-05-09T12:05:17ZElon Musk is wrong: Research shows content rules on Twitter help preserve free speech from bots and other manipulation<figure><img src="https://images.theconversation.com/files/461823/original/file-20220506-2469-drz1yk.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C3307%2C2106&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Elon Musk claims to champion free speech, but his plans for Twitter could stifle the free exchange of ideas.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/april-2022-bavaria-kempten-twitter-account-of-elon-musk-news-photo/1240439365">Karl-Josef Hildenbrand/picture alliance via Getty Images</a></span></figcaption></figure><p>Elon Musk’s <a href="https://www.washingtonpost.com/technology/2022/10/27/twitter-elon-musk/">acquisition of Twitter</a> on Oct. 27, 2022 has triggered renewed debate about what it means for the future of the social media platform, which plays an important role in determining the news and information many people – <a href="https://www.pewresearch.org/journalism/2021/09/20/news-consumption-across-social-media-in-2021/">especially Americans</a> – are exposed to.</p>
<p>In addition to <a href="https://www.theverge.com/2022/10/5/23389159/elon-musk-twitter-future">expanding Twitter’s features</a>, Musk has said he wants to <a href="https://twitter.com/TEDTalks/status/1514739086908555272">make it an arena for free speech</a>. What that means has fueled much speculation and raised concerns about <a href="https://www.washingtonpost.com/technology/2022/10/27/musk-twitter-trump-midterms/">the effect the acquisition will have</a> on the 2022 midterm elections – and use of the platform by politicians more generally going forward. Musk sought to <a href="https://www.wsj.com/articles/elon-musk-will-face-an-early-twitter-challenge-preventing-advertiser-flight-11666871828">allay fears</a> the day the acquisition closed in a message to advertisers saying he recognized that the platform cannot become “a free-for-all hellscape.”</p>
<p>As a corporation, Twitter can regulate speech on its platform as it chooses. There are bills being considered in the <a href="https://cdt.org/insights/independent-researcher-access-to-social-media-data-comparing-legislative-proposals/">U.S. Congress</a> and by the <a href="https://www.washingtonpost.com/business/why-eu-decided-tech-giants-cant-police-social-media/2022/04/25/6a8131fc-c465-11ec-8cff-33b059f4c1b7_story.html">European Union</a> that address social media regulation, but these are about transparency, accountability, illegal harmful content and protecting users’ rights, rather than regulating speech.</p>
<p>Musk’s calls for free speech on Twitter focus on two allegations: <a href="https://twitter.com/elonmusk/status/1519363666377908225">political bias</a> and <a href="https://twitter.com/elonmusk/status/1519073003933515776">excessive moderation</a>. As <a href="https://scholar.google.com/citations?user=f_kGJwkAAAAJ&hl=en">researchers of online misinformation and manipulation</a>, my colleagues and I at the <a href="https://osome.iu.edu/">Indiana University Observatory on Social Media</a> study the dynamics and impact of Twitter and its abuse. To make sense of Musk’s statements and the possible outcomes of his acquisition, let’s look at what the research shows. </p>
<h2>Political bias</h2>
<p>Many <a href="https://thehill.com/homenews/house/509619-jordan-confronts-tech-ceos-with-claims-of-anti-conservative-bias/">conservative politicians</a> and <a href="https://twitter.com/benshapiro/status/1316740671441711104">pundits</a> have <a href="https://www.nytimes.com/2018/09/05/technology/lawmakers-facebook-twitter-foreign-influence-hearing.html">alleged</a> <a href="https://www.cjr.org/the_media_today/tech-biased-against-conservatives.php">for years</a> that major social media platforms, including Twitter, have a <a href="https://www.pewresearch.org/internet/2020/08/19/most-americans-think-social-media-sites-censor-political-viewpoints/">liberal political bias</a> amounting to <a href="https://www.washingtonpost.com/technology/2020/05/27/trump-twitter-label/">censorship of conservative opinions</a>. These claims are based on anecdotal evidence. For example, many partisans whose tweets were labeled as misleading and downranked, or whose accounts were suspended for violating the platform’s terms of service, claim that Twitter targeted them because of their political views.</p>
<p>Unfortunately, Twitter and other platforms often <a href="https://www.theatlantic.com/ideas/archive/2022/04/elon-musk-buy-twitter-free-speech/629571/">inconsistently enforce their policies</a>, so it is easy to find examples supporting one conspiracy theory or another. A review by the Center for Business and Human Rights at New York University has found <a href="https://www.stern.nyu.edu/experience-stern/faculty-research/false-accusation-unfounded-claim-social-media-companies-censor-conservatives">no reliable evidence</a> in support of the claim of anti-conservative bias by social media companies, even labeling the claim itself a form of disinformation. </p>
<p>A more direct evaluation of political bias by Twitter is difficult because of the complex interactions between people and algorithms. People, of course, have political biases. For example, <a href="https://doi.org/10.1177%2F1461444820942744">our experiments with political social bots</a> revealed that Republican users are more likely to mistake conservative bots for humans, whereas Democratic users are more likely to mistake conservative human users for bots.</p>
<p>To remove human bias from the equation in our experiments, we deployed a bunch of benign social bots on Twitter. Each of these bots started by following one news source, with some bots following a liberal source and others a conservative one. After that initial friend, all bots were left alone to “drift” in the information ecosystem for a few months. They could gain followers. They acted according to an identical algorithmic behavior. This included following or following back random accounts, tweeting meaningless content and retweeting or copying random posts in their feed. </p>
<p>But this behavior was politically neutral, with no understanding of content seen or posted. We tracked the bots to probe political biases emerging from how Twitter works or how users interact. </p>
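<p>The sketch below is a loose, self-contained simulation of what such a politically neutral routine could look like. The account names, action list and toy “platform” are invented for illustration; this is neither the researchers’ actual code nor a real Twitter API.</p>
<pre><code>import random

class DrifterBot:
    """Toy stand-in for a politically neutral 'drifter' bot."""
    def __init__(self, seed_source):
        self.friends = {seed_source}   # each bot starts by following one news source
        self.posts = []

    def step(self, all_accounts, feed):
        # Identical, content-blind behaviour for every bot.
        action = random.choice(["follow_random", "tweet_noise", "reshare"])
        if action == "follow_random":
            self.friends.add(random.choice(all_accounts))
        elif action == "tweet_noise":
            self.posts.append("lorem ipsum " + str(random.randint(0, 999)))
        else:
            self.posts.append("RT: " + random.choice(feed))

accounts = ["@user" + str(i) for i in range(50)]   # invented account pool
feed = ["post A", "post B", "post C"]              # invented feed items

left_bot = DrifterBot(seed_source="@a_liberal_outlet")        # hypothetical seeds
right_bot = DrifterBot(seed_source="@a_conservative_outlet")

for _ in range(100):   # let both bots "drift" with the same behaviour
    left_bot.step(accounts, feed)
    right_bot.step(accounts, feed)

print(len(left_bot.friends), len(right_bot.friends))
</code></pre>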
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/460882/original/file-20220502-17-ael4ee.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a graphic showing dots of different colors in clusters" src="https://images.theconversation.com/files/460882/original/file-20220502-17-ael4ee.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/460882/original/file-20220502-17-ael4ee.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/460882/original/file-20220502-17-ael4ee.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/460882/original/file-20220502-17-ael4ee.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/460882/original/file-20220502-17-ael4ee.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/460882/original/file-20220502-17-ael4ee.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/460882/original/file-20220502-17-ael4ee.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Neutral bots (yellow nodes) and a sample of their friends and followers in an experiment to measure partisan bias on Twitter. Node color indicates political alignment of an account: red for conservative, blue for liberal, black for unknown. Node size is proportional to share of links to low-credibility sources. The closely connected red clusters indicate conservative echo chambers.</span>
<span class="attribution"><span class="source">Filippo Menczer</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Surprisingly, our research provided <a href="https://doi.org/10.1038/s41467-021-25738-6">evidence that Twitter has a conservative, rather than a liberal bias</a>. On average, accounts are drawn toward the conservative side. Liberal accounts were exposed to moderate content, which shifted their experience toward the political center, while the interactions of right-leaning accounts were skewed toward posting conservative content. Accounts that followed conservative news sources also received more politically aligned followers, becoming embedded in denser echo chambers and gaining influence within those partisan communities. </p>
<p>These differences in experiences and actions can be attributed to interactions with users and information mediated by the social media platform. But we could not directly examine the possible bias in Twitter’s news feed algorithm, because the actual ranking of posts in the “home timeline” is not available to outside researchers. </p>
<p>Researchers from Twitter, however, were able to audit the effects of their ranking algorithm on political content, unveiling that <a href="https://doi.org/10.1073/pnas.2025334119">the political right enjoys higher amplification</a> compared to the political left. Their experiment showed that in six out of seven countries studied, conservative politicians enjoy higher algorithmic amplification than liberal ones. They also found that algorithmic amplification favors right-leaning news sources in the U.S. </p>
<p>Our research and the research from Twitter show that Musk’s <a href="https://twitter.com/elonmusk/status/1519363666377908225">apparent concern about bias</a> on Twitter against conservatives is unfounded.</p>
<h2>Referees or censors?</h2>
<p>The other allegation that Musk seems to be making is that excessive moderation stifles free speech on Twitter. The concept of a free marketplace of ideas is rooted in John Milton’s centuries-old reasoning that truth prevails in a free and open exchange of ideas. This view is often cited as the basis for arguments against moderation: accurate, relevant, timely information should emerge spontaneously from the interactions among users. </p>
<p>Unfortunately, <a href="https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/">several aspects of modern social media</a> hinder the free marketplace of ideas. <a href="http://dx.doi.org/10.1038/srep00335">Limited attention</a> and <a href="https://doi.org/10.1177/1745691618803647">confirmation bias</a> increase vulnerability to misinformation. <a href="http://doi.org/10.1038/s41598-018-34203-2">Engagement-based ranking</a> can amplify noise and manipulation, and the structure of information networks can <a href="https://doi.org/10.1038/s41467-020-14394-x">distort perceptions</a> and be <a href="https://doi.org/10.1038/s41586-019-1507-6">“gerrymandered” to favor one group</a>.</p>
<p>As a result, social media users have in past years become victims of manipulation by <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/14127">“astroturf” causes</a>, <a href="https://doi.org/10.1145/3292522.3326016">trolling</a> and <a href="http://doi.org/10.1126/science.aao2998">misinformation</a>. Abuse is facilitated by <a href="http://doi.org/10.1145/2818717">social bots</a> and <a href="https://ojs.aaai.org/index.php/ICWSM/article/view/18075">coordinated networks</a> that create the appearance of human crowds. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/bTj664taegw?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">How disinformation works on social media and how to spot it.</span></figcaption>
</figure>
<p>We and other researchers have observed these inauthentic accounts <a href="https://doi.org/10.1038/s41467-018-06930-7">amplifying disinformation</a>, <a href="https://doi.org/10.1371/journal.pone.0214210">influencing elections</a>, <a href="https://doi.org/10.1109/TCSS.2021.3059286">committing financial fraud</a>, <a href="https://doi.org/10.1073/pnas.1803470115">infiltrating vulnerable communities</a> and <a href="https://doi.org/10.1007/978-3-319-47874-6_19">disrupting communication</a>. Musk has tweeted that he wants to <a href="https://twitter.com/elonmusk/status/1517215066550116354">defeat spam bots and authenticate humans</a>, but these are neither easy nor necessarily effective solutions. </p>
<p>Inauthentic accounts are used for malicious <a href="https://www.snopes.com/news/2019/09/18/malicious-bots-and-trolls-spread-vaccine-misinformation/">purposes beyond spam</a> and are <a href="https://cacm.acm.org/magazines/2016/7/204021-the-rise-of-social-bots/fulltext">hard to detect</a>, especially when they are operated by people in conjunction with software algorithms. And removing anonymity may <a href="https://theconversation.com/elon-musks-comments-about-twitter-dont-square-with-the-social-media-platforms-reality-182023">harm vulnerable groups</a>. In recent years, Twitter has enacted policies and systems to moderate abuses by aggressively suspending accounts and networks displaying inauthentic coordinated behaviors. A weakening of these moderation policies may make abuse rampant again. </p>
<h2>Manipulating Twitter</h2>
<p>Despite Twitter’s recent progress, integrity is still a challenge on the platform. Our lab has found new types of sophisticated manipulation, which we presented at the <a href="https://www.icwsm.org/2022/index.html/">International AAAI Conference on Web and Social Media</a> in June 2022. Malicious users exploit so-called “<a href="https://www.followchain.org/follow-train/">follow trains</a>” – groups of people who follow each other on Twitter – to rapidly boost their followers and <a href="https://arxiv.org/abs/2010.13691">create large, dense hyperpartisan echo chambers</a> that amplify toxic content from low-credibility and conspiratorial sources. </p>
<p>Another effective malicious technique is to post and then <a href="https://arxiv.org/abs/2203.13893">strategically delete content that violates platform terms</a> after it has served its purpose. Even Twitter’s high limit of 2,400 tweets per day can be circumvented through deletions: We identified many accounts that flood the network with tens of thousands of tweets per day. </p>
<p>We also found coordinated networks that engage in repetitive likes and unlikes of content that is eventually deleted, which can manipulate ranking algorithms. These techniques enable malicious users to inflate content popularity while evading detection.</p>
<p>Musk’s plans for Twitter are unlikely to do anything about these manipulative behaviors.</p>
<h2>Content moderation and free speech</h2>
<p>Musk’s acquisition of Twitter raises concerns that the social media platform could decrease its content moderation. This body of research shows that stronger, not weaker, moderation of the information ecosystem is called for to combat harmful misinformation. </p>
<p>It also shows that weaker moderation policies would ironically hurt free speech: The voices of real users would be drowned out by malicious users who manipulate Twitter through inauthentic accounts, bots and echo chambers. </p>
<p><em>This article has been updated to include the completion of Elon Musk’s acquisition of Twitter.</em></p><img src="https://counter.theconversation.com/content/182317/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Filippo Menczer receives funding from Knight Foundation, Craig Newmark Philanthropies, Open Technology Fund, and DoD. He owns a Tesla.</span></em></p>Elon Musk said he wants to make Twitter a platform for free speech. Here is what research shows about claims of political bias and excessive moderation.Filippo Menczer, Professor of Informatics and Computer Science, Indiana UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1820472022-04-27T07:58:09Z2022-04-27T07:58:09ZThe ‘digital town square’? What does it mean when billionaires own the online spaces where we gather?<figure><img src="https://images.theconversation.com/files/459987/original/file-20220427-22-13nffb.jpg?ixlib=rb-1.1.0&rect=25%2C0%2C3376%2C1960&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption"></span> </figcaption></figure><p>The world’s richest man, Elon Musk, seems set to <a href="https://www.bbc.com/news/business-61222470">purchase</a> the social media platform Twitter for around US$44 billion. He says he’s not doing it to make money (which is good, because Twitter has rarely turned a profit), but rather because, among other things, he <a href="https://techcrunch.com/2022/04/14/elon-musk-buying-twitter-ted-talk/">believes in free speech</a>.</p>
<p>Twitter might seem an odd place to make a stand for free speech. The service has <a href="https://www.thetimes.co.uk/article/elon-musk-and-the-44-billion-twitter-question-fkcxk2x7k">around 217 million daily users</a>, only a fraction of the 2.8 billion who log in each day to one of the Meta family (Facebook, Instagram and WhatsApp). </p>
<p>But the platform plays a disproportionately large role in society. It is essential infrastructure for journalists and academics. It has been used to coordinate emergency information, to build up <a href="https://www.theguardian.com/technology/2019/dec/23/ten-years-black-twitter-watchdog">communities of solidarity and protest</a>, and to share global events and media rituals – from presidential elections to mourning celebrity deaths (and <a href="https://twitter.com/i/events/1508302482895605763?lang=en">unpredictable moments at the Oscars</a>). </p>
<p>Twitter’s unique role is a result of the way it combines personal media use with public debate and discussion. But this is a fragile and volatile mix – and one that has become increasingly difficult for the platform to manage.</p>
<p><a href="https://twitter.com/elonmusk/status/1518677066325053441">According to Musk</a>, “Twitter is the digital town square, where matters vital to the future of humanity are debated”. Twitter cofounder Jack Dorsey, in approving Musk’s takeover, <a href="https://twitter.com/jack/status/1518772753460998145">went further</a>, claiming “Twitter is the closest thing we have to a global consciousness”.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1518677066325053441"}"></div></p>
<p>Are they right? Does it make sense to think of Twitter as a town square? And if so, do we want the town square to be controlled by libertarian billionaires?</p>
<h2>What is a town square for?</h2>
<p>As my coauthor Nancy Baym and I have detailed in our book <a href="https://nyupress.org/9781479811069/twitter/">Twitter: A Biography</a>, Twitter’s culture emerged from the interactions between a fledgling platform with shaky infrastructure, an avid community of users who made it work for them, and the media who found in it an endless source of news and other content.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/friday-essay-twitter-and-the-way-of-the-hashtag-141693">Friday essay: Twitter and the way of the hashtag</a>
</strong>
</em>
</p>
<hr>
<p>Is it a town square? When Musk and some other commentators use this term, I think they are invoking the traditional idea of the “<a href="https://en.wikipedia.org/wiki/Public_sphere">public sphere</a>”: a real or virtual place where everyone can argue rationally about things, and everyone is made aware of everyone else’s arguments. </p>
<p>Some critics think we should get rid of the idea of the “digital town square” altogether, or at least think more deeply about <a href="https://www.yalelawjournal.org/forum/beyond-the-public-square-imagining-digital-democracy">how it might reinforce existing divisions and hierarchies</a>. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/459993/original/file-20220427-25-mcn0qf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/459993/original/file-20220427-25-mcn0qf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/459993/original/file-20220427-25-mcn0qf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/459993/original/file-20220427-25-mcn0qf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/459993/original/file-20220427-25-mcn0qf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/459993/original/file-20220427-25-mcn0qf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/459993/original/file-20220427-25-mcn0qf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The ‘town square’ can be much more than just a soapbox for sounding off about the issues of the day.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/london-june-2-speakers-hyde-park-142049512">Shutterstock</a></span>
</figcaption>
</figure>
<p>I think the idea of the “digital town square” can be much richer and more optimistic than this, and that early Twitter was a pretty good, if flawed, example of it. </p>
<p>If I think of my own ideal “town square”, it might have market stalls, quiet corners where you can have personal chats with friends, alleyways where strange (but legal!) niche interests can be pursued, a playground for the kids, some roving entertainers – and, sure, maybe a central agora with a soapbox that people can gather around when there’s some issue we all need to hear or talk about. That, in fact, is very much what early Twitter was like for me and my friends and colleagues.</p>
<p>I think Musk and his legion of fans have something different in mind: a free speech free-for-all, a nightmarish town square where everyone is shouting all the time and anyone who doesn’t like it just stays home. </p>
<h2>The free-for-all is over</h2>
<p>In recent years, the increasing prevalence of disinformation and abuse on social media, as well as their growing power over the media environment in general, has prompted governments around the world to intervene. </p>
<p>In Australia alone, we have seen the <a href="https://www.acma.gov.au/news-media-bargaining-code">News Media Bargaining Code</a> and the ACCC’s <a href="https://www.accc.gov.au/focus-areas/inquiries-ongoing/digital-platform-services-inquiry-2020-2025">Digital Platform Services Inquiry</a> asking tougher questions, making demands, and exerting more pressure on platforms. </p>
<p>Perhaps more consequentially for global players like Twitter, the European Union is set to introduce a <a href="https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package">Digital Services Act</a> which aims “to create a safer digital space in which the fundamental rights of all users of digital services are protected”. </p>
<p>This will prohibit harmful advertising and “dark patterns”, and require more careful (and complex) content moderation, particularly for the larger companies. It will also require platforms to be more transparent about how they use algorithms to filter and curate the content their users see and hear. </p>
<p>Such moves are just the beginning of states imposing both limits and positive duties on platform companies. </p>
<p>So while Musk will likely push the boundaries of what he can get away with, the idea of a global platform that allows completely unfettered “free speech” (even within the limits of “the law”, as he <a href="https://twitter.com/elonmusk/status/1519036983137509376">tweeted</a> earlier today) is a complete fantasy. </p>
<h2>What are the alternatives?</h2>
<p>If for-profit social media services are run not in the public interest, but to serve the needs of advertisers – or, even worse, the whims of billionaires – then what are the alternatives? </p>
<p>Small alternative social media platforms (such as <a href="https://diasporafoundation.org/">Diaspora</a> and <a href="https://joinmastodon.org/">Mastodon</a>), built on decentralised infrastructure and collective ownership, have been around for a while, but they haven’t really taken off yet. Designing and attracting users to viable alternatives at a global scale is really hard. </p>
<p><a href="https://australiainstitute.org.au/report/the-public-square-project/">Proposals</a> for completely separate, publicly supported social media platforms created by non-profits and/or governments, even if we could get them to work together, are unlikely to work. They would be hugely expensive, and will ultimately encounter similar governance challenges to the existing platforms, if they are to achieve any scale and to operate across national boundaries. </p>
<p>Of course, it is still possible Musk will discover running Twitter is much harder than it looks. The company is to some extent responsible for what is published on its platform, which means it has no choice but to engage in the messy world of content moderation, and balancing free speech with other concerns (and other human rights). </p>
<p>While Musk’s other companies (such as Tesla) operate in heavily regulated environments already, the “global social media platform” business is likely to be far more complex and challenging. </p>
<p>Twitter has already been looking at ways out of this situation. Since 2019, it has been investing in an initiative called <a href="https://blueskyweb.org">Bluesky</a>, which aims to develop an open, decentralised standard for social media which could be used by multiple platforms including Twitter itself.</p>
<p>Facebook’s attempt to move into the “<a href="https://www.washingtonpost.com/technology/2021/12/30/metaverse-definition-facebook-horizon-worlds/">metaverse</a>” is a similar manoeuvre: avoid having to deal with content and restrictions by building the (proprietary) infrastructure for others to create applications and social spaces. </p>
<p>To try out another “blue-sky” idea for just a moment: if the existing corporate giants were to vacate the social media space, it might leave room for a publicly funded and governed option. </p>
<p>In an ideal world, <a href="https://www.uwestminsterpress.co.uk/site/books/e/10.16997/book60/">public service media organisations</a> might collaborate to build international social media services using shared infrastructure and protocols that enable their services to talk to and share content with each other. Or they might build out new social media services on top of the internet we have now – requiring the commercial players to ensure their platforms are <a href="https://techpolicy.press/why-social-media-needs-mandatory-interoperability/">interoperable</a> would be an essential part of that. </p>
<p>Of course, either way, this model would ultimately require taxpayer support and serious, long-term investment. If that were to happen, we might have something even better than a digital town square: a <a href="https://theconversation.com/we-need-a-full-public-service-internet-state-owned-infrastructure-is-just-the-start-127458">public service internet</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/we-need-a-full-public-service-internet-state-owned-infrastructure-is-just-the-start-127458">We need a full public service internet – state-owned infrastructure is just the start</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/182047/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jean Burgess receives research funding from the Australian Research Council and the Social Sciences and Humanities Research Council (Canada). She has previously engaged with Facebook as an academic consultant in an advisory capacity. </span></em></p>The age of the free speech free-for-all is over – but public online spaces are possible.Jean Burgess, Professor and Associate Director, ARC Centre of Excellence for Automated Decision-Making and Society, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1816262022-04-26T19:55:56Z2022-04-26T19:55:56ZWhat will Elon Musk’s ownership of Twitter mean for ‘free speech’ on the platform?<figure><img src="https://images.theconversation.com/files/459708/original/file-20220426-22-k6moqy.jpeg?ixlib=rb-1.1.0&rect=22%2C61%2C3683%2C2411&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Eric Risberg/AP</span></span></figcaption></figure><p>In a surprise capitulation, the board of Twitter has announced it will support a <a href="https://www.ft.com/content/79e3bc48-96ef-4e62-b30b-d3ddb45d7a2f">takeover bid</a> by Elon Musk, the world’s richest person. But is it in the public interest? </p>
<p>Musk is offering US$54.20 a share. This values the company at US$44 billion (or A$61 billion) – making it one of the largest leveraged buyouts on record. </p>
<p><a href="https://www.sec.gov/Archives/edgar/data/1418091/000110465922048128/tm2213229d1_ex99-c.htm">Morgan Stanley and other large financial institutions</a> will lend him US$25.5 billion. Musk himself will put in around US$20 billion. This is about the size of a single <a href="https://www.theguardian.com/business/2022/apr/21/elon-musk-stands-to-collect-23bn-bonus-as-tesla-surges-ahead#:%7E:text=Elon%20Musk%2C%20chief%20executive%20of,company's%20reported%20record%20quarterly%20profits">bonus</a> he is expected to receive from Tesla. </p>
<p>In a <a href="https://www.sec.gov/Archives/edgar/data/0001418091/000110465922045641/tm2212748d1_sc13da.htm">letter</a> to the chair of Twitter, Musk claimed he would “unlock” Twitter’s “extraordinary potential” to be “the platform for free speech around the globe”.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1507777261654605828"}"></div></p>
<p>But the idea that social media has the potential to represent an unbridled mode of public discourse is underpinned by an idealistic understanding that has <a href="https://doi.org/10.1177%2F14614440222226244">surrounded social media</a> technologies for <a href="https://www.wired.com/1995/11/poster-if/">some time</a>. </p>
<p>In reality, Twitter being owned by one person, some of whose own tweets have been <a href="https://www.sec.gov/news/press-release/2018-226">false</a>, <a href="https://news.yahoo.com/one-tweet-elon-musk-captures-201842976.html">sexist</a>, <a href="https://www.vox.com/recode/2021/5/18/22441831/elon-musk-bitcoin-dogecoin-crypto-prices-tesla">market-moving</a> and <a href="https://www.abc.net.au/news/2019-10-28/elon-musk-saya-pedo-guy-is-a-common-insult-in-south-africa/11639090">arguably defamatory</a> poses a risk to the platform’s future.</p>
<h2>Can Twitter expect a total overhaul?</h2>
<p>We see Musk’s latest move in a less-than-benign light, as it gives him unprecedented power and influence over Twitter. He has mused about making several potential changes to the platform, including:</p>
<ul>
<li><a href="https://www.vox.com/recode/23041717/twitter-musk-business-plan-peter-kafka-column">reshuffling</a> the current <a href="https://www.theguardian.com/technology/2022/apr/25/five-things-in-elon-musks-in-tray-after-twitter-takeover">management</a>, in which he says he doesn’t have confidence </li>
<li>adding an <a href="https://theconversation.com/why-an-edit-button-for-twitter-is-not-as-simple-as-it-seems-181623">edit button</a> on tweets</li>
<li>weakening the current content moderation approach – including by supporting temporary suspensions of users rather than outright bans, and</li>
<li>potentially moving to a “freemium” model similar to Spotify’s, whereby users can <a href="https://www.theguardian.com/technology/2022/apr/25/five-things-in-elon-musks-in-tray-after-twitter-takeover">pay to avoid more intrusive advertisements</a>. </li>
</ul>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-an-edit-button-for-twitter-is-not-as-simple-as-it-seems-181623">Why an edit button for Twitter is not as simple as it seems</a>
</strong>
</em>
</p>
<hr>
<p>Shortly after becoming Twitter’s largest individual shareholder earlier this month, Musk <a href="https://www.thestreet.com/markets/elon-musk-ted-talk">said</a> “I don’t care about the economics at all”.</p>
<p>But the bankers who lent him US$25.5 billion to eventually acquire the platform probably do. Musk may come under pressure to lift Twitter’s profitability. He claims his top priority is free speech – but potential advertisers may not want their products featured next to an extremist rant.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1518681666633486341"}"></div></p>
<p>In recent years, Twitter has implemented a range of <a href="https://help.twitter.com/en/rules-and-policies#platform-integrity-and-authenticity">governance and content moderation</a> policies. For example, in 2020 it broadened its “<a href="https://blog.twitter.com/en_us/topics/company/2020/An-update-on-our-continuity-strategy-during-COVID-19">definition of harm</a>” to address COVID-19 content contradicting guidance from authoritative sources. </p>
<p>Twitter claims developments in its content moderation approach have been to “<a href="https://about.twitter.com/en">serve the public conversation</a>” and address <a href="https://help.twitter.com/en/rules-and-policies/medical-misinformation-policy">disinformation and misinformation</a>. It also claims to respond to users’ experiences <a href="https://about.twitter.com/en/our-priorities/healthy-conversations">of abuse</a> and the general <a href="https://journals.sagepub.com/doi/10.1177/13548565211036797">incivility they must navigate</a>. </p>
<p>Taking a longer-term view, however, it seems Twitter’s bolstering of content moderation could be seen as an effort to save its reputation following <a href="https://www.nytimes.com/2020/11/17/technology/lawmakers-drill-down-on-how-facebook-and-twitter-moderate-content.html">extensive backlash</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/instead-of-showing-leadership-twitter-pays-lip-service-to-the-dangers-of-deep-fakes-127027">Instead of showing leadership, Twitter pays lip service to the dangers of deep fakes</a>
</strong>
</em>
</p>
<hr>
<h2>Musk’s ‘town square’ idea doesn’t hold up</h2>
<p>Regardless of Twitter’s motivations, Musk has openly challenged the growing number of moderation tools employed by the platform. </p>
<p>He has even labelled Twitter a “de facto public square”. This statement appears naïve at best. As communications scholar and Microsoft researcher <a href="https://yalebooks.yale.edu/book/9780300261431/custodians-internet/">Tarleton Gillespie</a> argues, the notion that social media platforms can operate as truly open spaces is fantasy, given how platforms must moderate content while also disavowing this process. </p>
<p>Gillespie goes on to suggest platforms are obliged to moderate, to protect users from their antagonists, to remove offensive, vile, or illegal content and to ensure they can present their best face to new users, advertisers, partners, and the public more generally. He <a href="https://yalebooks.yale.edu/book/9780300261431/custodians-internet/">says</a> the critical challenge then “is exactly when, how, and why to intervene”. </p>
<p>Platforms such as Twitter can’t represent “town squares” – especially as, in Twitter’s case, only a small proportion of the town is using the service.</p>
<p>Public squares are <a href="https://www.google.com.au/books/edition/Behavior_in_Public_Places/HM1kAAAAIAAJ?hl=en">implicitly</a> and explicitly regulated through social behaviours associated with <a href="https://www.routledge.com/Relations-in-Public-Microstudies-of-the-Public-Order/Goffman/p/book/9781412810067">relations in public</a>, backed by the capacity to defer to an authority to restore public order should disorder arise. In the case of a private business, which Twitter now is, the final say will largely default to Musk. </p>
<p>Even if Musk were to implement his own town square ideal, it would presumably be a particularly free-wheeling version. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1518704771372240896"}"></div></p>
<p>Providing users with more leeway in what they can say might contribute to increased polarity and further coarsen discourse on the platform. But this would again discourage advertisers – which would be an issue under Twitter’s current economic model (wherein <a href="https://www.theguardian.com/technology/2022/apr/25/five-things-in-elon-musks-in-tray-after-twitter-takeover">90% of revenue comes from advertising</a>).</p>
<h2>Free speech (but for all?)</h2>
<p>Twitter is considerably <a href="https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/">smaller than other</a> major social media networks. However, research has found it does have a disproportionate influence as tweets can proliferate with <a href="https://www.tandfonline.com/doi/full/10.1080/1369118X.2018.1449883">speed and virality, spilling over to traditional media</a>. </p>
<p>The viewpoints users are exposed to are determined by algorithms geared towards maximising exposure and clicks, rather than enriching users’ lives with <a href="https://theconversation.com/what-elon-musks-us-3-billion-twitter-deal-means-for-him-and-for-social-media-180742">thoughtful or interesting points of view</a>.</p>
<p>Musk has suggested he may make Twitter’s algorithms open source. This would be a welcome increase in transparency. But once Twitter becomes a private company, how transparent it is about operations will largely be up to Musk’s sole discretion. </p>
<p>Ironically, <a href="https://www.theguardian.com/technology/2022/apr/15/elon-musk-mark-zuckerberg-sun-king-louis-xiv">Musk has accused Meta</a> (previously Facebook) CEO Mark Zuckerberg of having too much control over public debate.</p>
<p>Yet Musk himself has a history of trying <a href="https://www.cnbc.com/2022/04/25/elon-musk-and-free-speech-track-record-not-encouraging.html">to stifle</a> <a href="https://www.theatlantic.com/technology/archive/2022/04/elon-musk-twitter-free-speech/629479/">his critics’</a> <a href="https://www.bloomberg.com/news/articles/2022-04-21/elon-musk-wants-free-speech-at-twitter-twtr-after-years-silencing-critics">points of view</a>. There’s little to suggest his actions are truly to create an open and inclusive town square through Twitter — and less yet to suggest it will be in the public interest.</p><img src="https://counter.theconversation.com/content/181626/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Musk has long touted Twitter’s potential as an open and inclusive ‘town square’ for public discourse – but the reality is social media platforms were never meant to fulfil this role.John Hawkins, Senior Lecturer, Canberra School of Politics, Economics and Society and NATSEM, University of CanberraMichael James Walsh, Associate Professor in Social Sciences, University of CanberraLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1815762022-04-21T19:12:38Z2022-04-21T19:12:38ZIf Elon Musk succeeds in his Twitter takeover, it would restrict, rather than promote, free speech<figure><img src="https://images.theconversation.com/files/459170/original/file-20220421-16-bc4elu.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4626%2C3074&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A lawsuit filed on April 12 alleges that Tesla CEO Elon Musk illegally delayed disclosing his stake in Twitter so he could buy more shares at lower prices.</span> <span class="attribution"><span class="source">(AP Photo/Susan Walsh, File)</span></span></figcaption></figure><p>On April 25, following several weeks of speculation, Twitter announced that <a href="https://www.theguardian.com/technology/2022/apr/25/twitter-elon-musk-buy-takeover-deal-tesla">it had reached an agreement to sell the company to Tesla CEO and multi-billionaire Elon Musk</a>. In mid-April, Musk made public <a href="https://www.bloomberg.com/news/articles/2022-04-21/musk-is-exploring-launching-a-tender-offer-for-twitter">his desire to acquire Twitter</a>, make it a private company, and <a href="https://www.bloomberg.com/news/articles/2022-04-14/elon-musk-launches-43-billion-hostile-takeover-of-twitter">overhaul its moderation policies</a>. </p>
<p>Citing ideals of free speech, Musk claimed that “<a href="https://www.washingtonpost.com/technology/2022/04/18/musk-twitter-free-speech/">Twitter has become kind of the de facto town square, so it’s just really important that people have the, both the reality and the perception that they are able to speak freely within the bounds of the law</a>.”</p>
<p>While making Twitter free for all “within the bounds of the law” seems like a way to ensure free speech in theory, in practice, this action would actually serve to suppress the speech of Twitter’s most vulnerable users.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/_NbpH9GdBcQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">CBC’s The National looks at Elon Musk’s attempt at a hostile takeover of Twitter.</span></figcaption>
</figure>
<p>My team’s research into online harassment shows that when platforms fail to moderate effectively, the most marginalized people may withdraw from posting to social media as a way to keep themselves safe.</p>
<h2>Withdrawal responses</h2>
<p>In <a href="https://harassment.thedlrgroup.com/">various research projects since 2018</a>, we have interviewed scholars who have experienced online harassment, surveyed academics about their experiences with harassment, conducted in-depth reviews of literature detailing how knowledge workers experience online harassment, and reached out to institutions that employ knowledge workers who experience online harassment. </p>
<p>Overwhelmingly, throughout our various projects, we’ve noticed some common themes:</p>
<ul>
<li>Individuals are targeted for online harassment on platforms like Twitter simply because they are women or members of a minority group (racialized, gender non-conforming, disabled or otherwise marginalized). The topics people post about matter less than their identities in predicting the intensity of online harassment people are subjected to.</li>
<li>Men who experience online harassment often experience a different type of harassment than women or marginalized people do. Women, for example, tend to experience more sexualized harassment, such as rape threats.</li>
<li>When people experience harassment, they seek support from their organizations, social media platforms and law enforcement, but often find the support they receive is insufficient.</li>
<li>When people do not receive adequate support from their organizations, social media platforms and law enforcement, they adopt strategies to protect themselves, including withdrawing from social media.</li>
</ul>
<p>This last point is important, because our data shows that there is a very real risk of losing ideas in the unmoderated Twitter space that Musk says he wants to build in the name of free speech. </p>
<p>Or in other words, what Musk is proposing would likely make speech on Twitter less free than it is now, because people who cannot rely on social media platforms to protect them from online harassment tend to leave the platform when the consequences of online harassment become psychologically or socially destructive.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a woman holding a mobile phone has her forehead on a table" src="https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Research shows that when people receive online harassment on a social media platform, they are likely to withdraw from using it.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Arenas for debate</h2>
<p>Political economist John Stuart Mill famously wrote about <a href="https://www.michaelrectenwald.com/essays/john-stuart-mill-the-marketplace-of-ideas-and-minority-opinion">the marketplace of ideas</a>, suggesting that in an environment where ideas can be debated, the best ones will rise to the top. This is often used to justify opinions that social media platforms like Twitter should do away with moderation in order to encourage constructive debate. </p>
<p>This implies that bad ideas should be taken care of by a sort of invisible hand, in which people will only share and engage with the best content on Twitter, and the toxic content will be a small price to pay for a thriving online public sphere.</p>
<p>The assumption that good ideas would edge out the bad ones is both counter to Mill’s original writing, and the actual lived experience of posting to social media for people in minority groups. </p>
<p>Mill advocated that <a href="https://chicagounbound.uchicago.edu/roundtable/vol3/iss1/4">minority ideas be given artificial preference</a> in order to encourage constructive debate on a wide range of topics in the public interest. Importantly, this means that moderation of online harassment is key to a functioning marketplace of ideas.</p>
<h2>Regulation of harassment</h2>
<p>The idea that we need some sort of regulation of harassing speech online is borne out by our research. Our participants repeatedly told us that the consequences of online harassment were extremely damaging, ranging from burnout and an inability to complete their work to emotional and psychological trauma and even social isolation.</p>
<p>When targets of harassment experienced these outcomes, they often also experienced economic impacts, such as issues with career progression after being unable to complete work. Many of our participants tried reporting the harassment to social media platforms. If the support they received from the platform was dismissive or unhelpful, they felt less likely to engage in the future.</p>
<p>When people disengage from Twitter due to widespread harassment, we lose those voices from the very online public sphere that Musk says he wants to foster. In practice, this means that women and marginalized groups are most likely to be the people who are excluded from Musk’s free speech playground. </p>
<p>Given that our research participants have told us that they already feel Twitter’s approach to online harassment is limited at best, I would suggest that if we really want a marketplace of ideas on Twitter, we need more moderation, not less. For this reason, I’m happy that <a href="https://nypost.com/2022/04/15/twitter-pushes-back-on-elon-musks-takeover-with-poison-pill/">the Twitter Board of Directors is attempting to resist Musk’s hostile takeover</a>.</p>
<p class="fine-print"><em><span>Jaigris Hodson receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) Canada Research Chairs Program. </span></em></p>Elon Musk’s attempt to take over Twitter uses free speech as the motivation, but research shows that unregulated online spaces result in increased harassment for marginalized users.Jaigris Hodson, Associate Professor of Interdisciplinary Studies, Royal Roads UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1637232021-07-05T03:38:07Z2021-07-05T03:38:07ZFacebook’s failure to pay attention to non-English languages is allowing hate speech to flourish<figure><img src="https://images.theconversation.com/files/409607/original/file-20210705-42341-1moupdr.png?ixlib=rb-1.1.0&rect=0%2C0%2C723%2C632&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Facebook</span></span></figcaption></figure><p>If like <a href="https://www.theguardian.com/australia-news/2021/apr/22/facebook-accused-of-not-removing-hate-speech-in-complaint-under-australias-racial-discrimination-laws">many Australian Muslims</a> you have reported hate speech to Facebook and received an automated response saying it doesn’t breach the platform’s <a href="https://www.facebook.com/communitystandards/hate_speech">community standards</a>, you are not alone.</p>
<p>We and our team are the first Australian social scientists to receive funding through Facebook’s <a href="https://research.fb.com/blog/2019/05/announcing-the-winners-of-the-content-policy-research-on-social-media-platforms-research-awards/">content policy research awards</a>, which we used to investigate <a href="https://ses.library.usyd.edu.au/handle/2123/25116.3">hate speech</a> on LGBTQI+ community pages in five Asian countries: India, Myanmar, Indonesia, the Philippines and Australia.</p>
<p>We looked at three aspects of hate speech regulation in the Asia Pacific region over 18 months. First we mapped hate speech law in our case study countries, to understand how this problem might be legally countered. We also looked at whether Facebook’s definition of “hate speech” included all recognised forms and contexts for this troubling behaviour.</p>
<p>In addition, we mapped Facebook’s content regulation teams, speaking to staff about how the company’s policies and procedures worked to identify emerging forms of hate.</p>
<p>Even though Facebook funded our study, it said for privacy reasons it could not give us access to a dataset of the hate speech it removes. We were therefore unable to test how effectively its in-house moderators classify hate.</p>
<p>Instead, we captured posts and comments from the top three LGBTQI+ public Facebook pages in each country, to look for hate speech that had either been missed by the platform’s machine intelligence filters or human moderators. </p>
<h2>Admins feel let down</h2>
<p>We interviewed the administrators of these pages about their experience of moderating hate, and what they thought Facebook could do to help them reduce abuse.</p>
<p>They told us Facebook would often reject their reports of hate speech, even when the post clearly breached its <a href="https://www.facebook.com/communitystandards/hate_speech">Community Standards</a>. In some cases messages that were originally removed would be re-posted on appeal.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/409548/original/file-20210704-27-r7c0ms.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Hate speech complaint report rejected by Facebook" src="https://images.theconversation.com/files/409548/original/file-20210704-27-r7c0ms.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/409548/original/file-20210704-27-r7c0ms.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1235&fit=crop&dpr=1 600w, https://images.theconversation.com/files/409548/original/file-20210704-27-r7c0ms.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1235&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/409548/original/file-20210704-27-r7c0ms.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1235&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/409548/original/file-20210704-27-r7c0ms.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1552&fit=crop&dpr=1 754w, https://images.theconversation.com/files/409548/original/file-20210704-27-r7c0ms.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1552&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/409548/original/file-20210704-27-r7c0ms.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1552&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An example of a hate speech complaint report rejected by Facebook.</span>
<span class="attribution"><span class="source">Queerala Facebook site</span></span>
</figcaption>
</figure>
<p>Most page admins said the so-called “flagging” process rarely worked, and they found it disempowering. They wanted Facebook to consult with them more to get a better idea of the types of abuse they see posted and why they constitute hate speech in their cultural context. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/revenge-of-the-moderators-facebooks-online-workers-are-sick-of-being-treated-like-bots-125127">Revenge of the moderators: Facebook's online workers are sick of being treated like bots</a>
</strong>
</em>
</p>
<hr>
<h2>Defining hate speech is not the problem</h2>
<p>Facebook has long had a problem with the <a href="https://thediplomat.com/2020/08/facebooks-problematic-history-in-south-asia/">scale and scope of hate speech</a> on its platform in Asia. For example, while it has banned some <a href="https://time.com/6072272/facebook-sanatan-sanstha-india/">Hindu extremists</a>, it has left their pages online. </p>
<p>However, during our study we were pleased to see that Facebook did <a href="https://www.facebook.com/communitystandards/recentupdates/">broaden its definition</a> of hate speech, which now captures a wider range of hateful behaviour. It also explicitly recognises that what happens online can trigger offline violence.</p>
<p>It’s worth noting that in the countries we focused on, “hate speech” is seldom prohibited by law in precise terms. We found other regulations, such as cybersecurity or religious tolerance laws, could be used to act against hate speech, but instead tended to be used to suppress political dissent.</p>
<p>We concluded that Facebook’s problem is not in defining hate, but being unable to identify certain types of hate, such as that posted in <a href="https://www.oxfordbibliographies.com/view/document/obo-9780199772810/obo-9780199772810-0176.xml">minority languages</a> and regional dialects. It also often fails to respond appropriately to user reports of hate content. </p>
<h2>Where hate was worst</h2>
<p>Media reports have shown Facebook struggles to automatically identify hate <a href="https://time.com/5739688/facebook-hate-speech-languages/">posted in minority languages</a>. It has <a href="https://www.newyorker.com/magazine/2020/10/19/why-facebook-cant-fix-itself">failed to provide training materials</a> to its own moderators in local languages, even though many are from Asia Pacific countries where English is not the first language. </p>
<p>In the Philippines and Indonesia in particular, we found LGBTIQ+ groups are exposed to an unacceptable level of discrimination and intimidation. This includes death threats, targeting of Muslims and threats of stoning or beheading. </p>
<p>On Indian pages, Facebook filters failed to capture vomiting emojis posted in response to gay wedding photos, and rejected some very clear reports of vilification. </p>
<p>In Australia, on the other hand, we found no unmoderated hate speech – only other types of insensitive and inappropriate comments. This could indicate that less abuse gets posted, or that English-language moderation by Facebook or page administrators is more effective.</p>
<p>Similarly in Myanmar LGBTIQ+ groups experienced very little hate speech. But we are aware Facebook is working hard to <a href="https://about.fb.com/news/2018/11/myanmar-hria/">reduce hate speech on its platform there</a>, in the wake of it being used to <a href="https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/">persecute the Rohingya Muslim minority.</a> </p>
<p>Also, it’s likely gender diversity isn’t as volatile a subject in Myanmar as it is in <a href="https://www.reuters.com/world/india/indian-court-calls-sweeping-reforms-respect-lgbt-rights-2021-06-07/">India</a>, <a href="https://www.nbcnews.com/feature/nbc-out/indonesia-proposes-bill-force-lgbtq-people-rehabilitation-n1146861">Indonesia</a> and the Philippines. In these countries LGBTIQ+ rights are highly politicised.</p>
<p>Facebook has taken some <a href="https://www.facebook.com/business/news/sharing-actions-on-stopping-hate">important steps towards tackling hate speech</a>. However, we’re concerned COVID-19 has forced the platform to become <a href="https://www.politico.eu/article/facebook-content-moderation-automation/">more reliant on machine moderation</a>. This comes at a time when it can automatically identify hate in only around 50 languages – even though <a href="https://www.tomedes.com/translator-hub/asian-languages">thousands are spoken every day</a> across the region.</p>
<h2>What we recommend</h2>
<p>Our report to Facebook outlines several key recommendations to help improve its approach to combating hate on its platform. Overall, we have urged the company to convene more regularly with persecuted groups in the region, so it can learn more about hate in their local contexts and languages.</p>
<p>This needs to happen alongside a boost to the numbers of its country policy specialists and in-house moderators with minority language expertise.</p>
<p>Mirroring <a href="https://en.efhr.eu/2018/01/29/efhr-welcomed-trusted-partner-channel-facebook">efforts in Europe</a>, Facebook also needs to develop and publicise its trusted partners channel. This provides visible, official hate speech-reporting partner organisations through which people can directly report hate activities to Facebook during crises such as the Christchurch mosque attacks.</p>
<p>More broadly, we would like to see governments and NGOs cooperate to set up an Asian regional hate speech monitoring trial, similar to one <a href="https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en.">organised by the European Union</a>. </p>
<p>Following the EU example, such an initiative could help identify urgent trends in hate speech across the region, strengthen Facebook’s local reporting partnerships, and reduce the overall incidence of hateful content on Facebook.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-facebook-created-its-own-supreme-court-for-judging-content-6-questions-answered-160349">Why Facebook created its own ‘supreme court’ for judging content – 6 questions answered</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/163723/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Fiona R Martin and Aim Sinpeng from the University of Sydney, together with Katharine Gelber and Kirril Shields from the University of Queensland received Content Policy on Social Media research funding from Facebook. Martin is also a chief investigator on the Australian Research Council funded Discovery Project "Platform Governance: Rethinking Internet Regulation as Media Policy” (DP190100222).</span></em></p><p class="fine-print"><em><span>Aim Sinpeng does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>We found LGBTIQ+ groups are exposed to an unacceptable level of discrimination and intimidation, including death threats, targeting of Muslims and threats of stoning or beheading.Fiona R Martin, Associate Professor in Convergent and Online Media, University of SydneyAim Sinpeng, Lecturer in Government and International Relations, University of SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1603492021-05-05T21:08:00Z2021-05-05T21:08:00ZWhy Facebook created its own ‘supreme court’ for judging content – 6 questions answered<figure><img src="https://images.theconversation.com/files/399024/original/file-20210505-19-gg5zpf.jpg?ixlib=rb-1.1.0&rect=94%2C110%2C5184%2C3403&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Facebook's new Oversight Board affirmed the social media network's ban on Donald Trump.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/FacebookOversightPanel/eb4ce13be55d4a0b82cb1607f0a2d5f0/photo?Query=Facebook%20oversight%20board&mediaType=photo&sortBy=arrivaldatetime:desc&dateRange=Anytime&totalCount=14&currentItemNo=5">AP Photo/Jeff Chiu</a></span></figcaption></figure><p><em>Facebook’s quasi-independent Oversight Board on May 5, 2021, <a href="https://www.oversightboard.com/news/226612455899839-oversight-board-upholds-former-president-trump-s-suspension-finds-facebook-failed-to-impose-proper-penalty/">upheld the company’s suspension of former President Donald Trump</a> from the platform and Instagram. The decision came four months after Facebook CEO Mark Zuckerberg <a href="https://www.washingtonpost.com/technology/2021/05/03/facebook-trump-decision-faq">banned Trump “indefinitely” for his role</a> in inciting the Jan. 6 riot at the U.S. Capitol. The board chastised Facebook for failing to either set an end date for the suspension or permanently ban Trump and gave the social media company six months to resolve the matter.</em> </p>
<p><em>What is this Oversight Board that made one of the most politically perilous decisions Facebook has ever faced? Why did the company create it, and is it a good idea? We asked <a href="https://scholar.google.com/citations?user=loPMxzAAAAAJ&hl=en&oi=ao">Siri Terjesen</a>, an expert on corporate governance, to answer these and several other questions.</em> </p>
<h2>1. What is the Facebook Oversight Board?</h2>
<p>The Oversight Board was set up to give users an independent third party to whom they can appeal Facebook moderation decisions, as well as to help set the policies that govern these decisions. The <a href="https://www.washingtonpost.com/technology/2021/05/03/facebook-trump-decision-faq/">idea was first proposed</a> by Zuckerberg in 2018 after a discussion with Harvard Law Professor Noah Feldman, and the board began work in October 2020, funded by a US$130 million trust provided by Facebook to cover the initial six years of operating expenses.</p>
<p><a href="https://oversightboard.com">According to the board</a>, it “was created to help Facebook answer some of the most difficult questions around freedom of expression online: what to take down, what to leave up, and why.” The Oversight Board has final decision-making authority, even above the board of directors, and its decisions are binding on Facebook. </p>
<p>The Oversight Board has <a href="https://www.oversightboard.com/meet-the-board/">20 members</a> from around the world, drawn from a diverse range of disciplines and backgrounds – such as journalism, human rights and law – and representing different political perspectives. It even includes a former prime minister. The goal is to eventually expand the board to 40 members in total.</p>
<figure class="align-center ">
<img alt="former President Donald Trump raises his right arm as his hand forms a fist during a speech, with a row of US flags behind him" src="https://images.theconversation.com/files/399043/original/file-20210505-17-f0ioku.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/399043/original/file-20210505-17-f0ioku.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/399043/original/file-20210505-17-f0ioku.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/399043/original/file-20210505-17-f0ioku.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/399043/original/file-20210505-17-f0ioku.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/399043/original/file-20210505-17-f0ioku.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/399043/original/file-20210505-17-f0ioku.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">In a statement, Trump called the Oversight Board decision a ‘total disgrace.’</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/FacebookTrumpExplainer/789bc4439a9f4f9d8c736ff17901e404/photo?Query=facebook&mediaType=photo&sortBy=arrivaldatetime:desc&dateRange=Anytime&totalCount=9368&currentItemNo=1">AP Photo/Jacquelyn Martin</a></span>
</figcaption>
</figure>
<h2>2. What other decisions has it made?</h2>
<p>So far, <a href="https://oversightboard.com/decision/">the board has reviewed 10 Facebook decisions</a>, including the one involving Trump. The decisions involved a variety of types of content, such as posts that were removed because they were deemed <a href="https://oversightboard.com/decision/FB-S6NRTDAJ/">racist</a>, <a href="https://oversightboard.com/decision/IG-7THR3SI1/">indecent</a> or <a href="https://oversightboard.com/decision/FB-R9K87402/">intended to incite violence</a>. It overturned Facebook’s ruling in six of the cases and upheld it in three of them. In the 10th case, the user deleted the post that Facebook had removed, which ended the board’s review. </p>
<p>In cases where the board overruled Facebook, the posts that had been removed were reinstated. And the board sometimes urged the company to clarify or revise its guidelines.</p>
<p>Given that Facebook is expected to take <a href="https://reason.com/volokh/2021/02/17/the-facebook-oversight-board/">20 to 30 billion enforcement actions</a> in 2021 alone, it’s unlikely the Oversight Board will be able to handle more than a handful of the most high-profile cases, like that of Trump. It’s one of the reasons the Oversight Board is dubbed “<a href="https://www.newyorker.com/tech/annals-of-technology/inside-the-making-of-facebooks-supreme-court">Facebook’s Supreme Court</a>.”</p>
<h2>3. Is it a model other social media companies are likely to follow?</h2>
<p>As a platform company, Facebook is unique.</p>
<p>It’s a social media giant that must monitor a global operation that <a href="https://investor.fb.com/financials/?section=annualreports">generates over $86 billion in revenue</a>, <a href="https://www.statista.com/statistics/273563/number-of-facebook-employees">employs 58,600 people</a> and serves <a href="https://www.oberlo.com/blog/facebook-statistics">more than 2.8 billion active monthly users</a> – more than a third of the world’s population – as well as millions of advertisers. Very few companies operate in a space that involves user content moderation, and none at this scale. Other platform companies have considerably less content, and usually only in one language, whereas Facebook is available in <a href="https://investor.fb.com/financials/?section=annualreports">100 languages</a>. </p>
<p>Given Facebook’s shareholder-elected corporate board of directors includes just 10 people, each of whom has their own demanding day job, it is not surprising to me that Zuckerberg decided to set up an outside panel to make decisions about speech and online safety.</p>
<p>It’s unlikely, however, that other companies will ever have a similar type of board. The Oversight Board has been extremely resource intensive. It <a href="https://oversightboard.com">took over two years to establish</a> through a series of 22 roundtable meetings with participants in 88 countries, six in-depth workshops, 250 one-on-one discussions and 1,200 submissions – not to mention its high cost of $130 million, which is meant to last six years.</p>
<h2>4. Was it a good idea, from a corporate governance standpoint?</h2>
<p>A growing body of research <a href="https://www.doi.org/10.5465/19416520.2016.1120957">questions whether directors on corporate boards can fulfill their oversight responsibilities</a> on their own, due to the sheer amount of information that must be obtained, processed and shared. </p>
<p>While I think we will see more corporate boards outsource some decisions and processes to external panels – as a small board cannot be expected to have the requisite knowledge and skills on all topics – few corporations are likely to follow Facebook’s lead and grant an outside body the power to make unilateral decisions. </p>
<p>Since only the board of directors is beholden to a company’s shareholders, its directors must ultimately take final responsibility for corporate decisions.</p>
<figure class="align-center ">
<img alt="Facebook CEO Mark Zuckerberg looks to his right during a hearing on Capitol Hill in 2019" src="https://images.theconversation.com/files/399044/original/file-20210505-23-1ajwvt5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/399044/original/file-20210505-23-1ajwvt5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=416&fit=crop&dpr=1 600w, https://images.theconversation.com/files/399044/original/file-20210505-23-1ajwvt5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=416&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/399044/original/file-20210505-23-1ajwvt5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=416&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/399044/original/file-20210505-23-1ajwvt5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=523&fit=crop&dpr=1 754w, https://images.theconversation.com/files/399044/original/file-20210505-23-1ajwvt5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=523&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/399044/original/file-20210505-23-1ajwvt5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=523&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Zuckerberg may still face political blowback because of the Oversight Board’s decision.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/FacebookOversightPanel/acf2cd51a2fc4dcfab8cff7ab466f749/photo?Query=Facebook%20oversight%20board&mediaType=photo&sortBy=arrivaldatetime:desc&dateRange=Anytime&totalCount=14&currentItemNo=13">AP Photo/Andrew Harnik</a></span>
</figcaption>
</figure>
<h2>5. Does the Oversight Board shield Facebook from political or legal fallout?</h2>
<p>While it’s likely that some at Facebook hoped shifting its thorniest decisions to an outside body would insulate the company, its executives and its corporate board members from political or legal problems, the Trump decision shows it won’t actually do that.</p>
<p>Certainly the decision to utilize an outside oversight body might be interpreted as political, as all 10 Facebook board directors <a href="https://investor.fb.com/leadership-and-governance/?section=board">live and work</a> predominantly in the United States and might be hesitant to vote to make decisions like restricting the freedom of expression of a former president who <a href="https://www.usnews.com/news/politics/articles/2021-04-27/president-trump-losing-support-from-republicans-poll-finds">still commands support among many Americans</a> – and <a href="https://www.cfr.org/blog/2020-election-numbers">won 47% of the popular vote</a> in the last election. </p>
<p>But whether Facebook makes the decision itself or outsources to an independent board, Facebook will still face the consequences if the decision to uphold the Trump ban alienates Americans or people around the world who feel it is an attack on their freedom of expression. </p>
<p>People may leave Facebook for other platforms such as <a href="https://techcrunch.com/2021/01/09/parler-jumps-to-no-1-on-app-store-after-facebook-and-twitter-bans/">Parler</a>, <a href="https://reason.com/2018/10/29/ready-to-get-off-facebook-reason-reviews/">Gab</a> and <a href="https://www.theverge.com/2021/1/7/22218989/signal-new-signups-whatsapp-facebook-privacy-controversy-elon-musk">Signal</a>, as <a href="https://fortune.com/2021/01/11/mewe-gab-rumble-growth-parler-trump-bans-social-media-violence/">many have already done</a> since the initial Trump ban in January – and knowing an outside body made the decision won’t stop them. </p>
<p>And a poor “political” decision <a href="https://www.wsj.com/articles/as-decision-on-trump-looms-facebook-preps-its-advertisers-11620151529">could drive away some advertisers</a> and make it harder to hire and retain employees, regardless of who made it.</p>
<h2>6. How are other social media companies handling these issues differently?</h2>
<p>Twitter CEO Jack Dorsey made an internal decision to <a href="https://www.nbcnews.com/tech/tech-news/twitter-permanently-bans-president-donald-trump-n1253588">permanently suspend Trump</a> from his company’s platform on Jan. 8, 2021. While Dorsey acknowledged that the decision <a href="https://www.foxnews.com/media/twitter-ceo-jack-dorsey-defends-trump-ban-but-admits-his-companys-power-sets-a-dangerous-precedent">set a “dangerous precedent”</a>, Twitter, like other social media companies, doesn’t have an appeals process for that kind of decision.</p>
<p>Some newer companies, such as <a href="https://www.usatoday.com/story/tech/2021/01/20/mewe-social-network-gains-members-touts-privacy-over-facebook/4228797001/">MeWe</a> and <a href="https://www.foxbusiness.com/technology/youtube-rival-rumble-growth-ceo">Rumble</a>, offer more lax content moderation in order to allow greater freedom of expression for users.</p>
<p><a href="https://gab.com/">Gab</a> describes itself as “A social network that champions free speech, individual liberty and the free flow of information online. All are welcome.” <a href="https://legal.parler.com/documents/guidelines.pdf">Parler’s content guidelines</a> are even more basic and keeps content moderation to an “absolute minimum. We prefer to leave decisions about what is seen and who is heard to each individual.” </p>
<p><a href="https://www.businessinsider.com/gab-reports-growth-in-the-midst-of-twitter-bans-2021-1">Gab</a> and <a href="https://qz.com/1982895/parler-needs-apple-so-much-its-actually-moderating-more-content/,">Parler</a> are presently banned from the app stores of both Apple and Google due to a lack of content moderation.</p>
<p>[<em>You’re smart and curious about the world. So are The Conversation’s authors and editors.</em> <a href="https://theconversation.com/us/newsletters/the-daily-3?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=youresmart">You can read us daily by subscribing to our newsletter</a>.]</p>
<p class="fine-print"><em><span>Siri Terjesen has received research and review funding from U.S., Norwegian, and Swedish governments, as well as several private foundations.</span></em></p>The social media giant’s third-party review panel upheld Facebook’s ban on Donald Trump. A corporate governance expert explains why Facebook created the Oversight Board.Siri Terjesen, Phil Smith Professor of Entrepreneurship & Associate Dean, Research & External Relations, Florida Atlantic UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1570532021-03-16T18:52:01Z2021-03-16T18:52:01ZSocial media has huge problems with free speech and moderation. Could decentralised platforms fix this?<figure><img src="https://images.theconversation.com/files/389771/original/file-20210316-16-udbu79.jpg?ixlib=rb-1.1.0&rect=132%2C114%2C3702%2C2437&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Over the past few months, Twitter took down the account of the then-President of the United States and Facebook temporarily stopped users from sharing Australian media content. This begs the question: do social media platforms wield too much power? </p>
<p>Whatever your personal view, a variety of “decentralised” social media networks now promise to be the custodians of free-spoken, censorship-resistant and crowd-curated content, free of corporate and political interference. </p>
<p>But do they live up to this promise?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/trumps-twitter-tantrum-may-wreck-the-internet-139660">Trump’s Twitter tantrum may wreck the internet</a>
</strong>
</em>
</p>
<hr>
<h2>Cooperatively governed platforms</h2>
<p>In “decentralised” social media networks, control is actively shared across many servers and users, rather than a single corporate entity such as Google or Facebook. </p>
<p>This can make a network more resilient, as there is no central point of failure. But it also means no single arbiter is in charge of moderating content or banning problematic users.</p>
<p>Some of the most prominent decentralised systems use blockchain technology (best known from the <a href="https://theconversation.com/why-is-bitcoins-price-at-an-all-time-high-and-how-is-its-value-determined-152616">Bitcoin cryptocurrency</a>). A blockchain is a kind of distributed online ledger hosted and updated by thousands of computers and servers around the world.</p>
<p>All of these participating computers must agree on the contents of the ledger, so it’s almost impossible for any single node in the network to meddle with it without the updates being rejected.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/389773/original/file-20210316-24-4i7ir3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/389773/original/file-20210316-24-4i7ir3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/389773/original/file-20210316-24-4i7ir3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/389773/original/file-20210316-24-4i7ir3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/389773/original/file-20210316-24-4i7ir3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/389773/original/file-20210316-24-4i7ir3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/389773/original/file-20210316-24-4i7ir3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/389773/original/file-20210316-24-4i7ir3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A blockchain is a type of ledger or database which is ‘immutable’, meaning its data can’t be altered. As new data comes in it is entered into a new block, which is then locked into an existing chain of blocks.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
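<p>To make the idea above concrete, here is a minimal sketch (in Python, for illustration only) of a hash-chained ledger. It is not the code of any real blockchain and it omits the network consensus step entirely; it simply shows why quietly editing an earlier block breaks the chain of links that later blocks depend on.</p>
<pre><code>import hashlib, json, time

def block_hash(block):
    """Hash the block's contents, including its link to the previous block."""
    body = {k: block[k] for k in ("timestamp", "data", "previous_hash")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(data, previous_hash):
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = block_hash(block)
    return block

def is_valid(chain):
    """Every block must point at the recomputed hash of the block before it."""
    return all(chain[i]["previous_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = [make_block("genesis post", previous_hash="0" * 64)]
chain.append(make_block("a new social media post", chain[-1]["hash"]))
print(is_valid(chain))   # True

chain[0]["data"] = "quietly edited post"
print(is_valid(chain))   # False: the edit no longer matches the recorded links
</code></pre>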
<h2>Gathering ‘Steem’</h2>
<p>One of the most famous blockchain social media networks is <a href="https://steemit.com/">Steemit</a>, a decentralised application that runs on the <a href="https://steem.com/">Steem</a> blockchain. </p>
<p>Because the Steem blockchain has its own cryptocurrency, popular posters can be rewarded by readers through micropayments. Once content is posted on the Steem blockchain, it can never be removed. </p>
<p>Not all decentralised social media networks are built on blockchains, however. The <a href="https://the-federation.info/">Fediverse</a> is an ecosystem of many servers that are independently owned, but which can communicate with one another and share data. </p>
<p><a href="https://joinmastodon.org/">Mastodon</a> is the most popular part of the Fediverse. Currently with close to three million <a href="https://the-federation.info/">users across more than 3,000 servers</a>, this open-source platform is made up of a network of communities, similar to Reddit or Tumbler.</p>
<p>Users can create their own “instances” of Mastodon — with many separate instances forming the wider network — and share content by posting 500-character-limit “toots” (yes, toots). Each instance is privately operated and moderated, but its users can still communicate with other servers if they want to.</p>
<h2>What do we gain?</h2>
<p>A lot of concern around social media involves what content is being monetised and who benefits. Decentralised platforms often seek to shift the point of monetisation. </p>
<p>Platforms such as Steemit, Minds and <a href="https://d.tube/">DTube</a> (another platform built on the <a href="https://steem.com/">Steem social blockchain</a>) claim to flip this relationship by rewarding users when their content is shared.</p>
<p>Another purported benefit of decentralised social media is freedom of speech, as there’s no central point of censorship. In fact, many decentralised networks in recent years have been developed in response to moderation practices.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/parler-what-you-need-to-know-about-the-free-speech-twitter-alternative-142268">Parler: what you need to know about the 'free speech' Twitter alternative</a>
</strong>
</em>
</p>
<hr>
<p>But even the most pro-free-speech platforms face challenges. There are always malicious people, such as violent extremists, terrorists and child pornographers, who should not be allowed to post at will. So in practice, every decentralised network requires some sort of moderation.</p>
<p>Mastodon provides a <a href="https://mastodon.social/about/more">set of guidelines</a> for user conduct and has moderators <a href="https://docs.joinmastodon.org/admin/moderation/">within particular servers</a> (or communities). They have the power to disable, silence or suspend user access and even to apply server-wide moderation. </p>
<p>As such, each server sets its own rules. However, if a server is “misbehaving”, the entire server can be put under a <a href="https://docs.joinmastodon.org/admin/moderation/">domain block</a>, with varying degrees of severity. Mastodon publicly lists the moderated servers and the reason for restriction, such as spreading conspiracy theories or hate speech.</p>
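<p>The sketch below (again in Python, and purely illustrative – it is not Mastodon’s actual software or API, and the rule names and severity labels are our own assumptions) shows the shape of this arrangement: each server applies its own local rules, and can additionally block a “misbehaving” remote domain outright.</p>
<pre><code>from dataclasses import dataclass, field

@dataclass
class Server:
    domain: str
    banned_words: set = field(default_factory=set)     # this server's own rules
    domain_blocks: dict = field(default_factory=dict)  # remote domain -> severity

    def allows_post(self, text, from_domain):
        """Apply any domain block first, then this server's local content rules."""
        if self.domain_blocks.get(from_domain) == "suspend":
            return False  # fully defederated: nothing from that domain is shown
        return not any(word in text.lower() for word in self.banned_words)

home = Server("example.social", banned_words={"slur"})
home.domain_blocks["spam.example"] = "suspend"   # hypothetical blocked server

print(home.allows_post("hello fediverse", from_domain="friendly.example"))  # True
print(home.allows_post("hello", from_domain="spam.example"))                # False
</code></pre>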
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/389768/original/file-20210316-21-159elu6.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Mastadon's communities sign-up page" src="https://images.theconversation.com/files/389768/original/file-20210316-21-159elu6.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/389768/original/file-20210316-21-159elu6.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=421&fit=crop&dpr=1 600w, https://images.theconversation.com/files/389768/original/file-20210316-21-159elu6.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=421&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/389768/original/file-20210316-21-159elu6.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=421&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/389768/original/file-20210316-21-159elu6.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=529&fit=crop&dpr=1 754w, https://images.theconversation.com/files/389768/original/file-20210316-21-159elu6.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=529&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/389768/original/file-20210316-21-159elu6.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=529&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Mastadon’s communities sign-up page says the platform is ‘committed to active moderation against racism, sexism and transphobia’.</span>
<span class="attribution"><span class="source">Screenshot/Mastadon</span></span>
</figcaption>
</figure>
<p>Some systems are harder to moderate. Blockchain-based social network Minds claims to base its <a href="https://www.minds.com/content-policy">content policy</a> on the First Amendment of the US constitution. The platform attracted <a href="https://www.engadget.com/2018-04-20-minds-anti-facebook-crypto-social-network-extreme-content.html">controversy</a> for hosting <a href="https://www.vice.com/en/article/wjvp8y/minds-the-anti-facebook-has-no-idea-what-to-do-about-all-the-neo-nazis">neo-Nazi groups</a>. </p>
<p>Users who violate a rule receive a “strike”. Where the violation relates to “not safe for work” (NSFW) content, three strikes may result in the user being tagged under an NSFW filter. If this happens, other users must opt in to view the <a href="https://www.minds.com/minds/blog/power-to-the-people-the-minds-jury-system-975486713993859072">NSFW content</a>, for “total control” of their feed. </p>
<p>Minds’s content policy states that NSFW content excludes posts of an illegal nature; these result in an immediate user ban and removal of the content. If a user wants to appeal a decision, the verdict comes from a randomly selected <a href="https://www.minds.com/content-policy">jury</a> of users.</p>
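<p>As a rough illustration of that flow – the function names and the three-strike threshold below are our own simplification, not Minds’ actual policy engine – the logic reads something like this:</p>
<pre><code>def moderate(user, post):
    """Simplified strike logic: illegal content means a ban; NSFW content accrues strikes."""
    if post.get("illegal"):
        user["banned"] = True               # illegal posts: immediate ban and removal
        return "removed"
    if post.get("nsfw"):
        user["strikes"] = user.get("strikes", 0) + 1
        if user["strikes"] >= 3:            # three strikes: account tagged NSFW
            user["nsfw_filtered"] = True
    return "behind opt-in NSFW filter" if user.get("nsfw_filtered") else "visible"

def appeal(decision, jury_votes_to_overturn):
    """A randomly selected jury of users decides appeals by simple majority."""
    overturned = sum(jury_votes_to_overturn) > len(jury_votes_to_overturn) / 2
    return "overturned" if overturned else decision

alice = {}
for _ in range(3):
    print(moderate(alice, {"nsfw": True}))    # visible, visible, behind opt-in NSFW filter
print(appeal("removed", [True, True, False])) # majority of jurors vote to overturn
</code></pre>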
<p>Even blockchain-based social media networks have content moderation systems. For example, <a href="https://peepeth.com/about">Peepeth</a> has a <a href="https://peepeth.com/a/terms">code of conduct</a> adapted from a speech by Vietnamese Thiền Buddhist monk and peace activist <a href="https://en.wikipedia.org/wiki/Th%C3%ADch_Nh%E1%BA%A5t_H%E1%BA%A1nh">Thích Nhất Hạnh</a>. </p>
<p>“Peeps” falling afoul of the code are removed from the main feed accessible from the Peepeth website. But since all content is recorded on the blockchain, it continues to be accessible to those with the technical know-how to retrieve it. </p>
<p>Steemit will also delete illegal or harmful content from its user-accessible feed, but the content remains on the Steem blockchain indefinitely.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/reddit-tackles-revenge-porn-and-celebrity-nudes-38112">Reddit tackles 'revenge porn' and celebrity nudes</a>
</strong>
</em>
</p>
<hr>
<h2>The search for open <em>and</em> safe platforms continues</h2>
<p>While some decentralised platforms may claim to offer a free-for-all, the reality of using them shows that some level of moderation is both inevitable and necessary, even for the most censorship-resistant networks. A host of moral and legal obligations are unavoidable.</p>
<p>Traditional platforms including Twitter and Facebook rely on the <a href="https://kelsienabben.medium.com/grounds-for-conspiracy-assessing-censorship-resistance-in-decentralisation-platforms-f6b317d5ad7f">moral responsibility</a> of a central authority. At the same time, they are the target of <a href="https://theconversation.com/googles-and-facebooks-loud-appeal-to-users-over-the-news-media-bargaining-code-shows-a-lack-of-political-power-154379">political and social pressure</a>. </p>
<p>Decentralised platforms have had to come up with more complex, and in some ways less satisfying, moderation techniques. But despite being innovative, they don’t really resolve the tension between moderating those who wish to cause harm and maximising free speech. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/googles-and-facebooks-loud-appeal-to-users-over-the-news-media-bargaining-code-shows-a-lack-of-political-power-154379">Google's and Facebook’s loud appeal to users over the news media bargaining code shows a lack of political power</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/157053/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Chris Berg receives funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Elizabeth Morton and Marta Poblet do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Imagine if Facebook’s content was hosted on a blockchain — across many thousands of ordinary computers — and governed equally by each of them, rather than Mark Zuckerberg.Chris Berg, Principal Research Fellow and Co-Director, RMIT Blockchain Innovation Hub, RMIT UniversityElizabeth Morton, Research Fellow of the RMIT Blockchain Innovation Hub, Lecturer Taxation, RMIT UniversityMarta Poblet, Associate Professor, Graduate School of Business and Law, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1531192021-02-22T13:46:23Z2021-02-22T13:46:23ZFacebook’s free speech myth is dead – and regulators should take notice<figure><img src="https://images.theconversation.com/files/385404/original/file-20210221-19-18q8ra9.jpg?ixlib=rb-1.1.0&rect=13%2C0%2C4479%2C2775&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.alamy.com/facebook-logo-seen-on-the-smartphone-and-blurred-australian-flag-on-the-background-screen-concept-stafford-united-kingdom-february-18-2021-image405794748.html?pv=1&stamp=2&imageid=8C906DAB-C458-4A6D-90AB-AD9DBE690C0C&p=1396470&n=0&orientation=0&pn=1&searchtype=0&IsFromSearch=1&srch=foo%3dbar%26st%3d0%26pn%3d1%26ps%3d100%26sortby%3d2%26resultview%3dsortbyPopular%26npgs%3d0%26qt%3dfacebook%2520australia%26qt_raw%3dfacebook%2520australia%26lic%3d3%26mr%3d0%26pr%3d0%26ot%3d0%26creative%3d%26ag%3d0%26hc%3d0%26pc%3d%26blackwhite%3d%26cutout%3d%26tbar%3d1%26et%3d0x000000000000000000000%26vp%3d0%26loc%3d0%26imgt%3d0%26dtfr%3d%26dtto%3d%26size%3d0xFF%26archive%3d1%26groupid%3d%26pseudoid%3d%26a%3d%26cdid%3d%26cdsrt%3d%26name%3d%26qn%3d%26apalib%3d%26apalic%3d%26lightbox%3d%26gname%3d%26gtype%3d%26xstx%3d0%26simid%3d%26saveQry%3d%26editorial%3d%26nu%3d%26t%3d%26edoptin%3d%26customgeoip%3dGB%26cap%3d1%26cbstore%3d1%26vd%3d0%26lb%3d%26fi%3d2%26edrf%3d0%26ispremium%3d1%26flip%3d0%26pl%3d">mundissima/Alamy Stock Photo</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure><p>Facebook’s recent decision to <a href="https://apnews.com/article/facebook-blocks-australia-news-access-fed95e78e8bf30f167eb1a2d893ac89c">block its Australian users</a> from sharing or viewing news content provoked a worldwide backlash and accusations of hubris and bullying. Although the company has now reversed its decision following an agreement with the Australian government, the row has exposed the fragility of Facebook’s founding myth: that Mark Zuckerberg’s brainchild is a force for good, providing a public space for people to connect, converse and cooperate.</p>
<p>An inclusive public space in the good times, Facebook has yet again proved willing to eject and exclude in the bad times – as a private firm ultimately has the right to do. Facebook seems to be a bastion of free speech up until the moment its revenue is endangered. At that point, as in the case of the Australian news ban, it defaults to a private space.</p>
<p><a href="https://journals.uic.edu/ojs/index.php/fm/article/view/10603/9549">My recent paper</a> explores social media’s spatial hybridity, arguing that we must stop seeing companies like Facebook as public spaces and “platforms” for free speech. Equally, given their ubiquity and dominance, we shouldn’t see them solely as private spaces, either. Instead, these companies should be defined as “corpo-civic” spaces – a mixture of the two – and regulated as such: by internal guidelines as well as external laws.</p>
<p>Facebook’s disagreement with the Australian government was over a <a href="https://www.crn.com.au/news/accc-warns-google-facebook-laws-are-just-the-start-559690">new set of laws</a> drawn up there to counter big tech’s monopoly power. The law in question responds to news companies’ complaints that they are losing advertising revenue to dominant content-sharing platforms such as Facebook and Google. <a href="https://parlinfo.aph.gov.au/parlInfo/download/legislation/ems/r6652_ems_2fe103c0-0f60-480b-b878-1c8e96cf51d2/upload_pdf/JC000725.pdf;fileType=application%2Fpdf">The law</a> compels Facebook to agree a fee with news companies in an effort to reimburse them for the advertising revenue they lose to Facebook.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/385401/original/file-20210221-19-xbv39x.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A graph showing how Facebook have a growing share of display advertising in Australia" src="https://images.theconversation.com/files/385401/original/file-20210221-19-xbv39x.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/385401/original/file-20210221-19-xbv39x.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=311&fit=crop&dpr=1 600w, https://images.theconversation.com/files/385401/original/file-20210221-19-xbv39x.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=311&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/385401/original/file-20210221-19-xbv39x.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=311&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/385401/original/file-20210221-19-xbv39x.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=391&fit=crop&dpr=1 754w, https://images.theconversation.com/files/385401/original/file-20210221-19-xbv39x.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=391&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/385401/original/file-20210221-19-xbv39x.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=391&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Facebook’s growing share of display advertising revenue in Australia is one reason for the new law.</span>
<span class="attribution"><a class="source" href="https://www.accc.gov.au/system/files/ACCC%20Digital%20Platforms%20Service%20Inquiry%20-%20September%202020%20interim%20report.pdf">ACC Digital Platforms Services Inquiry: Interim report, September 2020</a></span>
</figcaption>
</figure>
<p>Despite threatening to withdraw from Australia, Google eventually chose to <a href="https://www.nytimes.com/2021/02/17/technology/facebook-google-australia-news.html">agree to those fees</a>. Facebook didn’t follow suit. Instead, as if by the flick of a switch, the company turned off the news in Australia. Caught in the crossfire and also finding themselves blocked on Facebook were <a href="https://www.cnbc.com/2021/02/18/facebook-blocked-charity-and-state-health-pages-in-australia-news-ban.html">charities and government organisations</a>, as well as <a href="https://www.theguardian.com/world/2021/feb/19/facebooks-australia-ban-threatens-to-leave-pacific-without-key-news-source">Pacific communities</a> outside of Australian jurisdiction. </p>
<p>The news block has played poorly for Facebook. Having claimed impotence in the face of growing disinformation for years, Facebook <a href="https://www.wired.co.uk/article/facebook-australia-rupert-murdoch">has raised eyebrows</a> with its new-found iron fist. But this apparent inconsistency can be explained – though perhaps not justified – when we see Facebook as a public space with private interests.</p>
<p>Social media firms aren’t the only organisations straddled between the private and the public. Shopping centres are a common example in the offline world. So are some apparently public spaces like New York’s Zuccotti Park where, in 2011, <a href="https://www.theguardian.com/world/blog/2011/nov/25/occupy-wall-street-eviction-inevitable">Occupy Wall Street protesters</a> found themselves evicted both by police and by the park’s private owners, Brookfield Properties.</p>
<figure class="align-center ">
<img alt="A busy shopping centre with many people walking around, some blurred" src="https://images.theconversation.com/files/385531/original/file-20210222-17-tmyhqc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/385531/original/file-20210222-17-tmyhqc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/385531/original/file-20210222-17-tmyhqc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/385531/original/file-20210222-17-tmyhqc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/385531/original/file-20210222-17-tmyhqc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/385531/original/file-20210222-17-tmyhqc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/385531/original/file-20210222-17-tmyhqc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Shopping centres are a common example of spaces that are both public and private.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/shopping-198234164">estherpoon/Shutterstock</a></span>
</figcaption>
</figure>
<p>Social media platforms operate similarly. Just as a shopping centre relies on footfall, Facebook profits from active users on its platform. For Facebook, this profit is generated <a href="https://www.nasdaq.com/articles/what-facebooks-revenue-breakdown-2019-03-28-0">almost entirely</a> via the revenue provided by online advertising. </p>
<p>It shouldn’t surprise us that, when confronted with a law that could force Facebook to part with an unspecified amount of its revenue, the company showed resistance – even if that deprived Australian users of news content and a <a href="https://privacyinternational.org/long-read/2852/protecting-civic-spaces">civic space to share and discuss it</a>. </p>
<h2>Nazis and nipples</h2>
<p>Facebook’s brief Australian news block is the latest example of a social media company falling short of its own principles. Governed by “community standards” that are <a href="https://www.justsecurity.org/70035/the-republic-of-facebook/">effectively in-platform laws</a>, platforms such as Facebook have a history of enforcing their rules on an ad-hoc basis. For years, researchers have <a href="https://journals.sagepub.com/doi/abs/10.1177/1461444809342738?journalCode=nmsa">argued</a> that this system is inadequate, inconsistent and open to abuse.</p>
<p>Most glaring is social media’s inconsistent enforcement of its own community standards. Facebook and Instagram’s moderation has previously targeted <a href="https://www.tandfonline.com/doi/abs/10.1080/14680777.2020.1783805?journalCode=rfms20">women’s nipples</a> and has <a href="https://www.bbc.co.uk/news/blogs-trending-50222380">forced sex workers offline</a>, while self-professed Nazis were only forced from Facebook after their participation in the US Capitol riots on January 6 2021.</p>
<p>During the run-up to the US election in 2020, Mark Zuckerberg actually <a href="https://www.latimes.com/business/technology/la-fi-tn-facebook-aspen-zuckerberg-regulation-20190626-story.html">invited regulation from the government</a>, which seemed to be an admission that Facebook had grown beyond its ability to regulate itself. Yet, as we’ve seen with events in Australia, the corporate half of these online civic spaces baulks at any external regulation that might be bad for business.</p>
<h2>Corpo-civic spaces</h2>
<p>So how should we regulate these hybrid spaces with competing and sometimes contradictory interests? My recent paper turns to “<a href="https://www.oxfordreference.com/view/10.1093/oi/authority.20110803103943995">third space theory</a>” for answers. Third space theory has been used to understand spatially ambiguous places, like when people’s homes become their workplaces, or when people feel a tension between their <a href="https://www.jstor.org/stable/40647476">ancestral and adopted homes</a>.</p>
<p>When applied to ambiguous spaces between the “corporate” and the “civic”, third space theory can help us better understand the unique regulatory challenges associated with social media companies. Facebook, for instance, is neither a wholly corporate nor a wholly civic space: it’s a corpo-civic one.</p>
<p>A corpo-civic governance approach would recognise that to heavily penalise and restrict social media companies would be to risk dismantling valuable civic spaces. At the same time, to see Facebook solely as a platform for free speech gives it licence to place maximising profits above ethics and human rights. </p>
<p>Instead, a corpo-civic governance model could apply international human rights standards to content moderation, putting the protection of people above the protection of profits. This is not dissimilar from the standards we expect of shopping centres, which may have their own private security policies but which must nevertheless abide by state law. </p>
<p>Because social media platforms are global and not local like shopping centres, it will be important for the laws that govern them to be transnational. Facebook may have briefly blocked the news for Australians, but it wouldn’t make the same decision for hundreds of millions of users across several different countries.</p>
<p>Australia might be “<a href="https://foreignpolicy.com/2021/02/09/australia-google-regulation-internet-big-tech-silicon-valley-media/">Ground Zero</a>” for laws aimed at reining in big tech, but it’s certainly not the only country drafting them. Having those state regulators work together on transnational policies will be crucial. In the meantime, events in Australia are a warning for tech companies and state regulators alike about social media’s hybrid nature, and the tension between people and profits that emerge from corpo-civic spaces.</p>
<p><em>This article was updated on February 23 2021 after Facebook agreed a compromise with the Australian government to reverse the news block.</em></p><img src="https://counter.theconversation.com/content/153119/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Carolina Are does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Facebook’s choice of profits over the people is difficult to reconcile with its commitment to free speech.Carolina Are, Researcher and visiting lecturer, City, University of LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1544272021-02-08T11:21:50Z2021-02-08T11:21:50ZBanning disruptive online groups is a game of Whac-a-Mole that web giants just won’t win<figure><img src="https://images.theconversation.com/files/382587/original/file-20210204-14-1k56qsd.jpg?ixlib=rb-1.1.0&rect=0%2C8%2C5991%2C3979&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/angry-mob-various-diverse-people-on-1508248688">Zenza Flarini/Shutterstock</a></span></figcaption></figure><p>From Washington DC to Wall Street, 2021 has already seen online groups causing major organised offline disruption. Some of it has been in violation of national laws, some in violation of internet platforms’ terms of service. When these groups are seen to cause societal harm, the solution has been knee-jerk: to ban or “deplatform” those groups immediately, leaving them digitally “homeless”.</p>
<p>But the online world is a Pandora’s box of sites, apps, forums and message boards. Groups banned from Facebook migrated seamlessly to Parler, and from Parler, via encrypted messaging apps, to a host of other platforms. My research has shown how easily users migrate between platforms on the “dark web”. Deplatforming won’t work on the regular internet for the same reason: it’s become too easy for groups to migrate elsewhere.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/does-deplatforming-work-to-curb-hate-speech-and-calls-for-violence-3-experts-in-online-communications-weigh-in-153177">Does 'deplatforming' work to curb hate speech and calls for violence? 3 experts in online communications weigh in</a>
</strong>
</em>
</p>
<hr>
<p>This year, we’ve come to see social platforms not as passive communication tools, but rather as active players in public discourse. Twitter’s <a href="https://blog.twitter.com/en_us/topics/company/2020/suspension.html">announcement</a> that it had permanently suspended Donald Trump in the wake of the Capitol riots is one such example: a watershed moment for deplatforming as a means of limiting harmful speech.</p>
<p>Elsewhere, the Robinhood investment platform suspended the trading of GameStop stocks after the Reddit group r/WallStreetBets (which had 2.2 million members at the time) <a href="https://theconversation.com/gamestop-im-one-of-the-wallstreetbets-degenerates-heres-why-retail-trading-craze-is-just-getting-started-154584">coordinated a mass purchase</a> of the shares. While the original Reddit group remained open, many r/WallStreetBets users had also been communicating via the social network Discord. In response, <a href="https://www.theverge.com/2021/1/27/22253251/discord-bans-the-r-wallstreetbets-server">Discord banned their channel</a>, citing “hate speech”.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/382747/original/file-20210205-23-na5j4c.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A tweet from a Reddit users asking people to migrate to a different platform" src="https://images.theconversation.com/files/382747/original/file-20210205-23-na5j4c.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/382747/original/file-20210205-23-na5j4c.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=246&fit=crop&dpr=1 600w, https://images.theconversation.com/files/382747/original/file-20210205-23-na5j4c.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=246&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/382747/original/file-20210205-23-na5j4c.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=246&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/382747/original/file-20210205-23-na5j4c.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=309&fit=crop&dpr=1 754w, https://images.theconversation.com/files/382747/original/file-20210205-23-na5j4c.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=309&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/382747/original/file-20210205-23-na5j4c.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=309&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Platform promiscuity: a Twitter account connected to a Reddit trading group invites followers to connect on Instagram.</span>
</figcaption>
</figure>
<h2>Net migration</h2>
<p>Deplatforming is the mechanism currently used by social networks and technology companies to suspend or ban users who’ve allegedly violated their terms of service. From a company’s perspective, deplatforming is a protection from potential legal actions. For others, it’s hoped that <a href="https://www.newstatesman.com/international/2021/01/ban-donald-trump-s-twitter-account-good">deplatforming might help stop</a> what some see as online mobs, intent on vandalising political, social, and financial institutions. </p>
<p>But deplatforming has proven ineffective in stifling these groups. When Trump was banned from social media, his <a href="https://www.independent.co.uk/life-style/gadgets-and-tech/trump-twitter-ban-parler-gab-b1785515.html">supporters quickly reorganised on Parler</a> – a social networking site that markets itself as the home of free speech. Shortly after, Parler was removed from the Apple and Google app stores, and <a href="https://www.bbc.co.uk/news/technology-55608081">Amazon Web Services</a> – which provided the digital infrastructure for the platform – removed Parler from its servers.</p>
<p>With Parler offline, Trump’s supporters began looking for alternative social media apps, including MeWe and CloudHub, which both <a href="https://techcrunch.com/2021/01/11/following-riots-alternative-social-apps-and-private-messengers-top-the-app-stores/">rose rapidly up the app store rankings</a>, organised by volume of downloads. Similarly, after the Discord ban, Reddit investors quickly <a href="https://ambcrypto.com/xrp-to-rally-tomorrow-wsb-and-telegram-hopes-so/">reorganised themselves on the messaging service Telegram</a>. These “Whac-a-Mole” dynamics, with deplatformed groups rapidly reforming on other platforms, are strikingly similar to what my research team and I have observed on the dark web.</p>
<h2>Dark dynamics</h2>
<p>The dark web is a hidden part of the internet that’s easily accessible through specialised web browsers such as TOR. Illicit trade is rife on the dark web, especially in dark “marketplaces”, where users trade goods using cryptocurrencies such as Bitcoin. Silk Road, regarded as the first dark marketplace, launched in 2011 and mostly sold drugs. Shut down by the FBI in 2013, it was followed by dozens of dark marketplaces which also traded in weapons, fake IDs and stolen credit cards.</p>
<figure class="align-center ">
<img alt="A web browser showing Silk Road website and a list of drugs for sale on it." src="https://images.theconversation.com/files/382730/original/file-20210205-15-whgsbj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/382730/original/file-20210205-15-whgsbj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/382730/original/file-20210205-15-whgsbj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/382730/original/file-20210205-15-whgsbj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/382730/original/file-20210205-15-whgsbj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=565&fit=crop&dpr=1 754w, https://images.theconversation.com/files/382730/original/file-20210205-15-whgsbj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=565&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/382730/original/file-20210205-15-whgsbj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=565&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Anonymous marketplaces like Silk Road are commonly removed from the dark web, causing user migration.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/san-francisco-us-august-31-2018-1168698142">Jarretera/Shutterstock</a></span>
</figcaption>
</figure>
<p>My collaborators and I looked at what happens after a dark marketplace is shut down by a police raid or an “exit scam” – where the marketplace’s moderators suddenly close the website and disappear with the users’ funds. We focused on “migrating” users, who move their trading activity to a different marketplace after a closure.</p>
<p>We found that most users <a href="https://www.nature.com/articles/s41598-020-74416-y">flocked to the same alternative marketplace</a>, typically the one with the highest amount of trading. User migration took place within hours, possibly coordinated via a <a href="https://news.bitcoin.com/after-empires-exit-scam-darknet-market-patrons-scramble-to-find-alternatives/">discussion forum such as Reddit or Dread</a>, and the overall amount of trading across the marketplaces quickly recovered. So although individual marketplaces can be fragile – participants are exposed to losses from scams – this coordinated user migration makes the ecosystem as a whole resilient, and new marketplaces continue to flourish.</p>
<h2>Platform promiscuity</h2>
<p>Back in 2006, Facebook was competing for dominance against other social networks such as MySpace, Orkut, Hi5, Friendster and Multiply. When Facebook started to dominate the scene, <a href="https://arxiv.org/pdf/1307.1354.pdf;">network effects made it unstoppable</a>. </p>
<p>Put simply, network effects compound platform dominance because you and I are most likely to join networking platforms our friends are already on. Given this tendency, Facebook and Twitter grew to host billions of users, and Hi5 disappeared. By the time their dominance had crystallised, a ban from Facebook or Twitter would have meant total ostracisation from the online community.</p>
<p>In 2021, everything is different. Global communities organised by interests or political opinion are now established, and are able to quickly formulate emergency evacuation or migration plans. Members are usually in contact on several channels – even “dormant” channels on which few users are active. As dark markets show, dormant channels can become active when they’re required. </p>
<p>All this means that being banned from Twitter, Facebook, Instagram, Snapchat, Twitch and others no longer results in your isolation, or in your community being disbanded. Instead, just like on the dark web, deplatforming simply requires users to migrate to a new home, which they do in a matter of hours. </p>
<p>Deplatforming is clearly an ineffective strategy for stopping disruptive groups from forming and coordinating online. This means that policing online conversation will be harder in the future. Whether this is seen as good or bad will depend on the specific circumstances and – of course – your point of view.</p><img src="https://counter.theconversation.com/content/154427/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Andrea Baronchelli does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Deplatformed groups can all too easily flock to alternative platforms to coordinate.Andrea Baronchelli, Associate Professor, Department of Mathematics, City, University of LondonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1540662021-02-01T05:19:31Z2021-02-01T05:19:31ZMeet Clubhouse, the voice-only social media app setting the internet abuzz<figure><img src="https://images.theconversation.com/files/381563/original/file-20210201-17-9i76pr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">shutterstock</span> </figcaption></figure><p>There’s a new kid on the social media block, and it is making big waves. It is less than a year old, yet is already valued at <a href="https://www.theinformation.com/articles/clubhouse-gets-investment-interest-at-1-billion-valuation">US$1 billion</a>, and has venture capitalists scrambling to invest. </p>
<p>The newcomer is <a href="https://www.joinclubhouse.com">Clubhouse</a>, a social platform built around “drop-in audio chat”. Featuring ephemeral real-time voice conversations that any user can listen to and nobody can record, the invite-only app has gathered a legion of high-profile fans and some two million users.</p>
<p>The coronavirus pandemic may have created the ideal conditions for Clubhouse to thrive: hordes of people isolated by lockdowns or safety concerns, desperate for social connection. Text-based social media is fine as far as it goes, but voice is a natural alternative. </p>
<p>After a recent injection of cash, Clubhouse is planning to expand. Here’s what all the fuss is about.</p>
<h2>What happens inside the Clubhouse</h2>
<p>Users can follow other users or topics of interest, as well as joining themed “clubs”. They then have access to a selection of chat rooms focusing on different topics, many of which are highly tuned to the zeitgeist. </p>
<p>Rooms come in all sizes. Some have just a few people chatting informally. Others might contain hundreds or even thousands of people listening to a panel of experts, perhaps a <a href="https://digitaldiaspora.substack.com/p/the-digital-diaspora-4th-edition?r=9g9s1&utm_campaign=post&utm_medium=web&utm_source=copy">politician</a>, a celebrity or a <a href="https://www.joinclubhouse.com/event/PQ488GWn">business leader</a>.</p>
<p>The others in the room are visible and you can bring up their profiles, complete with a list of whom they follow. The Clubhouse algorithm takes all this into account when offering content choices. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/381546/original/file-20210201-19-c4d936.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A phone screenshot showing the Clubhouse app interface with several profile images of speakers and audience members." src="https://images.theconversation.com/files/381546/original/file-20210201-19-c4d936.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/381546/original/file-20210201-19-c4d936.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1298&fit=crop&dpr=1 600w, https://images.theconversation.com/files/381546/original/file-20210201-19-c4d936.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1298&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/381546/original/file-20210201-19-c4d936.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1298&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/381546/original/file-20210201-19-c4d936.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1631&fit=crop&dpr=1 754w, https://images.theconversation.com/files/381546/original/file-20210201-19-c4d936.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1631&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/381546/original/file-20210201-19-c4d936.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1631&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Clubhouse users can enter different ‘rooms’ and see who’s talking.</span>
<span class="attribution"><a class="source" href="https://apps.apple.com/us/app/id1503133294">Clubhouse / Apple</a></span>
</figcaption>
</figure>
<p>If you want to say something, raise your hand, and the room owner can give you speaking privileges. You can even applaud a speaker by rapidly clicking the mute/unmute button.</p>
<p>All of this happens in audio-only. At its best, it’s like eavesdropping on a fascinating conversation – with the ability to join in if you have something to add. </p>
<p>Many early users <a href="https://twitter.com/i/events/1271171765515894786?s=13">raved</a> about how much they like it. One reason Clubhouse is proving so popular is that audio can feel much more intimate and “live” than text-based social media. People often prefer to talk and listen rather than use a keyboard. </p>
<h2>Social cachet</h2>
<p>In its short life, Clubhouse has garnered an enviable social cachet. At present, the only way to access the app is to be invited by an existing user.</p>
<p>From initial popularity among <a href="https://www.nytimes.com/2020/05/19/technology/clubby-silicon-valley-app-clubhouse.html">Silicon Valley investors,</a> Clubhouse has attracted an impressive number of public figures, including Oprah Winfrey, Elon Musk and Drake. You’ll also find experts with deep domain knowledge, politicians with policies to champion, and celebrities talking about their latest projects. </p>
<p>Well-known users like these have been a major drawcard, and the relative scarcity of invitations has added to a sense of exclusivity. </p>
<p>Rooms in Clubhouse are temporary. When the meeting is over, the room disappears, and any discussion is gone forever. And it is not possible to record the discussion. </p>
<p>The temporary nature of the rooms may help to stop the formation of “<a href="https://edu.gcfglobal.org/en/digital-media-literacy/what-is-an-echo-chamber/1/">social media echo chambers</a>” where people are only exposed to those with whom they already agree. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/im-right-youre-wrong-and-heres-a-link-to-prove-it-how-social-media-shapes-public-debate-65723">I'm right, you're wrong, and here's a link to prove it: how social media shapes public debate</a>
</strong>
</em>
</p>
<hr>
<h2>Invitation only</h2>
<p>With membership by invitation only, for now at least, there are two ways you can join. The first is by personal invitation from a friend who is already a member. </p>
<p>Otherwise, you can download the app and reserve a username to get yourself on the waiting list. If you do this, anyone you know who is already a member might be notified – and if that happens, they might let you in.</p>
<p>Clubhouse is currently only available on Apple devices. The firm has stated its intention to release an Android version in the near future. </p>
<h2>What’s in the pipeline?</h2>
<p>The social media newcomer recently secured a <a href="https://www.axios.com/clubhouse-andreessen-horowitz-3a10475a-becd-4483-a81e-9ce76d24e85f.html">new round of funding</a> in the order of US$100 million. Plans for the future include opening up to the general public and allowing <a href="https://www.joinclubhouse.com/welcoming-more-voices">content creators to be paid</a>. </p>
<p>The firm is considering three types of income generation: tipping, ticket sales and subscriptions. How all of this will come together is yet to be decided. Its current user base of around <a href="https://www.bloomberg.com/opinion/articles/2021-01-25/forget-tiktok-clubhouse-is-social-media-s-next-star">2 million</a> will likely see exponential growth. (For comparison, Facebook is approaching 3 <em>billion</em> users and even Twitter boasts more than 300 million.)</p>
<h2>Serendipity</h2>
<p>The history of innovation is marked by people making serendipitous connections – meeting the right person at the right time in an unplanned way. Such connections cannot be made on demand, but you can create the right conditions for them to happen spontaneously. </p>
<p>The app’s <a href="https://www.notion.so/Community-Guidelines-461a6860abda41649e17c34dc1dd4b5f">rules</a> try to make sure conversations are effectively off the record: </p>
<blockquote>
<p>You may not transcribe, record, or otherwise reproduce and/or share information obtained in Clubhouse without prior permission. </p>
</blockquote>
<p>This encourages spontaneity and relaxed, off-the-cuff chats – but critics say it also makes space for <a href="https://www.vanityfair.com/news/2020/12/the-murky-world-of-moderation-on-clubhouse">misogyny and racism</a>. As the Clubhouse network grows, it will <a href="https://www.bloomberg.com/news/articles/2021-01-26/as-tech-darling-clubhouse-grows-so-does-scrutiny?sref=3fALnGZC">face challenges</a> around transparency and content moderation like those confronting the likes of Facebook and YouTube.</p>
<p>Voice-based networks such as Clubhouse and Twitter’s <a href="https://www.theverge.com/2020/12/17/22187490/twitter-spaces-test-voice-chat-rooms">new Spaces feature</a> are <a href="https://www.technologyreview.com/2021/01/25/1016723/the-future-of-social-networks-might-be-audio-clubhouse-twitter-spaces/">well suited</a> to creating the right conditions for those serendipitous connections to be made – for better or worse. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/explainer-why-the-human-voice-is-so-versatile-69800">Explainer: Why the human voice is so versatile</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/154066/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>David Tuffley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Clubhouse offers a rare experience of spontaneity and intimacy online. But as the new social platform grows, it may face problems of moderation and abuse.David Tuffley, Senior Lecturer in Applied Ethics & CyberSecurity, Griffith UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1538332021-01-25T15:25:42Z2021-01-25T15:25:42ZKamala Harris abuse campaign shows how trolls evade social media moderation<p>As Vice President Kamala Harris settles into her first full week in the White House, <a href="https://www.nbcnews.com/news/asian-america/black-south-asian-americans-celebrate-my-vice-president-social-media-n1255021">thousands are heading online to celebrate her groundbreaking achievement</a>. Unfortunately, thousands more are flooding social media with sexualised, transphobic, and racist posts which continue to highlight the particular <a href="https://www.power3point0.org/2020/05/01/why-disinformation-targeting-women-undermines-democratic-institutions/">abuse faced by female politicians online</a>. </p>
<p>My colleagues and I <a href="https://www.wilsoncenter.org/publication/malign-creativity-how-gender-sex-and-lies-are-weaponized-against-women-online">studied this abuse</a> in the weeks leading up to the 2020 US presidential election, revealing how trolls and abusers commonly use coded language and dog whistles to evade the <a href="https://slate.com/technology/2020/04/coronavirus-facebook-content-moderation-automated.html">moderation efforts</a> of social media companies. This evolution of online hate speech undermines the “automated moderation” tools that platforms are currently using to tackle hate speech.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/to-publish-or-not-to-publish-the-medias-free-speech-dilemmas-in-a-world-of-division-violence-and-extremism-153451">To publish or not to publish? The media's free-speech dilemmas in a world of division, violence and extremism</a>
</strong>
</em>
</p>
<hr>
<p>As well as abuse and hate speech, our team tracked the proliferation of “gendered and sexualised disinformation narratives” about 13 female politicians across six social media platforms. These false or misleading narratives are based on women’s gender or sexuality, and are often spread with some degree of coordination. </p>
<p>In one example of gendered disinformation, a <a href="http://worldpolicy.org/2017/12/20/how-disinformation-became-a-new-threat-to-women/">former Ukrainian MP</a> was targeted with doctored images of her running naked around the streets of Kyiv on Twitter. The former MP has stated that the image and narrative still circulates online whenever she does public work abroad.</p>
<h2>Evading moderation</h2>
<p>Our study, which collected data between September and November 2020, found over 336,000 abusive posts on Twitter, Reddit, Gab, Parler, 4chan and 8kun, 78% of which targeted Kamala Harris. That equates to four abusive posts a minute over the course of our two-month study period – and three a minute directed at the woman who is now vice president of the United States. Each of these posts had the capacity to be shared thousands of times, exponentially multiplying their reach.</p>
<p>Social media companies are using <a href="https://journals.sagepub.com/doi/full/10.1177/2053951720943234">automated content moderation tools</a> in an effort to flag and delete gendered hate speech quicker. These tools are designed to instantly detect harmful social media posts, but they can only do so by being told which words <a href="https://www.newamerica.org/oti/reports/everything-moderation-analysis-how-internet-platforms-are-using-artificial-intelligence-moderate-user-generated-content/how-automated-tools-are-used-in-the-content-moderation-process/">are considered abusive</a>. This leaves a “blind spot” for any abusive language which has yet to be flagged as abusive by human moderators.</p>
<p>Our research shows that online abusers are evolving the language they use in order to <a href="https://scholarship.law.cornell.edu/cgi/viewcontent.cgi?article=2620&context=facpub">avoid detection by moderation tools</a>. We call this process “malign creativity”, which we believe to be a significant challenge social media companies must overcome if they are to conduct effective content moderation at scale. </p>
<p>We’ve observed abusers crafting false narratives and memes, tailored to the female politician they seek to harass, and shrouded in coded language. An example of a false sexualised narrative we saw against Kamala Harris was that she “slept her way to the top” and is therefore unfit to hold office. This narrative spread across platforms with hashtags that no automated classifier could detect without being pre-coded to do so, such as #HeelsUpHarris or #KneePadsKamala. </p>
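<p>To make that blind spot concrete, here is a minimal, purely illustrative sketch of the kind of keyword-based filter described above – the blocklist terms and sample posts are invented placeholders. A term that moderators have already listed is caught; a coded hashtag such as #HeelsUpHarris passes straight through until a human adds it.</p>
<pre><code># Minimal sketch of a keyword-based filter; blocklist terms and posts are placeholders.
BLOCKLIST = {"slur_a", "slur_b"}               # terms human moderators have already flagged

def is_flagged(post: str) -> bool:
    tokens = {word.strip("#.,!?").lower() for word in post.split()}
    return not tokens.isdisjoint(BLOCKLIST)    # flag the post if any token is on the list

posts = [
    "what a slur_a",                           # caught: contains a listed term
    "#HeelsUpHarris trending again",           # missed: the coded hashtag is not listed yet
]
for post in posts:
    print(is_flagged(post), post)              # prints True for the first, False for the second
</code></pre>
<p>Real moderation systems use statistical classifiers rather than literal lists, but the same gap applies: they can only catch patterns they have already been trained or configured to recognise.</p>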
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/380330/original/file-20210124-17-niqu0j.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A network of words connected by lines. The words are abusive and sexualised and reference Kamala Harris" src="https://images.theconversation.com/files/380330/original/file-20210124-17-niqu0j.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/380330/original/file-20210124-17-niqu0j.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=387&fit=crop&dpr=1 600w, https://images.theconversation.com/files/380330/original/file-20210124-17-niqu0j.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=387&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/380330/original/file-20210124-17-niqu0j.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=387&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/380330/original/file-20210124-17-niqu0j.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=486&fit=crop&dpr=1 754w, https://images.theconversation.com/files/380330/original/file-20210124-17-niqu0j.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=486&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/380330/original/file-20210124-17-niqu0j.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=486&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A network visualisation showing some of the abusive terms, often coded, shared about Kamala Harris between September and November 2020.</span>
<span class="attribution"><span class="source">Wilson Center Malign Creativity Report</span></span>
</figcaption>
</figure>
<p>Through network visualisation, we saw that some users who engaged with this narrative also engaged with other sexualised, racist and transphobic narratives. This underscored the intersectional abuse that <a href="https://decoders.amnesty.org/projects/troll-patrol/findings">women of colour face online</a>.</p>
<h2>Prioritising moderation</h2>
<p>Coded language and dog whistles (which are subtle messages designed to be understood by a certain audience without being explicit) make detecting gendered and sexualised disinformation on social media <a href="https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf">particularly difficult without high levels of investment</a> in detection technology. That’s why our report recommends social media platforms update their content moderation tools to pick up on new and emerging narratives that demean the world’s most powerful women. </p>
<p>This should be done in coordination with the women themselves, or their campaign and marketing teams. Platforms must also allow women to submit “incident reports” that cover multiple individual posts, rather than forcing them to report each piece of abusive content, one at a time – which is both laborious and upsetting.</p>
<p>Gendered and sexualised disinformation affects the public’s perceptions of high-profile women. Some women at the beginning of their careers <a href="https://www.ndi.org/sites/default/files/NDI%20Tweets%20That%20Chill%20Report.pdf">may feel harder hit by gendered disinformation and abuse</a>, choosing not to enter public-facing careers at all due to the abuse they see targeted at others – and at themselves. </p>
<p>One recent study found that <a href="https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190089283.001.0001/oso-9780190089283">women decrease their engagement online</a> to avoid ongoing or potential harassment. One of those interviewed in the study resented that she had to “wade through all this filth… to just do the basic function of participating” on social media. These sentiments highlight the harsh reality that women must accept if they wish to engage online, where harassment is <a href="https://www.womensmediacenter.com/speech-project/online-abuse-101/#men-harassed">more sustained and violent compared to what men face</a>.</p>
<p>Kamala Harris’ historic inauguration has been cause for celebration for women, and women of colour especially. It’s also an opportunity for a new generation of women to feel inspired to pursue leadership roles. </p>
<p>To ensure women are inspired by the presence of other women in high office and not dissuaded by the abuse they may face, social media platforms and governments are responsible for providing spaces in which women can participate equally online. Effectively moderating the gendered abuse women suffer on social media – especially that which passes undetected by automated tools – is a crucial part of that responsibility.</p><img src="https://counter.theconversation.com/content/153833/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Alexandra Pavliuc is London Project Lead at the Faculty of Communication and Design, Ryerson University, and a Visiting Fellow at Institut Montaigne.</span></em></p>New research suggests tech firms need to improve how they detect abuse in response to the evolving use of coded language.Alexandra Pavliuc, PhD Candidate at the Oxford Internet Institute, University of OxfordLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1457562020-09-08T05:43:48Z2020-09-08T05:43:48ZTikTok suicide video: it’s time platforms collaborated to limit disturbing content<figure><img src="https://images.theconversation.com/files/356876/original/file-20200908-16-pazcjt.jpg?ixlib=rb-1.1.0&rect=0%2C29%2C4898%2C3201&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>A disturbing video purporting to show a suicide is reportedly doing the rounds on the popular short video app TikTok, reigniting debate about what social media platforms are doing to limit circulation of troubling material.</p>
<p>According to media <a href="https://www.news.com.au/technology/online/social/parents-warned-about-shocking-suicide-video-on-tiktok-that-may-be-hidden-in-other-content/news-story/c135157c5a009fdcaaa7f9d54b146f7a">reports</a>, the video first showed up on Facebook in late August but has been re-uploaded and shared across Instagram and TikTok — reportedly sometimes <a href="https://heavy.com/news/2020/09/tiktok-viral-suicide-video/">cut with seemingly harmless content</a> such as cat videos.</p>
<p>TikTok users have <a href="https://www.tiktok.com/@aesthetically_80s/video/6869650300878261510">warned</a> others to swipe away quickly if they see a video pop up showing a man with long hair and a beard. </p>
<p>A statement by TikTok <a href="https://www.news.com.au/technology/online/social/parents-warned-about-shocking-suicide-video-on-tiktok-that-may-be-hidden-in-other-content/news-story/c135157c5a009fdcaaa7f9d54b146f7a">quoted</a> by News.com.au said:</p>
<blockquote>
<p>Our systems have been automatically detecting and flagging these clips for violating our policies against content that displays, praises, glorifies, or promotes suicide.</p>
<p>We are banning accounts that repeatedly try to upload clips, and we appreciate our community members who’ve reported content and warned others against watching, engaging, or sharing such videos on any platform out of respect for the person and their family.</p>
</blockquote>
<p>Schools and child safety advocates have warned parents to be alert for the possibility their child may see — or may have already seen — the video if they are a TikTok or Instagram user.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1302982768981413888"}"></div></p>
<p>The sad reality is users will continue to post disturbing content, and it is impossible for platforms to moderate every post before it goes live. And once a video is live, it doesn’t take long for the content to migrate across to other platforms. </p>
<p>Pointing the finger at individual platforms such as TikTok won’t solve the problem. What’s needed is a coordinated approach where the big social media giants work together. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dont-just-blame-youtubes-algorithms-for-radicalisation-humans-also-play-a-part-125494">Don't just blame YouTube’s algorithms for ‘radicalisation’. Humans also play a part</a>
</strong>
</em>
</p>
<hr>
<h2>Evading moderation</h2>
<p>Post-moderation means even the worst content can be published; it is removed only afterwards, either when a platform’s machine learning systems identify it or when users report it for review by human moderators. In the meantime it can be live for five minutes, an hour or longer.</p>
<p>Once a video is up, it can be downloaded by bad actors, modified to reduce the chance of detection by content moderation machine learning systems, and shared across multiple platforms — Reddit, Instagram, Facebook or more.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/356878/original/file-20200908-24-1uv4l6t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A teen looks at their phone." src="https://images.theconversation.com/files/356878/original/file-20200908-24-1uv4l6t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/356878/original/file-20200908-24-1uv4l6t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/356878/original/file-20200908-24-1uv4l6t.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/356878/original/file-20200908-24-1uv4l6t.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/356878/original/file-20200908-24-1uv4l6t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/356878/original/file-20200908-24-1uv4l6t.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/356878/original/file-20200908-24-1uv4l6t.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">TikTok is particularly popular among young people.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>These bad actors can cut the video slightly differently, edit it within harmless material, put filters on it or distort the audio to make it difficult for the content moderation programs to automatically identify disturbing videos. Machine learning with visual content is advancing but it’s not perfect.</p>
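<p>A toy example shows why near-duplicate detection is so much harder than exact matching. The sketch below is illustrative only – it is not TikTok’s or any other platform’s actual pipeline – and uses a random array as a stand-in for a single video frame. An exact hash changes completely after any edit; a simple perceptual “difference hash” survives a mild filter but is defeated by a heavier edit, which is exactly the gap bad actors exploit.</p>
<pre><code># Illustrative sketch only; the frame size and the two edits below are arbitrary choices.
import hashlib
import numpy as np

def exact_hash(frame):
    return hashlib.sha256(frame.tobytes()).hexdigest()[:12]

def dhash(frame, size=8):
    # Shrink the frame to a grid of block averages, then record whether each cell is
    # brighter than its right-hand neighbour: a 64-bit perceptual signature.
    blocks = frame.reshape(size, frame.shape[0] // size,
                           size + 1, frame.shape[1] // (size + 1)).mean(axis=(1, 3))
    return blocks[:, 1:] > blocks[:, :-1]

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(42)
original = rng.random((64, 72))              # stands in for one frame of the video
filtered = original * 1.05                   # mild brightness filter
re_edited = original.copy()
re_edited[:32, :] = 0.2                      # top half replaced, e.g. by a text overlay

print(exact_hash(original) == exact_hash(filtered))   # False: any change breaks an exact match
print(hamming(dhash(original), dhash(filtered)))      # 0: the filter survives the perceptual hash
print(hamming(dhash(original), dhash(re_edited)))     # large: a heavier edit slips past even this
</code></pre>
<p>Production systems fingerprint video with far more robust features than this, but the underlying cat-and-mouse game is the same: each new evasion pushes copies just past whatever similarity threshold is in use.</p>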
<p>This is broadly what happened with video of the Christchurch massacre, where content taken from the gunman’s Facebook livestream of his attack was downloaded and then shared across various platforms. </p>
<p>By the time Facebook took down the original video, people already had copies of it and were uploading to Facebook, Reddit, YouTube and more. It very quickly became a cross-platform problem. These bad actors can also add hashtags (some very innocent-sounding) to target a particular community.</p>
<p>One of the key draws of TikTok as a social media platform is its “spreadability”: how easily it facilitates creating and sharing new videos based on the one a user was just watching. </p>
<p>With just a few taps users can create a “duet” video showing themselves reacting to the disturbing content. Bad actors, too, can easily re-upload videos that have been removed. Now this purported suicide video is out in the wild, it will be difficult for TikTok to control its spread.</p>
<h2>What about copyright takedowns?</h2>
<p>Some have noted social media platforms appear very adept at quickly removing copyrighted material from their services (and thereby avoiding huge fines), but can seem more tardy when it comes to disturbing content.</p>
<p>However, copyright videos are, in many ways, easier for machine learning moderation systems to detect. Existing systems used to limit the spread of copyrighted material have been built specifically for copyright enforcement.</p>
<p>For example, TikTok uses a system for detecting copyrighted material (specifically music licensed by major record labels) to <a href="https://techcrunch.com/2020/08/12/acrcloud-profile/">automatically identify a song’s fingerprint</a>. </p>
<p>Even so, TikTok has faced a range of issues <a href="https://www.musicbusinessworldwide.com/nmpa-calls-for-scrutiny-of-tiktok-says-platform-has-consistently-violated-us-copyright-law-and-the-rights-of-songwriters-and-music-publishers/">relating to copyright enforcement</a>. Detecting hate speech or graphic videos on the platform is much more difficult. </p>
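<p>The sketch below illustrates the general idea of audio fingerprinting under heavily simplified assumptions – it is not TikTok’s or any vendor’s actual system, and the “licensed track” is a synthetic signal. Each short window of audio is reduced to its dominant frequency, and an uploaded clip is matched by sliding its fingerprint along that of a known track.</p>
<pre><code># Illustrative sketch of audio fingerprinting; the signal, window size and threshold are invented.
import numpy as np

def fingerprint(signal, window=1024):
    # Dominant FFT bin per window: a coarse, noise-tolerant signature of the audio.
    return [int(np.argmax(np.abs(np.fft.rfft(signal[s:s + window]))))
            for s in range(0, len(signal) - window, window)]

def contains(track_fp, clip_fp, min_agreement=0.7):
    # Slide the clip's fingerprint along the track's and keep the best alignment score.
    best = max(
        np.mean([a == b for a, b in zip(clip_fp, track_fp[off:off + len(clip_fp)])])
        for off in range(len(track_fp) - len(clip_fp) + 1)
    )
    return best >= min_agreement

rate = 8000
t = np.arange(0, 5, 1 / rate)
track = np.sin(2 * np.pi * (440 + 40 * np.floor(t)) * t)        # a synthetic "licensed track"
clip = track[2 * rate:3 * rate] + 0.02 * np.random.randn(rate)  # a noisy one-second excerpt

print(contains(fingerprint(track), fingerprint(clip)))          # True: the excerpt is recognised
</code></pre>
<p>Copyrighted music is usually reused largely intact, so it stays close to its reference fingerprint; the re-cut, filtered uploads described above deliberately drift away from theirs.</p>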
<h2>Room for improvement</h2>
<p>Certainly, there’s room for improvement. It’s a platform-wide, society-wide problem — we can’t just say TikTok is doing a bad job, it’s something all the platforms need to tackle together.</p>
<p>But asking market competitors to come up with a coordinated approach is not easy; platforms normally don’t share resources and work together globally to handle content moderation. But maybe they should.</p>
<p>TikTok employs massive teams of human moderators in addition to its algorithmically driven automated content moderation. These human content moderators work in many regions and languages to monitor content that may violate terms of use. </p>
<p>Recent events show TikTok is aware of growing demand for improved content moderation practices. In March 2020, responding to national security concerns, TikTok’s parent company ByteDance committed to <a href="https://www.wsj.com/articles/tiktok-to-stop-using-china-based-moderators-to-monitor-overseas-content-11584300597">stop using moderation teams based in China</a> to moderate international content. It also established a “transparency centre” in March 2020 to allow outside observers and experts <a href="https://techcrunch.com/2020/03/11/tiktok-to-open-a-transparency-center-where-outside-experts-can-examine-its-moderation-practices/">to scrutinise the platform’s moderation practices</a>. </p>
<p>These platforms have enormous power, and with that comes responsibility. We know content moderation is hard and nobody is saying it needs to be fixed overnight. More and more users know how to game the system, and there’s no single solution that will make the problem go away. It’s an evolving problem and the solution will need to constantly evolve too.</p>
<h2>Improving digital citizenship skills</h2>
<p>There’s a role for citizens, too. Every time these disturbing videos do the rounds, many more people go online to find the video – they talk about it with their friends and contribute to its circulation. </p>
<p>Complicating matters is the fact reporting videos on TikTok is not as straightforward as it is on other platforms, such as Facebook or Instagram. A recent <a href="https://journals.sagepub.com/doi/10.1177/2050157920952120">study</a> I (Bondy Kaye) was involved in compared features on TikTok with its Chinese counterpart, Douyin. We found the report function was located in the “share” menu accessed from the main viewing screen on both platforms — not a place many would think to look. </p>
<p>So if you’re a TikTok user and you encounter this video, don’t share it around – even in an effort to condemn it. You can report the video by clicking the share icon and selecting the appropriate reporting option. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/becoming-more-like-whatsapp-wont-solve-facebooks-woes-heres-why-113368">Becoming more like WhatsApp won't solve Facebook’s woes – here's why</a>
</strong>
</em>
</p>
<hr>
<p><em>Anyone seeking support and information about suicide can contact Lifeline on 131 114 or Beyond Blue on 1300 224 636.</em></p><img src="https://counter.theconversation.com/content/145756/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Ariadna Matamoros-Fernández have received an award from Facebook, which includes research funding.</span></em></p><p class="fine-print"><em><span>D. Bondy Valdovinos Kaye does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A video purporting to show a suicide is reportedly circulating on TikTok, reigniting debate about content moderation on social media. Collaborating with competitors may be the key.Ariadna Matamoros-Fernández, Lecturer in Digital Media at the School of Communication, Queensland University of TechnologyD. Bondy Valdovinos Kaye, PhD Candidate / Editorial Assistant, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1417032020-07-02T01:39:31Z2020-07-02T01:39:31ZReddit removes millions of pro-Trump posts. But advertisers, not values, rule the day<p>On Monday, online discussion platform Reddit <a href="https://www.theguardian.com/technology/2020/jun/29/reddit-the-donald-twitch-social-media-hate-speech">permanently took down</a> its largest community of Donald Trump supporters, r/The_Donald.</p>
<p>The community had more than 7,000 active users per day (although this figure had previously been much higher). The ban was <a href="https://www.reddit.com/r/announcements/comments/hi3oht/update_to_our_content_policy/">on the grounds</a> that some posts incited violence, and that the community had engaged in harassment on other subreddits. It will have removed hundreds of thousands of posts, and millions of comments going back many years. </p>
<p>The “r/The_Donald” subreddit is a themed, online message board where users can submit, comment and vote on posts. The <a href="https://www.nytimes.com/2020/06/29/technology/reddit-hate-speech.html">decision to ban</a> it comes as several other platforms censure racist and violent material from Trump and his supporters.</p>
<p>Twitter recently <a href="https://www.reuters.com/article/us-twitter-factcheck/with-fact-checks-twitter-takes-on-a-new-kind-of-task-idUSKBN2360U0">fact-checked</a> some of Trump’s posts, video live-streaming service Twitch has temporarily <a href="https://www.theverge.com/2020/6/29/21307145/twitch-donald-trump-ban-campaign-account">banned</a> the president’s account, and Facebook is now <a href="https://www.nytimes.com/2020/06/29/business/dealbook/facebook-boycott-ads.html">losing advertisers</a> over its unwillingness to moderate hateful material and disinformation, including from the president.</p>
<p>According to the <a href="https://www.nytimes.com/2020/06/29/technology/reddit-hate-speech.html">New York Times</a>, Reddit <a href="https://thenextweb.com/apps/2020/06/29/reddit-bans-r-thedonald-and-2000-other-hateful-subreddits-because-it-was-about-time/">also banned</a> another 2,000 communities across the political spectrum alongside the pro-Trump community, including left-leaning groups. </p>
<p>But while some may celebrate these actions, the moves should be understood within the context of a largely deregulated information economy, in which “doing good” is mostly about “doing well”. In other words: making money.</p>
<p>Upon a close look, the removal of r/The_Donald exposes the inadequacies of market-based information governance. Even in cases where individual governance decisions benefit society, the information economy remains primarily motivated by profit.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-vs-news-australia-wants-to-level-the-playing-field-facebook-politely-disagrees-141043">Facebook vs news: Australia wants to level the playing field, Facebook politely disagrees</a>
</strong>
</em>
</p>
<hr>
<h2>Reddit’s changing approach</h2>
<p>Started in 2015, r/The_Donald was the largest and most controversial subreddit dedicated to supporting Trump. Before the ban, it had more than 790,000 subscribers and was at times one of the most popular subreddits on the platform.</p>
<p>In June last year, Reddit “quarantined” <a href="https://www.theverge.com/2019/6/26/18759967/reddit-quarantines-the-donald-trump-subreddit-misbehavior-violence-police-oregon">the subreddit over posts inciting violence</a>. Several months later it purged most of the community’s volunteer moderators, arguing they weren’t upholding the platform’s policies, particularly through allowing banned content to stay up.</p>
<p>These shifts mirror changes in Reddit’s overall governance approach.</p>
<p>Historically, the platform has sold itself as a democratic space for free speech, with administrators resisting censorship in <a href="https://www.dailydot.com/unclick/reddit-beatingwomen-misogyny-images/">favour of a hands-off philosophy</a>. However, like other platforms, Reddit now faces pressure from advertisers that don’t want their brands associated with political extremism.</p>
<p>Advertising is a <a href="https://www.cnbc.com/2018/06/29/how-reddit-plans-to-make-money-through-advertising.html">growing part of Reddit’s economic model</a>. And with major partners such as <a href="https://www.redditinc.com/assets/case-studies/LOreal_Case_Study.pdf">L'Oréal</a> and <a href="https://www.redditinc.com/assets/case-studies/Audi_Case_Study.pdf">Audi</a>, advertisers’ preferences undoubtedly hold sway in how the website is regulated. </p>
<p>But as digital marketing agency iCrossing’s chief media officer <a href="https://www.cnbc.com/2018/06/29/how-reddit-plans-to-make-money-through-advertising.html">has previously argued</a>:</p>
<blockquote>
<p>What makes it (Reddit) attractive to consumers, which is the free and open ability to post, makes them scary to advertisers.</p>
</blockquote>
<h2>Walking a tightrope</h2>
<p>For major social media platforms, content regulation is a delicate issue: a constant balancing act between value and liability. </p>
<p>Reddit’s laissez-faire approach and community-led model invite broad participation and have helped its user base grow. However, this also fosters content that’s distasteful, unseemly and potentially dangerous – creating brand associations many advertisers would rather avoid. </p>
<p>The r/The_Donald subreddit embodies this tension. Reddit’s gradual regulation of it, and eventual banning, indicates the value-liability balance has tipped towards the latter.</p>
<p>While there is reason to laud these regulatory shifts, they are products of political-economic realities, rather than social priorities. And they speak to a much broader issue of information policy in contemporary society. </p>
<p>Although social media platforms are central to civic discourse, they’re also products in a competitive market economy. As long as that market economy remains deregulated by governments, individual companies will have outsized power. </p>
<p>They <em>may</em> use their power for social good, but this decision will be market-based, and thus can change with the winds of financial promise. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1277659843613077505"}"></div></p>
<h2>Risks for Reddit, risks for the internet</h2>
<p>Much of Reddit’s popularity has come from its status as the “wild west” of the internet. </p>
<p>The platform’s new approach may alienate its more dedicated user base. In trying to balance the ethos of free speech with increasing pressure to regulate, Reddit finds itself stuck <a href="https://thesocietypages.org/cyborgology/2018/10/29/reddit-quarantined/">between a rock and a hard place</a>.</p>
<p>And as Reddit moves to moderate and ban hateful content, more extreme users are going elsewhere. Prior to the r/The_Donald subreddit’s banning, participants had already established their own <a href="https://thedonald.win/">external site</a> and were encouraging others to move there. </p>
<p>Similarly, moderators on the quarantined r/MGTOW (an anti-feminist men’s rights subreddit) are now directing subscribers to a <a href="https://discord.com/login?redirect_to=%2Fchannels%2F%40me">Discord</a> channel – a community-based discussion app for private and public interaction.</p>
<p>Moderators of the quarantined r/TheRedPill (another anti-feminist men’s rights group) have been directing users to an external site for over a year.</p>
<p>Users leaving for external sites will reduce hateful content on Reddit, but will concentrate this hate elsewhere. And such sites are often far less regulated than larger platforms.</p>
<p>Conservatives increasingly complain <a href="https://www.theatlantic.com/ideas/archive/2019/07/conservatives-pretend-big-tech-biased-against-them/594916/">digital platforms are anti-conservative</a>. Reddit’s actions against r/The_Donald will likely increase calls for new, conservative-founded platforms.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dont-just-blame-echo-chambers-conspiracy-theorists-actively-seek-out-their-online-communities-127119">Don't (just) blame echo chambers. Conspiracy theorists actively seek out their online communities</a>
</strong>
</em>
</p>
<hr>
<h2>How to prevent distilled anger</h2>
<p>Reddit’s move highlights the influence of economics in platform governance – and the vulnerabilities that arise from this. </p>
<p>Rather than individual moderation decisions, what’s needed is a broad regulatory framework that holds corporate bodies to account. We need to reconsider “<a href="https://www.reuters.com/article/us-twitter-trump-executive-order-explain/explainer-whats-in-the-law-protecting-internet-companies-and-can-trump-change-it-idUSKBN23434V">safe harbour</a>” laws that protect social media companies from legal liability. </p>
<p>More broadly, we need to recognise social media are entangled with civic society, and enact social policies that coincide with the weight of that responsibility. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1277652980527853568"}"></div></p><img src="https://counter.theconversation.com/content/141703/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The platform also took down another 2,000 communities, including left-leaning groups. The move comes just months ahead of the 2020 US presidential election.Simon Copland, PhD Student -- Sociology, Australian National UniversityJenny L. Davis, Lecturer in the School of Sociology, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1262732019-11-04T19:02:20Z2019-11-04T19:02:20ZIndia’s social media content removal order is a nail in the coffin of the internet as we know it<figure><img src="https://images.theconversation.com/files/300031/original/file-20191104-88372-1j050u.jpg?ixlib=rb-1.1.0&rect=53%2C44%2C5937%2C3943&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Geo-location technology can be used to block online content within a specified area in the world, thereby allowing for differences in national laws. </span> <span class="attribution"><span class="source">shutterstock</span></span></figcaption></figure><p>In recent weeks, India’s High Court of Delhi put another nail in the coffin of the internet as we currently know it. The court <a href="http://lobis.nic.in/ddir/dhc/PMS/judgement/23-10-2019/PMS23102019S272019.pdf">granted an order</a> requiring Facebook, Twitter and Google to remove certain content globally, based on that content being defamatory under local law in India.</p>
<p>This decision underlines a worrying trend of a “<a href="https://www.internetjurisdiction.net/Internet-Jurisdiction-Global-Status-Report-2019-Key-Findings_web">race to the bottom</a>” for internet freedom, where the <a href="https://www.linkedin.com/pulse/third-dimension-jurisdiction-dan-jerker-b-svantesson/">scope of jurisdiction</a> claimed by the courts is global. </p>
<p>If widely adopted, this may result in a situation where the only content that remains online is that which complies with all the laws of every country in the world.</p>
<h2>Another brick in the wall</h2>
<p>In reaching its decision, the Indian court relied on a string of recent decisions from around the world. For example, it drew from the <a href="https://blog.oup.com/2017/08/supreme-court-canada-state-sovereignty/">Canadian approach in Equustek</a>, where the Supreme Court of Canada ordered Google to remove content globally.</p>
<p>It also referred to a 2017 <a href="https://www.linkedin.com/pulse/sydney-become-internet-content-blocking-capital-world-svantesson/">Australian case</a> in which the Supreme Court of New South Wales ruled Twitter must globally block any future posting by a specific user.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-goes-full-circle-on-censorship-like-it-or-not-19488">Facebook goes full circle on censorship, like it or not</a>
</strong>
</em>
</p>
<hr>
<p>The most recent decision referred to was a <a href="https://www.linkedin.com/pulse/bad-news-internet-europes-top-court-opens-door-global-svantesson/">ruling</a> by the Court of Justice of the European Union (CJEU) in which the CJEU concluded the EU’s <a href="https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A32000L0031">e-commerce directive</a> doesn’t prevent courts in EU countries from ordering social media sites to block or remove information worldwide.</p>
<p>Following the CJEU’s decision, several <a href="https://euinternetpolicy.wordpress.com/2019/10/04/the-cjeu-facebook-judgment-on-filtering-with-global-effect-clarifying-some-misunderstandings/">leading</a> <a href="https://www.twobirds.com/en/news/articles/2019/global/notice-and-stay-down-orders-and-impact-on-online-platforms#__prclt=pzS67trR">commentators</a> argued that, while much has been made of the CJEU’s apparent green light to global takedown orders, in reality this was just a decision about the dividing line between EU law and national law.</p>
<p>Even if this is true, <a href="https://www.washingtonpost.com/technology/2019/10/03/facebook-can-be-ordered-remove-content-worldwide-eu-says/">headlines</a> around the world didn’t convey such a nuanced outcome. And with the current decision from India, we can see with complete clarity how that case is now being used by foreign courts. This shows how careful courts must be about how their judgements are communicated.</p>
<p>It’s of course possible to argue that this application of an EU law case is a mistake by the Indian court rather than a fault of the CJEU, and there is certainly merit in such an argument. However, the CJEU’s decision was a missed opportunity to clearly communicate a general stance against global orders becoming the standard.</p>
<h2>A missed opportunity to explain geo-location technologies</h2>
<p>Geo-location technology may be used to block online content within a specified geographical area. This practice caters to a global internet while still respecting differences in laws, and in India’s case could provide an alternative to a global blocking order.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/governments-are-making-fake-news-a-crime-but-it-could-stifle-free-speech-117654">Governments are making fake news a crime – but it could stifle free speech</a>
</strong>
</em>
</p>
<hr>
<p>However, more than once, courts have failed to understand how this technology operates. And at least on this occasion, errors could have been avoided since the court “had specifically directed the defendants to throw some light on how geo-blocking is done and to keep a technical person present in court to seek clarification on geo-blocking”. </p>
<p>The court said none of the internet platforms had given a detailed explanation as to how geo-blocking is done.</p>
<p>As a result, the court clearly misunderstood the impact of geo-blocking: </p>
<blockquote>
<p>If geo-blocking alone is permitted in respect of the entire content, […] the offending information would still […] be accessible from India, […] by accessing the international websites of these platforms.</p>
</blockquote>
<p>Where geo-blocking is done <a href="https://script-ed.org/wp-content/uploads/2014/10/svantesson.pdf">by reference to domain names</a>, internet users can indeed use another country’s version of the site in question and access the content. This seems to be the situation the court had in mind. </p>
<p>In contrast, with blocking by geo-location technology, the content is tailored to the user’s location, regardless of which country’s version of the site is accessed. It’s highly unfortunate the court wasn’t made to understand this important distinction.</p>
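<p>To make this distinction concrete, the sketch below is a minimal Python illustration, not drawn from the judgment or from any platform’s actual systems. It contrasts the two approaches: domain-based blocking decides what to serve from the country version of the site being requested, while geo-location blocking decides from the requesting user’s inferred location. The IP-to-country lookup here is a hypothetical stand-in for a real geolocation database.</p>
<pre><code># Illustrative sketch only: contrasts domain-based blocking with
# geo-location-based blocking. The IP-to-country lookup is a toy,
# hypothetical stand-in for a real geolocation database.

BLOCKED_IN = {"IN"}  # countries where the content must not be shown


def lookup_country(ip_address: str) -> str:
    """Hypothetical IP-to-country lookup; a real service would query a
    geolocation database rather than this toy prefix table."""
    toy_prefixes = {"103.": "IN", "203.": "AU", "91.": "DE"}
    for prefix, country in toy_prefixes.items():
        if ip_address.startswith(prefix):
            return country
    return "UNKNOWN"


def domain_based_block(requested_domain: str) -> bool:
    # Blocks only the local (.in) version of the site, so a user in India
    # can still reach the content via another country's domain.
    return requested_domain.endswith(".in")


def geolocation_based_block(ip_address: str) -> bool:
    # Blocks based on where the user appears to be, regardless of which
    # country's version of the site they request.
    return lookup_country(ip_address) in BLOCKED_IN


# A user in India requesting the international version of a site:
print(domain_based_block("example.com"))     # False -> content still visible
print(geolocation_based_block("103.0.0.1"))  # True  -> content blocked
</code></pre>
<p>Under the first approach the content remains reachable from India via another country’s version of the site, which is the scenario the court appears to have had in mind; under the second, it does not.</p>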
<h2>Silver linings, and the way onward</h2>
<p>Although the above probably makes clear that I see the Indian court’s decision as a setback, there are also some positive aspects that ought to be highlighted.</p>
<p>In its decision, the court clearly acknowledged the importance of the scope of jurisdiction issue and the implications of global orders.</p>
<p>The court also devoted considerable effort to discussing case law from around the world. This is an important step if we are to see global harmonisation in approach. That said, harmonisation currently seems to be taking us in an undesirable direction, with global blocking and removal orders becoming the standard.</p>
<p>Given the court had taken account of the international environment, it’s disappointing, not to say odd, that it didn’t properly engage with the international law issues raised by the defendants. For instance, the defendants raised the doctrine of comity, which requires courts to take the international impact of their decisions into consideration.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/caution-over-the-eus-call-for-global-forgetfulness-from-google-34815">Caution over the EU's call for global forgetfulness from Google</a>
</strong>
</em>
</p>
<hr>
<p>While the Indian court decision is currently under <a href="https://www.livelaw.in/news-updates/facebook-appeals-delhi-hc-db-single-bench-order-global-blocking-of-content-baba-ramdev-149354">appeal</a>, there’s no point denying the future of the internet looks bleak when it comes to scope of jurisdiction. </p>
<p>The case discussed here sets an important precedent, not just for India but also the rest of the world. And much is at stake.</p>
<p class="fine-print"><em><span>Dan Jerker B. Svantesson was an ARC Future Fellow (project number FT120100583) during 2012-2016. During this period he received funding from the Australian Research Council for a project dealing with the topic of this piece. Professor Svantesson is currently writing a Global Status Report - dealing with, amongst other things, the issue of this piece - on behalf of the Internet & Jurisdiction Policy Network. The views expressed herein are those of the author alone.</span></em></p>The order requires Facebook, Twitter and Google to remove certain content globally, based on it being defamatory under India’s local law.Dan Jerker B. Svantesson, Professor, Bond UniversityLicensed as Creative Commons – attribution, no derivatives.