<h1>Terrorist content lurks all over the internet – regulating only 6 major platforms won’t be nearly enough</h1>
<p><em>Online extremism – The Conversation, 2024-03-20</em></p>
<figure><img src="https://images.theconversation.com/files/583026/original/file-20240320-17-wn83c.jpg?ixlib=rb-1.1.0&rect=4%2C241%2C2619%2C1761&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/burning-car-unrest-antigovernment-crime-581564755">Bumble Dee/Shutterstock</a></span></figcaption></figure><p>Australia’s eSafety commissioner <a href="https://www.abc.net.au/news/2024-03-19/social-media-esafety-commissioner-terrorist-violent-extremist/103603518">has sent legal notices</a> to Google, Meta, Telegram, WhatsApp, Reddit and X (formerly Twitter) asking them to show what they’re doing to protect Australians from online extremism. The six companies <a href="https://www.esafety.gov.au/newsroom/media-releases/tech-companies-grilled-on-how-they-are-tackling-terror-and-violent-extremism">have 49 days to respond</a>.</p>
<p>The notice comes at a time when governments are increasingly cracking down on major tech companies to address online harms like <a href="https://theconversation.com/australia-has-fined-x-australia-over-child-sex-abuse-material-concerns-how-severe-is-the-issue-and-what-happens-now-215696">child sexual abuse material</a> or <a href="https://www.cbsnews.com/news/mark-zuckerberg-apologizes-parents-victims-online-exploitation-senate-hearing/">bullying</a>.</p>
<p>Combating online extremism presents challenges distinct from other content moderation problems. Regulators wanting to establish effective and meaningful change must take into account what research has shown us about extremism and terrorism.</p>
<h2>Extremists are everywhere</h2>
<p>Online extremism and terrorism have been pressing concerns for some time. A stand-out example was the 2019 Christchurch terrorist attack on two mosques in Aotearoa New Zealand, which was live streamed on Facebook. It led to the <a href="https://www.beehive.govt.nz/release/nz-and-france-seek-end-use-social-media-acts-terrorism">“Christchurch Call” to action</a>, aimed at countering extremism through collaborations between countries and tech companies.</p>
<p>But despite such efforts, <a href="https://www.rand.org/pubs/perspectives/PEA1458-2.html">extremists still use online platforms</a> for networking and coordination, recruitment and radicalisation, knowledge transfer, financing and mobilisation to action.</p>
<p>In fact, extremists use the same online infrastructure as everyday users: marketplaces, dating platforms, gaming sites, music streaming sites and social networks. Therefore, all regulation to counter extremism needs to consider the rights of regular users, as well.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/christchurch-attacks-5-years-on-terrorists-online-history-gives-clues-to-preventing-future-atrocities-225273">Christchurch attacks 5 years on: terrorist’s online history gives clues to preventing future atrocities</a>
</strong>
</em>
</p>
<hr>
<h2>The rise of ‘swarmcasting’</h2>
<p>Tech companies have responded with initiatives like the <a href="https://gifct.org/membership">Global Internet Forum to Counter Terrorism</a>. It shares information on terrorist online content among its members (such as Facebook, Microsoft, YouTube, X and others) so they can take it down on their platforms. These approaches aim to <a href="https://gifct.org/hsdb/">automatically identify and remove</a> terrorist or extremist content.</p>
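The hash-sharing approach can be illustrated with a minimal conceptual sketch. Note this uses a plain SHA-256 digest for simplicity, so it only matches exact copies; production systems such as GIFCT's shared database rely on perceptual hashes that survive re-encoding and cropping. All function and variable names here are illustrative, not from any real system.

```python
# Conceptual sketch of hash-based content matching: platforms contribute
# fingerprints (hashes) of known terrorist content to a shared database,
# then check new uploads against it. Names are illustrative placeholders.
import hashlib

shared_hash_db = set()  # hashes contributed by member platforms

def fingerprint(content: bytes) -> str:
    """Exact-match fingerprint; real systems use perceptual hashing."""
    return hashlib.sha256(content).hexdigest()

def flag_known_content(upload: bytes) -> bool:
    """True if this exact file already appears in the shared database."""
    return fingerprint(upload) in shared_hash_db

# One platform contributes a known item; another checks uploads against it.
shared_hash_db.add(fingerprint(b"known extremist video bytes"))
print(flag_known_content(b"known extremist video bytes"))  # True
print(flag_known_content(b"slightly altered copy"))        # False
```

The exact-match limitation is precisely why altered re-uploads evade simple hash filters, motivating the perceptual-hashing systems the industry actually deploys.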
<p>However, a moderation policy focused on individual pieces of content on individual platforms fails to capture much of what’s out there.</p>
<p>Terrorist groups commonly use a <a href="https://static.rusi.org/20190716_grntt_paper_06.pdf">“swarmcasting” multiplatform approach</a>, leveraging 700 platforms or more to distribute their content.</p>
<p>Swarmcasting involves using “beacons” on major platforms such as Facebook, Twitter and Telegram to direct people to locations with terrorist material. A beacon can be a hyperlink to a blog post on a website like WordPress or Tumblr that then contains further links to the content, perhaps hosted on Google Drive, JustPaste.It, BitChute and other places where users can download it.</p>
<p>So, while extremist content may be flagged and removed from social media, it remains accessible online thanks to swarmcasting. </p>
<h2>Putting up filters isn’t enough</h2>
<p>The process of identifying and removing extremist content is far from simple. For example, at a recent US Supreme Court hearing over internet regulations, <a href="https://law.stanford.edu/podcasts/the-netchoice-cases-reach-the-supreme-court/">a lawyer argued</a> platforms could moderate terrorist content by simply removing anything that mentioned “al Qaeda”.</p>
<p>However, internationally recognised terrorist organisations, their members and supporters do not distribute only policy-violating extremist content. Some also post about non-terrorist activities, such as humanitarian efforts.</p>
<p>Other times their content is borderline (awful but lawful), such as misogynistic dog whistles, or even “hidden” <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/isj.12454">in a different format</a>, such as memes.</p>
<p>Accordingly, platforms can’t always cite policy violations and are compelled to use other methods to counter such content. They report using various content moderation techniques such as redirecting users, <a href="https://www.pbs.org/newshour/politics/google-to-expand-misinformation-prebunking-initiative-in-europe">pre-bunking misinformation</a>, promoting counterspeech and <a href="https://www.bbc.com/news/technology-57697779">offering warnings</a>, or <a href="https://theconversation.com/what-is-shadowbanning-how-do-i-know-if-it-has-happened-to-me-and-what-can-i-do-about-it-192735">implementing shadow bans</a>. Despite these efforts, online extremism persists.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/disinformation-threatens-global-elections-heres-how-to-fight-back-223392">Disinformation threatens global elections – here's how to fight back</a>
</strong>
</em>
</p>
<hr>
<h2>What is extremism, anyway?</h2>
<p>All these problems are further compounded by the fact that we lack a <a href="https://www.unodc.org/e4j/en/terrorism/module-4/key-issues/defining-terrorism.html">commonly accepted definition</a> of terrorism or extremism. All definitions currently in use are contested.</p>
<p>Academics seek clarity by using <a href="https://www.ijcv.org/index.php/ijcv/article/view/3809">relativistic definitions</a>, such as:</p>
<blockquote>
<p>extremism itself is context-dependent in the sense that it is an inherently relative term that describes a deviation from something that is (more) ‘ordinary’, ‘mainstream’ or ‘normal’. </p>
</blockquote>
<p>But what can we accept as a universal “normal”? Democracy is not the global norm, nor are equal rights. Not even our understanding of <a href="https://blogs.lse.ac.uk/humanrights/2016/09/14/are-human-rights-really-universal-inalienable-and-indivisible/">central tenets of human rights</a> is globally established.</p>
<h2>What should regulators do, then?</h2>
<p>As the eSafety commissioner attempts to shed light on how major platforms counter terrorism, we offer several recommendations for the commissioner to consider.</p>
<p>1. Extremists rely on more than just the major platforms to disseminate information. This highlights the importance of expanding the current inquiries beyond just the major tech players.</p>
<p>2. Regulators need to consider the differences between platforms that resist compliance, those that comply halfheartedly, and those that struggle to comply, such as small content storage providers. Each type of platform <a href="https://ksp.techagainstterrorism.org/">requires different regulatory approaches</a> or assistance. </p>
<p>3. Future regulations should encourage platforms to transparently collaborate with academia. The global research community is well positioned <a href="https://gifct.org/wp-content/uploads/2021/07/GIFCT-TaxonomyReport-2021.pdf">to address these challenges</a>, such as by developing actionable definitions of extremism and novel countermeasures.</p>
<p class="fine-print"><em><span>Marten Risius is the recipient of an Australian Research Council Australian Discovery Early Career Award funded by the Australian Government. Marten Risius has received project funding from the Global Internet Forum to Counter Terrorism (GIFCT).</span></em></p>
<p class="fine-print"><em><span>Stan Karanasios has received funding from Emergency Management Victoria, Asia-Pacific Telecommunity, and the International Telecommunications Union. Stan is a Distinguished Member of the Association for Information Systems.</span></em></p>
<p><em>Online extremism is a unique challenge – terrorists use methods that can’t be captured by standard content moderation. So, what can we do about it?</em></p>
<p class="fine-print"><em>Marten Risius, Senior Lecturer in Business Information Systems, The University of Queensland; Stan Karanasios, Associate Professor, The University of Queensland. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>A dictionary of the manosphere: five terms to understand the language of online male supremacists</h1>
<p><em>The Conversation, 2023-04-06</em></p>
<figure><img src="https://images.theconversation.com/files/518182/original/file-20230329-16-e4riiu.jpg?ixlib=rb-1.1.0&rect=23%2C15%2C2562%2C1527&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/computer-screen-light-reflect-glasses-close-1218171448">Tero Vesalainen/Shutterstock</a></span></figcaption></figure><p><a href="https://slate.com/human-interest/2014/10/a-thot-is-not-a-slut-on-popular-slurs-race-class-and-sex.html">Thot</a>. <a href="https://rationalwiki.org/wiki/Manosphere_glossary#White_knight">White knight</a>. <a href="https://www.aljazeera.com/opinions/2020/9/6/red-pills-and-dog-whistles-it-is-more-than-just-the-internet">Red pilled</a>. <a href="https://www.colorado.edu/asmagazine/2023/03/16/what-does-ted-cruz-cucks-again-actually-mean">Cuck</a>. <a href="https://journals.sagepub.com/doi/abs/10.1177/1097184X17706401?journalCode=jmma">Beta</a>. <a href="https://www.sciencedirect.com/science/article/abs/pii/S2211695822000514">Soyboy</a>. <a href="https://rationalwiki.org/wiki/Manosphere_glossary">Unicorn</a>. <a href="https://www.yahoo.com/lifestyle/does-apos-chad-apos-mean-010140962.html">Chad</a>.</p>
<p>To many people, these words won’t mean much. To others, they are a core part of the vocabulary of the “manosphere” – a collection of websites, social media accounts and forums dedicated to men’s issues, from health and fitness to dating and men’s rights. </p>
<p>Many (though not all) manosphere communities have become spaces where explicit anti-women and anti-feminist sentiment abound. These include <a href="https://www.bbc.co.uk/news/blogs-trending-44053828">incels</a>, <a href="https://journals.sagepub.com/doi/abs/10.1177/13505084221137989">men’s rights activists</a>, <a href="https://www.bbc.com/news/entertainment-arts-57572152">red-pillers</a>, <a href="https://www.routledge.com/The-Language-of-Pick-Up-Artists-Online-Discourses-of-the-Seduction-Industry/Dayter-Rudiger/p/book/9780367473006">pick-up artists</a> and <a href="https://modernlanguagesopen.org/articles/10.3828/mlo.v0i0.454/">male separatists</a>. </p>
<p>I’m interested in how men use language, especially in the media and online, and what this tells us about contemporary masculinity and gender relations. In my <a href="https://global.oup.com/academic/product/language-and-mediated-masculinities-9780190081058?cc=gb&lang=en&">recent book</a>, I show how the language of the manosphere creates a culture of exclusion, denigration (mainly of women, but also of other men), male power and entitlement.</p>
<p>Understanding what manosphere terms mean can help teachers and parents start conversations with young men who are <a href="https://safeguarding.network/content/responding-to-the-incel-ideology">engaging with</a> manosphere and male supremacist content. Recognising how language and ideology are connected can help with deradicalisation efforts, or ideally prevent radicalisation in the first place. And for young men and boys themselves, this awareness can improve their digital literacy and help them resist manipulation.</p>
<p>For police and other authorities, <a href="https://www.technologyreview.com/2020/02/07/349052/the-manosphere-is-getting-more-toxic-as-angry-men-join-the-incels/">language</a> can be an <a href="https://gnet-research.org/2022/01/24/assessing-misogyny-as-a-gateway-drug-into-violent-extremism/">early warning system</a> to identify men at risk of carrying out <a href="https://www.washingtonpost.com/graphics/2019/local/yoga-shooting-incel-attack-fueled-by-male-supremacy/">male supremacist violence</a>. Tragedies in <a href="https://www.theguardian.com/us-news/2015/feb/20/mass-shooter-elliot-rodger-isla-vista-killings-report">Isla Vista</a>, <a href="https://www.nytimes.com/2015/10/10/us/roseburg-oregon-shooting-christopher-harper-mercer.html">Oregon</a>, <a href="https://www.bbc.co.uk/news/uk-56269095">Toronto</a>, <a href="https://www.washingtonpost.com/graphics/2019/local/yoga-shooting-incel-attack-fueled-by-male-supremacy/">Tallahassee</a> and <a href="https://www.theguardian.com/uk-news/2021/aug/13/plymouth-shooting-suspect-what-we-know-jake-davison">Plymouth</a> were all prefaced by the perpetrators publishing male supremacist and incel content. </p>
<p>It is difficult to give a comprehensive overview of every instance of manosphere language. It is a constantly evolving collection of terms, sometimes in response to new issues that emerge, or in an attempt to subvert social media moderation efforts (abbreviations and acronyms are good examples of this). Here are some key terms to know.</p>
<h2>Red and blue pill</h2>
<p>The cyberpunk blockbuster The Matrix is the source of a key symbol in the manosphere – the <a href="https://www.newamerica.org/political-reform/reports/misogynist-incels-and-male-supremacism/red-pill-to-black-pill/">red pill</a>. In the film, protagonist Neo is offered a choice of two pills. If he takes the blue pill, he will continue to exist in the world as he knows it, which is actually a simulation controlled by sentient machines who have enslaved humanity as a power source. If he takes the red pill, he will be released into the “real world”, where the curtain is pulled back and the truth is revealed. </p>
<p>In the manosphere, those who have been “red-pilled” <a href="https://www.bbc.co.uk/news/entertainment-arts-57572152">see the world</a> as it really is, understanding the so-called <a href="https://www.newamerica.org/political-reform/reports/misogynist-incels-and-male-supremacism/red-pill-to-black-pill">“real” nature</a> of women’s behaviour and dating preferences. As researchers Megan Kelly, Alex DiBranco and Julia DeCook <a href="https://www.newamerica.org/political-reform/reports/misogynist-incels-and-male-supremacism/red-pill-to-black-pill/">write</a>: </p>
<blockquote>
<p>Red pillers awaken to the “truth” that socially, economically and sexually, men are at the whims of women’s (and feminists’) power and desires.</p>
</blockquote>
<p>The pill symbolism has also been taken up by the <a href="https://en.wikipedia.org/wiki/Alt-right">alt-right</a> and cuts across a variety of conspiracy theories, from the claim of feminism controlling the world to shadowy global elites influencing public opinion.</p>
<figure class="align-center ">
<img alt="A disembodied pair of open-palm hands against a black background, one hand holds a red pill, the other holds a blue pill." src="https://images.theconversation.com/files/518188/original/file-20230329-28-egi54m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/518188/original/file-20230329-28-egi54m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/518188/original/file-20230329-28-egi54m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/518188/original/file-20230329-28-egi54m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/518188/original/file-20230329-28-egi54m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=501&fit=crop&dpr=1 754w, https://images.theconversation.com/files/518188/original/file-20230329-28-egi54m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=501&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/518188/original/file-20230329-28-egi54m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=501&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Red or blue?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/red-pill-blue-concept-right-choice-1164968740">diy13/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Alphas and betas</h2>
<p>The manosphere is obsessed with status, power, prestige and hierarchy. The idea of <a href="https://link.springer.com/book/10.1007/978-3-030-70470-4">alphas and betas</a> is central to this. The concept, originally developed by biologist <a href="https://davemech.org/">David Mech</a> in his early work on wolf packs, held that the “alpha” was the most socially dominant male. Mech has since disavowed this account as <a href="https://www.newyorker.com/science/elements/the-myth-of-the-alpha-wolf">overly simplistic</a>. </p>
<p>The concept was co-opted by the <a href="https://medium.com/@SexCoachSarah/a-brief-history-of-pickup-artists-the-seduction-community-cc9b26bff690">seduction community</a>, a community organised around sharing tips and guidance for attracting and seducing women, before making its way to other parts of the manosphere.</p>
<p>Becoming an alpha is an aspirational goal for many men who engage with manosphere content. Alphas are in charge, have their pick of sexual partners and have ultimate control, both of themselves and others. Betas are the polar opposite: physically and psychologically weak, sexually unattractive, timid, submissive, meek and generally lacking in the qualities necessary to attain “real” manhood. </p>
<h2>Chads and Stacys</h2>
<p>The hierarchy of the manosphere, and the claimed primacy of looks over personality, can be clearly seen in the caricatures of <a href="https://www.isdglobal.org/explainers/incels/">Chads and Stacys</a>.</p>
<p>Chads are the “ultimate alpha” – the ultra-masculine, virile, powerful and sexually attractive man to whom Stacys and other women flock. The term “<a href="https://knowyourmeme.com/memes/gigachad">gigachad</a>” refers to the most alpha of alpha males. </p>
<p>Stacys are an idealisation of femininity – a hyper-attractive, sexually desirable, promiscuous but vapid woman. She is ultimately unobtainable, especially to men who are not Chads. Simultaneously the objects of disdain and desire, Chads and Stacys highlight a clichéd view of men and women, rooted in stereotypes and pigeonholes rather than in reality.</p>
<figure class="align-center ">
<img alt="Photo of a large male wolf with a beautiful grey coat." src="https://images.theconversation.com/files/518187/original/file-20230329-14-tgxiex.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/518187/original/file-20230329-14-tgxiex.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/518187/original/file-20230329-14-tgxiex.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/518187/original/file-20230329-14-tgxiex.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/518187/original/file-20230329-14-tgxiex.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/518187/original/file-20230329-14-tgxiex.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/518187/original/file-20230329-14-tgxiex.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The ‘alpha male’ is an important concept in the manosphere.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/alpha-male-wolf-autumn-540588361">Mircea Costina/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Cuck</h2>
<p>Shortened from <a href="https://www.pastemagazine.com/politics/donald-trump/the-surprising-etymology-of-the-alt-rights-favorit">cuckold</a>, meaning a man whose wife has been unfaithful (a term first used as early as 1250), cuck is widely used in the manosphere and alt-right spaces.</p>
<p>The term is strongly associated with a subgenre of “humiliation pornography”, in which a man derives sexual pleasure from watching his female partner have sex with another man. Cuck is often used as an insult, especially since the idea of allowing one’s partner to have consensual sex with other men goes against heteronormative notions of male sexuality, control and ownership.</p>
<p>In some cases, such pornography also has an interracial dimension, contributing to racist stereotypes of Black men’s hypersexuality and hyperphysicality. Linguist Maureen Kosse has <a href="https://www.colorado.edu/asmagazine/2023/03/16/what-does-ted-cruz-cucks-again-actually-mean">written about</a> how cuck is used to “spread covertly racist online discourse by cloaking medieval sexual logic and racial anger in misogynistic humor”.</p>
<h2>(N)awalt</h2>
<p>(N)awalt means “(not) all women are like that”. The more common form “Awalt” is typically used to ascribe negative stereotypes to women. Denying their individuality, Awalt is used to suggest women are all vapid, insincere, sexually promiscuous, driven by emotions rather than rationality, motivated by financial gain and more. Awalt is also deployed to emphasise the claim that men are everything women are not – moral, rational, intelligent, loyal, honourable and individualistic.</p>
<p>It is clear that manosphere language is contributing to an increasingly politicised and fractious form of gender relations. By understanding this language, we can better counter it.</p>
<p class="fine-print"><em><span>Robert Lawson is a Research Fellow in the Institute for Research on Male Supremacism.</span></em></p>
<p><em>Learning how online misogynists use language can help teachers and parents intervene in radicalisation.</em></p>
<p class="fine-print"><em>Robert Lawson, Associate Professor in Sociolinguistics, Birmingham City University. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>Can ideology-detecting algorithms catch online extremism before it takes hold?</h1>
<p><em>The Conversation, 2023-02-26</em></p>
<figure><img src="https://images.theconversation.com/files/512142/original/file-20230224-16-hxuklk.jpg?ixlib=rb-1.1.0&rect=0%2C11%2C3840%2C2144&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Ideology has always been a critical element in understanding how we view the world, form opinions and make political decisions. </p>
<p>However, the internet has revolutionised the way opinions and ideologies spread, leading to new forms of online radicalisation. Far-right ideologies, which advocate for ultra-nationalism, racism and opposition to immigration and multiculturalism, have proliferated on social platforms.</p>
<p>These ideologies have strong links with violence and terrorism. In recent years, <a href="https://www.asio.gov.au/sites/default/files/2022-02/ASIO_Annual_Report_2020-21.pdf">as much as 40%</a> of the caseload of the Australian Security Intelligence Organisation (ASIO) was related to far-right extremism. This has <a href="https://www.abc.net.au/news/2023-02-13/right-wing-terror-threat-declines-says-asio/101965964">declined</a>, though, with the easing of COVID restrictions.</p>
<p>Detecting online radicalisation early could help prevent far-right ideology-motivated (and potentially violent) activity. To this end, we have developed a <a href="https://arxiv.org/abs/2208.04097">completely automatic system</a> that can determine the ideology of social media users based on what they do online.</p>
<h2>How it works</h2>
<p>Our proposed pipeline is based on detecting the signals of ideology from people’s online behaviour. </p>
<p>There is no way to directly observe a person’s ideology. However, researchers can observe “ideological proxies” such as the use of political hashtags, retweeting politicians and following political parties. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-far-right-online-spaces-use-mainstream-media-to-spread-their-ideology-189066">How far-right online spaces use mainstream media to spread their ideology</a>
</strong>
</em>
</p>
<hr>
<p>But using ideological proxies requires a lot of work: you need experts to understand and label the relationships between proxies and ideology. This can be expensive and time-consuming. </p>
<p>What’s more, online behaviour and contexts change between countries and social platforms. They also shift rapidly over time. This means even more work to keep your ideological proxies up to date and relevant.</p>
<h2>You are what you post</h2>
<p>Our pipeline simplifies this process and makes it automatic. It has two main components: a “media proxy”, which determines ideology via links to media, and an “inference architecture”, which helps us determine the ideology of people who don’t post links to media.</p>
<p>The media proxy measures the ideological leaning of an account by tracking which media sites it posts links to. Posting links to Fox News would indicate someone is more likely to lean right, for example, while linking to the Guardian indicates a leftward tendency. </p>
<p>To categorise the media sites users link to, we took the left-right ratings for a wide range of news sites from two datasets (though many are available). One was <a href="https://reutersinstitute.politics.ox.ac.uk/our-research/digital-news-report-2018">based on a Reuters survey</a> and the other curated by experts at <a href="https://www.allsides.com/media-bias/ratings">Allsides.com</a>. </p>
<p>This works well for people who post links to media sites. However, most people don’t do that very often. So what do we do about them?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/covid-wasnt-a-bumper-campaign-for-right-wing-extremists-but-the-threat-from-terror-remains-199964">COVID wasn't a 'bumper campaign' for right-wing extremists. But the threat from terror remains</a>
</strong>
</em>
</p>
<hr>
<p>That’s where the inference architecture comes in. In our pipeline, we determine how ideologically similar people are to one another with three measures: the kind of language they use, the hashtags they use, and the other users whose content they reshare.</p>
<p>Measuring similarity in hashtags and resharing is relatively straightforward, but such signals are not always available. Language use is the key: it is always present, and a known indicator of people’s latent psychological states. </p>
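For illustration, one simple way to quantify hashtag or resharing overlap between two accounts is Jaccard similarity (size of the intersection over size of the union). This is a generic measure chosen for the sketch, not necessarily the exact metric the pipeline uses.

```python
# Illustrative sketch: Jaccard similarity over the sets of hashtags two
# accounts use (the same measure works for sets of reshared accounts).

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two sets, from 0 (disjoint) to 1 (identical)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

user1_tags = {"auspol", "news", "economy"}
user2_tags = {"auspol", "economy", "sport"}

print(jaccard(user1_tags, user2_tags))  # 2 shared of 4 distinct tags -> 0.5
```

Set-overlap measures like this only work when the signals exist, which is why language use, discussed below, becomes the key fallback: it is always present.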
<p>Using machine-learning techniques we found that people with different ideologies use different kinds of language. </p>
<p>Right-leaning individuals tend to use moral language relating to vice (for example, harm, cheating, betrayal, subversion and degradation), as opposed to virtue (care, fairness, loyalty, authority and sanctity), more than left-leaning individuals. Far-right individuals use grievance language (involving violence, hate and paranoia) significantly more than moderates. </p>
<p>By detecting these signals of ideology, our pipeline can identify and understand the psychological and social characteristics of extreme individuals and communities.</p>
<h2>What’s next?</h2>
<p>The ideology detection pipeline could be a crucial tool for understanding the spread of far-right ideologies and preventing violence and terrorism. By detecting signals of ideology from user behaviour online, the pipeline serves as an early warning system for extreme ideology-motivated activity. It can provide law enforcement with methods to flag users for investigation and intervene before radicalisation takes hold.</p>
<p class="fine-print"><em><span>Rohit Ram receives funding from the Defence Science and Technology Group (DSTG) and was supported by an Australian Government Research Training Program (RTP) Scholarship.</span></em></p>
<p class="fine-print"><em><span>Marian-Andrei Rizoiu receives funding from Meta (Facebook) Research, the Defence Science and Technology Group (DSTG), The Department of Home Affairs and the Defence Innovation Network.</span></em></p>
<p><em>An automatic system to determine political ideology from online posts could be a powerful tool against online radicalisation.</em></p>
<p class="fine-print"><em>Rohit Ram, PhD Student, Social Data Science, University of Technology Sydney; Marian-Andrei Rizoiu, Senior Lecturer in Behavioral Data Science, University of Technology Sydney. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>Political violence in America isn’t going away anytime soon</h1>
<p><em>The Conversation, 2022-11-03</em></p>
<figure><img src="https://images.theconversation.com/files/493041/original/file-20221102-22-8qlz3x.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A member of the National Guard patrols the U.S. Capitol on March 4, 2021. </span> <span class="attribution"><a class="source" href="https://media.gettyimages.com/photos/member-of-the-national-guard-patrols-the-grounds-of-the-us-capitol-on-picture-id1231514110?s=612x612">Brendan Smialowski/AFP via Getty Images</a></span></figcaption></figure><p>A <a href="https://www.npr.org/2022/10/29/1132537240/government-warns-domestic-attacks-midterm-elections">warning</a> about the <a href="https://www.vox.com/policy-and-politics/2022/10/29/23428956/political-attacks-increasing-far-right-congress-pelosi">threat of political violence</a> heading into the 2022 midterm elections was issued to state and local law enforcement officials by the U.S. Department of Homeland Security on Oct. 28, 2022. </p>
<p>The bulletin was released the same day that Speaker of the House of Representatives Nancy Pelosi’s husband was hospitalized after a <a href="https://www.cnn.com/2022/11/02/politics/paul-pelosi-attack-latest-depape-court">home invasion</a> by a lone right-wing extremist seeking to harm her.</p>
<p>This incident is the latest in an increasing stream of extremist <a href="https://www.politico.com/news/2022/10/29/pelosi-assault-attacks-threats-political-figures-00064113">confrontations</a> taking place across the United States in recent years. These incidents have primarily targeted Democrats, including a <a href="https://www.npr.org/2020/12/17/947652491/6-suspects-indicted-for-conspiracy-to-kidnap-michigan-gov-gretchen-whitmer">plot</a> to kidnap Michigan Gov. Gretchen Whitmer in 2020. But threats from both sides of the political spectrum are up <a href="https://www.nytimes.com/2022/10/01/us/politics/violent-threats-lawmakers.html">significantly</a>.</p>
<p>And, of course, there was the Jan. 6, 2021, <a href="https://january6th.house.gov/">insurrection</a> at the U.S. Capitol, where supporters of a defeated Republican president, acting on a <a href="https://www.brennancenter.org/our-work/analysis-opinion/focus-big-lie-not-big-liar">widespread lie</a> he perpetuated, violently attempted to prevent the certification of electoral votes. According to well-documented public evidence, some rioters planned to find and execute both Speaker Pelosi and Vice President <a href="https://www.nytimes.com/2022/06/16/us/politics/jan-6-gallows.html">Mike Pence</a>.</p>
<p>Such incidents reflect a disturbing trend that targets the very fabric, foundation and future of U.S. democracy. But what led to this point?</p>
<p>As a researcher taking a critical and apolitical eye toward security issues, I believe the rise in contemporary right-wing political extremism – and violence – began with an outdated focus in national communications policy.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/493087/original/file-20221102-23-4s8fkw.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A large brick home down the hill from a police tape stretched across the street." src="https://images.theconversation.com/files/493087/original/file-20221102-23-4s8fkw.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/493087/original/file-20221102-23-4s8fkw.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/493087/original/file-20221102-23-4s8fkw.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/493087/original/file-20221102-23-4s8fkw.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/493087/original/file-20221102-23-4s8fkw.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/493087/original/file-20221102-23-4s8fkw.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/493087/original/file-20221102-23-4s8fkw.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Police take measurements around House
Speaker Nancy Pelosi’s San Francisco home after her husband, Paul Pelosi, was assaulted inside the home on Oct. 28, 2022.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/police-take-measurements-around-speaker-of-the-united-news-photo/1244292841?phrase=pelosi%20home&adppopup=true">Tayfun Coskun/Anadolu Agency via Getty Images</a></span>
</figcaption>
</figure>
<h2>Media-induced slow burn</h2>
<p>Until the late 1980s, the <a href="https://www.mtsu.edu/first-amendment/article/955/fairness-doctrine">Federal Communications Commission’s Fairness Doctrine</a> required traditional licensed broadcasters to offer competing viewpoints on controversial public issues. But these rules <a href="https://www.usatoday.com/story/news/factcheck/2020/11/28/fact-check-fairness-doctrine-applied-broadcast-licenses-not-cable/6439197002/">did not apply</a> to cable or satellite providers. As a result, the rise of cable news channels in the 1990s led to highly partisan programming that <a href="https://theconversation.com/dont-be-too-quick-to-blame-social-media-for-americas-polarization-cable-news-has-a-bigger-effect-study-finds-187579">helped divide</a> American society in the ensuing decades. </p>
<p>This programming fueled increasing polarization in the public and political arenas. Bipartisanship was abandoned in the 1990s, when the Republican Congress under Speaker Newt Gingrich <a href="https://history.princeton.edu/about/publications/burning-down-house-newt-gingrich-fall-speaker-and-rise-new-republican-party">embraced</a> a “scorched-earth” policy of governing. That meant treating the minority party not as the loyal opposition and respected elected colleagues who had differences over policy, but as enemies.</p>
<p>In addition to emerging <a href="https://harvardpolitics.com/organized-polarize-cnn-fox-news-msnbc-roots-partisan-cable-television/">partisan cable television networks like MSNBC and Fox News</a>, in the early 2000s, an increasingly polarized Congress and the public received a new source of division: social media.</p>
<p>Internet platforms such as Twitter, Facebook and 4Chan allowed anyone, anywhere, to create, produce and distribute political commentary and extremist rhetoric that could be amplified by other users and drive the day’s news cycle. </p>
<p>Political pundits and influencers across the spectrum became less concerned about correctly informing the public. Instead, <a href="https://nicd.arizona.edu/blog/2021/06/14/how-the-outrage-industrial-complex-profits-from-stoking-americans-anger-at-each-other/">they stoked outrage</a> in the search for money-generating clicks and advertising dollars. And political parties exploited this outrage to satisfy and energize their voting base or funders. </p>
<figure class="align-center ">
<img alt="A white woman and man pull back a black curtain to show a voting machine with a big screen." src="https://images.theconversation.com/files/493044/original/file-20221102-24-qix10y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/493044/original/file-20221102-24-qix10y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/493044/original/file-20221102-24-qix10y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/493044/original/file-20221102-24-qix10y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/493044/original/file-20221102-24-qix10y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/493044/original/file-20221102-24-qix10y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/493044/original/file-20221102-24-qix10y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Philadelphia city commissioners display a voting machine in Philadelphia City Hall on Oct. 24, 2022.</span>
<span class="attribution"><a class="source" href="https://media.gettyimages.com/photos/philadelphia-city-commissioner-lisa-deeley-and-deputy-comissioner-picture-id1244203987?s=612x612">Ed Jones/AFP via Getty Images</a></span>
</figcaption>
</figure>
<h2>Moderation or censorship?</h2>
<p>To combat online extremism, social media companies reluctantly began <a href="https://knowledge.wharton.upenn.edu/article/social-media-firms-moderate-content/">moderating user posts</a> and sometimes <a href="https://reason.org/commentary/social-media-companies-have-the-right-to-ban-users/">banned</a> prominent users who violated their community standards or terms of service. </p>
<p>In response to what it dubbed “<a href="https://www.politico.com/news/2022/07/01/social-media-sweeps-the-states-00043229">censorship</a>” from Big Tech, the right wing <a href="https://www.pewresearch.org/journalism/2022/10/06/the-role-of-alternative-social-media-in-the-news-and-information-environment/">splintered</a> into numerous niche platforms catering to their conspiracy theories and extremist or violent views such as Truth Social – run by former President Trump – Gab, Parler, Rumble and others. </p>
<p>Compared with Democrats, Republicans have mastered this form of gutter politics. One example: Right-wing political figures have <a href="https://www.theguardian.com/us-news/2022/oct/31/donald-trump-jr-misinformation-memes-paul-pelosi-hammer">mocked</a> Paul Pelosi for being attacked, spread <a href="https://www.politico.com/news/2022/10/31/conservatives-disinformation-paul-pelosi-assault-00064208">baseless conspiracy theories</a> about his personal life and used the incident for applause lines at <a href="https://thehill.com/homenews/campaign/3713080-arizona-governor-candidate-kari-lake-jokes-about-paul-pelosi-attack/">campaign rallies</a>. </p>
<p>Accordingly, today’s voters and politicians end up confronting one another in the public sphere not on matters of substance affecting the future of the country, but on fundamental facts and conspiracy theories, or to address distractions often generated by their respective media ecosystems. This is only exacerbated by a prolonged nationwide decline in <a href="https://thehill.com/changing-america/enrichment/education/598795-media-literacy-is-desperately-needed-in-classrooms/">media literacy</a> and <a href="https://www.ncsl.org/legislators-staff/legislators/legislators-back-to-school/tackling-the-american-civics-education-crisis.aspx">civics education</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/493083/original/file-20221102-26-22xyb5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A crowd of people, some wearing protective helmets, push up against a group of protesters. One of them holds an American flag in the air." src="https://images.theconversation.com/files/493083/original/file-20221102-26-22xyb5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/493083/original/file-20221102-26-22xyb5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/493083/original/file-20221102-26-22xyb5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/493083/original/file-20221102-26-22xyb5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/493083/original/file-20221102-26-22xyb5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/493083/original/file-20221102-26-22xyb5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/493083/original/file-20221102-26-22xyb5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Rioters outside the U.S. Capitol Building on Jan. 6, 2021, clash with police.</span>
<span class="attribution"><a class="source" href="https://media.gettyimages.com/photos/supporters-of-us-president-donald-trump-fight-with-riot-police-the-picture-id1230457933?s=612x612">Roberto Schmidt/AFP via Getty Images</a></span>
</figcaption>
</figure>
<h2>Law enforcement’s unique problem</h2>
<p>Against this backdrop, federal law enforcement has become more vocal in warning about the dangers of domestic political extremism, including a <a href="https://www.dhs.gov/ntas/advisory/national-terrorism-advisory-system-bulletin-february-07-2022">bulletin</a> issued in February 2022. The Oct. 28 DHS bulletin further underscores this concern. </p>
<p>But it’s hard for law enforcement to effectively address political extremism, because speech protected under the <a href="https://constitution.congress.gov/constitution/amendment-1/">First Amendment</a> is a major consideration. Phrases like “I’m fighting for you!” or “Saving our country!” might seem like typical political bluster to one person. But they could be seen by others as an implied call for intimidation or violent action against political opponents, election officials, volunteer poll workers and even ordinary voters. </p>
<p>How does speech turn into violent action? Security specialists and scholars use the term “<a href="https://www.wired.com/story/jargon-watch-rising-danger-stochastic-terrorism/">stochastic terrorism</a>” to capture how a single, hard-to-locate person might be inspired or influenced toward violence by broader extremist rhetoric, <a href="https://apnews.com/article/california-donald-trump-san-francisco-47c103cfe696df9faf0e57e1c7dd4f10">as appears to have been the case</a> with the man who allegedly tried to kill Paul Pelosi with a hammer. </p>
<p>Law enforcement’s problem is made worse by right-wing lawmakers who normalize or actively praise the actions of violent extremists, calling them “<a href="https://www.marketwatch.com/story/trump-and-allies-work-to-rebrand-jan-6-rioters-as-patriots-heroes-and-martyrs-01626809391">patriots</a>” and demanding their prison sentences be overturned or <a href="https://www.politico.com/news/2022/01/30/trump-pardon-jan6-defendants-00003450">pardoned</a>. This helps obscure the actual reasons for such incidents, often by deflecting them into broader conspiracy theories involving their opponents.</p>
<p>Certainly there are controversial left-leaning politicians, pundits, activists and talking points too. </p>
<p>But few – if any – openly disregard the fabric of American government, scheme to overturn democratic elections by force or plot to assassinate politicians. </p>
<p>By contrast, there are over <a href="https://www.brookings.edu/blog/fixgov/2022/10/07/democracy-on-the-ballot-how-many-election-deniers-are-on-the-ballot-in-november-and-what-is-their-likelihood-of-success/">300 Republican election deniers</a> running for office this year, including many incumbents – the vast majority of whom endorse political violence such as the Jan. 6 attack either by their actions or their silence. </p>
<h2>Hope for the best; prepare for the worst</h2>
<p>Tensions are high heading into the 2022 midterms. Politicians are making final arguments, and the online messaging machines are spreading campaign information, fundraising requests – and plenty of disinformation as well.</p>
<p>Americans expect a <a href="https://www.brennancenter.org/our-work/research-reports/why-presidential-transition-process-matters">peaceful transfer of political power</a> after elections, but recent history shows we must prepare for the worst. It’s clear that the modern Republican Party is openly and successfully embracing and exploiting misinformation, outrage and attacks on democracy and the rule of law. </p>
<p>Until Republicans actively disavow their extremist rhetoric and the misinformation contributing to it, I believe the likelihood for political violence in America increases with each passing day.</p><img src="https://counter.theconversation.com/content/193597/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Richard Forno has received research funding related to cybersecurity from the National Science Foundation (NSF) and the Department of Defense (DOD) during his academic career, and sits on the advisory board of BlindHash, a cybersecurity startup focusing on remedying the password problem. He is a registered independent voter, too.</span></em></p>The rise in contemporary right-wing political extremism – and violence – can be traced back to events in the 1990s.Richard Forno, Principal Lecturer in Computer Science and Electrical Engineering, University of Maryland, Baltimore CountyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1609862021-05-17T20:07:01Z2021-05-17T20:07:01ZJacinda Ardern calls for ‘ethical algorithms’ to combat online extremism. What this means<p>New Zealand’s prime minister Jacinda Ardern has called for “ethical algorithms” to help stop online radicalisation.</p>
<p>She made her call on the weekend at the second summit of the “<a href="https://www.christchurchcall.com/index.html">Christchurch Call</a>” for action to eliminate terrorist and violent extremist content online. </p>
<p>The first Christchurch Call summit was convened by Ardern and French president Emmanuel Macron in May 2019. It took place two months after New Zealand’s first and worst mass shooting in decades, the Christchurch mosque shootings, in which a 28-year-old Australian gunman killed 51 men, women and children. </p>
<p>The Christchurch Call is a voluntary compact between governments and technology companies. So far 55 nations have signed on – with the most notable new signatory <a href="https://apnews.com/article/europe-technology-government-and-politics-edb4e1cd037984509c3dc04178637f5c">being the United States</a>, which refused to join under Donald Trump. </p>
<p>Google (which owns YouTube), Facebook, Twitter, Microsoft and Amazon have also signed on, as well as Japanese messaging app LINE, French search engine Qwant and video-sharing sites Daily Motion and JeuxVideo.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/400955/original/file-20210517-21-su57r5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Trump supporters, believing false claims an election was stolen, try to break through a police barrier at the US Capitol on January 6 2021." src="https://images.theconversation.com/files/400955/original/file-20210517-21-su57r5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/400955/original/file-20210517-21-su57r5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/400955/original/file-20210517-21-su57r5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/400955/original/file-20210517-21-su57r5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/400955/original/file-20210517-21-su57r5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/400955/original/file-20210517-21-su57r5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/400955/original/file-20210517-21-su57r5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Trump supporters, believing false claims an election was stolen, try to break through a police barrier at the US Capitol on January 6 2021.</span>
<span class="attribution"><span class="source">John Minchillo/AP</span></span>
</figcaption>
</figure>
<p>In light of clear examples of extremist behaviour still being fomented online – the storming of the US Capitol in January being a case in point – one might question how much has been achieved. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/two-years-on-from-the-christchurch-terror-attack-how-much-has-really-changed-156850">Two years on from the Christchurch terror attack, how much has really changed?</a>
</strong>
</em>
</p>
<hr>
<p>On the weekend Ardern, while noting the progress made in areas such as the platforms’ protocols for moderating and removing extremist content, singled out <a href="https://www.theguardian.com/world/2021/may/15/jacinda-ardern-calls-for-ethical-algorithms-to-help-stop-online-radicalisation">the need for ethical algorithms</a>. Here’s why.</p>
<h2>How social media platforms serve content</h2>
<p>Imagine a vast restaurant. Service here works in an interesting way. </p>
<p>The waiters dash around the restaurant to bring diners as much food as they can eat. They don’t take orders but effectively direct you to what you will eat by putting that food in front of you. </p>
<p>The restaurant owner has designed it this way, to keep you eating as much as possible. </p>
<p>How do the waiters know what you like? They have a record of what you ate last time. They listen in on your table conversation. You mention you feel like French fries? They will bring you buckets of fries over and over. </p>
<p>At first you think: “Isn’t this wonderful, these waiters know just what I like.” </p>
<p>But the waiters don’t care about what you like. They just want you to keep eating. Even if the food is unhealthy and increases your risk of disease or death. No matter. They’ll keep bringing it as long as you keep eating.</p>
<p>If these waiters were ethical, if they cared about your well-being, they might bring you healthy alternatives. They might put a salad before you. If the restaurant owner was ethical, the service would not be designed to encourage overeating. It would seek to interest you in something else.</p>
<p>But then you might stop eating. You might leave the restaurant. That would hurt profits.</p>
<h2>Algorithms are designed to decide what we see</h2>
<p>Social media algorithms work the same as the service in our metaphorical restaurant. Algorithms are tech companies’ <a href="https://www.google.com/search/howsearchworks/algorithms/">secret recipes</a> to keep users on their platforms. </p>
<p>The easiest way to do that is serve you content you like – perhaps with even more salt, sugar and fat. </p>
<p>On YouTube it’s more of the same type of content you’ve been watching. Like videos of stray dogs being rescued? You’ll get more of those recommended to you. If it’s videos about governments hiding alien technology, you’ll get more of those.</p>
<p>Facebook works a little bit differently. It will recommend groups for you to join based on your interests. If you’ve joined a group about native birds, or ascending to the fifth dimension, more such groups will be recommended to you. Those groups enable you to interact with and make “friends” with others who share your interests and beliefs.</p>
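<p>The restaurant metaphor maps directly onto a content-based recommender. As a minimal sketch (the <code>recommend</code> function, topic labels and scoring here are invented for illustration, not any platform’s actual system), “more of the same” ranking can be as simple as counting topics in a watch history:</p>

```python
from collections import Counter

def recommend(watch_history, candidates, k=3):
    """Rank candidate videos by how often their topic appears in the
    user's watch history, then return the k highest-scoring items.
    Counter returns 0 for unseen topics, so unfamiliar content sinks."""
    topic_counts = Counter(video["topic"] for video in watch_history)
    return sorted(candidates,
                  key=lambda v: topic_counts[v["topic"]],
                  reverse=True)[:k]

# A user who mostly watches one topic gets served more of the same.
history = [{"topic": "dog-rescue"}, {"topic": "dog-rescue"}, {"topic": "conspiracy"}]
candidates = [{"id": 1, "topic": "dog-rescue"}, {"id": 2, "topic": "cooking"},
              {"id": 3, "topic": "conspiracy"}, {"id": 4, "topic": "dog-rescue"}]
print([v["id"] for v in recommend(history, candidates)])  # → [1, 4, 3]
```

<p>Note that nothing in the ranking asks whether the content is good for the viewer; the only signal is past consumption, which is exactly the feedback loop the article describes.</p>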
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-facebook-created-its-own-supreme-court-for-judging-content-6-questions-answered-160349">Why Facebook created its own ‘supreme court’ for judging content – 6 questions answered</a>
</strong>
</em>
</p>
<hr>
<h2>Repetition and normalisation</h2>
<p>These strategies reinforce and normalise our interests and views. They are crucial reasons for the <a href="https://www.nature.com/articles/s41599-020-00546-3">viral-like spread</a> of extremism. </p>
<p>An idea, no matter how absurd or extreme, becomes more acceptable
if <a href="https://www.rand.org/content/dam/rand/pubs/research_reports/RR400/RR453/RAND_RR453.pdf">repeated over and over again</a>. Advertisers know this. So do propagandists. The more we view videos and posts pushing the same ideas, and connect with people who share the same views, the more we feel we’re normal and it’s those who disagree with us who are deluded. </p>
<p>This radicalisation is a social phenomenon. It is also a business. </p>
<p>Those pushing or holding radical ideas often think they are opposing Big Tech and other corporate interests. They couldn’t be more wrong. Extremist content is a lucrative market segment. Keeping your eyes on a page, enthralling you and reinforcing your views is a way for content creators, social influencers and the platforms themselves to make bank, boost their ego and spread their message. Which, in turn, legitimises their message.</p>
<p>Remember the fundamental business model: for Big Tech it is about selling your attention to advertisers, no matter the message. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/reddit-removes-millions-of-pro-trump-posts-but-advertisers-not-values-rule-the-day-141703">Reddit removes millions of pro-Trump posts. But advertisers, not values, rule the day</a>
</strong>
</em>
</p>
<hr>
<figure class="align-center ">
<img alt="New Zealand Prime Minister Jacinda Ardern, third right, at the Christchurch Call summit on May 15 2021, discussing how to combat violent extremism being spread online." src="https://images.theconversation.com/files/400950/original/file-20210517-19-1mulw5s.jpg?ixlib=rb-1.1.0&rect=0%2C227%2C2000%2C1005&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/400950/original/file-20210517-19-1mulw5s.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=371&fit=crop&dpr=1 600w, https://images.theconversation.com/files/400950/original/file-20210517-19-1mulw5s.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=371&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/400950/original/file-20210517-19-1mulw5s.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=371&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/400950/original/file-20210517-19-1mulw5s.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=467&fit=crop&dpr=1 754w, https://images.theconversation.com/files/400950/original/file-20210517-19-1mulw5s.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=467&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/400950/original/file-20210517-19-1mulw5s.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=467&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">New Zealand Prime Minister Jacinda Ardern, third right, at the Christchurch Call summit on May 15 2021, discussing how to combat violent extremism being spread online.</span>
<span class="attribution"><span class="source">Christchurch Call/AP</span></span>
</figcaption>
</figure>
<h2>Can math be made ethical?</h2>
<p>Ardern’s call is for algorithms designed with intent – the intent to reduce the promotion of content which can harm you, kill you or – given the right conditions – someone else.</p>
<p>An <a href="https://www.brookings.edu/research/ethical-algorithm-design-should-guide-technology-regulation/">ethical algorithm </a> would encourage a more balanced diet, even if it meant you would stop consuming. </p>
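<p>In engineering terms, one reading of such an algorithm (purely a hypothetical sketch, with invented <code>engagement</code> and <code>harm</code> scores rather than any real platform signal) is a re-ranking step that weighs estimated harm against engagement:</p>

```python
def ethical_rerank(candidates, alpha=0.5):
    """Rank items by engagement minus a weighted harm penalty.
    'engagement' and 'harm' are hypothetical per-item scores in [0, 1];
    alpha controls how strongly estimated harm is penalised."""
    return sorted(candidates,
                  key=lambda v: v["engagement"] - alpha * v["harm"],
                  reverse=True)

items = [{"id": "extreme", "engagement": 0.9, "harm": 0.8},
         {"id": "news",    "engagement": 0.6, "harm": 0.1},
         {"id": "hobby",   "engagement": 0.5, "harm": 0.0}]
# With the penalty applied, the high-harm item no longer ranks first.
print([v["id"] for v in ethical_rerank(items)])
```

<p>With <code>alpha=0</code> this degenerates to pure engagement ranking. The contentious question the article raises – who decides what counts as harmful, and how the scores are assigned – sits entirely outside the code.</p>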
<p>Limiting what the waiters can serve you doesn’t completely avoid the need for important discussions. For example, who should then decide what “healthy” means? But this would be a less contentious, more productive debate than a stale argument about free expression versus censorship. Especially when the real discussion is the promotion and convenience of “junk” thinking.</p>
<p>Limiting consumption <a href="https://www.sciencedirect.com/science/article/pii/S0148296315001186">by making things</a> harder to find, not delivered on a platter, is preferable to any outright ban.</p><img src="https://counter.theconversation.com/content/160986/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Nathalie Collins does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Social media algorithms are akin to a licence to promote junk food or tobacco to children.Nathalie Collins, Academic Director (National Programs), Edith Cowan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1561252021-03-08T19:06:42Z2021-03-08T19:06:42ZMeet BreadTube, the YouTube activists trying to beat the far-right at their own game<figure><img src="https://images.theconversation.com/files/388200/original/file-20210308-19-9qia42.jpg?ixlib=rb-1.1.0&rect=0%2C77%2C5184%2C3368&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Nordwood Themes/Unsplash</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>YouTube has gained a reputation for facilitating far-right <a href="https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html">radicalisation</a> and spreading <a href="https://www.researchgate.net/publication/338789290_Understanding_the_Incel_Community_on_YouTube">antisocial ideas</a>.</p>
<p>However, in an interesting twist, the same <a href="https://www.theatlantic.com/international/archive/2020/09/how-memes-lulz-and-ironic-bigotry-won-internet/616427/">subversive, comedic, satiric and ironic</a> tactics used by far-right internet figures are now being countered by a group of leftwing YouTubers known as “BreadTube”.</p>
<p>By making videos on the same topics as the far-right, BreadTube videos essentially <a href="https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html">hijack YouTube’s algorithm</a> by getting recommended to viewers who consume far-right content. BreadTubers want to pop YouTube’s political bubbles to create space for deradicalisation. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/388188/original/file-20210308-20-gtxot6.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/388188/original/file-20210308-20-gtxot6.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/388188/original/file-20210308-20-gtxot6.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=199&fit=crop&dpr=1 600w, https://images.theconversation.com/files/388188/original/file-20210308-20-gtxot6.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=199&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/388188/original/file-20210308-20-gtxot6.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=199&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/388188/original/file-20210308-20-gtxot6.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=250&fit=crop&dpr=1 754w, https://images.theconversation.com/files/388188/original/file-20210308-20-gtxot6.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=250&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/388188/original/file-20210308-20-gtxot6.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=250&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The subreddit devoted to BreadTube content describes it as being like ‘YouTube, but good’.</span>
</figcaption>
</figure>
<h2>Pivot to the (political) left</h2>
<p>The name “BreadTube” has its origin in the anarcho-socialist book <a href="https://www.penguin.com.au/books/the-conquest-of-bread-cla-9780141396118">The Conquest of Bread</a>, by Peter Kropotkin. The name emerged organically as a more comedic alternative to the name “LeftTube”, and captures the dissident leftwing nature of the creators it encompasses. </p>
<p>The movement has no clear origin, but many BreadTube channels started in opposition to “anti-SJW” (social justice warrior) content that <a href="https://www-tandfonline-com.virtual.anu.edu.au/doi/pdf/10.1080/14680777.2018.1447333?needAccess=true">gained traction in the mid-2010s</a>. </p>
<p>The main figures associated with BreadTube are Natalie Wynn, creator of <a href="https://www.youtube.com/c/ContraPoints/videos">ContraPoints</a>; Abigail Thorn, creator of <a href="https://www.youtube.com/user/thephilosophytube">Philosophy Tube</a>; Harris Brewis, creator of <a href="https://www.youtube.com/user/hbomberguy">Hbomberguy</a>; and Lindsay Ellis, creator of a channel named after <a href="https://www.youtube.com/user/chezapoctube">herself</a>. Originally the label was imposed on these creators, and while they all identify with it to varying degrees, there remains a vibrant debate as to <a href="https://medium.com/swlh/breadtube-lefttube-youtube-contrapoints-hbomberguy-radicalization-541d69313f8b">who is part of the movement</a>.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/388185/original/file-20210308-23-bji1wg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/388185/original/file-20210308-23-bji1wg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/388185/original/file-20210308-23-bji1wg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/388185/original/file-20210308-23-bji1wg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/388185/original/file-20210308-23-bji1wg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/388185/original/file-20210308-23-bji1wg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/388185/original/file-20210308-23-bji1wg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">YouTuber Natalie Wynn’s ContraPoints is among the leading channels for BreadTube content.</span>
</figcaption>
</figure>
<p>BreadTubers are united only by a shared interest in combating the far-right online and a willingness to engage with challenging social and political issues. These creators infuse politics with their other interests such as <a href="https://www.youtube.com/watch?v=cHTMidTLO60">films</a>, <a href="https://www.youtube.com/watch?v=_mvYUaZYtJk">video games</a>, <a href="https://www.youtube.com/watch?v=oENI8NnTx0w">popular culture</a>, <a href="https://www.youtube.com/watch?v=ejdlkfXwPQc">histories</a> and <a href="https://www.youtube.com/watch?v=bFeXJkcKYaU">philosophy</a>.</p>
<p>The current most popular BreadTuber, Wynn, has described her channel as a “<a href="https://www.youtube.com/watch?v=0Ix9jxid2YU">long theatrical response to fascism</a>” — and a part of “<a href="https://www.youtube.com/watch?v=2Nrz4-FZx6k">the left’s immune system</a>”. In an interview with the New Yorker, Wynn said she wants to create better propaganda than the far-right, with the aim of <a href="https://www.newyorker.com/culture/persons-of-interest/the-stylish-socialist-who-is-trying-to-save-youtube-from-alt-right-domination">winning people over</a> rather than just criticising.</p>
<p>Euphemisms, memes and <a href="https://www.npr.org/2019/07/09/739999739/youtube-creators-are-trying-to-fight-radicalization-online">“inside” internet language</a> are also used in a way that traditional media <a href="https://www.theatlantic.com/international/archive/2020/09/how-memes-lulz-and-ironic-bigotry-won-internet/616427/">struggle to</a> replicate. <a href="https://www.splcenter.org/hatewatch/2018/09/18/ok-sign-white-power-symbol-or-just-right-wing-troll">The Southern Poverty Law Center</a> has referenced BreadTubers to help unpack how memes spread among far-right groups, and the difficulty in identifying the line between “trolling” and genuine use of far-right symbols.</p>
<p>BreadTubers use the same titles, descriptions and tags as far-right YouTube personalities, so their content is recommended to the same viewers. In their recent <a href="https://www.researchgate.net/publication/338565276_YouTube_as_Praxis_On_BreadTube_and_the_Digital_Propagation_of_Socialist_Thought">journal article on BreadTube</a>, researchers Dmitry Kuznetsov and Milan Ismangil summed up the strategy thus:</p>
<blockquote>
<p>The first layer involves use of search algorithms by BreadTubers to disseminate their videos. The second layer – a kind of affective hijacking – revolves around using a variety of theatrical and didactical styles to convey leftist thought.</p>
</blockquote>
<h2>What are the results?</h2>
<p>The success of BreadTubers has been hard to quantify, although they seem to be gaining significant traction. They receive tens of millions of views a month and have been increasingly referenced in <a href="https://www.npr.org/2019/07/09/739999739/youtube-creators-are-trying-to-fight-radicalization-online">media</a> and <a href="https://www.researchgate.net/publication/338565276_YouTube_as_Praxis_On_BreadTube_and_the_Digital_Propagation_of_Socialist_Thought">academia</a> as a case study in deradicalisation. </p>
<p>For example, <a href="https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html">The New York Times has reported in depth</a> on the journey of individuals from the far-right to deradicalisation via BreadTube. Further, the <a href="https://www.reddit.com/r/BreadTube/">r/BreadTube</a> section of Reddit and the comments on BreadTube creators’ videos are littered with users describing how they broke away from the far-right. </p>
<p>These anecdotal journeys, while unremarkable on their own, collectively point to the movement’s success.</p>
<h2>YouTube’s algorithms are a problem</h2>
<p>The claim that YouTube helps <a href="https://www.usatoday.com/story/news/nation/2020/12/15/google-youtube-white-supremacist-nazi-problem/3830535001/">promote far-right</a> content is both <a href="https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth">widely accepted</a> and <a href="https://www.businessinsider.com.au/youtube-algorithm-not-radicalizing-people-penn-state-study-found-2019-10?r=US&IR=T">contested</a>. </p>
<p>The central problem in deciding which claim is true is that YouTube’s algorithm is secret. YouTube’s fixation with <a href="https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth">maximising watch time</a> has meant users are <a href="https://www.technologyreview.com/2018/04/12/143919/an-ex-google-engineer-is-scraping-youtube-to-pop-our-filter-bubbles/">recommended content</a> designed to keep them hooked. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/youtubes-algorithms-might-radicalise-people-but-the-real-problem-is-weve-no-idea-how-they-work-129955">YouTube's algorithms might radicalise people – but the real problem is we've no idea how they work</a>
</strong>
</em>
</p>
<hr>
<p>Critics say YouTube has historically had a tendency to <a href="https://journals-sagepub-com.virtual.anu.edu.au/doi/full/10.1177/1940161220964767">recommend increasingly extreme content</a> to the site’s rightwing users. Until recently, mainstream conservatives had a limited presence on YouTube and thus the extreme right was over-represented in rightwing political and social commentary. </p>
<p>At its worst, the YouTube algorithm can allegedly <a href="https://www.cnbc.com/2019/12/30/critics-slam-youtube-study-showing-no-ties-to-radicalization.html">create a personalised radicalisation bubble</a>, recommending only far-right content and even introducing the viewer to <a href="https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html">content that pushes them further</a> in that direction. </p>
<p>YouTube is aware of these concerns and <a href="https://www.nytimes.com/live/2020/2020-election-misinformation-distortions#youtube-clamped-down-on-content-but-researchers-say-qanon-still-spread">does tinker with its algorithm</a>. But how effectively it does this has been <a href="https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html">questioned</a>.</p>
<h2>Limitations</h2>
<p>Ultimately, BreadTubers identify and discuss, but don’t have the answer to, many of the <a href="https://www.theatlantic.com/magazine/archive/2017/12/brotherhood-of-losers/544158/">structural causes of alienation</a> that may be driving far-right recruitment. </p>
<p><a href="https://www.france24.com/en/20181116-income-inequality-financial-crisis-economic-uncertainty-rise-far-right-europe-austerity">Economic inequality</a>, <a href="https://journals-sagepub-com.virtual.anu.edu.au/doi/full/10.1177/0963721418817755">lack of existential purpose</a>, <a href="https://csreports.aspeninstitute.org/documents/Knight2018-Chapter4.pdf">distrust in modern media</a> and <a href="http://www.oecd.org/gov/trust-in-government.htm">frustration at politicians</a> are just some of the problems that may have a part to play. </p>
<p>Still, BreadTube may yet be <a href="https://www.youtube.com/watch?v=2Nrz4-FZx6k">one piece of the puzzle</a> in addressing the problem of far-right content online. Having popular voices that are tuned into internet culture — and which aim to respond to extremist content using the same tone of voice — could be invaluable in turning the tide of far-right radicalisation.</p><img src="https://counter.theconversation.com/content/156125/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Alexander Mitchell Lee receives funding from the Australian Government Research Training Program (AGRTP) Stipend Scholarship.</span></em></p>Leftwing YouTubers are aiming to get their videos in front of viewers who typically watch far-right content, by mimicking their keywords and hoping the site’s algorithms will do the rest.Alexander Mitchell Lee, PhD Candidate, Crawford School of Public Policy, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1476682020-10-08T05:46:48Z2020-10-08T05:46:48ZFacebook is removing QAnon pages and groups from its sites, but critical thinking is still the best way to fight conspiracy theories<p>Facebook has <a href="https://www.nytimes.com/2020/10/06/technology/facebook-qanon-crackdown.html">announced a ban</a> on groups and pages identified with the rapidly growing QAnon conspiracy movement, which will cover both Facebook itself and the Facebook-owned Instagram. </p>
<p>QAnon is a far-right conspiracy theory that alleges, among other things, that US President Donald Trump is battling Satan-worshipping paedophiles and a global child sex-trafficking ring run by Democrats. While the movement began in the US, it has begun to attract followers in other countries, <a href="https://theconversation.com/why-qanon-is-attracting-so-many-followers-in-australia-and-how-it-can-be-countered-144865">including Australia</a>.</p>
<p>Facebook’s ban escalates a policy announced in August that <a href="https://www.nytimes.com/2020/09/18/technology/facebook-tried-to-limit-qanon-it-failed.html">aimed to ban</a> QAnon groups promoting violence, and comes as the social media giant attempts to <a href="https://www.nytimes.com/2020/10/07/technology/facebook-political-ads-ban.html">slow the spread of disinformation</a> on its platform in the lead-up to the US presidential election on November 3. </p>
<p>Twitter also banned “<a href="https://twitter.com/TwitterSafety/status/1285726277719199746">so-called ‘QAnon’ activity</a>” in July. After Facebook’s latest move, some QAnon adherents were quick to <a href="https://www.nytimes.com/2020/10/07/technology/qanon-believers-say-being-banned-from-facebook-is-proof-of-the-conspiracy.html">claim</a> the ban itself was more evidence of a cover-up.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-qanon-is-attracting-so-many-followers-in-australia-and-how-it-can-be-countered-144865">Why QAnon is attracting so many followers in Australia — and how it can be countered</a>
</strong>
</em>
</p>
<hr>
<h2>Can social media suppress ‘dangerous’ ideas?</h2>
<p>Facebook’s action raises important questions. Will it work? Will taking down these pages stop the spread of “potentially dangerous” ideas? </p>
<p>There is some evidence it will. In 2015, Facebook blocked accounts and deleted posts associated with the Islamic State of Iraq and Syria (ISIS). Thereafter, the group’s propaganda did not seem to pop up as often elsewhere online (although it has <a href="https://www.bbc.com/news/technology-53389657">not disappeared entirely</a>). </p>
<p>However, if groups are banned from Facebook or other platforms, they may still find ways to propagate material. This can create a “black market” of ideas out of public view, where any idea, no matter how objectionable, can go completely unchecked. </p>
<h2>Should social media suppress ‘dangerous’ ideas?</h2>
<p>Another question is whether Facebook <em>should</em> be banning “potentially dangerous” groups and pages, and therefore ideas, from its platforms. This is a harder question to answer. </p>
<p>Platforms such as Facebook sit in a grey area in relation to freedom of expression. Banning somebody from a platform does not infringe on their legal right to express themselves — it just means they will have to do it elsewhere. </p>
<p>However, Facebook and other platforms such as Instagram and Twitter are among the main avenues for public expression, and are used not only by everyday individuals but also large organisations and even elected representatives. So the removal of certain groups or ideas should be at least concerning. This is particularly true for those like QAnon which do not directly call for violence (though the group <a href="https://ctc.usma.edu/the-qanon-conspiracy-theory-a-security-threat-in-the-making/">has been linked</a> to some violent incidents).</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/netflixs-the-social-dilemma-highlights-the-problem-with-social-media-but-whats-the-solution-147351">Netflix's The Social Dilemma highlights the problem with social media, but what's the solution?</a>
</strong>
</em>
</p>
<hr>
<h2>The value of free expression</h2>
<p>Trump has said he has heard followers of QAnon are “<a href="https://www.nytimes.com/2020/08/19/us/politics/trump-qanon-conspiracy-theories.html">people who love our country</a>”. Like other far-right groups, QAnon is ultra-nationalistic, so Trump is likely correct.</p>
<p>QAnon’s ultra-nationalism is important when we talk about the Facebook ban because one of the founding principles of the United States as a nation is the idea people should be free to express any idea they like, including conspiracy theories, ideas associated with religious cults and hateful propaganda.</p>
<p>Key texts that informed the foundation of the US, such as the introduction to <a href="https://archive.org/details/ageofreason00painiala">The Age of Reason</a> by Thomas Paine, <a href="https://books.google.com.au/books?id=nejQAAAAMAAJ&dq=areopagitica&pg=PP13&redir_esc=y#v=onepage&q=areopagitica&f=false">Areopagitica</a> by John Milton and John Stuart Mill’s <a href="https://www.gutenberg.org/files/34901/34901-h/34901-h.htm">On Liberty</a>, all make similar arguments on freedom of expression.</p>
<figure class="align-center ">
<img alt="A black-and-white photograph of a bald middle-aged man in late 19th-century dress (John Stuart Mill).k" src="https://images.theconversation.com/files/362369/original/file-20201008-20-s9edm5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/362369/original/file-20201008-20-s9edm5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=754&fit=crop&dpr=1 600w, https://images.theconversation.com/files/362369/original/file-20201008-20-s9edm5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=754&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/362369/original/file-20201008-20-s9edm5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=754&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/362369/original/file-20201008-20-s9edm5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=947&fit=crop&dpr=1 754w, https://images.theconversation.com/files/362369/original/file-20201008-20-s9edm5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=947&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/362369/original/file-20201008-20-s9edm5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=947&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">English philosopher John Stuart Mill argued for the importance of the right to free expression, especially of heretical ideas.</span>
<span class="attribution"><span class="source">London Stereoscopic Company - Hulton Archive</span></span>
</figcaption>
</figure>
<p>They argue that when we deny an idea the chance to be expressed, we do <em>ourselves</em> a disservice because we deny ourselves a chance to hear it. It is not just the right of the expresser to think and say; it is the right of the listener to hear and think.</p>
<p>From this point of view, ideas expressed by QAnon or any other fringe group should sharpen our ability to think critically about what we claim to know. If someone puts forward a seemingly crazy idea, they should be heard, because their ideas could be correct or hold kernels of truth — if not, they need to be publicly refuted for the benefit of everyone. </p>
<p>John Stuart Mill argued “the greatest harm done is to those who are not heretics, and whose whole mental development is cramped, and their reason cowed, by the fear of heresy”. The heretical view is therefore the most salient of all views, because its very heresy enhances our individual and collective capacity for critical thinking. </p>
<h2>No simple solutions</h2>
<p>Do these centuries-old principles still hold in the age of social media? Platforms like Facebook appear perfectly suited to the promotion and dissemination of conspiracy theories like QAnon. In their relentless quest for our attention, the platforms take advantage of the human tendency to find salacious and infuriating articles and ideas more captivating than nuanced, balanced and factual material.</p>
<p>There is no simple solution or shortcut to mitigating potentially dangerous ideas. They need to be openly refuted, but doing so requires time, engagement with the ideas themselves and, first and foremost, a capacity for critical thinking. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/to-combat-conspiracy-theories-teach-critical-thinking-and-community-values-147314">To combat conspiracy theories teach critical thinking – and community values</a>
</strong>
</em>
</p>
<hr>
<p>Who will do this work? It may be an indictment of our educational systems if it can be shown that we are not producing enough critical thinkers. Perhaps this is the place to start, so we do not have to rely on Silicon Valley to tell us which crazy ideas we can read — because those ideas will struggle to find a home in the first place. </p>
<p>In the meantime perhaps Facebook can use its algorithms and tremendous resources to find a way to promote critical thinking and to incentivise nuanced and balanced discourse — adding to the global discussion rather than merely subtracting.</p><img src="https://counter.theconversation.com/content/147668/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Shane Satterley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A Facebook ban on QAnon may not be the best way to address the fast-growing far-right conspiracy movement.Shane Satterley, PhD Candidate, Griffith UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1455052020-09-08T12:18:22Z2020-09-08T12:18:22ZPortland and Kenosha violence was predictable – and preventable<figure><img src="https://images.theconversation.com/files/356430/original/file-20200903-20-1pypf6g.jpg?ixlib=rb-1.1.0&rect=0%2C7%2C5169%2C3433&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Portland police hold back Chandler Pappas, who was with the victim, in the wake of a fatal shooting on Aug. 29, 2020.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/portland-police-hold-back-chandler-pappas-who-was-with-the-news-photo/1228264216">Nathan Howard/Getty Images</a></span></figcaption></figure><p>The U.S. reached a deadly moment in protests over racial injustice, as back-to-back shootings in Kenosha, Wisconsin, and Portland, Oregon, on Aug. 25 and 29 took the lives of three people and seriously injured another. </p>
<p>It was tragic – but not surprising. </p>
<p>The alleged shooters were at the protests for different reasons: One was a pro-police supporter who believed he was <a href="https://www.wsj.com/articles/who-is-kyle-rittenhouse-and-what-happened-in-the-kenosha-shootings-11598653456">protecting local businesses</a> in Kenosha and the other an “<a href="https://www.wsj.com/articles/what-is-known-of-michael-reinoehl-person-of-interest-in-portland-killing-11599087170">antifa supporter” and “fixture of anti-police demonstrations”</a> in Portland. The victims included apparent <a href="https://apnews.com/0994e25654d255e552aaad8a15e16c84">supporters of Black Lives Matter protests</a> and a <a href="https://www.oregonlive.com/portland/2020/08/man-fatally-shot-after-pro-trump-caravan-was-patriot-prayer-friend-and-supporter.html">supporter of a far-right group</a>. Together, they reflect an escalating risk of spontaneous violence as heavily armed citizen vigilantes and individuals mobilize at demonstrations and protests.</p>
<p>As a <a href="https://scholar.google.com/citations?user=YNZE_wMAAAAJ&hl=en&oi=ao">scholar of extremism</a> and director of the <a href="https://www.american.edu/centers/university-excellence/peril.cfm">Polarization and Extremism Research and Innovation Lab</a> at American University, I have spent the past few months watching people mobilize across the political spectrum – about Second Amendment rights, state shelter-in-place orders and police brutality, and in reaction to those protests – while leaders respond insufficiently to the threat of violence. </p>
<h2>Foreseeable conflict</h2>
<p>I wasn’t the only one expecting violence. In mid-July, terrorism expert <a href="https://extremism.gwu.edu/jj-macnab">J.J. McNab</a> <a href="https://www.thetrace.org/rounds/daily-bulletin-congressional-panel-gets-warning-on-boogaloo-violence/">testified before Congress</a> about her concern “that there will be a shootout at one or more of the Black Lives Matter protests,” warning of the dangers of having <a href="https://www.indystar.com/story/news/nation/2020/09/05/kentucky-derby-2020-protests-breonna-taylor-angry-viking-louisville/5729427002/">heavily armed groups with conflicting goals</a> at the same events.</p>
<p>The danger existed long before that, though. In my new book, “<a href="https://press.princeton.edu/books/hardcover/9780691203836/hate-in-the-homeland">Hate in the Homeland: The New Global Far Right</a>,” I explain that the past three years – from the <a href="https://www.facinghistory.org/sites/default/files/Unite_the_Right_Rally_in_Charlottesville_Timeline.pdf">Charlottesville “Unite the Right” rally in 2017</a>, through mass shootings in <a href="https://www.nytimes.com/2018/10/27/us/active-shooter-pittsburgh-synagogue-shooting.html">Pittsburgh</a> and <a href="https://www.cbsnews.com/news/el-paso-shooting-victim-death-toll-rises-22-today-death-penalty-for-domestic-terrorism-in-walmart-shooting-2019-08-05/">El Paso</a>
to this more recent violence – have shown the growing activity of the extremist fringe in U.S. society. </p>
<p>Yet over the past year, the presence of a wide range of militia and vigilante groups has repeatedly caught local communities and national leaders unprepared to handle the threat they pose.</p>
<p>The pandemic has changed some things: The threat from planned extremist violence, like in <a href="https://www.nbcnews.com/news/world/new-zealand-mosque-shootings">Christchurch, New Zealand</a> in March 2019 and <a href="https://www.cbsnews.com/news/poway-synagogue-shooting-suspect-john-earnest-in-custody-after-1-dead-3-injured-today-live-updates-2019-04-27/">Poway, California the following month</a>, is probably lower now – in part because there are fewer large public gatherings for extremists to target. But the threat of spontaneous violence – especially at protests organized around racial injustice and police brutality – is high. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/356432/original/file-20200903-22-v93obi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="People hug each other and hold candles at a vigil" src="https://images.theconversation.com/files/356432/original/file-20200903-22-v93obi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/356432/original/file-20200903-22-v93obi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=402&fit=crop&dpr=1 600w, https://images.theconversation.com/files/356432/original/file-20200903-22-v93obi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=402&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/356432/original/file-20200903-22-v93obi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=402&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/356432/original/file-20200903-22-v93obi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=505&fit=crop&dpr=1 754w, https://images.theconversation.com/files/356432/original/file-20200903-22-v93obi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=505&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/356432/original/file-20200903-22-v93obi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=505&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">People comfort each other at a vigil for victims of an Aug. 3, 2019, shooting in El Paso.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/APTOPIXTexasMallShooting/a4b990a566fc4dec961a354706a3a440/photo">AP Photo/John Locher</a></span>
</figcaption>
</figure>
<h2>Militia and vigilante groups’ conflicting goals</h2>
<p>Americans’ collective inaction to stem the <a href="https://www.theguardian.com/us-news/2020/aug/19/facebook-qanon-us-militia-groups-restrictions">growth of militia</a> and <a href="https://theconversation.com/vigilantism-again-in-the-news-is-an-american-tradition-141849">vigilante groups</a> is, in part, rooted in confusion about their goals. </p>
<p>Extremist and paramilitary groups in the U.S. are motivated by a <a href="https://theconversation.com/militias-warning-of-excessive-federal-power-comes-true-but-where-are-they-143333">wide range of competing factors</a>. Some are <a href="https://www.nytimes.com/2019/04/25/us/border-militia-mexico.html">white supremacists</a> seeking to <a href="https://www.hup.harvard.edu/catalog.php?isbn=9780674286078">spark a race war</a>. Others are <a href="https://slate.com/news-and-politics/2020/01/militia-richmond-virginia-gun-rally.html">fighting a government</a> they <a href="https://cup.columbia.edu/book/oath-keepers/9780231550314">perceive to be tyrannical</a>. Still others are oriented around <a href="https://www.cbs58.com/news/creator-of-kenosha-guard-group-explains-call-to-action-before-deadly-shooting">vigilante support for or defense of local businesses</a> and law enforcement. </p>
<p>Left-wing militias have also grown in recent years, primarily organized around <a href="https://www.nytimes.com/2018/11/02/opinion/socialist-left-guns-nra-trump.html">resistance to the far right</a>. These include the recently formed <a href="https://www.newsweek.com/armed-black-demonstrators-challenge-white-supremacist-militia-georgias-stone-mountain-park-1515494">Not F**cking Around Coalition</a>, a Black militia group that has shown up at protests this summer to challenge white supremacists. </p>
<p>At this summer’s protests, that division has been on clear display. Even within groups that ostensibly share the same goals – such as the <a href="https://www.theatlantic.com/technology/archive/2020/07/american-boogaloo-meme-or-terrorist-movement/613843/">Boogaloo bois</a>, who call for revolution or civil war – there is little alignment. </p>
<p>In late May, three alleged members of the Boogaloo movement were arrested in Las Vegas for <a href="https://www.nbcnews.com/news/all/three-men-connected-boogaloo-movement-tried-provoke-violence-protests-feds-n1224231">allegedly plotting to spark violence at a Black Lives Matter protest</a>. But a month later in Richmond, Boogaloo groups marched alongside Black protesters and chanted to “<a href="https://www.npr.org/2020/07/06/887467436/-were-willing-to-do-what-it-takes-causes-collide-in-richmond-s-streets">drown out the white supremacists</a>” who showed up.</p>
<p>Despite their conflicting goals, militia and vigilante groups all share a sense of dire threat and a belief that their lives, their future survival or people they want to protect are threatened by some outside group. This is <a href="https://mitpress.mit.edu/books/extremism">“us versus them” thinking at its most extreme</a>; militias feel compelled to defend against those threats. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/345820/original/file-20200706-3980-1q7hz4.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C3789%2C2518&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/345820/original/file-20200706-3980-1q7hz4.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C3789%2C2518&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/345820/original/file-20200706-3980-1q7hz4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/345820/original/file-20200706-3980-1q7hz4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/345820/original/file-20200706-3980-1q7hz4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/345820/original/file-20200706-3980-1q7hz4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/345820/original/file-20200706-3980-1q7hz4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/345820/original/file-20200706-3980-1q7hz4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Armed civilians have been attending public protests throughout the year.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/armed-counter-protesters-and-a-police-officer-stand-watch-news-photo/1223867050">George Frey/Getty Images</a></span>
</figcaption>
</figure>
<h2>Conditions ripe for online radicalization</h2>
<p>Extremists thrive when people feel uncertain and isolated. They invite new members to join a community and engage heroically to thwart a pressing threat. One review of the existing literature concludes that almost all recent research identifies the “<a href="https://doi.org/10.1007/s40894-019-00108-y">need for belonging</a>” as key to extremism, along with a need for control.</p>
<p>That’s why the current moment is a tinderbox for paramilitary and extremist growth. Millions of Americans are anxious about an unseen virus, are isolated during shutdowns, face widespread economic uncertainty and are spending much more time online, where encounters with propaganda and misinformation are more likely. </p>
<p>During the COVID-19 pandemic, there has been <a href="https://www.adn.com/nation-world/2020/08/01/armed-civilians-militia-like-groups-surge-into-public-view-this-summer-at-rallies-and-counter-protests/">explosive growth</a> in radical political groups, civilian militia, vigilante and conspiracy group membership on social media – across the ideological spectrum. Earlier this summer, Facebook banned <a href="https://www.businessinsider.com/facebook-bans-hundreds-of-groups-users-linked-to-boogaloo-movement-2020-7">hundreds of accounts</a> associated with the far-right “boogaloo” scene, which advocates for revolution and civil war. Last month, Facebook removed nearly 10,000 <a href="https://www.theguardian.com/us-news/2020/aug/19/facebook-qanon-us-militia-groups-restrictions">QAnon groups and 980</a> “offline anarchist groups,” including some that “identify as Antifa.” </p>
<p>Social media plays a role in the radicalization of <a href="https://www.start.umd.edu/pubs/START_PIRUS_UseOfSocialMediaByUSExtremists_ResearchBrief_July2018.pdf">90% of recent extremists</a> in the U.S. The current situation is no exception.</p>
<p>Throughout the spring and summer of 2020, across the country, heavily armed vigilante and militia members responded to incendiary calls to action and to misinformation on social media related to state regulations on gun ownership, shelter-in-place orders and, finally, Black Lives Matter protests. Calls have gone out to “<a href="https://techcrunch.com/2020/08/26/facebook-kenosha-guard-militia-protest/">armed citizens to protect our lives and property</a>” to show up at protests to defend against “evil thugs.” </p>
<p>In Kenosha, local law enforcement legitimized vigilante and militia presence by thanking them for being there. “<a href="https://www.businessinsider.com/kenosha-police-thanked-armed-militia-and-gave-water-2020-8">We appreciate you guys</a>,” one police officer in an armored vehicle said in a widely circulated video as he tossed a water bottle to armed militia members. <a href="https://www.businessinsider.com/kenosha-police-thanked-armed-militia-and-gave-water-2020-8?op=1">Thanking citizen vigilantes for their support</a> essentially empowers individuals to <a href="https://www.mediamatters.org/black-lives-matter/after-kenosha-shootings-former-sheriff-david-clarke-advises-radio-listeners-how">take matters into their own hands</a>.</p>
<p>Under these conditions, if there’s anything surprising about the violence that has erupted, it’s that it took so long for it to happen. </p>
<iframe src="https://www.facebook.com/plugins/video.php?href=https%3A%2F%2Fwww.facebook.com%2Fsam.wunderle%2Fvideos%2F10216501641126335%2F&show_text=0&width=560" width="100%" height="291" style="border:none;overflow:hidden" scrolling="no" frameborder="0" allowtransparency="true" allowfullscreen="true"></iframe>
<figure><figcaption><span class="caption">A video of Kenosha police thanking armed civilians.</span></figcaption></figure>
<h2>What can be done?</h2>
<p>There are several ways to reduce the threat of future violence, but they all include minimizing the number of people who feel empowered – by local authorities <a href="https://slate.com/news-and-politics/2020/09/trump-support-kyle-rittenhouse-election-violence.html">or elected officials</a> – to act violently. </p>
<p>Leaders at all political levels could affirm people’s right to protest peacefully while unequivocally condemning vigilante and militia mobilization, regardless of the reason. Many studies have found that incendiary or hateful rhetoric from politicians both <a href="https://doi.org/10.1080/03050629.2020.1739033">deepens political polarization</a> and increases support for political violence. Research in Germany has shown that when <a href="https://doi.org/10.1086/386271">politicians use incendiary language</a>, violence increases. But when they use different words, <a href="https://fortune.com/2017/02/13/donald-trump-national-security-cve-right-wing-extremism-terrorism-germany/">violence drops</a>.</p>
<p>If public rhetoric doesn’t cool down, I expect escalating polarization and politicization of the protests and vigilante violence may make matters worse in the coming months. I’m particularly concerned because firearms purchases have <a href="https://www.cnn.com/2020/08/03/politics/gun-background-checks-fbi/index.html">skyrocketed during the pandemic</a>.</p>
<p>However, communities could work to <a href="https://www.westernstatescenter.org/2020-white-nationalism-in-schools-trainer">interrupt the radicalization</a> of <a href="https://www.american.edu/centers/university-excellence/upload/splc_peril_covid_parents_guide.pdf">young people and adults</a>. <a href="https://www.american.edu/centers/university-excellence/peril.cfm">My own research lab</a>, in collaboration with the Southern Poverty Law Center, recently released a <a href="https://www.splcenter.org/PERIL">guide to online radicalization for parents and caregivers</a>, to help them recognize risk and build resilience to extremist narratives during the COVID-19 pandemic. This fall we will study how tools like that affect parents’ abilities to intervene at early stages of radicalization.</p>
<p>Our aim is to reduce the chances of people adopting extremist views and joining militia or vigilante groups in the first place. After all, having fewer extremists seems likely to reduce extremist violence.</p>
<p class="fine-print"><em><span>Cynthia Miller-Idriss does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The increasing visibility of a wide range of militia and vigilante groups has repeatedly caught local communities and national leaders off guard.Cynthia Miller-Idriss, Professor of Education and Sociology, American UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1426332020-07-22T19:19:59Z2020-07-22T19:19:59ZFar-right ‘boogaloo’ movement is using Hawaiian shirts to hide its intentions<figure><img src="https://images.theconversation.com/files/348471/original/file-20200720-63094-1uybg4c.jpg?ixlib=rb-1.1.0&rect=224%2C54%2C4142%2C2926&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Members of the boogaloo have taken to wearing Hawaiian shirts.</span> <span class="attribution"><a class="source" href="https://www.cpimages.com/CS.aspx?VP3=DamView&VBID=2RLQ2J41JF4I&SMLS=1&RW=1440&RH=814&RW=1440&RH=814">(Charlie Riedel/AP)</a></span></figcaption></figure><p>Members of the loosely organized far-right “boogaloo” movement are making the rounds in the news. They’re gaining notoriety not for being <a href="https://www.theatlantic.com/technology/archive/2020/07/american-boogaloo-meme-or-terrorist-movement/613843/">linked to domestic acts of terrorism in the United States</a>, but for their penchant for Hawaiian shirts. </p>
<p>Their fondness for aloha-infused militia looks has caught the interest of journalists and <a href="https://www.nytimes.com/2020/06/29/style/boogaloo-hawaiian-shirt.html?referringSource=articleShare">prominent news outlets</a>. This mix of street fashions has become an identifying characteristic of boogaloo boys or bois. </p>
<p><a href="https://www.theguardian.com/world/2020/jul/08/boogaloo-boys-movement-who-are-they-what-do-they-believe">The boogaloo is a fragmented community</a> that <a href="https://www.bbc.com/news/blogs-trending-53018201">began as a firearms board on 4chan</a> and then <a href="https://www.theguardian.com/world/2020/jul/01/what-is-boogaloo-movement-rightwing-anti-government">blossomed on Facebook</a>. <a href="https://www.usatoday.com/story/news/nation/2020/06/19/what-is-boogaloo-movement/3204899001/">The term boogaloo</a> comes from the 1980s movie <em>Breakin’ 2: Electric Boogaloo</em>. More recently the term has been used to refer to <a href="https://heavy.com/news/2020/06/the-boogaloo-movement-5-fast-facts/">anti-government sentiment, civil unrest and the desire for a second civil war</a>. </p>
<p>The boogaloo community includes far-right, pro-gun, anti-government libertarians spanning a wide spectrum of ideologies including <a href="https://www.adl.org/resources/backgrounders/alt-right-a-primer-on-the-new-white-supremacy">white supremacy</a>, anarchy and a range of conspiracy theories. The boogaloo are, however, unified by violent militant attitudes and terrorist tendencies. They are also savvy when it comes to managing their public image and hiding their actions.</p>
<h2>Hawaiian shirts vs. aloha shirts</h2>
<p>As a scholar studying the intersections of fashion, visual culture and social issues, the boogaloo’s adoption of Hawaiian shirts troubles me. Hawaiian shirts have <a href="https://www.atlasobscura.com/articles/history-of-aloha-hawaiian-shirt">historically symbolized place, consumerism, colonialist oppression and the opposition to conventional culture, and have been an alternative to formal wear</a>.</p>
<p>On the one hand, we have the problematic association of Hawaiian prints with laid-back lifestyles. On the other hand, Indigenous Hawaiians perceive <a href="https://www.stanforddaily.com/2018/02/26/hawaiian-clothes-and-colonialism/">these motifs as stereotyping their authentic culture</a>.</p>
<p>Hawaiian shirts’ meanings play out in surprising ways within the far-right’s efforts to make their ideology mainstream. Hawaiian shirts worn with tactical gear may fool mainstream onlookers about the boogaloo’s true colours. The common <a href="https://www.racked.com/2018/2/23/16982034/aloha-shirt-history">association of Hawaiian prints with relaxed, easy-going attitudes</a> is misguided here. The boogaloo are <a href="https://www.voanews.com/usa/race-america/boogaloo-boys-aim-provoke-2nd-us-civil-war">bent on violence</a> and <a href="https://www.cnn.com/2020/06/03/us/boogaloo-extremist-protests-invs/index.html">hope for a second civil war to advance their agenda</a>.</p>
<p>Called <a href="http://www.thealohashirt.com/history">aloha shirts</a> in Hawaii, these garments were reclaimed from their colonialist implications by Indigenous Hawaiian designers. Since the mid-1980s, designers like <a href="https://www.hawaiibusiness.com/talk-story-sig-zane/">Sig Zane have injected aloha prints with authentic Indigenous energy</a>.</p>
<p>Early Hawaiian shirts featured Asian motifs, which were replaced by local motifs in the 1930s. With this shift the shirts started embodying <a href="https://www.hawaii.edu/uhwo/clear/home/lawaloha.html">“aloha,” meaning respect for all animated or inanimate beings</a>.</p>
<p>For Hawaiians, and especially Indigenous Hawaiians, the boogaloo’s co-opting of Hawaiian shirts is outrageous. In today’s Hawaii, aloha shirts symbolize tolerance, and <a href="https://www.independent.co.uk/news/world/americas/far-right-hawaiian-print-shirts-why-protesters-boogaloo-racist-a9539776.html">Hawaiians don’t want them associated with the racism of the boogaloo</a>.</p>
<figure class="align-center ">
<img alt="Five men belonging to the boogaloo movement wearing camouflage clothing and carrying guns." src="https://images.theconversation.com/files/348477/original/file-20200720-37-g033fy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/348477/original/file-20200720-37-g033fy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=339&fit=crop&dpr=1 600w, https://images.theconversation.com/files/348477/original/file-20200720-37-g033fy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=339&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/348477/original/file-20200720-37-g033fy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=339&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/348477/original/file-20200720-37-g033fy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=426&fit=crop&dpr=1 754w, https://images.theconversation.com/files/348477/original/file-20200720-37-g033fy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=426&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/348477/original/file-20200720-37-g033fy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=426&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The boogaloo are known for being armed and wearing militia-style clothing.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/16086041@N00/49416109936/">(Anthony Crider/Flickr)</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<h2>Boogaloo boys’ street style</h2>
<p>Although the boogaloo movement seemed to pop up in 2019, its roots are entrenched in online fringe politics. <a href="https://www.splcenter.org/hatewatch/2020/06/05/boogaloo-started-racist-meme">Civil rights advocacy groups and researchers have linked the boogaloo to white supremacist</a> groups as early as 2013. These online communities are a natural evolution from neo-Nazi and militant white nationalist organizations.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-conspiracy-theories-spread-online-its-not-just-down-to-algorithms-133891">How conspiracy theories spread online – it's not just down to algorithms</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://www.yahoo.com/entertainment/why-boogaloo-bois-wear-hawaiian-210228866.html">Accessorizing fatigues with Hawaiian shirts</a> is a styling attempt by far-right groups to manage their public image. The clash of camo or tactical fabrics and aloha prints is certainly striking. This is especially true against urban backdrops of cityscapes or protester and police outfits.</p>
<p>It isn’t new for white supremacists to co-opt conformist dress, but <a href="https://www.vice.com/en_ca/article/ep4abn/the-aloha-shirt-is-bigger-than-the-boogaloo-movement">incorporating Hawaiian shirts</a> opens new avenues for political posturing. A similar strategy was employed in 2017 by white supremacist protesters in Charlottesville, Va. <a href="https://www.gq.com/story/uniform-of-white-supremacy">They sported polo shirts and khakis</a> in an attempt to lend a sense of legitimacy to their cause.</p>
<p>White supremacists adopted business casual attire to distance themselves from the negative connotations of Nazi and Ku Klux Klan garb. At least superficially, this new look helped conceal their true nature. But their violence eventually surfaced, and <a href="https://www.gq.com/story/fred-perry-wants-alt-right-bros-to-stop-wearing-their-polos">fashion brands promptly moved to distance themselves</a> from the movement.</p>
<h2>What do Hawaiian shirts mean to the far-right?</h2>
<p>Perhaps Hawaiian shirts, fatigues and assault weapons synthesize the disparate beliefs of the loosely organized boogaloo. The colourful elements of Hawaiian prints could suggest unity in perceived diversity. Although these fringe groups share a belief in an upcoming race war, they differ on many other topics. </p>
<p>Some commentators have suggested that the Hawaiian shirt motifs <a href="https://www.bellingcat.com/news/2020/05/27/the-boogaloo-movement-is-not-what-you-think/">speak to the boogaloo’s online origins</a>. After all, they were a meme before becoming a somewhat coherent virtual and then physical organization. This inside-joke, <a href="https://www.splcenter.org/hatewatch/2020/06/05/boogaloo-started-racist-meme">copy-and-paste esthetic of memes and GIFs</a> is shared by boogaloos on social media. </p>
<p>Integrating Hawaiian prints into paramilitary outfits is a calculated effort by far-right affiliates. They want to get noticed in a crowded political space. Thus, the boogaloos’ seemingly innocent outfits are about calling attention to themselves, while simultaneously masking their violent intentions.</p>
<p>Don’t be tricked by the kitschy cheerfulness of their Hawaiian prints. There is nothing as <a href="http://www.honolulumagazine.com/Honolulu-Magazine/July-2020/How-Hawaiian-Shirts-Fight-Extremism/">far from the aloha spirit</a> as the hate championed by the boogaloo.</p>
<p class="fine-print"><em><span>Henry Navarro Delgado does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The boogaloos, a far-right community, have taken to wearing Hawaiian shirts. This co-option is far from the spirit of the shirt, which signifies respect for all animated or inanimate beings.Henry Navarro Delgado, Associate Professor of Fashion, Toronto Metropolitan UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1422682020-07-13T20:04:21Z2020-07-13T20:04:21ZParler: what you need to know about the ‘free speech’ Twitter alternative<figure><img src="https://images.theconversation.com/files/346812/original/file-20200710-6739-nv7bx1.png?ixlib=rb-1.1.0&rect=11%2C11%2C2402%2C1115&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Wikimedia</span></span></figcaption></figure><p>Amid claims of social media platforms stifling free speech, a new challenger called Parler is <a href="https://www.forbes.com/sites/johnscottlewinski/2020/07/04/social-media-platform-parler-becomes-hot-political-topic-between-conservaties-progressives/#4a38b95c20ea">drawing</a> <a href="https://www.theguardian.com/politics/2020/jun/28/the-uk-social-media-platform-where-neo-nazis-can-view-terror-atrocities">attention</a> for its anti-censorship stance. </p>
<p>Last week, Harper’s Magazine <a href="https://harpers.org/a-letter-on-justice-and-open-debate/">published</a> an open letter signed by 150 academics, writers and activists concerning perceived threats to the future of free speech.</p>
<p>The letter, signed by Noam Chomsky, Francis Fukuyama, Gloria Steinem and J.K. Rowling, among others, reads:</p>
<blockquote>
<p>The free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted.</p>
</blockquote>
<p>Debates surrounding free speech and censorship have taken centre stage in recent months. In May, Twitter <a href="https://www.bbc.com/news/technology-52843986">started adding</a> fact-check labels to tweets from Donald Trump. </p>
<p>More recently, Reddit <a href="https://theconversation.com/reddit-removes-millions-of-pro-trump-posts-but-advertisers-not-values-rule-the-day-141703">permanently removed</a> its largest community of Trump supporters. </p>
<p>In this climate, Parler <a href="https://home.parler.com/about/">presents itself</a> as a “non-biased, free speech driven” alternative to Twitter. Here’s what you should know about the US-based startup.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/is-cancel-culture-silencing-open-debate-there-are-risks-to-shutting-down-opinions-we-disagree-with-142377">Is cancel culture silencing open debate? There are risks to shutting down opinions we disagree with</a>
</strong>
</em>
</p>
<hr>
<h2>What is Parler?</h2>
<p>Parler reports more than <a href="https://www.cnbc.com/2020/06/27/parler-ceo-wants-liberal-to-join-the-pro-trump-crowd-on-the-app.html">1.5 million users</a> and is <a href="https://news.yahoo.com/social-media-tumult-startup-parler-draws-conservatives-041427679.html">growing in popularity</a>, especially as Twitter and other social media giants crack down on <a href="https://abcnews.go.com/Business/twitters-fact-checking-labels/story?id=70903715">misinformation</a> and <a href="https://www.reuters.com/article/us-usa-socialmedia/u-s-social-media-firms-say-they-are-removing-violent-content-faster-idUSKBN1W329I">violent content</a>.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/346777/original/file-20200710-22-102xppk.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/346777/original/file-20200710-22-102xppk.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=331&fit=crop&dpr=1 600w, https://images.theconversation.com/files/346777/original/file-20200710-22-102xppk.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=331&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/346777/original/file-20200710-22-102xppk.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=331&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/346777/original/file-20200710-22-102xppk.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=416&fit=crop&dpr=1 754w, https://images.theconversation.com/files/346777/original/file-20200710-22-102xppk.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=416&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/346777/original/file-20200710-22-102xppk.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=416&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Parler appears similar to Twitter in its appearance and functions.</span>
<span class="attribution"><span class="source">screenshot</span></span>
</figcaption>
</figure>
<p>Parler is very similar to <a href="https://twitter.com/">Twitter</a> in appearance and function, albeit clunkier. Like Twitter, Parler users can follow others and engage with public figures, news sources and other users. </p>
<p>Public posts are called “parleys” rather than “tweets” and can contain up to 1,000 characters.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/346780/original/file-20200710-87076-1w6201f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/346780/original/file-20200710-87076-1w6201f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/346780/original/file-20200710-87076-1w6201f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=137&fit=crop&dpr=1 600w, https://images.theconversation.com/files/346780/original/file-20200710-87076-1w6201f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=137&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/346780/original/file-20200710-87076-1w6201f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=137&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/346780/original/file-20200710-87076-1w6201f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=172&fit=crop&dpr=1 754w, https://images.theconversation.com/files/346780/original/file-20200710-87076-1w6201f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=172&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/346780/original/file-20200710-87076-1w6201f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=172&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Users can comment, ‘echo’ or ‘vote’ on parleys.</span>
<span class="attribution"><span class="source">screenshot</span></span>
</figcaption>
</figure>
<p>Users can search for hashtags, make comments, “echo” posts (similar to a retweet) and “vote” (similar to a like) on posts. There’s also a direct private messaging feature, just like Twitter. </p>
<p>Given this likeness, what actually is unique about Parler?</p>
<h2>Fringe views welcome?</h2>
<p>Parler’s main selling point is its claim it <a href="https://www.kusi.com/parler-ceo-john-matze-wants-the-growing-social-media-platform-to-embrace-free-speech/">embraces freedom of speech and has minimal moderation</a>. “If you can say it on the street of New York, you can say it on Parler”, founder John Matze <a href="https://www.cnbc.com/2020/06/27/parler-ceo-wants-liberal-to-join-the-pro-trump-crowd-on-the-app.html">explains</a>. </p>
<p>This branding effort capitalises on allegations competitors such as Twitter and Facebook <a href="https://www.politico.com/news/2020/06/25/ted-cruz-joins-parler-339811">unfairly censor content</a> and <a href="https://apnews.com/5e761263c5324fe3b450b2cbb53d15c8">discriminate against</a> right-wing political speech.</p>
<p>While other platforms often employ <a href="https://abcnews.go.com/Business/twitters-fact-checking-labels/story?id=70903715">fact checkers, or third-party editorial boards</a>, Parler <a href="https://legal.parler.com/documents/guidelines.pdf">claims to moderate</a> content based on American Federal Communications Commission guidelines and Supreme Court rulings.</p>
<p>So if someone shared demonstrably false information on Parler, Matze said it would be up to other users to fact-check them “<a href="https://www.foxnews.com/tech/social-media-alt-parler-censorship">organically</a>”.</p>
<p>And although Parler is still dwarfed by Twitter (330 million users) and Facebook (2.6 billion users), the platform’s anti-censorship stance continues to attract users turned off by the regulations of larger social media platforms. </p>
<p>When Twitter recently hid tweets from Trump for “<a href="https://www.cnbc.com/2020/06/23/twitter-labeled-another-trump-tweet-for-violating-its-policies.html">glorifying violence</a>”, this <a href="https://www.wsj.com/articles/trump-campaign-weighs-alternatives-to-big-social-platforms-11593003602?mod=searchresults&page=1&pos=1">partly prompted</a> the Trump campaign to consider moving to a platform such as Parler.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/346770/original/file-20200710-54-2r5i38.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/346770/original/file-20200710-54-2r5i38.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/346770/original/file-20200710-54-2r5i38.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=372&fit=crop&dpr=1 600w, https://images.theconversation.com/files/346770/original/file-20200710-54-2r5i38.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=372&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/346770/original/file-20200710-54-2r5i38.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=372&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/346770/original/file-20200710-54-2r5i38.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=468&fit=crop&dpr=1 754w, https://images.theconversation.com/files/346770/original/file-20200710-54-2r5i38.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=468&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/346770/original/file-20200710-54-2r5i38.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=468&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Far-right American political activist and conspiracy theorist Laura Loomer is among Parler’s most popular users.</span>
<span class="attribution"><span class="source">screenshot</span></span>
</figcaption>
</figure>
<p>Matze also claims Parler <a href="https://www.kusi.com/parler-ceo-john-matze-wants-the-growing-social-media-platform-to-embrace-free-speech/">protects users’ privacy</a> by not tracking or sharing their data. </p>
<h2>Is Parler really a free speech haven?</h2>
<p>Companies such as Twitter and Facebook have denied they are <a href="https://www.washingtontimes.com/news/2020/may/27/trump-we-will-regulate-or-close-down-social-media-/">silencing conservative voices</a>, pointing to blanket policies against hate speech and content inciting violence. </p>
<p>Parler’s “free speech” has resulted in various American Republicans, including Senator Ted Cruz, promoting the platform.</p>
<p>Many conservative influencers such as <a href="https://www.thetimes.co.uk/article/parler-katie-hopkins-and-laurence-fox-flee-to-twitters-anything-goes-rival-9kkm2d58m">Katie Hopkins</a>, <a href="https://www.cnet.com/news/right-wing-activist-laura-loomer-handcuffs-herself-to-twitters-nyc-office/#ftag=MSF491fea7">Laura Loomer</a> and <a href="https://www.cnet.com/news/infowars-alex-jones-test-the-limits-of-free-speech-on-twitter-facebook-youtube-apple/">Alex Jones</a> have sought refuge on Parler after <a href="https://www.abc.net.au/news/2020-06-20/katie-hopkins-permanently-suspended-from-twitter-for-27abuse-a/12376352">being banned</a> from other platforms. </p>
<p>Although it brands itself as a bipartisan safe space, Parler is mostly used by <a href="https://www.foxbusiness.com/technology/parler-user-numbers-john-matze">right-wing media, politicians and commentators</a>. </p>
<p>Moreover, a closer look at its <a href="https://legal.parler.com/documents/useragreement.pdf">user agreement</a> suggests it moderates content much like any other platform, <a href="https://www.huffpost.com/entry/parler-free-speech-alternative-twitter-user-agreement_n_5ef660fdc5b6acab28419a5d">maybe even more strictly</a>.</p>
<p>The company states: </p>
<blockquote>
<p>Parler may remove any content and terminate your access to the Services at any time and for any reason or no reason.</p>
</blockquote>
<p>Parler’s <a href="https://legal.parler.com/documents/guidelines.pdf">community guidelines</a> prohibit a range of content including spam, terrorism, unsolicited ads, defamation, blackmail, bribery and criminal behaviour. </p>
<p>Although there are no explicit rules against hate speech, there are policies against “fighting words” and “threats of harm”. This includes “a threat of or advocating for violation against an individual or group”.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/346767/original/file-20200710-38-1qirjp5.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/346767/original/file-20200710-38-1qirjp5.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/346767/original/file-20200710-38-1qirjp5.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=429&fit=crop&dpr=1 600w, https://images.theconversation.com/files/346767/original/file-20200710-38-1qirjp5.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=429&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/346767/original/file-20200710-38-1qirjp5.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=429&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/346767/original/file-20200710-38-1qirjp5.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=539&fit=crop&dpr=1 754w, https://images.theconversation.com/files/346767/original/file-20200710-38-1qirjp5.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=539&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/346767/original/file-20200710-38-1qirjp5.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=539&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Parler CEO John Matze clarified the platform’s rules after banning users, presumably for breaking one or more of the listed rules.</span>
</figcaption>
</figure>
<p>There are rules against content that is obscene, sexual or “lacks serious literary, artistic, political and scientific value”. For example, visuals of genitalia, female nipples, or faecal matter are barred from Parler. </p>
<p>Meanwhile, <a href="https://help.twitter.com/en/rules-and-policies/media-policy">Twitter</a> allows “consensually produced adult content” if it’s marked as “sensitive”. It also has no policy against the visual display of excrement.</p>
<p>As a private company, Parler can remove whatever content it wants. Some users have already been <a href="https://screenrant.com/parler-free-speech-censorship-users-banned/">banned</a> for breaking rules.</p>
<p>What’s more, in spite of claims it does not share user data, Parler’s <a href="https://legal.parler.com/documents/privacypolicy.pdf">privacy policy</a> states data collected can be used for advertising and marketing.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/friday-essay-twitter-and-the-way-of-the-hashtag-141693">Friday essay: Twitter and the way of the hashtag</a>
</strong>
</em>
</p>
<hr>
<h2>No marks of establishment</h2>
<p>Given its limited user base, Parler has yet to become the “<a href="https://www.foxnews.com/tech/social-media-alt-parler-censorship">open town square</a>” it aspires to be. </p>
<p>The platform is in its infancy and its user base is much less representative than larger social media platforms.</p>
<p>Despite Matze saying <a href="https://www.foxbusiness.com/technology/parler-user-numbers-john-matze">“left-leaning” users</a> tied to the Black Lives Matter movement were joining Parler to challenge conservatives, Parler lacks the diverse audience needed for any real debate. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/346765/original/file-20200710-50-i3ygby.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/346765/original/file-20200710-50-i3ygby.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/346765/original/file-20200710-50-i3ygby.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=759&fit=crop&dpr=1 600w, https://images.theconversation.com/files/346765/original/file-20200710-50-i3ygby.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=759&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/346765/original/file-20200710-50-i3ygby.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=759&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/346765/original/file-20200710-50-i3ygby.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=954&fit=crop&dpr=1 754w, https://images.theconversation.com/files/346765/original/file-20200710-50-i3ygby.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=954&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/346765/original/file-20200710-50-i3ygby.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=954&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Upon joining the platform, Parler suggests following several politically conservative users.</span>
<span class="attribution"><span class="source">screenshot</span></span>
</figcaption>
</figure>
<p>Matze also said he doesn’t want Parler to be an “<a href="https://www.cnbc.com/2020/06/27/parler-ceo-wants-liberal-to-join-the-pro-trump-crowd-on-the-app.html">echo chamber</a>” for conservative voices. In fact, he is offering a US$20,000 “<a href="https://www.cnbc.com/2020/06/27/parler-ceo-wants-liberal-to-join-the-pro-trump-crowd-on-the-app.html">progressive bounty</a>” for an openly liberal pundit with 50,000 followers on Twitter or Facebook to join. </p>
<p>Clearly, the platform has a long way to go before it bursts its conservative bubble.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dont-just-blame-echo-chambers-conspiracy-theorists-actively-seek-out-their-online-communities-127119">Don't (just) blame echo chambers. Conspiracy theorists actively seek out their online communities</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/142268/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Audrey Courty does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Here’s what you need to know about the largely right-wing social media platform creeping into headlines.Audrey Courty, PhD candidate, School of Humanities, Languages and Social Science, Griffith UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1417032020-07-02T01:39:31Z2020-07-02T01:39:31ZReddit removes millions of pro-Trump posts. But advertisers, not values, rule the day<p>On Monday, online discussion platform Reddit <a href="https://www.theguardian.com/technology/2020/jun/29/reddit-the-donald-twitch-social-media-hate-speech">permanently took down</a> its largest community of Donald Trump supporters, r/The_Donald.</p>
<p>The community had more than 7,000 active users per day (although this number has previously been much higher). The ban was <a href="https://www.reddit.com/r/announcements/comments/hi3oht/update_to_our_content_policy/">on the grounds</a> that some posts incited violence, and that the community had engaged in harassment on other subreddits. The ban removed hundreds of thousands of posts and millions of comments going back many years. </p>
<p>The “r/The_Donald” subreddit is a themed, online message board where users can submit, comment and vote on posts. The <a href="https://www.nytimes.com/2020/06/29/technology/reddit-hate-speech.html">decision to ban</a> it comes as several other platforms censure racist and violent material from Trump and his supporters.</p>
<p>Twitter recently <a href="https://www.reuters.com/article/us-twitter-factcheck/with-fact-checks-twitter-takes-on-a-new-kind-of-task-idUSKBN2360U0">fact-checked</a> some of Trump’s posts, video live-streaming service Twitch has temporarily <a href="https://www.theverge.com/2020/6/29/21307145/twitch-donald-trump-ban-campaign-account">banned</a> the president’s account, and Facebook is now <a href="https://www.nytimes.com/2020/06/29/business/dealbook/facebook-boycott-ads.html">losing advertisers</a> over its unwillingness to moderate hateful material and disinformation, including from the president.</p>
<p>According to the <a href="https://www.nytimes.com/2020/06/29/technology/reddit-hate-speech.html">New York Times</a>, Reddit <a href="https://thenextweb.com/apps/2020/06/29/reddit-bans-r-thedonald-and-2000-other-hateful-subreddits-because-it-was-about-time/">also banned</a> another 2,000 communities across the political spectrum alongside the pro-Trump community, including left-leaning groups. </p>
<p>But while some may celebrate these actions, the moves should be understood within the context of a largely deregulated information economy, in which “doing good” is mostly about “doing well”. In other words: making money.</p>
<p>Upon a close look, the removal of r/The_Donald exposes the inadequacies of market-based information governance. Even in cases where individual governance decisions benefit society, the information economy remains primarily motivated by profit.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/facebook-vs-news-australia-wants-to-level-the-playing-field-facebook-politely-disagrees-141043">Facebook vs news: Australia wants to level the playing field, Facebook politely disagrees</a>
</strong>
</em>
</p>
<hr>
<h2>Reddit’s changing approach</h2>
<p>Started in 2015, r/The_Donald was the largest and most controversial subreddit dedicated to supporting Trump. Before the ban, it had more than 790,000 subscribers and was at times one of the most popular subreddits on the platform.</p>
<p>In June last year, Reddit “quarantined” <a href="https://www.theverge.com/2019/6/26/18759967/reddit-quarantines-the-donald-trump-subreddit-misbehavior-violence-police-oregon">the subreddit over posts inciting violence</a>. Several months later it purged most of the community’s volunteer moderators, arguing they weren’t upholding the platform’s policies, particularly through allowing banned content to stay up.</p>
<p>These shifts mirror changes in Reddit’s overall governance approach.</p>
<p>Historically, the platform has sold itself as a democratic space for free speech, with administrators resisting censorship in <a href="https://www.dailydot.com/unclick/reddit-beatingwomen-misogyny-images/">favour of a hands-off philosophy</a>. However, like other platforms, Reddit now faces pressure from advertisers that don’t want their brands associated with political extremism.</p>
<p>Advertising is a <a href="https://www.cnbc.com/2018/06/29/how-reddit-plans-to-make-money-through-advertising.html">growing part of Reddit’s economic model</a>. And with major partners such as <a href="https://www.redditinc.com/assets/case-studies/LOreal_Case_Study.pdf">L'Oréal</a> and <a href="https://www.redditinc.com/assets/case-studies/Audi_Case_Study.pdf">Audi</a>, advertisers’ preferences undoubtedly hold sway in how the website is regulated. </p>
<p>But as digital marketing agency iCrossing’s chief media officer <a href="https://www.cnbc.com/2018/06/29/how-reddit-plans-to-make-money-through-advertising.html">has previously argued</a>:</p>
<blockquote>
<p>What makes it (Reddit) attractive to consumers, which is the free and open ability to post, makes them scary to advertisers.</p>
</blockquote>
<h2>Walking a tightrope</h2>
<p>For major social media platforms, content regulation is a delicate issue, teetering on a balance between value and liability. </p>
<p>Reddit’s laissez-faire approach and community-led model invites broad participation and has helped its user base grow. However, this also fosters content that’s distasteful, unseemly and potentially dangerous – creating brand associations many advertisers would rather avoid. </p>
<p>The r/The_Donald subreddit embodies this tension. Reddit’s gradual regulation of it, and eventual banning, indicates the value-liability balance has tipped towards the latter.</p>
<p>While there is reason to laud these regulatory shifts, they are products of political-economic realities, rather than social priorities. And they speak to a much broader issue of information policy in contemporary society. </p>
<p>Although social media platforms are central to civic discourse, they’re also products in a competitive market economy. As long as that market economy remains deregulated by governments, individual companies will have outsized power. </p>
<p>They <em>may</em> use their power for social good, but this decision will be market-based, and thus can change with the winds of financial promise. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1277659843613077505&quot;}"></div></p>
<h2>Risks for Reddit, risks for the internet</h2>
<p>Much of Reddit’s popularity has come from its status as the “wild west” of the internet. </p>
<p>The platform’s new approach may alienate its more dedicated user base. In trying to balance the ethos of free speech with increasing pressure to regulate, Reddit finds itself stuck <a href="https://thesocietypages.org/cyborgology/2018/10/29/reddit-quarantined/">between a rock and a hard place</a>.</p>
<p>And as Reddit moves to moderate and ban hateful content, more extreme users are going elsewhere. Prior to the r/The_Donald subreddit’s banning, participants had already established their own <a href="https://thedonald.win/">external site</a> and were encouraging others to move there. </p>
<p>Similarly, moderators on the quarantined r/MGTOW (an anti-feminist men’s rights subreddit) are now directing subscribers to a <a href="https://discord.com/login?redirect_to=%2Fchannels%2F%40me">Discord</a> channel – a community-based discussion app for private and public interaction.</p>
<p>Moderators of the quarantined r/TheRedPill (another anti-feminist men’s rights group) have been directing users to an external site for over a year.</p>
<p>Users leaving for external sites will reduce hateful content on Reddit, but will concentrate this hate elsewhere. And such sites are often far less regulated than larger platforms.</p>
<p>Conservatives increasingly complain <a href="https://www.theatlantic.com/ideas/archive/2019/07/conservatives-pretend-big-tech-biased-against-them/594916/">digital platforms are anti-conservative</a>. Reddit’s actions against r/The_Donald will likely increase calls for new, conservative-founded platforms.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dont-just-blame-echo-chambers-conspiracy-theorists-actively-seek-out-their-online-communities-127119">Don't (just) blame echo chambers. Conspiracy theorists actively seek out their online communities</a>
</strong>
</em>
</p>
<hr>
<h2>How to prevent distilled anger</h2>
<p>Reddit’s move highlights the influence of economics in platform governance – and the vulnerabilities that arise from this. </p>
<p>Rather than individual moderation decisions, what’s needed is a broad regulatory framework that holds corporate bodies to account. We need to reconsider “<a href="https://www.reuters.com/article/us-twitter-trump-executive-order-explain/explainer-whats-in-the-law-protecting-internet-companies-and-can-trump-change-it-idUSKBN23434V">safe harbour</a>” laws that protect social media companies from legal liability. </p>
<p>More broadly, we need to recognise social media are entangled with civic society, and enact social policies that coincide with the weight of that responsibility. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1277652980527853568&quot;}"></div></p><img src="https://counter.theconversation.com/content/141703/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The platform also took down another 2,000 communities, including left-leaning groups. The move comes just months ahead of the 2020 US presidential election.Simon Copland, PhD Student -- Sociology, Australian National UniversityJenny L. Davis, Lecturer in the School of Sociology, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1226092019-10-23T12:32:20Z2019-10-23T12:32:20ZAnalyzing online posts could help spot future mass shooters and terrorists<figure><img src="https://images.theconversation.com/files/290340/original/file-20190830-165989-tovgzw.jpg?ixlib=rb-1.1.0&rect=32%2C147%2C5431%2C3350&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Can online posts help scholars – or police – tell the difference between people who are just ranting and those who plan real violence?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/black-laptop-mans-fist-hand-isolated-778968253">Aggapom Poomitud/Shutterstock.com</a></span></figcaption></figure><p>In the weeks following two mass shootings in El Paso, Texas, and Dayton, Ohio, police forces across the United States made more than 20 arrests based on <a href="https://slate.com/technology/2019/08/el-paso-walmart-social-media-violence-threats-arrests.html">threats made on social media</a>. </p>
<p>Police in Florida, for example, arrested an alleged white supremacist who, police said, threatened a shooting at a Walmart. <a href="https://abcnews.go.com/US/florida-white-supremacist-arrested-threatening-shooting-walmart-police/story?id=64906798">Richard Clayton</a>, 26, allegedly posted on Facebook, “3 more days of probation left then I get my AR-15 back. Don’t go to Walmart next week.” </p>
<p>People who are contemplating, or even planning, serious crimes rarely make such clear public declarations of their intent. However, they might leave clues that, if properly understood, could offer opportunities to avert tragedy. <a href="https://scholar.google.com/citations?hl=en&user=N2wxtlUAAAAJ">We have</a> <a href="https://scholar.google.com/citations?user=XGnMu6gAAAAJ&hl=en&oi=ao">teamed up</a> with <a href="https://scholar.google.com/citations?hl=en&user=_Q1uzVYAAAAJ">computer scientist Anna Rumshisky</a> to collect and analyze more than 185,000 words of extremist or hateful narratives published online by people who have then gone on to commit large-scale shootings or terrorist crimes. </p>
<p>We have also assembled a second, admittedly smaller, sample of over 50,000 words published online by people who did not go on to kill. </p>
<p>The key question for us was whether we could identify signals in online posts that could help police and other officials tell the difference between people who are upset and ranting online and those who intend to do real physical harm. We wondered if the way people express their feelings online could signal whether someone is a real-world danger or a Facebook fantasist.</p>
<h2>The power of words</h2>
<p>In the aftermath of many mass shootings or terrorist attacks, over the past two decades and around the world, <a href="https://www.theguardian.com/uk-news/2014/nov/25/timeline-intelligence-lee-rigby-murder">media coverage</a> often indicates that police had previously encountered the suspect.</p>
<p>During the buildup to a mass shooting or a <a href="https://theconversation.com/what-drives-lone-offenders-62745">solo terrorist</a> attack, the planners often leak signals of what they’re about to do. A 2016 study found that in <a href="https://www.ncjrs.gov/pdffiles1/nij/grants/249937.pdf">nearly 60% of lone-actor terrorist attacks</a>, the person involved produced letters or public statements before the attack that outlined his or her beliefs – though not necessarily violent intent – as the Florida man did about Walmart. These attackers need to maintain secrecy to carry out their plans, but may fear that if their motivations remain unknown, their actions will have <a href="https://www.routledge.com/Lone-Actor-Terrorists-A-behavioural-analysis-1st-Edition/Gill/p/book/9781138787568">no real meaning</a>.</p>
<p>In the past, researchers have looked to various attributes of people’s behavior and personalities when seeking warning signs that they might become <a href="https://doi.org/10.1037/tam0000061">violent and dangerous</a> to the public. But those signals were not enough to prevent many high-profile attacks. For instance, the FBI had analyzed the emails of <a href="https://www.motherjones.com/politics/2013/08/nidal-hasan-anwar-awlaki-emails-fbi-fort-hood/">Nidal Malik Hasan</a> before he shot more than 30 people, killing 13, at Fort Hood, Texas, in 2009. </p>
<p>Australian police had assessed <a href="http://www.lindtinquest.justice.nsw.gov.au/Documents/findings-and-recommendations.pdf">Man Haron Monis</a> as a potential risk to public safety the day before he took hostages in a Sydney coffee shop in 2014. Tamerlan Tsarnaev, who planned the 2013 Boston Marathon bombing, was, as the reporter’s phrase often goes, “<a href="https://theintercept.com/2015/11/18/terrorists-were-already-known-to-authorities/">known to authorities</a>,” as were the alleged perpetrators of the 9/11 attacks, and terrorist incidents in Madrid, London and Paris, <a href="https://wjla.com/news/nation-world/terrorists-known-to-authorities-carry-out-deadly-attacks">among others</a>. </p>
<p>However, there is not yet a way to evaluate or understand the relationship between writing words of hate and taking action.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/293272/original/file-20190919-22412-m6x34z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/293272/original/file-20190919-22412-m6x34z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/293272/original/file-20190919-22412-m6x34z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=537&fit=crop&dpr=1 600w, https://images.theconversation.com/files/293272/original/file-20190919-22412-m6x34z.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=537&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/293272/original/file-20190919-22412-m6x34z.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=537&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/293272/original/file-20190919-22412-m6x34z.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=675&fit=crop&dpr=1 754w, https://images.theconversation.com/files/293272/original/file-20190919-22412-m6x34z.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=675&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/293272/original/file-20190919-22412-m6x34z.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=675&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Is it possible to tell when rage is going to come offline and into the physical world?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/virtual-personality-concept-254752348">Valery Sidelnykov/Shutterstock.com</a></span>
</figcaption>
</figure>
<h2>Fighting talk</h2>
<p>It’s hard to draw strong conclusions from words posted on the internet: A person can post on Instagram about how much they go to the gym while in fact devouring their second delivery pizza of the day.</p>
<p>For years, looking at people’s words has been of little use. A U.K. investigation of the murder of a British soldier found that the killers’ expressions of desire to become a “martyr,” for instance, were dismissed as “a fairly standard example of [online] rhetoric,” rather than a serious indicator of <a href="https://www.telegraph.co.uk/news/uknews/terrorism-in-the-uk/11252884/At-a-glance-findings-of-the-Intelligence-and-Security-Committee-report-on-murder-of-Lee-Rigby.html">violent intentions</a>.</p>
<p>Yet research has shown that words can indeed be used as indicators of their authors’ <a href="https://doi.org/10.1177%2F0261927X09351676">psychological states</a>. For instance, highly neurotic people are more likely to use first-person singulars, such as “I,” “me” and “mine.” By contrast, extroverts use more positive emotion words like “great,” “happy” and “amazing.” Social media posts have been used to diagnose <a href="https://doi.org/10.1145/1979742.1979614">personality</a>, <a href="https://doi.org/10.1145/2531602.2531608">personal values</a> and even <a href="https://doi.org/10.1016/j.jrp.2017.02.005">depression</a>. </p>
<p>Our work seeks to extend this research to the effort to prevent mass shootings and lone-actor terrorist attacks. We compared the online writings and postings of people who had allegedly committed a mass shooting or lone-offender terrorist attack to posts from people who had expressed ideological intent and motivation online, but had no violent plans or intent when they were intercepted by law enforcement. We found key differences in how the two groups used words: those who went on to real-world violence wrote differently from enraged online commentators with no violent intent.</p>
<p>In particular, we have found that people who later became violent were more likely to use emotionally laden and specifically targeted words like “shit,” “hate,” “you” and “they.” Violent people were less likely to use words about the external world, such as “people,” “world,” “state” and “time.” </p>
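The comparison described above amounts to measuring how often each group uses words from particular categories. The following is a minimal sketch of that kind of category-rate analysis; the word lists are illustrative stand-ins taken from the examples in this article, not the researchers’ actual lexicon or method.

```python
# Illustrative sketch: per-1,000-word usage rates for two word categories.
# The category lists below are assumptions drawn from the article's examples.
import re
from collections import Counter

# Emotionally laden / targeted words vs. external-world words (hypothetical lists)
TARGETED = {"shit", "hate", "you", "they"}
EXTERNAL = {"people", "world", "state", "time"}

def category_rates(text):
    """Return usage rates (per 1,000 words) for each category in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return {"targeted": 0.0, "external": 0.0}
    counts = Counter(tokens)
    total = len(tokens)
    return {
        "targeted": 1000 * sum(counts[w] for w in TARGETED) / total,
        "external": 1000 * sum(counts[w] for w in EXTERNAL) / total,
    }

rates = category_rates("I hate them. You and they will see what people of the world ignore.")
```

In a real study, rates like these would be computed for each author in both samples and compared statistically; this sketch only shows the counting step.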
<p>Our analysis continues, including looking at the <a href="https://doi.org/10.1016/S0021-9924(98)00009-4">structure of these two groups’ writing</a>, such as how well they stay on topic or diverge into tangents. We are also using machine learning and <a href="https://arxiv.org/abs/1810.06640">natural language processing</a> to develop automatic tools that could remove the need for human judgment and help analyze large swathes of text to minimize the <a href="https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona">psychological and physical burden on analysts</a>. </p>
<p>Our findings are preliminary, but we are optimistic that these words can offer a window – and a warning – about individuals’ intentions. This work is <a href="https://slate.com/technology/2019/08/el-paso-walmart-social-media-violence-threats-arrests.html">by no means a standalone solution</a> to gun violence or terrorism, but it might help, even as predicting and preventing these sorts of attacks remains incredibly difficult.</p>
<p>[ <em>Like what you’ve read? Want more?</em> <a href="https://theconversation.com/us/newsletters?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=likethis">Sign up for The Conversation’s daily newsletter</a>. ]</p><img src="https://counter.theconversation.com/content/122609/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Neil Shortland receives funding from the Department of Defense Minerva Initiative and National Institute of Justice </span></em></p><p class="fine-print"><em><span>Allyssa McCabe has received funding from Theodore Edson Parker Foundation and the University of Massachusetts President’s Office Creative Economy Initiatives Fund. </span></em></p>Researchers look for signals that might distinguish people who are upset and ranting online from those who intend to do real physical harm.Neil Shortland, Director, Center for Terrorism and Security Studies; Assistant Professor of Criminology and Justice Studies, UMass LowellAllyssa McCabe, Professor of Psychology, UMass LowellLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1194192019-09-03T11:17:33Z2019-09-03T11:17:33ZIn a world of cyber threats, the push for cyber peace is growing<figure><img src="https://images.theconversation.com/files/289712/original/file-20190827-184202-eaz1mv.jpg?ixlib=rb-1.1.0&rect=291%2C22%2C4184%2C2941&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A push for digital peace is growing around the world.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/cpu-peace-181400696">Finchen/Shutterstock.com</a></span></figcaption></figure><p>Digital conflict and military action are increasingly intertwined, and civilian targets – private businesses and everyday internet users alike – are vulnerable in the digital crossfire. But there are forces at work trying to promote peace online.</p>
<p>It will be a tough challenge: In May 2019, <a href="https://www.forbes.com/sites/kateoflahertyuk/2019/05/06/israel-retaliates-to-a-cyber-attack-with-immediate-physical-action-in-a-world-first/#46ed253df895">Israel responded to unspecified cyberattacks</a> by Hamas with an <a href="https://www.cnbc.com/2019/07/02/us-iran-cyber-strike-marks-a-military-game-changer-says-tech-expert.html">immediate airstrike</a> that destroyed the Gaza Strip building where the hackers were located.</p>
<p>The U.S. had done something similar in 2015, launching a <a href="https://www.nytimes.com/2015/08/28/world/middleeast/junaid-hussain-islamic-state-recruiter-killed.html">drone strike to kill</a> an <a href="https://dod.defense.gov/News/Article/Article/615305/iraq-progresses-in-isil-fight-key-extremist-confirmed-dead/">alleged Islamic State hacker</a>, but that operation was <a href="https://www.nytimes.com/2015/01/13/us/isis-is-cited-in-hacking-of-central-commands-twitter-feed.html">months in the making</a>. In July 2019, the U.S. also reversed the equation, <a href="https://www.cnbc.com/2019/07/02/us-iran-cyber-strike-marks-a-military-game-changer-says-tech-expert.html">digitally disabling Iranian missile-launching computers</a> in response to <a href="https://www.cnbc.com/2019/06/20/us-drone-shot-down-by-iranian-missile-in-international-airspace.html">Iran shooting down a U.S. military drone</a> over the <a href="https://theconversation.com/what-is-at-stake-in-the-strait-of-hormuz-120486">Strait of Hormuz</a>.</p>
<p>U.S. businesses <a href="https://www.washingtonpost.com/news/powerpost/paloma/the-cybersecurity-202/2019/06/24/the-cybersecurity-202-u-s-businesses-are-preparing-for-iranian-hacks-after-american-cyber-attack/5d1007a81ad2e552a21d507f/">fear they might be the targets of retaliation</a> for that attack from Iran. Even <a href="https://theconversation.com/why-the-russians-might-hack-the-boy-scouts-next-102229">local nonprofits</a> need to learn how to <a href="https://theconversation.com/5-ways-to-protect-yourself-from-cybercrime-120062">protect themselves from online threats</a>, potentially including national governments and terrorists. In some ways cyberspace has rarely seemed more unstable, even hostile. </p>
<p>At the same time, dozens of countries and hundreds of firms and nonprofits are fed up with all this digital violence, and are working toward greater cybersecurity for all – and even what might be called cyber peace.</p>
<h2>Serious hacking is getting easier</h2>
<p>Breaches like the one carried out by the <a href="https://www.nytimes.com/2017/11/12/us/nsa-shadow-brokers.html">Shadow Brokers</a>, revealed in 2016, have put extremely advanced hacking tools into public hands, including ones created by the National Security Agency. Cybercriminals are using those programs, among others, to <a href="https://www.wsj.com/articles/u-s-cities-strain-to-fight-hackers-11559899800">hijack computer systems and data storage</a> in <a href="https://theconversation.com/hackers-seek-ransoms-from-baltimore-and-communities-across-the-us-118089">governments across the country</a>.</p>
<p>Some companies have been forced to <a href="https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/">revert to one-to-one instant-messaging and passing written memos</a> in the wake of <a href="https://theconversation.com/hackers-seek-ransoms-from-baltimore-and-communities-across-the-us-118089">ransomware attacks</a> and other cybercrimes.</p>
<p>The U.S. government is taking note. Instead of pushing the technological envelope, it has elected to use tried and true <a href="https://www.zdnet.com/article/us-wants-to-isolate-power-grids-with-retro-technology-to-limit-cyber-attacks/">analog technologies</a> to help secure the electricity grid, for example.</p>
<h2>A rising international effort</h2>
<p>A growing coalition, including the governments of France and New Zealand, is coming together to promote international standards of online behavior, aimed at reducing cyber insecurity. Nonprofits like the <a href="https://otalliance.org/about-us/non-governmental-organizations-ngos">Online Trust Alliance</a>, <a href="https://www.cyberpeacealliance.net/">Cyber Peace Alliance</a>, <a href="https://cybertechaccord.org">Cybersecurity Tech Accord</a> and <a href="https://ict4peace.org/activities/">ICT4Peace</a>, are joining, as are major funders like the <a href="https://hewlett.org/strategy/cyber/">Hewlett Foundation</a> and the <a href="https://carnegieendowment.org/programs/technology/cyber/">Carnegie Endowment for International Peace</a>. </p>
<p><a href="https://scholar.google.com/citations?user=YtgRGx0AAAAJ&hl=en&oi=ao">I</a> am the acting director of the <a href="https://ostromworkshop.indiana.edu/">Ostrom Workshop</a> at Indiana University that includes the <a href="https://ostromworkshop.indiana.edu/research/internet-cybersecurity/index.html">Cyber Peace Working Group</a>, one of several academic groups also working to protect the Internet and its users.</p>
<p>Although it’s too soon to say anything certain about long-term results, there are some early indications of success, including the outcome of a Paris meeting in November 2018. <a href="https://www.nytimes.com/2018/11/12/us/politics/us-cyberattacks-declaration.html">More than 60 nations</a> – though not the United States – signed the <a href="https://www.diplomatie.gouv.fr/en/french-foreign-policy/digital-diplomacy/france-and-cyber-security/article/cybersecurity-paris-call-of-12-november-2018-for-trust-and-security-in">Paris Call for Trust and Security in Cyberspace</a>, along with more than 130 companies and 90 universities and nonprofit organizations. The document is a <a href="https://www.diplomatie.gouv.fr/en/french-foreign-policy/digital-diplomacy/france-and-cyber-security/article/cybersecurity-paris-call-of-12-november-2018-for-trust-and-security-in">broad statement of principles</a> that focus on improving “cyber hygiene,” along with “the security of digital products and services” and the “integrity of the internet,” among other topics. It doesn’t legally bind its participants to do anything, but does lay out some basic points of agreement that could, in time, be codified into laws or other enforceable standards.</p>
<p>Its <a href="https://www.internetgovernance.org/2018/11/09/the-paris-igf-convergence-on-norms-or-grand-illusion/">critics</a> question whether it is too early to establish global commitments given that core issues of sovereignty over the internet remain unresolved. Nevertheless, the Paris Call has helped shape the conversation around the scope and <a href="https://ndias.nd.edu/news-publications/ndias-quarterly/the-meaning-of-cyber-peace/">meaning of cyber peace</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/289718/original/file-20190827-184252-1lj2qz7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/289718/original/file-20190827-184252-1lj2qz7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/289718/original/file-20190827-184252-1lj2qz7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=452&fit=crop&dpr=1 600w, https://images.theconversation.com/files/289718/original/file-20190827-184252-1lj2qz7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=452&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/289718/original/file-20190827-184252-1lj2qz7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=452&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/289718/original/file-20190827-184252-1lj2qz7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=568&fit=crop&dpr=1 754w, https://images.theconversation.com/files/289718/original/file-20190827-184252-1lj2qz7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=568&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/289718/original/file-20190827-184252-1lj2qz7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=568&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">In the wake of a mass shooting in Christchurch, New Zealand, that country’s prime minister, Jacinda Ardern, spearheaded a call to reduce violent and extremist content online.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/New-Zealand-Mosque-Shooting/6e9e9acf016745c5b6f813e0245995f1/4/0">AP Photo/Vincent Thian</a></span>
</figcaption>
</figure>
<p>Another international effort began in the aftermath of the March 2019 mass shooting at two mosques in Christchurch, New Zealand. The governments of 18 nations – along with more than a dozen well-known technology firms like Google and Facebook – adopted the <a href="https://www.christchurchcall.com/supporters.html">Christchurch Call</a> to Eliminate Terrorist and Violent Extremist Content Online. </p>
<p>This <a href="https://www.justsecurity.org/64189/why-the-christchurch-call-to-remove-online-terror-content-triggers-free-speech-concerns/">effort</a> has led many of the companies involved to <a href="https://www.cnet.com/news/youtube-to-ban-supremacist-and-hoax-videos-in-tougher-hate-speech-policy/">change their policies</a> governing hate speech and disinformation on their platforms. For example, YouTube, owned by Google parent company Alphabet, announced a <a href="https://youtube.googleblog.com/2019/06/our-ongoing-work-to-tackle-hate.html">new hate speech policy</a> prohibiting content “alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.” The Christchurch Call has also helped widen the discussion about cyber peace to include <a href="https://repository.law.umich.edu/mjlr/vol50/iss3/3">thorny questions about democracy</a>, such as how to balance freedom of speech with limits on extremist content.</p>
<h2>A digital Geneva Convention?</h2>
<p>A key element remains the need to <a href="https://www.unhcr.org/innovation/digital-geneva-convention-mean-future-humanitarian-action/">protect civilians from harm in a future cyber conflict</a>, such as attacks on the <a href="https://www.eenews.net/stories/1060281821">electricity grid</a>, <a href="https://time.com/4270728/iran-cyber-attack-dam-fbi/">dams</a> and other systems that affect daily life for much of the world.</p>
<p>One idea is to fashion an agreement along the lines of the <a href="https://www.icrc.org/en/war-and-law/treaties-customary-law/geneva-conventions">Geneva Conventions</a>, which with their predecessors have sought to protect innocent lives in military conflict <a href="https://theconversation.com/ban-killer-robots-to-protect-fundamental-moral-and-legal-principles-101427">for more than a century</a>. An international treaty along the lines of the <a href="https://2009-2017.state.gov/t/isn/5181.htm">Outer Space Treaty</a>, <a href="https://www.ats.aq/e/ats.htm">Antarctic Treaty</a> or the <a href="https://www.iucn.org/theme/marine-and-polar/our-work/international-ocean-governance/unclos">U.N. Convention on the Law of the Sea</a> <a href="https://www.hoover.org/research/cybersecurity-treaties-skeptical-view">may be</a> <a href="https://www.thenation.com/article/international-cyber-treaty-russia-china-dnc/">useful</a>. </p>
<p>There is not yet a grand “Treaty for Cyberspace,” though. The most widely ratified international agreement in the field so far is the Council of Europe Convention on Cybercrime, adopted in 2001 and in force since 2004, also called the <a href="https://www.coe.int/en/web/cybercrime/the-budapest-convention">Budapest Convention</a>, which guides international prosecution and extradition of cyber criminals. The U.N. has <a href="https://dig.watch/processes/un-gge">several</a> <a href="https://cyberstability.org/">groups</a> working on <a href="https://cyberstability.org/wp-content/uploads/2018/11/GCSC-Singapore-Norm-Package-3MB.pdf">aspects of international cybersecurity</a>. </p>
<p>But as with <a href="https://dx.doi.org/10.2139/ssrn.2630333">potential solutions to climate change</a>, there’s not a lot of political energy being put into the efforts.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/289722/original/file-20190827-184217-sre452.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/289722/original/file-20190827-184217-sre452.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/289722/original/file-20190827-184217-sre452.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=410&fit=crop&dpr=1 600w, https://images.theconversation.com/files/289722/original/file-20190827-184217-sre452.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=410&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/289722/original/file-20190827-184217-sre452.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=410&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/289722/original/file-20190827-184217-sre452.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=515&fit=crop&dpr=1 754w, https://images.theconversation.com/files/289722/original/file-20190827-184217-sre452.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=515&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/289722/original/file-20190827-184217-sre452.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=515&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Independent groups seek to help online users stay safe and avoid trouble.</span>
<span class="attribution"><a class="source" href="https://securityplanner.org/#/">Screenshot by The Conversation</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>Making progress anyway</h2>
<p>In an attempt to avoid leaving people to fend for themselves in a perilous online world, the nonprofit Consumer Reports organization has launched a “<a href="https://www.consumerreports.org/media-room/press-releases/2017/03/consumer_reports_launches_digital_standard_to_safeguard_consumers_security_and_privacy_in_complex_marketplace/">Digital Standard</a>” program that will evaluate and rate the privacy and security features of various internet-connected devices and services. Academics are also helping out, such as the <a href="https://securityplanner.org/#/">Security Planner</a> tool created by Citizen Lab at the University of Toronto, which helps civil society groups and researchers protect their data.</p>
<p>There’s much more to be done to protect a digitally centered society, both <a href="https://www.csis.org/programs/technology-policy-program/significant-cyber-incidents">politically</a> and <a href="https://www.entrepreneur.com/article/325142">technically</a>. The key will be focusing on a more <a href="https://www.huffpost.com/entry/toward-a-positive-cyber-p_b_5511877">positive vision</a> of peace that includes better governance, respect for human rights, making internet access more widely available around the world, and teaching everyone how to protect themselves – and each other – online.</p>
<p>This will not happen overnight, and the path may not be a straight line. Consider that the often-derided 1928 <a href="https://history.state.gov/milestones/1921-1936/kellogg">Kellogg-Briand Pact</a>, also called the <a href="https://www.theguardian.com/books/2017/dec/16/the-internationalists-review-plan-outlaw-war">Pact of Paris</a>, outlawed aggressive war. It didn’t work, but did eventually help lay a <a href="https://courses.lumenlearning.com/suny-hccc-worldhistory2/chapter/the-kellogg-briand-pact/">foundation</a> for the United Nations and a more stable international system.</p>
<p>Similarly, a Cyber Peace Accord – building from efforts such as the Paris Call and the Cybersecurity Tech Accord – could, in time, lead the international community toward greater stability in cyberspace. One possibility could take inspiration from <a href="https://www.wearestillin.com/">efforts to fight climate change</a>, by asking individual nations, towns, groups and even individuals to announce “Cyber Peace Pledges,” to build momentum toward a more collective solution. </p>
<p>Working together, we may just be able to achieve cyber peace through a mix of shaming, outcasting and inspiring users, firms and policymakers to act.</p>
<p>[ <em>You’re smart and curious about the world. So are The Conversation’s authors and editors.</em> <a href="https://theconversation.com/us/newsletters?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=youresmart">You can read us daily by subscribing to our newsletter</a>. ]</p><img src="https://counter.theconversation.com/content/119419/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Scott Shackelford is a principal investigator on grants from the Hewlett Foundation, Indiana Economic Development Corporation, and the Microsoft Corporation supporting both the Ostrom Workshop Program on Cybersecurity and Internet Governance and the Indiana University Cybersecurity Clinic. </span></em></p>Dozens of countries and hundreds of firms and nonprofits are fed up with digital violence and are working toward greater cybersecurity for all.Scott Shackelford, Associate Professor of Business Law and Ethics; Director, Ostrom Workshop Program on Cybersecurity and Internet Governance; Cybersecurity Program Chair, IU-Bloomington, Indiana UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1215212019-08-07T20:02:56Z2019-08-07T20:02:56Z8chan’s demise is a win against hate, but could drive extremists to the dark web<figure><img src="https://images.theconversation.com/files/287151/original/file-20190807-84225-18pnemv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Don't mourn 8chan, but don't think online extremists aren't already taking their hate elsewhere.</span> <span class="attribution"><span class="source">Shutterstock.com</span></span></figcaption></figure><p>The news that 8chan, the far-right online community allegedly home to mass shooting manifestos, has been effectively removed from the internet is cause for celebration, but should also make us pause and consider the implications.</p>
<p>8chan has been linked to three mass shootings so far this year. In March, an account believed to belong to the gunman behind the Christchurch mass shooting posted an 87-page manifesto to the site. And in April, the suspected gunman behind the deadly shooting at a synagogue in California also posted to the site.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/from-across-the-globe-to-el-paso-changes-in-the-language-of-the-far-right-explain-its-current-violence-121468">From across the globe to El Paso, changes in the language of the far-right explain its current violence</a>
</strong>
</em>
</p>
<hr>
<p>Internet security and infrastructure company Cloudflare finally stopped servicing 8chan after it was alleged the El Paso shooting suspect posted a white nationalist rant on the site before killing more than 20 people. </p>
<p>The forum received a brief reprieve when another service, BitMitigate, stepped in before support was once again dropped.</p>
<p>Cloudflare and BitMitigate, the latter albeit briefly, both provided 8chan with protection from Distributed Denial of Service (DDoS) attacks by activists aiming to shut the site down. BitMitigate, <a href="https://www.fastcompany.com/90385939/8chan-is-getting-a-lifeline-from-a-company-that-services-far-right-sites-like-the-daily-stormer">which has also serviced neo-Nazi site Daily Stormer</a> since Cloudflare dropped it in 2017, relied on infrastructure service Voxility. It was Voxility that ultimately shut down 8chan (and Daily Stormer at the same time), removing its services from BitMitigate when 8chan’s move became known.</p>
<h2>A win against hate</h2>
<p>The lack of mainstream options for these sites due to public pressure is a win in the war against online hate speech, and shows a growing understanding of the internet infrastructure that enables this behaviour. Since the advent of social media, studies have <a href="https://www.sciencedirect.com/science/article/pii/S1359178917301064">found</a> that hate speech tends to proliferate directly after terrorist attacks. The removal of 8chan will certainly help to “clean the stream”, not only removing the hate speech already posted there, but also perhaps helping reduce further contributions.</p>
<p>However, in my <a href="https://theconversation.com/trolls-fanboys-and-lurkers-understanding-online-commenting-culture-shows-us-how-to-improve-it-96538">research</a> examining factors that contribute to behaviour in online communities, I have found these communities are based on a complex relationship between social and technological factors. Removing 8chan does not mean this speech or the individuals spreading it will stop congregating online, and it may in fact create further challenges in combating online hate speech.</p>
<h2>Going underground</h2>
<p>8chan has now <a href="https://www.vice.com/en_us/article/wjwe34/8chan-forced-to-move-to-obscure-dark-web-service">reportedly</a> moved to the “dark web”, a network of unindexed sites that require a special browser to access, pushing its content and contributors further underground. This means fewer people could stumble on the site inadvertently and become radicalised by the content – a definite positive. But it also means the content will be far tougher to monitor and police.</p>
<p>The dark web made headlines in 2015 when it hosted the hacked personal data of 37 million users of Ashley Madison, an online dating service for people looking to have extramarital affairs. The Silk Road, a dark web black market for illegal drugs, also made headlines when the FBI finally shut it down after years of operation and takings of more than US$1.2 billion in bitcoin. </p>
<p>The dark web is also home to a <a href="https://www.cigionline.org/publications/tor-dark-net">vast range of illicit activities</a> including child pornography, credit card fraud, money laundering, identity theft, and illegal weapons sales. It is a magnet for nefarious activities because the technology allows website owners and visitors to obscure their location and internet address, making it harder for law enforcement to find them. </p>
<p>The dark web is <a href="https://www.tandfonline.com/doi/full/10.1080/1057610X.2015.1119546">increasingly used by terrorists</a> for activities ranging from psychological warfare and propaganda to fundraising, recruitment, data mining, and coordination of actions. </p>
<p>According to the <a href="https://www.quilliaminternational.com/shop/e-publications/jihad-trending-a-comprehensive-analysis-of-online-extremism-and-how-to-counter-it-2/">Quilliam Foundation</a>, a London-based counter-extremism think tank, attempts to block extremist material online result in terrorist material reappearing on the dark web almost as soon as it’s banished. The so-called Islamic State movement in particular has used the dark web for covert communication between jihadists.</p>
<h2>Freedom of speech concerns</h2>
<p>Another potential concern with the deplatforming of 8chan is that it could set a precedent for censoring other sites. In particular, repressive governments and other powerful actors could exploit such a precedent to remove content that does not serve their interests, but is otherwise benign.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/technology-and-regulation-must-work-in-concert-to-combat-hate-speech-online-93072">Technology and regulation must work in concert to combat hate speech online</a>
</strong>
</em>
</p>
<hr>
<p>For example, <a href="https://www.tandfonline.com/doi/full/10.1080/15405702.2015.1021469">research</a> has shown Chinese political activists are increasingly moving to digital platforms to expose wrongdoing by government officials and other powerful individuals. This in turn has prompted the Chinese government to develop the most sophisticated online information-censoring mechanism in the network era.</p>
<p>Overall, we should celebrate the demise of 8chan as a win for the fight against online hate speech. But its removal does not mean the fight is over.</p><img src="https://counter.theconversation.com/content/121521/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Renee Barnes does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>We should celebrate the ‘deplatforming’ of the 8chan message board, linked to the El Paso shootings, as a win for the fight against online hate speech. But its removal does not mean the fight is over.Renee Barnes, Senior Lecturer, Journalism, University of the Sunshine CoastLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1140482019-03-26T10:38:10Z2019-03-26T10:38:10ZWhy the next terror manifesto could be even harder to track<figure><img src="https://images.theconversation.com/files/265695/original/file-20190325-36270-q4akag.jpg?ixlib=rb-1.1.0&rect=0%2C167%2C3600%2C2527&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It's difficult to track the spread of digital materials.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/people-networking-leader-197722088">bluebay/Shutterstock.com</a></span></figcaption></figure><p>Just before his shooting spree at two Christchurch, New Zealand mosques, the alleged mass murderer posted a hate-filled manifesto on several file-sharing sites, and <a href="https://www.nbcnews.com/news/world/new-zealand-shooting-death-toll-rises-50-attack-mosque-n984066">emailed the document to at least 30 people</a>, including New Zealand’s prime minister. He also <a href="https://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=12213056">posted on several social media sites</a> links to the manifesto and instructions on how to find his Facebook profile to watch an upcoming video. The video turned out to be a 17-minute Facebook livestream of preparing for and carrying out the first attack on March 15. 
In his posts, the accused killer urged people to make copies of the manifesto and the video, and share them around the internet.</p>
<p>On March 23, the New Zealand government <a href="https://www.classificationoffice.govt.nz/news/latest-news/christchurch-attacks-press-releases/#christchurch-attack-publication-the-great-replacement-classified-objectionable">banned possession and sharing of the manifesto</a>, and shortly thereafter <a href="https://www.nytimes.com/2019/03/21/world/asia/new-zealand-attacks-social-media.html">arrested at least two people</a> for having shared the video. By then, the original manifesto document and video file had long since been removed from the platforms where they were first posted. Yet plenty of people appear to have taken the shooter’s advice, making copies and spreading them widely. </p>
<p>As part of my <a href="https://scholar.google.com/citations?user=lKbka1UAAAAJ&hl=en">ongoing research into extremism</a> on social media – <a href="https://doi.org/10.1007/978-3-030-01129-1_25">particularly anti-Muslim sentiment</a> – I was interested in how other right-wing extremists would use the manifesto. Would they know that companies would seek to identify it on their sites and delete it? How would they try to evade that detection, and how would they share the files around the web? I wanted to see if computer science techniques could help me track the documents as they spread. What I learned suggests it may become even harder to fight hate online in the future.</p>
<h2>To catch a file</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/265693/original/file-20190325-36264-2j36ey.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/265693/original/file-20190325-36264-2j36ey.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/265693/original/file-20190325-36264-2j36ey.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/265693/original/file-20190325-36264-2j36ey.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/265693/original/file-20190325-36264-2j36ey.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/265693/original/file-20190325-36264-2j36ey.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/265693/original/file-20190325-36264-2j36ey.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/265693/original/file-20190325-36264-2j36ey.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">What’s a hapax legomenon?</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/katexic/22048430801">Katexic Clipping Newsletter/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>To find as many different versions of the manifesto as possible, I chose an unusual keyphrase, called a “<a href="https://mentalfloss.com/article/27617/elusive-hapax-legomenon">hapax legomenon</a>” in computational linguistics: a set of words that would only be found in the manifesto and nowhere else. For example, Google-searching the phrase “Schtitt uses an unamplified bullhorn” reveals that this phrase is used only in David Foster Wallace’s novel “<a href="https://www.littlebrown.com/titles/david-foster-wallace/infinite-jest/9780316920049/">Infinite Jest</a>” and nowhere else online (until now).</p>
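<p>The author found the hapax by hand with Google searches, but the underlying idea is easy to automate. The sketch below (hypothetical function names, word 4-grams chosen arbitrarily) extracts phrases that appear in one target document and in no other document of a reference corpus, making them candidate search fingerprints:</p>

```python
def word_ngrams(text: str, n: int = 4) -> set[str]:
    """All contiguous n-word phrases in a text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def unique_phrases(target: str, other_docs: list[str], n: int = 4) -> set[str]:
    """Phrases found in `target` but in none of `other_docs`:
    candidate hapax-style fingerprints for the target document."""
    seen_elsewhere: set[str] = set()
    for doc in other_docs:
        seen_elsewhere |= word_ngrams(doc, n)
    return word_ngrams(target, n) - seen_elsewhere
```

<p>Any phrase the function returns can then be fed to a search engine, exactly as the author did manually, to surface copies of the document elsewhere on the web.</p>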
<p>A few minutes of Google-searching for a hapax from the manifesto (which I’m intentionally not revealing) found copies of the document in Microsoft Word and Adobe PDF formats on dozens of file-sharing services, including DocDroid, DocumentCloud, Scribd, Mega and Dropbox. The file had been uploaded to blogs hosted on Wordpress and <a href="https://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=12214017">attached to message boards like Kiwi Farms</a>. I also found numerous broken links to files that had been uploaded and quickly deleted, like the original versions that the author had uploaded to Mediafire and Zippyshare.</p>
<p>To determine whether all the files were the same, I used a common file-identification technique, generating a <a href="https://en.wikipedia.org/wiki/Checksum">checksum</a>, or cryptographic hash, for each manifesto document. A hash is a short mathematical fingerprint of a file. If two files are identical, their hashes will match. If they differ in any way, they will produce different hashes. After reviewing the file hashes, it became clear that there were only a few main versions of the manifesto, and most of the rest of the files circulating around were copies of them. </p>
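<p>The checksum comparison can be sketched in a few lines of Python. (The article does not say which hash algorithm was used; SHA-256 is assumed here for illustration.)</p>

```python
import hashlib

def file_hash(data: bytes) -> str:
    """SHA-256 checksum of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Byte-identical copies always produce the same hash...
original = b"example document contents"
copy = bytes(original)
assert file_hash(original) == file_hash(copy)

# ...while changing even a single byte yields a completely different hash.
altered = original + b"\n"
assert file_hash(original) != file_hash(altered)
```

<p>Grouping files by hash is what reveals that dozens of uploads on different services are in fact just a handful of distinct versions.</p>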
<p>A hash can only reveal that the files are different, not how or why they are different. Within the different versions of the manifesto files, I found very few instances where entirely new content was added. I did find a few versions that had color graphics and new cover art added, but the text content itself was left largely unchanged. Most of the differences between the originals could be chalked up to the different fonts and paper sizes set as defaults on the computer of whoever created the copies. Some of the versions also had slightly different line spacing, perhaps introduced as the file was converted from Word to PDF.</p>
<p>The video file was another story. At least one person who watched the Facebook video made a copy of it, and that original video was subsequently compressed, edited, restreamed and reformatted until <a href="https://www.theatlantic.com/technology/archive/2019/03/facebook-youtube-new-zealand-tragedy-video/585418/">at least 800 different versions</a> were circulating. </p>
<p>Any change to a file – even a small one like adding a single letter to the manifesto or one extra second of video – will result in an entirely different file hash. All those changes made my analysis of the spread of these artifacts difficult – and also complicated social media companies’ efforts to rid the internet of them. </p>
<p>Facebook and YouTube <a href="https://qz.com/1574293/facebook-and-twitter-couldnt-handle-the-new-zealand-shooting/">used some form of hash-matching</a> to block most of the video upload attempts. But with all those changes – and the resulting entirely new hashes – <a href="https://www.washingtonpost.com/world/facebook-removed-15-million-videos-of-the-christchurch-attacks-within-24-hours--and-there-were-still-many-more/2019/03/17/fe3124b2-4898-11e9-b871-978e5c757325_story.html">300,000 copies of the video escaped hash-based detection</a> at Facebook. Google also <a href="https://motherboard.vice.com/en_us/article/eve9ke/internal-google-email-christchurch-content-moderation-manifesto">lamented the difficulty</a> of detecting tiny text changes in such a lengthy manifesto. </p>
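<p>This kind of hash matching amounts to a simple set lookup, which is why it is so brittle. The toy sketch below (hypothetical names; not the platforms' actual systems) shows how an exact copy is caught while any re-encoded copy slips through:</p>

```python
import hashlib

# A toy hash blocklist: hashes of files already identified for removal.
BLOCKED = {hashlib.sha256(b"offending video bytes").hexdigest()}

def should_block(upload: bytes) -> bool:
    """Reject an upload only if its hash exactly matches a known file."""
    return hashlib.sha256(upload).hexdigest() in BLOCKED

assert should_block(b"offending video bytes")          # exact copy: caught
assert not should_block(b"offending video bytes\x00")  # altered copy: missed
```

<p>Every one of the hundreds of edited or re-compressed versions of the video produced a hash absent from the blocklist, which is how so many copies escaped detection.</p>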
<h2>More tech, more problems</h2>
<p>Despite the internet companies’ claims that these <a href="https://www.cnet.com/news/heres-how-facebook-uses-artificial-intelligence-to-take-down-abusive-posts-f8/">problems will disappear as artificial intelligence matures</a>, a collection of “<a href="https://hopenothate.com/2018/11/04/alt-tech-far-right-safe-spaces-online/">alt-tech</a>” companies are working to ensure that hate-fueled artifacts like the manifesto and video can spread unbidden.</p>
<p>For example, <a href="https://www.huffingtonpost.com/entry/rob-monster-epik-gab-neo-nazi_us_5c17bb29e4b05d7e5d846f72">Rob Monster</a>, CEO of a company called Epik, has created a suite of software services that support <a href="https://www.splcenter.org/hatewatch/2019/01/11/problem-epik-proportions">a broad collection of hate sites</a>. Epik provides domain services for Gab, <a href="https://www.wired.com/story/how-right-wing-social-media-site-gab-got-back-online/">an online platform favored by violent extremists</a> like the accused Pittsburgh synagogue shooter, and <a href="https://www.columbian.com/news/2019/feb/15/epik-buys-vancouver-based-bitmitigate/">the company recently acquired BitMitigate</a>, which offers protection against online attacks to neo-Nazi site The Daily Stormer. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/8CMxDNuuAiQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">An introduction to IPFS.</span></figcaption>
</figure>
<p>Just 24 hours after the mosque attacks, Monster explained on Gab that he had shared the manifesto and video file on IPFS, or the “<a href="https://hackernoon.com/a-beginners-guide-to-ipfs-20673fedd3f">Interplanetary File System</a>,” a decentralized peer-to-peer file-sharing network. Files on IPFS are split into many pieces, each distributed among many participants on the network, making the removal of a file nearly impossible. IPFS had previously been a niche technology, <a href="https://motherboard.vice.com/en_us/article/43bnzd/neo-nazis-propaganda-decentralized-weev">relatively unknown even among extremists</a>. Now, calling IPFS a “crazy clever technology” that makes files “effectively uncensorable,” Monster reassured Gab users that he was also developing software to make IPFS “easy for anyone … with no technical skills required.”</p>
<h2>A shift in tactics</h2>
<p>As in-person hate groups were <a href="https://www.splcenter.org/seeking-justice/case-docket/donald-v-united-klans-america">sued into obscurity in the 1980s</a>, extremism went underground. But with the advent of the commercial internet, <a href="https://theconversation.com/what-is-the-online-equivalent-of-a-burning-cross-83185">hate groups quickly moved online</a>, and eventually onto social media. The New Zealand attacker was part of a <a href="https://www.nytimes.com/2019/03/15/technology/facebook-youtube-christchurch-shooting.html">far-right social media “meme” culture</a>, where angry men (<a href="http://hdl.handle.net/10125/59663">and some women</a>) justify their grievances with violent, hateful rhetoric.</p>
<p>Widespread adoption of artificial intelligence on platforms and decentralized tools like IPFS will mean that the online hate landscape will change once again. Combating online extremism in the future may be less about “<a href="https://www.vice.com/en_us/article/a3y3vk/reddit-is-reeling-from-a-massive-meme-war">meme wars</a>” and user-banning, or “<a href="https://motherboard.vice.com/en_us/article/bjbp9d/do-social-media-bans-work">de-platforming</a>,” and could instead look like the <a href="https://www.realcleardefense.com/articles/2018/09/12/cybersecurity_as_attack-defense_113796.html">attack-and-defend</a>, cat-and-mouse technical one-upsmanship that has defined the cybersecurity industry since the 1980s. </p>
<p>No matter what technical challenges come up, one fact never changes: The world will always need more good, smart people working to counter hate than there are promoting it.</p><img src="https://counter.theconversation.com/content/114048/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Megan Squire does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Social media companies struggle to identify and remove hate speech when it’s posted. What can computer science reveal about how hate-filled texts and videos spread online?Megan Squire, Professor of Computer Science, Elon UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1140692019-03-26T01:09:57Z2019-03-26T01:09:57ZWhy new laws are vital to help us control violence and extremism online<figure><img src="https://images.theconversation.com/files/265739/original/file-20190325-36264-4cq6nb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Only the law can hold internet companies criminally accountable. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/close-male-lawyer-judge-hands-striking-682849789">from www.shutterstock.com</a></span></figcaption></figure><p>The terrorist attack in Christchurch is a horrific attack on society. We must consider all measures available to avoid something like this ever happening again, anywhere.</p>
<p>Now in Australia, Prime Minister Scott Morrison wants to introduce <a href="https://theconversation.com/morrison-flags-new-laws-to-stop-social-media-platforms-being-weaponised-114237">new criminal laws</a> for social media companies that fail to quickly remove footage like that broadcast by the gunman in the New Zealand massacre. The alleged gunman live-streamed his activities on Facebook, and the footage was <a href="https://theconversation.com/anxieties-over-livestreams-can-help-us-design-better-facebook-and-youtube-content-moderation-113750">republished across many platforms</a> in the days following. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/morrison-flags-new-laws-to-stop-social-media-platforms-being-weaponised-114237">Morrison flags new laws to stop social media platforms being 'weaponised'</a>
</strong>
</em>
</p>
<hr>
<p>This is an indication that Australian leaders may now be prepared to move beyond just blaming technology for its role in the Christchurch massacre.</p>
<p>Laws are typically based on social values and social duties. However, penalties can of course only stem from violations of law – not violations of social duties – and it is governments that make law. </p>
<h2>How is the internet regulated?</h2>
<p>Internet platforms such as Facebook and Google are already subject to a complex web of laws stemming from around the globe. </p>
<p>A project at Stanford University has started <a href="https://wilmap.law.stanford.edu/map">mapping out</a> this web of regulation. </p>
<p>The site points to several laws in Australia that apply to internet platforms. Of these, the <a href="http://classic.austlii.edu.au/au/legis/cth/consol_act/bsa1992214/sch5.html">Broadcasting Services Act 1992 (Cth)</a> is most relevant. But this is a largely untested legal provision providing certain protections for internet platforms handling content posted by users.</p>
<p>Prime Minister Scott Morrison has indicated he <a href="https://theconversation.com/morrison-flags-new-laws-to-stop-social-media-platforms-being-weaponised-114237">aims to create laws</a> that: </p>
<ul>
<li><p>make it a criminal offence to fail to remove the offending footage as soon as possible after it was reported or it otherwise became known to the company</p></li>
<li><p>allow the government to declare footage of an incident filmed by a perpetrator and being hosted on a site was “abhorrent violent material”. It would be a crime for a social media provider not to quickly remove the material after receiving a notice to do so. There would be escalating penalties the longer it remained on the social media platform.</p></li>
</ul>
<p>These laws would not prevent violent livestreaming from taking place in the first place, but if drafted carefully they may help control its spread and impact. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/anxieties-over-livestreams-can-help-us-design-better-facebook-and-youtube-content-moderation-113750">Anxieties over livestreams can help us design better Facebook and YouTube content moderation</a>
</strong>
</em>
</p>
<hr>
<p>This is an important point, as there is a strong argument that banning live-streaming on the major platforms <a href="https://www.afr.com/news/economy/there-are-no-easy-fixes-for-the-live-streaming-of-real-hate-20190319-h1cjp9">will not prevent terrorists live-streaming</a> their acts via other outlets.</p>
<p>Along with Home Affairs Minister Peter Dutton, Attorney-General Christian Porter and Communications Minister Mitch Fifield, today the prime minister <a href="https://theconversation.com/morrison-flags-new-laws-to-stop-social-media-platforms-being-weaponised-114237">will meet with</a> representatives of Google, Facebook and Twitter and telcos including Telstra, Optus and Vodafone to discuss the responsibilities of social media companies when violence is streamed online. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/four-ways-social-media-companies-and-security-agencies-can-tackle-terrorism-78903">Four ways social media companies and security agencies can tackle terrorism</a>
</strong>
</em>
</p>
<hr>
<h2>Global examples for improving regulation</h2>
<p>Recent activity around the world shows increasing attention paid to regulating online hate and terrorist content. </p>
<p>In October 2018, the US Department of Justice launched a <a href="https://www.justice.gov/hatecrimes">new website</a> to improve the identification and reporting of hate crimes. </p>
<p>And in the European Union, work has advanced to stop terrorists from using the internet to radicalise, recruit and incite to violence. The <a href="http://europa.eu/rapid/press-release_IP-18-5561_en.htm">EU proposal</a> includes a framework for strengthened cooperation between hosting service providers, member states and Europol (the EU’s law enforcement agency). Within that framework, service providers must designate points of contact reachable 24/7 to facilitate the follow up to removal orders and referrals. </p>
<p>Using the powers of the Office of Film and Literature Classification, New Zealand has <a href="https://edition.cnn.com/2019/03/22/asia/new-zealand-bans-suspect-manifesto/index.html">banned</a> possession and distribution of the “manifesto” said to be written by the suspect behind the Christchurch mosque attack. (This accompanies other measures, such as the <a href="https://theconversation.com/will-the-new-zealand-gun-law-changes-prevent-future-mass-shootings-113838">stricter gun controls recently introduced in New Zealand</a>.)</p>
<p>Australia can draw upon these experiences, copying the good and developing what needs improvement.</p>
<h2>International cooperation is key</h2>
<p>Morrison has placed the matter of social media platforms being misused to promote violence on the <a href="https://theconversation.com/morrison-flags-new-laws-to-stop-social-media-platforms-being-weaponised-114237">G20 agenda</a>. This is a good step. The major tech companies are established overseas so this is an issue that can only be addressed via international cooperation.</p>
<p>However, the G20 is only one forum of many. Ultimately, what we need are multi-stakeholder discussions involving governments, the tech industry, civil society and academia. </p>
<p>A relevant example in this context is the work of the Paris-based <a href="https://www.internetjurisdiction.net/work/content-jurisdiction">Internet & Jurisdiction Policy Network</a>, and more specifically its work on cross-border content take-down and blocking. Its work is advanced, and includes concrete suggestions aimed at managing globally available content in light of the diversity of local laws and norms applicable on the internet.</p><img src="https://counter.theconversation.com/content/114069/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Dan Jerker B. Svantesson was an ARC Future Fellow (project number FT120100583) during 2012-2016. During this period he received funding from the Australian Research Council for a project dealing with the topic of this piece. Professor Svantesson is currently writing a Global Status Report - dealing with, amongst other things, the issue of this piece - on behalf of the Internet & Jurisdiction Policy Network. The views expressed herein are those of the author alone.</span></em></p>With new laws proposed, Australian leaders now seem prepared to move beyond just blaming technology for its role in online violence and extremism.Dan Jerker B. Svantesson, Co-Director Centre for Commercial Law, Bond UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1020772018-09-16T14:40:33Z2018-09-16T14:40:33ZBig Tech is overselling AI as the solution to online extremism<figure><img src="https://images.theconversation.com/files/236468/original/file-20180914-177962-8cwkj6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Facebook CEO Mark Zuckerberg takes his seat to testify on Capitol Hill in Washington in April 2018 about the use of Facebook data to target American voters in the 2016 election. </span> <span class="attribution"><span class="source">(AP Photo/Pablo Martinez Monsivais)</span></span></figcaption></figure><p>In mid-September <a href="http://fortune.com/2018/09/12/eu-juncker-terrorist-propaganda-google-facebook/">the European Union threatened to fine the Big Tech companies</a> if they did not remove terrorist content within one hour of appearing online. The change came because rising tensions are now developing and being played out on social media platforms. </p>
<p>Social conflicts that once built up in backroom meetings and came to a head on city streets are now building momentum on social media platforms before spilling over into real life. In the past, governments tended to control traditional media, with little to no possibility for individuals to broadcast hate. </p>
<p>The digital revolution has altered everything.</p>
<p>Terrorist organizations, most notably Islamic State (ISIS) militants, <a href="https://www.scientificamerican.com/article/social-medias-stepped-up-crackdown-on-terrorists-still-falls-short/">have used social media platforms</a> such as Facebook, Instagram and Twitter for their propaganda campaigns, and to plan terrorist attacks against civilians.</p>
<p>Far right groups, including anti-refugee extremists in Germany, are also <a href="https://www.theguardian.com/commentisfree/2017/may/01/far-right-networks-nationalists-hate-social-media-companies">increasingly exploiting</a> tech platforms to espouse anti-immigrant views and demonize minorities. </p>
<p>From <a href="http://www.abc.net.au/news/2018-03-08/sri-lanka-blocks-social-media-as-buddhist-mobs-attack-mosques/9525572">Sri Lanka</a> to <a href="https://www.wired.com/story/how-facebooks-rise-fueled-chaos-and-confusion-in-myanmar/">Myanmar</a>, communal tensions — stoked online — have led to violence. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/unliked-how-facebook-is-playing-a-part-in-the-rohingya-genocide-89523">Unliked: How Facebook is playing a part in the Rohingya genocide</a>
</strong>
</em>
</p>
<hr>
<p>Due to the growing political will within Western countries to regulate social media companies, many tech titans are arguing they can self-regulate — and that artificial intelligence (AI) is one of the key tools to curtail online hate. Several years ago, we created the <a href="https://www.concordia.ca/research/migs/projects/dmap.html">Digital Mass Atrocity Prevention Lab</a> to work on improving public policy to curb the exploitation of tech platforms by violent extremists.</p>
<h2>Oversold abilities?</h2>
<p>Tech companies are painfully aware of the malicious use of their platforms. </p>
<p>In June 2017, Facebook, Microsoft, Twitter and YouTube announced the formation of the <a href="https://gifct.org/leadership">Global Internet Forum to Counter Terrorism</a>, which aims to disrupt extremist activities online. Yet as political pressure to remove harmful online content grows, these companies are beginning to realize the limits of their human content moderators. </p>
<p>Instead, they are increasingly developing and deploying <a href="https://techcrunch.com/2017/06/19/google-to-ramp-up-ai-efforts-to-id-extremism-on-youtube/">AI technologies</a> to automate the process of unwanted content detection and removal. But they are doing so with no oversight and little public information about how these AI systems work, a problem identified in a recent report by the <a href="https://www.ppforum.ca/publications/social-marketing-hate-speech-disinformation-democracy/">Public Policy Forum</a>.</p>
<p>Twitter, according to its most recent transparency report, claims <a href="https://www.ft.com/content/198b5258-9d3e-11e7-8cd4-932067fbf946">it used AI to take down</a> more than 300,000 terrorist-related accounts in the first half of 2017. </p>
<p>Facebook itself acknowledges that it is struggling to make use of AI in an efficient manner on issues surrounding hate speech. CEO Mark Zuckerberg told members of the U.S. Congress earlier this year that AI still struggles to tackle the nuances of language dialects, context and whether or not a statement qualifies as hate speech — and that it could take years to solve. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-canadas-response-to-the-facebook-scandal-has-been-so-weak-97071">Why Canada's response to the Facebook scandal has been so weak</a>
</strong>
</em>
</p>
<hr>
<p>However, <a href="https://www.politico.eu/article/facebook-youtube-terrorism-isis-release-data-on-extremist-content-removal/">the company also claims</a> to be able to remove 99 per cent of ISIS and al-Qaida affiliated content using AI-powered algorithms and human content moderators. Whether AI or humans are the key to the company’s claims of success has not yet been independently investigated.</p>
<h2>The failure of AI</h2>
<p>In 2017, <a href="https://www.businessinsider.com/why-advertisers-are-pulling-spend-from-youtube-2017-3">250 companies suspended</a> advertising contracts with Google over its alleged failure to moderate YouTube’s extremist content. A year later, Google’s senior vice president of advertising and commerce, Sridhar Ramaswamy, says the company is making strong progress in platform safety to regain the <a href="https://www.marketingweek.com/2018/03/21/google-is-confident-it-will-solve-brand-safety-issues/">lost confidence of its clients</a>.</p>
<p>However, <a href="https://www.counterextremism.com/press/icymi-cep-study-documents-youtube%E2%80%99s-failure-effectively-and-permanently-remove-extremist">a recent study by the NGO Counter Extremism Project disputes</a> the effectiveness of the company’s efforts to limit and delete extremist videos. More transparency and accountability from YouTube is needed, given that the study found that over 90 per cent of ISIS videos were uploaded more than once, with no action taken against the accounts that violated the company’s terms of service. </p>
<p>Clearly there is no simple pathway forward. Removing content that is not harmful, offensive, extremist or illegal, even if it is distasteful, is an impediment to free speech. In some cases, using AI to remove content has blocked legitimate material posted by human rights champions. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/235304/original/file-20180906-190653-58vxlw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/235304/original/file-20180906-190653-58vxlw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/235304/original/file-20180906-190653-58vxlw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/235304/original/file-20180906-190653-58vxlw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/235304/original/file-20180906-190653-58vxlw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/235304/original/file-20180906-190653-58vxlw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/235304/original/file-20180906-190653-58vxlw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Social media companies have deleted videos and posts documenting human rights abuses. In this September 2017 photo, Rohingya Muslims shuttle food towards a refugee camp, as a fire burns in Myanmar.</span>
<span class="attribution"><span class="source">(AP Photo/Dar Yasin, File)</span></span>
</figcaption>
</figure>
<p>For example, in 2017, Shah Hossain, a <a href="https://www.bbc.com/news/blogs-trending-41364633">human rights activist</a>, found that a significant number of his Facebook posts regarding the persecution of the Rohingya minority in Myanmar had been deleted. YouTube also erased his news channel, which had nearly 80,000 subscribers. Hossain was documenting human rights abuses, not espousing hate.</p>
<p>In Syria, where independent journalism is severely restricted by war, videos and photos posted online by activists are crucial to understanding the situation in the country. In an attempt to crack down on extremist content, however, <a href="https://www.npr.org/2017/09/13/550757777/youtube-inadvertently-erases-syrian-war-videos-in-purge-of-extremist-propaganda">YouTube’s AI-powered algorithms removed</a> thousands of videos of atrocities against civilians. The videos were posted as evidence for the eventual prosecution of Syrian officials for crimes against humanity. This is quite troubling. </p>
<h2>Moving forward</h2>
<p>Well-known social media giants have said publicly that they’ll put more resources into policing their platforms. However, given the current results, it’s time to consider if this approach is ethical and effective.</p>
<p>The United Kingdom, France, Germany, the <a href="https://money.cnn.com/2018/08/20/technology/eu-social-media-terror-content/index.html">European Union</a> and the United States, among others, have begun to openly discuss and implement regulatory measures on the tech industry, not only pertaining to terrorism and hate speech, but also digital election interference, the spread of “fake news” and misinformation campaigns. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-not-nationalize-facebook-93816">Why not nationalize Facebook?</a>
</strong>
</em>
</p>
<hr>
<p>Canada has begun to take the issue seriously as well, forming the Digital Inclusion Lab at Global Affairs Canada, which works to strengthen the combined efforts of the G7. </p>
<p>These are much-needed initiatives. The tech giants have been overselling the effectiveness of AI in countering hate on their platforms. Our democratic and open societies must put aside the notion that AI is the panacea for the problem at hand. Social polarization and growing mistrust across the planet will continue unless elected officials regulate Big Tech.</p><img src="https://counter.theconversation.com/content/102077/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Kyle Matthews receives funding from Global Affairs Canada</span></em></p><p class="fine-print"><em><span>Nicolai Pogadl does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Many tech titans say they can self-regulate online hate speech and extremism with artificial intelligence, but can they?Kyle Matthews, Executive Director, The Montreal Institute for Genocide and Human Rights Studies, Concordia UniversityNicolai Pogadl, Project Manager, Concordia UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/924772018-03-01T17:05:10Z2018-03-01T17:05:10ZThe NRA’s video channel is a hotbed of online hostility<figure><img src="https://images.theconversation.com/files/208515/original/file-20180301-152587-1fo7lth.png?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">NRA TV's content focuses on ideology rather than guns.</span> <span class="attribution"><a class="source" href="https://www.youtube.com/watch?v=aq7jnowk0kk">Screenshot from YouTube.com</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>As the <a href="https://home.nra.org/">National Rifle Association</a>, the most influential gun rights advocacy group in the U.S., comes under pressure from <a href="http://thehill.com/policy/technology/375556-groups-call-on-apple-amazon-to-ditch-nra-tv-channel">victims’ groups</a> and gun control advocates, internet companies like Amazon, Apple and YouTube are finding themselves uncomfortably close to the center of the controversy. These are among the companies that currently stream the NRA’s official video channel, <a href="https://www.nratv.com/">NRA TV</a>. </p>
<p>NRA TV has become a central focus in what could be a threshold moment in the national gun debate. In the wake of the <a href="https://www.cnn.com/2018/02/15/us/florida-shooting-victims-school/index.html">school shooting</a> in Parkland, Florida, that claimed 17 lives, a <a href="https://www.nytimes.com/2018/02/23/business/nra-boycott.html">consumer activist movement</a> has worked to peel back the <a href="http://www.bbc.com/news/world-us-canada-35261394">tight grip the NRA holds</a> over the country’s gun policy. The effort has driven some airlines, insurance companies, car rental companies and banks to <a href="http://money.cnn.com/2018/02/25/news/companies/companies-abandoning-nra-list/index.html">sever their commercial and professional ties</a> with the NRA. Now gun control activists are turning their full attention to the internet.</p>
<p>In the world of online politics, it’s not unusual to find videos <a href="http://video.dailymail.co.uk/preview/mol/2017/09/17/3149702971737159051/636x382_MP4_3149702971737159051.mp4">inciting hostility</a>. On Feb. 12, just days before the Parkland shooting, one such <a href="https://twitter.com/NRATV/status/963128750857641984">YouTube video</a> featured a pundit smashing a sledgehammer through a TV set that featured liberal commentators, later declaring, “If we want to take back this nation from socialists who are out to destroy it … you better believe we’ll be pushing the truth on them.” But that video was not the seething production of an obscure far-right blogger. It was the latest episode of the official video channel of the NRA.</p>
<p>NRA TV is not merely a platform for promoting Second Amendment rights or engaging gun enthusiasts. As a <a href="http://www.palgrave.com/us/book/9783319514239">researcher of online extremism</a>, I’d contend it has become one of the web’s most incendiary hotspots for stoking outrage at liberal America, attacking perceived enemies like <a href="https://blacklivesmatter.com/">Black Lives Matter</a> and the <a href="https://www.womensmarch.com/">Women’s March</a>, and promoting the message that America is under threat from the so-called “<a href="https://twitter.com/NRATV/status/895358165335789568">violent left</a>” – an especially alarming term, coming from a gun lobby.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/208364/original/file-20180301-36671-gsiga5.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/208364/original/file-20180301-36671-gsiga5.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/208364/original/file-20180301-36671-gsiga5.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=332&fit=crop&dpr=1 600w, https://images.theconversation.com/files/208364/original/file-20180301-36671-gsiga5.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=332&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/208364/original/file-20180301-36671-gsiga5.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=332&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/208364/original/file-20180301-36671-gsiga5.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=417&fit=crop&dpr=1 754w, https://images.theconversation.com/files/208364/original/file-20180301-36671-gsiga5.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=417&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/208364/original/file-20180301-36671-gsiga5.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=417&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">NRA TV presents itself as a part of a movement for truth and facts.</span>
<span class="attribution"><a class="source" href="https://www.youtube.com/channel/UCnPV7QPHfuwwPBn_mvI_Hzw">Screenshot from YouTube.com</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>What is NRA TV?</h2>
<p>Given the channel’s association with the NRA, a newcomer to NRA TV might reasonably expect information on gun safety, Second Amendment rights and a community for firearms enthusiasts and collectors. Its focus is none of those things. Instead, visitors find a virtual hornet’s nest of hard-right politics. </p>
<p><a href="http://www.palgrave.com/us/book/9783319514239">In my work</a>, I came across NRA TV while tracking far-right and far-left groups’ activities on Twitter. One such group had retweeted a video from NRA TV featuring host Dana Loesch calling the mainstream media “<a href="https://twitter.com/NRATV/status/801490119710621696">the rat bastards of the earth</a>” whom she was happy to see “curb stomped.”</p>
<p>The acidic tone of NRA TV represents an astonishing evolution of an organization that began as a rifle club to <a href="https://home.nra.org/about-the-nra/">promote marksmanship</a>. Even the NRA of the 1980s, which ran <a href="https://www.youtube.com/watch?v=Sr3tKACUBH8">TV ads on the right to bear arms</a>, would be hard to recognize as a forebear to today’s version. My study of 224 NRA TV videos and tweets over two months in 2017 found that only 34 dealt with topics related to direct gun advocacy or gun ownership. The remaining 190, or about five out of every six posts, were trained on perceived political enemies, trading the core mission of gun rights for incessant attacks on “<a href="https://twitter.com/NRATV/status/895358165335789568">crazed liberals</a>” and “<a href="https://twitter.com/NRATV/status/895487918701068288">hateful leftists</a>.”</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/Sr3tKACUBH8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">A TV ad from the NRA from the 1980s.</span></figcaption>
</figure>
<p>It is hard to recall an NRA that once viewed itself as a <a href="https://www.cnn.com/2018/02/24/politics/nra-partisan-bipartisan-republican/index.html">bipartisan body</a>. Its current online hosts warn that opponents of President Donald Trump will “<a href="https://www.mediamatters.org/blog/2017/10/20/new-nra-ad-warns-trump-opponents-will-perish-political-flames-their-own-fires/218283">perish in the political flames of their own fires</a>.” Even more provocative is the portrayal of the NRA’s declared adversaries, framed not as political foes, but as ideological and even existential threats. The Women’s March is labeled “<a href="https://twitter.com/NRATV/status/884490532168151040">a bigoted, fake feminist, jihad-supporting</a>” movement, while Black Lives Matter is described as “<a href="https://twitter.com/NRATV/status/884505310022385666">a dangerous, hateful, destructive ideology</a>.” </p>
<p>The dystopian picture that NRA TV portrays includes government officials <a href="https://www.facebook.com/NRATV/videos/10155145665227898/">encouraging violent protests</a> against conservative groups, and a media-sponsored “<a href="https://twitter.com/NRATV/status/889933589437009921">war on cops</a>.” The NRA believes it must be <a href="https://twitter.com/NRATV/status/882731146550820864">ready to defend</a> itself and the country against these and other forces.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/208326/original/file-20180228-36683-iegxan.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/208326/original/file-20180228-36683-iegxan.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/208326/original/file-20180228-36683-iegxan.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=493&fit=crop&dpr=1 600w, https://images.theconversation.com/files/208326/original/file-20180228-36683-iegxan.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=493&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/208326/original/file-20180228-36683-iegxan.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=493&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/208326/original/file-20180228-36683-iegxan.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=619&fit=crop&dpr=1 754w, https://images.theconversation.com/files/208326/original/file-20180228-36683-iegxan.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=619&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/208326/original/file-20180228-36683-iegxan.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=619&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">In addition to publishing its own material, NRA TV also retweets others’ hostile messages.</span>
<span class="attribution"><a class="source" href="https://twitter.com/BreitbartNews/status/881491671237767168">Screenshot from Twitter.com</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>In <a href="https://twitter.com/NRATV/status/895358165335789568">a video that streamed</a> to NRA TV’s 260,000 Twitter followers in August 2017, host Grant Stinchfield asked his audience, </p>
<blockquote>
<p>“What scares me more than the North Korean crazed tyrant? The violent left and the crazed liberals who lead them. They like North Korea also pose a clear and present danger to America … Make no mistake, the lying leftist media, the elitist cringe-worthy celebrities, and the anti-American politicians – who make up the violent left – don’t just hate President Trump, they hate you.”</p>
</blockquote>
<p>The insinuation that left-wing forces are out to destroy the country by sabotaging its institutions is a demagogic refrain with echoes of the <a href="http://www.lib.berkeley.edu/MRC/murrowmccarthy.html">anti-communist McCarthy era</a>. But it is particularly unsettling when it emanates from a lobby that simultaneously promotes the necessity of <a href="https://twitter.com/NRATV/status/889598211492532224">gun ownership</a>. Which brings us back to Amazon.</p>
<h2>Pulling the plug</h2>
<p>After another shooting at an American high school at the hands of a <a href="http://time.com/5160267/gun-used-florida-school-shooting-ar-15/">19-year-old with an AR-15</a>, the gun-control advocacy movement has turned its attention to its chief opponent, the NRA. The strategy is to dislodge the <a href="https://www.cnn.com/2018/02/23/politics/nra-political-money-clout/index.html">influence of the NRA</a> by going after its support system. That has led activists to Amazon, Apple, <a href="https://www.cbsnews.com/news/nra-tv-roku-rejects-calls-to-cancel-channel/">Roku</a> and other services that stream NRA TV content. While other companies support the NRA financially, these internet giants provide perhaps a more valuable currency in their prominent platforms that allow the NRA to distribute its message. </p>
<p><a href="https://momsdemandaction.org/">Moms Demand Action for Gun Sense in America</a> is one organization leading the charge for internet companies to drop NRA TV, citing its “<a href="https://momsdemandaction.org/moms-demand-action-everytown-launch-dumpnratv-campaign-calling-on-google-amazon-apple-atts-directv-and-roku-to-stop-streaming-nratv/">violence-inciting programming</a>.” The group is joined by some of the survivors of the Parkland shooting, such as David Hogg, who is <a href="https://twitter.com/davidhogg111/status/968129989085421569">encouraging people to boycott tech companies</a> that carry NRA TV. A <a href="https://www.change.org/p/jeff-bezos-remove-nratv-from-amazon-s-streaming-service-website">petition on Change.org</a>, with 240,000 signatures as of March 1, is simultaneously calling on Amazon CEO Jeff Bezos to purge NRA content from his site’s offerings. And on Twitter, <a href="https://twitter.com/hashtag/dropNRATV">#dropNRATV</a> is gaining steam, even as the channel continues to host controversial content.</p>
<p>The growing wave of consumer activists has effectively placed the internet’s biggest gatekeepers in the middle of America’s hyperpolarized gun debate. As web hosts, their power to amplify or quiet controversial messages is unmatched in the modern media landscape. But in many ways, this is not strictly a gun issue. Rather, a closer look at NRA TV suggests that this is also an issue of community standards, which are well within a web host’s domain. </p>
<p>And in recent months, YouTube and Twitter have each demonstrated a willingness to <a href="https://www.salon.com/2017/12/18/twitter-is-starting-to-purge-its-alt-right-users/">enforce stricter terms of service</a> prohibiting hateful, dangerous or abusive material from their networks. So the real question that these internet companies now face is whether an NRA tirade about American liberals posing a “<a href="https://twitter.com/NRATV/status/895358165335789568">clear and present danger</a>” is legitimate gun advocacy, or barefaced incitement.</p>
<p class="fine-print"><em><span>Adam G. Klein does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p>Gun control advocates want to shut down the National Rifle Association’s online video channel, NRA TV. A scholar looks at what its videos are actually about.</p>
<p>Adam G. Klein, Assistant Professor of Communication Studies, Pace University. Licensed as Creative Commons – attribution, no derivatives.</p>
tag:theconversation.com,2011:article/91024 2018-02-05T14:21:30Z
Explainer: how Facebook has become the world’s largest echo chamber
<figure><img src="https://images.theconversation.com/files/204616/original/file-20180202-162082-1nk3qoi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Is there an echo here?</span> <span class="attribution"><span class="source">Reuters/Benoit Tessier</span></span></figcaption></figure><p>I began my research career in the last century with an analysis of how news organisations were adapting to this strange new thing called “the Internet”. Five years later I signed up for Twitter and, a year after that, for Facebook. </p>
<p>Now, as it celebrates its 14th birthday, Facebook is becoming ubiquitous, and its usage and impact <a href="https://herts.academia.edu/MeganKnight">are central</a> to my (and many others’) research. </p>
<p>In 2017 the social network had <a href="https://www.statista.com/statistics/241552/share-of-global-population-using-facebook-by-region">2 billion members</a>, by its own count. Facebook’s relationship with news content is an important part of this ubiquity. Since 2008 the company has courted news organisations with features like “Connect”, “Share” and “Instant Articles”. As of 2017, 48% of Americans <a href="http://www.journalism.org/2017/09/07/news-use-across-social-media-platforms-2017/">rely primarily</a> on Facebook for news and current affairs information. </p>
<p>Social networks present news content in a way that’s integrated into the flow of personal and other communication. Media scholar <a href="http://alfredhermida.com/research/projects/">Alfred Hermida</a> calls this “<a href="http://alfredhermida.com/2010/05/03/ambient-journalism-paper-published/">ambient news</a>”. It’s a trend that has been considered promising for the development of civil society. Social media – like the Internet before it – has been hailed as the new “public sphere”: a place for civic discourse and political engagement among the citizenry. </p>
<p>But, unlike the Internet, Facebook is not a public space in which all content is equal. It is a private company. It controls what content you see, according to algorithms and commercial interests. The new public sphere is, in fact, privately owned, and this has far-reaching implications for civil society worldwide. </p>
<p>When a single company is acting as the broker for news and current affairs content for a majority of the population, the possibility for abuse is rife. Facebook is not seen as a “news organisation”, so it falls outside of whatever regulations countries apply to “the news”. And its content is provided by myriad third parties, often with little oversight and tracking by countries’ authorities. So civil society’s ability to address concerns about Facebook’s content becomes even more constrained.</p>
<h2>Getting to know all about you</h2>
<p>Facebook’s primary goal is to sell advertising. It does so by knowing as much as possible about its users, then selling that information to advertisers. The provision of content to entice consumers to look at advertising is not new: it’s the entire basis of the commercial media. </p>
<p>But where newspapers can only target broad demographic groups based on language, location and, to an extent, education level and income, Facebook can narrow its target market down to individual level. How? Based on demographics – and everything your “likes”, posts and comments have told it.</p>
<p>This ability to fine tune content to subsets of the audience is not limited to advertising. Everything on your Facebook feed is curated and presented to you by an algorithm seeking to maximise your engagement by only showing you things that it thinks you will like and respond to. The more you engage and respond, the better the algorithm gets at predicting what you will like.</p>
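The loop described above can be sketched in a few lines of code. Everything below (the `ToyFeedRanker` class, the topic labels, the 1.5/0.7 update weights) is a hypothetical illustration of the general mechanism, not Facebook's actual system:

```python
from collections import defaultdict

class ToyFeedRanker:
    """A toy engagement-maximising ranker (hypothetical, for illustration only)."""

    def __init__(self):
        # Learned affinity for each topic; every topic starts neutral at 1.0.
        self.affinity = defaultdict(lambda: 1.0)

    def rank(self, posts):
        # Surface first whatever the model predicts the user will engage with most.
        return sorted(posts, key=lambda p: self.affinity[p["topic"]], reverse=True)

    def record(self, post, engaged):
        # The feedback loop: engagement boosts a topic, being ignored dampens it.
        self.affinity[post["topic"]] *= 1.5 if engaged else 0.7

ranker = ToyFeedRanker()
ranker.record({"topic": "politics"}, engaged=True)   # user engaged with politics
ranker.record({"topic": "sport"}, engaged=False)     # user scrolled past sport
posts = [{"topic": "sport"}, {"topic": "politics"}, {"topic": "cooking"}]
feed = ranker.rank(posts)
print([p["topic"] for p in feed])  # ['politics', 'cooking', 'sport']
```

After just two interactions, the toy model already pushes the engaged-with topic to the top and the ignored one to the bottom; each further interaction sharpens the ordering.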
<p>When it comes to news content and discussion of the news, this means you will increasingly only see material that’s in line with your stated interests. More and more, too, news items, advertisements and posts by friends are blurred in the interface. This all merges into a single stream of information. </p>
<p>And because of the way your network is structured, the nature of that information becomes ever more narrow. It is inherent in the ideals of democracy that people be exposed to a <a href="http://www.expo98.msu.edu/innerindex.html?ideas">plurality of ideas</a>; that the public sphere should be open to all. The loss of this plurality creates a society made up of extremes, with little hope for consensus or bridging of ideas. </p>
<h2>An echo chamber</h2>
<p>Most people’s “friends” on Facebook tend to be people with whom they have some real-life connection – actual friends, classmates, neighbours and family members. Functionally, this means that most of your network will consist largely of people who share your broad demographic profile: education level, income, location, ethnic and cultural background and age. </p>
<p>The algorithm knows who in this network you are most likely to engage with, which further narrows the field to people whose worldview aligns with your own. You may be Facebook friends with your Uncle Fred, whose political outbursts threaten the tranquillity of every family get-together. But if you ignore his conspiracy-themed posts and don’t engage, they will start to disappear from your feed. </p>
<p>Over time this means that your feed gets narrower and narrower. It shows less and less content that you might disagree with or find distasteful.</p>
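As a hypothetical illustration of this narrowing, the simulation below applies a made-up multiplicative feedback rule (topics the user engages with are weighted up, ignored ones down) and tracks what share of a ten-item feed agrees with the user's existing views over successive sessions:

```python
def simulate_feed(sessions=50, feed_size=10):
    """Toy simulation of feedback-driven feed narrowing (hypothetical weights)."""
    # Relative weight of "agreeable" vs "disagreeable" content shown to the user.
    weights = {"agree": 1.0, "disagree": 1.0}
    shares = []
    for _ in range(sessions):
        total = weights["agree"] + weights["disagree"]
        agree_shown = round(feed_size * weights["agree"] / total)
        shares.append(agree_shown / feed_size)
        # The user engages with agreeable items and ignores the rest,
        # so the next session leans further toward what they already like.
        weights["agree"] *= 1.1
        weights["disagree"] *= 0.9
    return shares

shares = simulate_feed()
print(f"agreeable share: session 1 = {shares[0]:.0%}, session 50 = {shares[-1]:.0%}")
```

Even with these mild made-up multipliers, a feed that starts evenly split ends up showing essentially nothing the user disagrees with.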
<p>These two responses, engaging and ignoring, are both driven by the invisible hand of the algorithm. And they have created an echo chamber. This isn’t dissimilar to what news organisations have been trying to do for some time: <a href="http://journals.sagepub.com/doi/abs/10.1177/107769906704400301">gatekeeping</a> is the expression of the journalists’ idea of what the audience wants to read. </p>
<p>Traditional journalists had to rely on their instinct for what people would be interested in. Technology now makes it possible to know exactly what people read, responded to, or shared. </p>
<p>For Facebook, this process is now run by a computer: an algorithm that reacts instantly to provide the content it thinks you want. But this fine-tuned and carefully managed algorithm is open to manipulation, especially by political and social interests.</p>
<h2>Extreme views confirmed</h2>
<p>In the last few years Facebook users have unwittingly become part of a massive social experiment – one which may have contributed to the surprise <a href="https://www.theguardian.com/technology/2017/oct/26/cambridge-analytica-used-data-from-facebook-and-politico-to-help-trump">election of Donald Trump</a> as president of the US and to the UK’s <a href="https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy">vote to leave</a> the European Union. We can’t be sure of this, since Facebook’s content algorithm is secret and most of the content is shown only to specific users. </p>
<p>It’s physically impossible for a researcher to see all of the content distributed on Facebook; the company explicitly prevents that kind of access. Researchers and journalists need to construct model accounts (fake ones, violating Facebook’s terms of use) and attempt to trick the algorithm into showing what the social network’s most extreme political users see.</p>
<p>What they’ve <a href="https://medium.com/@richgor/why-every-american-should-look-at-blue-feed-red-feed-and-why-the-nation-needs-someone-to-build-f455ef17a0f2">found</a> is that the <a href="https://arxiv.org/pdf/1509.00189.pdf">more extreme the views</a> the user has already agreed with, the more extreme the content they saw was. People who liked or expressed support for leaving the EU were shown content that reflected this desire, but in a more extreme way. </p>
<p>If they liked that, they’d be shown even more content, and so on, with the group getting smaller and more insular at each step. This is similar to how extremist groups have long identified and courted potential members, enticing them with ever more radical ideas and watching their reaction. That sort of personal interaction was a slow process; Facebook’s algorithm now works at lightning speed, and the pace of radicalisation is dramatically accelerated.</p>
<p class="fine-print"><em><span>Megan Knight does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p>More and more, news items, adverts and posts by friends are blurred in Facebook’s interface. This all merges into a single stream of information.</p>
<p>Megan Knight, Associate Dean, University of Hertfordshire. Licensed as Creative Commons – attribution, no derivatives.</p>
tag:theconversation.com,2011:article/86909 2017-11-14T02:42:44Z
How social media fires people’s passions – and builds extremist divisions
<figure><img src="https://images.theconversation.com/files/194018/original/file-20171109-13337-wt1fzf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Passionate feelings can lead to extreme divisions.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/quarrel-between-woman-man-screaming-each-645677146">pathdoc/Shutterstock.com</a></span></figcaption></figure><p>The people of the United States continue to learn how polarized and divided the nation has become. In one study released in late October by the Pew Research Center, Americans were found to have <a href="http://www.pewresearch.org/fact-tank/2017/10/23/in-polarized-era-fewer-americans-hold-a-mix-of-conservative-and-liberal-views">become increasingly partisan</a> in their views. On issues as diverse as health care, immigration, race and sexuality, Americans today hold more extreme and more divergent views than they did a decade ago. The reason for this dramatic shift is a device owned by <a href="http://techlatino.org/2017/01/pew-u-s-smartphone-ownership-broadband-penetration-reached-record-levels-in-2016/">more than three out of every four Americans</a>. </p>
<figure><img src="http://assets.pewresearch.org/wp-content/uploads/sites/12/2014/06/polarization505px_30fps.gif"><figcaption><span class="caption">Americans’ political beliefs have become increasingly polarized. <a href="http://www.pewresearch.org/fact-tank/2014/06/12/7-things-to-know-about-polarization-in-america/">Pew Research Center</a></span></figcaption></figure>
<p>As social media has emerged over the last two decades, I have been studying how <a href="https://doi.org/10.1177/0276146708325382">it changes innovation</a>, and researching the effects of internet communications on consumer opinions and <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2259683">marketing</a>. I developed <a href="https://en.wikipedia.org/wiki/Netnography">netnography</a>, one of the most widely used qualitative research techniques for understanding how people behave on social media. And I have used that method to better understand a variety of challenging problems that face not only businesses but governments and society at large.</p>
<p>What I have found has shaken up some of the most firmly held ideas that marketers had about consumers – such as how <a href="https://doi.org/10.1016/S0263-2373(99)00004-3">internet interest groups</a> can drive online purchasing and the power of stories, utopian messages and moral lessons to <a href="https://doi.org/10.1509/jmkg.67.3.19.18657">connect buyers with brands</a> and each other. In one of my latest studies, my co-authors and I debunk the idea that technology might <a href="https://www.gsb.stanford.edu/insights/how-digital-age-rewrites-rule-book-consumer-behavior">make consumers more rational</a> and price-conscious. Instead, we found that smartphones and web applications were increasing people’s passions while also <a href="https://doi.org/10.1093/jcr/ucw061">driving them to polarizing extremes</a>. </p>
<h2>How social media divides people</h2>
<p>When people express themselves through social media, they communicate collectively. Rachel Ashman, Tony Patterson and I studied sharing of images of food in an intensive three-year ethnographic and netnographic study of a variety of online and physical sites. We collected and analyzed thousands of pictures, conducted 17 personal interviews and set up a dedicated research webpage where dozens of people shared their “food porn” stories. </p>
<p>Our results indicate that people share images of food for a number of reasons, including the desire to nurture others with photos of home-cooked food, to express belonging to certain interest groups like vegans or paleos, or to compete about, for example, who could make the most decadent dessert. But this sharing can become competitive, pushing participants to one-up each other, sharing images of food that look less and less like what regular people eat every day. </p>
<p>Here is how it works. Many people start by sharing food images only with people they know well. But once they broaden out to a wider group on social media, several unexpected and startling things begin to happen. First, they find sites where they can feel comfortable expressing their opinions to a like-minded “audience.” </p>
<p>This audience creates a community-type feeling, expressing respect and belonging for certain kinds of messages and outrage or contempt for others. Communications innovators in social media communities often also create new language forms, such as the frustrated guys in men’s-rights-oriented social media forums on Reddit bringing new life to the 19th-century word “<a href="https://qz.com/1092037/the-alt-right-is-creating-its-own-dialect-heres-a-complete-guide/">hypergamy</a>,” or young people creating sophisticated emoji codes in their <a href="https://www.wired.com/2016/08/how-teens-use-social-media/">relationship texting</a>. </p>
<p>Through language and example, community members educate one another. They reinforce each others’ thinking and communication. Members of social media communities direct raw emotions into particular interests. For example, a general fear about job security might become channeled through the feedback loops on Facebook into an <a href="http://www.pe.com/2017/09/15/immigration-talk-was-often-heated-but-social-media-experiment-proves-we-can-talk-to-one-another/">interest in immigrant jobs</a> and immigration policy.</p>
<p>Those feedback loops have even more sensational effects. People use social media to communicate their need for things like money, attention, security and prestige. But once those people become a part of a social media platform, our research reveals how they start to look for wider audiences. Those audiences show their interest and approval by liking, sharing and commenting. And those mechanisms drive future social media behavior.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/193619/original/file-20171107-1041-54fii7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/193619/original/file-20171107-1041-54fii7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/193619/original/file-20171107-1041-54fii7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=750&fit=crop&dpr=1 600w, https://images.theconversation.com/files/193619/original/file-20171107-1041-54fii7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=750&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/193619/original/file-20171107-1041-54fii7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=750&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/193619/original/file-20171107-1041-54fii7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=942&fit=crop&dpr=1 754w, https://images.theconversation.com/files/193619/original/file-20171107-1041-54fii7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=942&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/193619/original/file-20171107-1041-54fii7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=942&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A monstrous example of ‘food porn.’</span>
<span class="attribution"><a class="source" href="http://www.bitypic.com/media/1634833723450822346_1243351468">Priyan Shailesh Parab</a></span>
</figcaption>
</figure>
<p>In our study of food image sharing, we wondered why the most popular food porn images depicted massive hamburgers that were impossible to eat, dripping with bacon grease, gummy worms and sparklers. Or super pizza that contained tacos, macaroni and cheese and fried chicken. The answer was that the algorithms that drive participation and attention-getting in social media, the addictive “gamification” aspects such as likes and shares, invariably favored the odd and unusual. When someone wanted to broaden out beyond his or her immediate social networks, one of the most effective ways to achieve mass appeal turned out to be by turning to the extreme. </p>
<p>Taking an existing norm in the community (massive burgers, say) and expanding upon it almost guaranteed a poster a few hundred likes, a dozen supportive comments and 15 minutes of social media glory. As each user tried to top the outrageous image of the user coming before, the extremes of food porn ratcheted toward ever more sensational towering burgers and cakes. Desire for what was once the extremes began to seem normal. And the ends separated farther from the few who remained in the middle.</p>
<h2>The extreme state of the world</h2>
<p>In our research, we suggested that the exact same mechanisms are at work in general society. As the <a href="http://www.pewresearch.org/fact-tank/2017/10/23/in-polarized-era-fewer-americans-hold-a-mix-of-conservative-and-liberal-views/">Pew research</a> revealed, American beliefs have become more partisan and more extreme. Religious beliefs are more fundamentalist. Political figures around the world are more polarized. Language is more crude. </p>
<p>Although the divided state of Americans is a bellwether for some of these unwelcome developments, the phenomenon seems to be global. A recent <a href="http://mashable.com/2017/10/24/facebook-social-media-rohingya-muslim-myanmar-fake-news/">Mashable article</a> blamed social media for fueling the horrific ethnic cleansing of the <a href="https://theconversation.com/the-history-of-the-persecution-of-myanmars-rohingya-84040">Rohingya Muslims in Myanmar</a>, a country where Facebook viewed on mobile devices has become for many people the sole source of news. Hate speech on social media has been a major and growing problem in Europe and <a href="http://www.worldpolicy.org/blog/2015/04/21/addressing-hate-speech-african-digital-media">Africa</a> for several years now. Around the world, social media is feeding strong partisan talk with attention. Moderation and a balanced approach to ideas and discourse seem to be fading away.</p>
<p>The fault for these developments lies, at least in part, in people’s consumption of technology. Even without foreign interference, our research demonstrates that social media is built for polarization and extremes. The basic engagement mechanisms of popular social media sites like Facebook drive people to think and communicate in ever more extreme ways.</p>
<p>As people experience how these technological and social changes play out online, they will have to figure out how to adapt and change their behaviors – or risk becoming increasingly divided and driven to extremes.</p>
<p class="fine-print"><em><span>Robert Kozinets does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p>The way people use social media – and the algorithms inside those systems – increases passions, and drives people to polarizing extremes.</p>
<p>Robert Kozinets, Hufschmid Chair of Strategic Public Relations, USC Annenberg School for Communication and Journalism. Licensed as Creative Commons – attribution, no derivatives.</p>
tag:theconversation.com,2011:article/82657 2017-08-21T09:07:31Z
Islamic State’s Twitter network is decimated, but other extremists face much less disruption
<figure><img src="https://images.theconversation.com/files/182566/original/file-20170818-7972-c811z3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">IS supporters are turning away from Twitter. </span> <span class="attribution"><span class="source">via shutterstock.com</span></span></figcaption></figure><p>Alongside the fierce battles that have been raging of late in key strongholds of so-called Islamic State (IS) in <a href="http://www.aljazeera.com/news/2017/08/lebanese-army-begins-offensive-isil-border-170819035851822.html">Syria</a> and <a href="https://theconversation.com/liberating-mosul-is-one-thing-next-comes-the-real-challenge-for-iraq-75425">Iraq</a>, intense battles against Islamist extremism have also been taking place online, particularly on social media platforms including Twitter, Facebook and YouTube. </p>
<p>In our recent <a href="http://www.voxpol.eu/download/vox-pol_publication/DCUJ5528-Disrupting-DAESH-1706-WEB-v2.pdf">research</a> we looked more closely at the fight against IS on Twitter. While we found the majority of IS accounts were being quickly suspended, accounts linked to other extremist groups were not. </p>
<p>The use of social media by a diversity of violent extremists and terrorists and their supporters has been a matter of concern for law enforcement and politicians for some time. In the aftermath of the London Bridge attack in June 2017, the British prime minister, Theresa May, <a href="https://www.ft.com/content/0ae646c6-4911-11e7-a3f4-c742b9791d43?mhq5j=e3">reiterated her warning</a> to online companies, including Twitter and Facebook, to eradicate extremist “safe spaces”.</p>
<p>As one of IS’s preferred social media spaces, Twitter has been subject to particular scrutiny. The company <a href="https://www.cbsnews.com/news/twitter-announces-it-has-suspended-235000-terror-linked-accounts/">maintains</a> that its strategies for disrupting pro-IS content and accounts have become more effective in recent years. From mid-2015 through to January 2016, Twitter claimed to have suspended in the region of <a href="https://blog.twitter.com/official/en_us/a/2016/combating-violent-extremism.html">15,000 to 18,000</a> IS-supportive accounts per month. This disruption activity ramped up considerably from mid-February to mid-July 2016, according to Twitter, increasing to an average of <a href="https://blog.twitter.com/official/en_us/a/2016/an-update-on-our-efforts-to-combat-violent-extremism.html">40,000</a> pro-IS account suspensions per month.</p>
<h2>Disrupting Daesh</h2>
<p>In our research, undertaken in February to April 2017, we sought to determine how effective Twitter’s disruption strategies actually are. Disruption refers here to the take down of content and the suspension of accounts. </p>
<p>Our dataset consisted of 722 pro-IS accounts. Some were identified based on their avatar or carousel image containing explicitly pro-IS imagery or text. Other accounts were included if they had at least one recent tweet (not just a retweet) by the user containing explicitly pro-IS content, such as referring to IS as <em>Dawlah</em>, meaning state, or to their fighters as “lions”. The same parameters were used to categorise non-IS jihadist accounts. We excluded so-called “throwaway” accounts that had no followers. </p>
<p>Our <a href="http://www.taglaboratory.org/">software system</a> continuously monitored the pro-IS accounts in our dataset to ascertain if and when they were suspended. In total, 455, or 63%, of the pro-IS accounts in our dataset were suspended in just three months. However, an unknown number of accounts were disrupted with such speed and intensity that we were not able to include them in our dataset. Given this and other considerations, we estimate that the total loss of IS-supportive accounts over the period was probably greater than 90%.</p>
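The headline figures here are simple proportions over monitored accounts. As a rough sketch of the bookkeeping such a monitoring system might do (the `Account` records and numbers below are hypothetical stand-ins, not the study's data or software):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Account:
    account_id: str
    created_day: int              # day the account was first observed
    suspended_day: Optional[int]  # None if still live at the end of the study

def suspension_stats(accounts, window_days=2):
    """Return (overall suspension rate, rate suspended within `window_days` of creation)."""
    suspended = [a for a in accounts if a.suspended_day is not None]
    fast = [a for a in suspended if a.suspended_day - a.created_day <= window_days]
    n = len(accounts)
    return len(suspended) / n, len(fast) / n

# Hypothetical records; the study's real dataset held 722 pro-IS accounts,
# of which 455 (63%) were suspended over the three-month observation period.
sample = [
    Account("a1", created_day=0, suspended_day=1),
    Account("a2", created_day=0, suspended_day=30),
    Account("a3", created_day=5, suspended_day=None),
    Account("a4", created_day=5, suspended_day=6),
]
overall, within_two_days = suspension_stats(sample)
print(f"suspended: {overall:.0%}, within two days of creation: {within_two_days:.0%}")
```

The same two ratios, computed over the real dataset, yield the 63% overall figure quoted above and the within-two-days comparison discussed in the next section.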
<p>These results are in stark contrast to the findings of seminal <a href="https://www.brookings.edu/wp-content/uploads/2016/06/isis_twitter_census_berger_morgan.pdf">research</a> undertaken by JM Berger and Jonathan Morgan on IS supporter accounts in 2014. In their analysis of 20,000 such accounts between September 2014 and January 2015, they observed the suspension of just 678 accounts, a total loss of only 3.4%.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/182739/original/file-20170821-23925-iau354.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/182739/original/file-20170821-23925-iau354.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=451&fit=crop&dpr=1 600w, https://images.theconversation.com/files/182739/original/file-20170821-23925-iau354.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=451&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/182739/original/file-20170821-23925-iau354.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=451&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/182739/original/file-20170821-23925-iau354.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/182739/original/file-20170821-23925-iau354.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/182739/original/file-20170821-23925-iau354.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Blocking some accounts faster than others.</span>
<span class="attribution"><span class="source">clasesdeperiodismo/flickr</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<h2>Differential disruption</h2>
<p>But while it appears that Twitter is now severely disrupting pro-IS accounts on its platform, our research found that other jihadists were not subject to the same levels of take down. </p>
<p>For comparison, we entered a sample of 451 other jihadist accounts into our database, including those supportive of extremist groups <a href="http://www.aljazeera.com/news/2017/07/hayat-tahrir-al-sham-control-syria-idlib-170723215932668.html">Hay'at Tahrir al-Sham</a>, <a href="http://web.stanford.edu/group/mappingmilitants/cgi-bin/groups/view/523">Ahrar al-Sham</a>, the <a href="https://theconversation.com/the-war-against-the-taliban-is-still-unwinnable-77881">Taliban</a>, and <a href="https://theconversation.com/has-shabaab-been-weakened-for-good-the-answer-is-yes-and-no-67067">al-Shabaab</a>. Of these, 163 were eventually suspended. But while more than 30% of pro-IS accounts were suspended within two days of their creation, less than 1% of other jihadist accounts met the same fate. </p>
<p>Their longer life meant that non-IS jihadist Twitter accounts had the opportunity to send six times as many tweets, follow or “friend” four times as many accounts and, critically, gain 13 times as many followers as pro-IS accounts. </p>
<p>We believe that our pro-IS Twitter account dataset is as close as possible – taking into account some caveats detailed in our report – to a full dataset of explicitly IS-supportive accounts with at least one follower for the period studied. The dataset of other jihadist accounts, on the other hand, in no way reflects the true number of these accounts on Twitter.</p>
<h2>Wider considerations</h2>
<p>Our research shows that Twitter is no longer a conducive space for IS supporters. Twitter’s aggressive pro-IS account take-down activity means that the once vibrant and extensive IS Twitter network is now almost non-existent. </p>
<p>The almost exclusive focus on IS’s Twitter activity by researchers and others means that the online activity, including the Twitter activity of non-IS jihadis, the extreme right, and others, has largely gone under the radar. Professionals working in this area such as content moderators, law enforcement officers and researchers need to pay more careful attention to this activity going forward. This should include a focus on the diversity of other platforms besides Twitter being used by the whole range of other violent extremists and terrorists. </p>
<p>The migration of the pro-IS social media community from Twitter to the messaging service Telegram particularly bears watching. Telegram currently has a lower profile than Twitter, with a smaller user base and higher barriers to entry: users must provide a mobile phone number to create an account. While this means that fewer people are being exposed to IS’s online content via Telegram, and so fewer are in a position to be radicalised by it, it may also mean that Telegram’s pro-IS community is more committed, and therefore poses a greater security risk, than its Twitter variant.</p>
<p class="fine-print"><em><span>Maura Conway is the Coordinator of the European Union Framework Programme 7-funded VOX-Pol project. The reported research also received funding from the UK Home Office. </span></em></p><p class="fine-print"><em><span>Suraj Lakhani does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p>New research has tracked how quickly Twitter accounts linked to extremism are being suspended.</p>
<p>Suraj Lakhani, Lecturer in Criminology & Sociology, University of Sussex. Maura Conway, Professor of International Security, Dublin City University. Licensed as Creative Commons – attribution, no derivatives.</p>
tag:theconversation.com,2011:article/79682 2017-06-19T13:09:22Z
Finsbury Park attack shows the harm Islamophobia continues to inflict on Muslim communities
<p>Following the attack on a group of Muslim worshippers in <a href="http://www.bbc.co.uk/news/live/uk-40323279">Finsbury Park</a> that left one person dead and 11 injured, Londoners have once again demonstrated their strength and unity in the face of violence. </p>
<p>Neil Basu, deputy assistant commissioner for the Metropolitan Police and senior national coordinator for counter terrorism, <a href="http://news.met.police.uk/news/incident-in-seven-sisters-road-247036">commended the response of the Muslim community</a>, who stopped the man suspected of carrying out the attack before turning him over to police. One of those who apprehended him <a href="http://www.bbc.co.uk/news/av/uk-england-london-40323698/finsbury-park-attack-he-was-shouting-i-want-to-kill-all-muslims">told</a> the BBC that the individual had said he “wanted to kill Muslims”. </p>
<p>The Finsbury Park attack occurred just after the <a href="https://www.washingtonpost.com/local/fairfax-loudoun-police-searching-for-missing-17-year-old-reported-to-have-been-assaulted/2017/06/18/02e379ac-5466-11e7-a204-ad706461fa4f_story.html?utm_term=.d0fd7c0f10f5">murder</a> of a teenage Muslim girl in Virginia and an attempted <a href="https://www.thelocal.se/20170614/man-with-alleged-nazi-links-admits-driving-his-car-into-refugee-demonstration-in-malmo-sweden">vehicular attack on Iraqi migrants in Sweden</a>. There is little doubt that this incident targeted the Muslim community, and while we should not speculate about what exactly motivated this violence, such an incident demands that we reflect on the harm that Islamophobia can cause.</p>
<h2>Heightened tensions</h2>
<p>The attack took place near Finsbury Park Mosque and the Muslim Welfare House on Seven Sisters Road, north London. Finsbury Park Mosque is infamous because the violent extremist Abu Hamza preached there before his arrest in 2004. Since then, under new leadership, the mosque and its leaders have made outstanding contributions to the local community, which have been <a href="http://www.islingtongazette.co.uk/news/finsbury-park-first-mosque-to-win-prestigious-national-award-1-3836208">recognised nationally</a>. Despite this recognition, parts of the press continue to demonise the mosque. </p>
<p>On the night of the attack, Mail Online <a href="https://twitter.com/IlhanNur/status/876607979163987973/photo/1?ref_src=twsrc%5Etfw&ref_url=https%3A%2F%2Fmic.com%2Farticles%2F180248%2Fdaily-mail-other-media-outlets-criticized-for-victim-blaming-muslims-for-finsbury-park-attack">referenced</a> Hamza – who was <a href="https://theconversation.com/abu-hamza-sentenced-to-life-in-prison-after-years-of-abusing-the-limits-of-free-speech-36087?sr=1">sentenced</a> to life in prison in the US in 2015 – in its headline for a report on the attack.</p>
<p>As a researcher on Islamophobia, I have had the opportunity to speak with members of the mosque’s leadership a few times. I recall a conversation I had with Mohammed Kozbar, then chairman of the mosque, in 2012 about a <a href="http://www.islamophobiawatch.co.uk/pigs-head-attack-on-finsbury-park-mosque/">pig’s head</a> left on the gate to the mosque in 2010 and a hoax anthrax <a href="http://bioprepwatch.com/stories/510507971-anthrax-hoax-at-london-mosque">threat</a> sent to the mosque in 2011. He told me then that the community was feeling vulnerable and fearful. He reminded me as well that the media rarely, if ever, reported on the positive contributions made by members of the mosque.</p>
<p>In 2015, in a <a href="https://www.tellmamauk.org/wp-content/uploads/pdf/tell_mama_2015_annual_report.pdf">report</a> for Tell MAMA (the UK’s primary watchdog for anti-Muslim hate) and the Metropolitan Police, I identified a cluster of nine anti-Muslim hate crimes and incidents targeting the mosque. The misplaced association of the congregation with violent extremism continues to make the site a target for hate. In this sense, it should sadly come as no surprise that Finsbury Park has been targeted once again.</p>
<p>Islamophobia and anti-Muslim hatred have demonstrably increased year on year. This is evident in police data that I have reviewed from <a href="https://tellmamauk.org/anti-muslim-hate-crimes-2012-2014-in-london-an-analysis-of-the-situation/">2012 to 2014</a> and in <a href="https://tellmamauk.org/category/reports/">reports by Tell MAMA</a> that include data from victims, charities, and police forces across the country. Between May 2013 and September 2016, 100 mosques <a href="https://tellmamauk.org/over-100-mosques-targeted-and-attacked-since-may-2013/">were targeted</a> and attacked. </p>
<p>Spikes of hate tend to follow attacks perpetrated by Muslims in the UK and abroad. These dynamics are evident in research on the attacks in Paris in 2015. The three atrocities that claimed lives in Westminster, Manchester, and London Bridge have led to a <a href="https://www.theguardian.com/uk-news/2017/jun/07/anti-muslim-hate-crimes-increase-fivefold-since-london-bridge-attacks">major increase in anti-Muslim hate</a> based both on police evidence and <a href="https://www.theguardian.com/uk-news/2017/may/24/muslim-leaders-in-manchester-report-rise-in-islamophobic-incidents">reports</a> from Muslim communities. </p>
<p>These spikes are not localised and they affect Muslim communities across the country. In this sense, the way that Muslims are framed in reporting on terrorism directly harms communities by putting them in the cross-hairs of lone criminals, angry citizens, and extreme right-wing terrorists.</p>
<h2>Anti-Muslim hate plays a role</h2>
<p>This attack, as the Metropolitan Police were quick to note, has all the hallmarks of a terrorist incident, and it is being <a href="http://news.met.police.uk/news/incident-in-seven-sisters-road-247036">investigated as such</a>. More details about the attacker’s motivation are likely to emerge as the investigation continues. </p>
<p>There is a blurry line between hate crime and terrorism, and it is difficult to establish a causal link between far-right extremism and any particular attack.</p>
<p>What is clear, however, is that irresponsible sensationalism and the growth of Islamophobia inspire fear, anxiety, and hate towards Muslims. A <a href="http://www.parliament.uk/business/committees/committees-a-z/commons-select/home-affairs-committee/inquiries/parliament-2015/inquiry7/">report published in May</a> by the Home Affairs Select Committee showed that social media is an important medium for sharing and distributing these sentiments. Platforms such as Facebook and Twitter provide an environment in which cliques of users normalise and legitimise anti-Muslim ideologies.</p>
<p>It is important that the Finsbury Park investigation examines whether the attacker was influenced by extreme right-wing opinions disseminated online. However, it is also crucial to ask whether this individual was influenced by the press when he selected Muslims in the Finsbury Park area as his target.</p>
<p>Whether or not this incident is considered a terrorist attack should not distract us from the bigger problem: the failure of politicians and the media to effectively counter Islamophobia has caused Muslims to become targets of violence on their way home from prayer.</p>
<p class="fine-print"><em><span>Bharath Ganesh's research at the Oxford Internet Institute receives funding for analysing the dynamics of online extremism as part of the VOX-Pol Network of Excellence, funded by the European Union. He was previously Senior Researcher at Tell MAMA.</span></em></p>
<p class="fine-print"><em>Bharath Ganesh, Researcher, Oxford Internet Institute, University of Oxford. Licensed as Creative Commons – attribution, no derivatives.</em></p>