tag:theconversation.com,2011:/fr/topics/cyber-racism-9413/articlesCyber racism – The Conversation2020-06-25T13:10:00Ztag:theconversation.com,2011:article/1409972020-06-25T13:10:00Z2020-06-25T13:10:00ZSocial media helps reveal people’s racist views – so why don’t tech firms do more to stop hate speech?<figure><img src="https://images.theconversation.com/files/344037/original/file-20200625-33519-qkuujm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/woman-covering-head-hands-fear-terrible-1172364829">GoodStudio/Shutterstock</a></span></figcaption></figure><p><em>This article contains examples of racist, Islamophobic and threatening language</em></p>
<p>Twitter has <a href="https://www.theguardian.com/media/2020/jun/19/katie-hopkins-permanently-removed-from-twitter">finally permanently removed</a> right-wing commentator Katie Hopkins from its platform for violating its “<a href="https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy#:%7E:text=Hateful%20conduct%3A%20You%20may%20not,%2C%20disability%2C%20or%20serious%20disease.">hateful conduct</a>” policy. Many would ask why it took so long for Twitter to ban someone with such a <a href="https://www.bbc.co.uk/news/entertainment-arts-40061888">long record</a> of offensive comments.</p>
<p>Yet for every right-winger like Hopkins, there are many more people on social media who don’t command such a large following and might be seen in some respects as ordinary people, but who are in fact equally dangerous. They may not share the motivation of the far right, but they still express and incite racial and religious hatred, often through social creativity and online manipulation.</p>
<p>As Black Lives Matter continues to draw attention to racism – and trigger pushback from people using social media to express sentiments against people of colour – it’s time internet companies did more to tackle all forms of bigotry. </p>
<p>A few years ago, I <a href="https://onlinelibrary.wiley.com/doi/10.1002/1944-2866.POI364">conducted research</a> on online Islamophobia following the <a href="https://www.bbc.co.uk/news/uk-22644857">2013 Woolwich terror attack</a>, identifying <a href="https://www.wired.co.uk/article/anti-muslim-twitter-trolls-study">eight types</a> of offender on Twitter who could be classed as racist. Most were not members of a far-right group. They included builders, plumbers, teachers and even local councillors. But many used the cover of social media to spread their own conspiracy theories and an “us and them” narrative.</p>
<p>Some people who fall into these categories still make very explicitly bigoted and even threatening comments. They can be people like Rhodenne Chand, a man of Indian origin who wasn’t a member of any far-right group but <a href="https://www.bbc.co.uk/news/uk-england-birmingham-44608876">was jailed</a> for posting a series of Islamophobic tweets after the 2017 Manchester Arena attack. These included the claim he wanted to “slit a Muslim throat”.</p>
<p>Meanwhile, rugby-playing student <a href="https://www.bbc.co.uk/news/uk-wales-18149852">Liam Stacey</a> was jailed for making racist tweets about footballer Fabrice Muamba. Both these cases show that you don’t have to be a far-right neo-Nazi with a hatred for all things multicultural to make bigoted and indeed criminal statements. You can simply be someone who buys into the racist views and fake news spread by social media.</p>
<h2>Joining in</h2>
<p>You also don’t need to be this obviously racist to enact or encourage prejudiced behaviour online. My research showed some people simply join in with conversations targeting vulnerable figures. Others post messages that don’t say anything specifically racist but that they know will inflame racial tensions. </p>
<p>For example, I encountered a post asking: “What is your typical British breakfast?”. Out of context it seems harmless yet it led to a spiral of hateful comments about Muslims:</p>
<blockquote>
<p>For every sausage eaten or rasher of bacon we should chop of a Muslims head [sic].</p>
<p>Muslims are not human.</p>
<p>One day we will get you scum out.</p>
<p>Muslim men are pigs … I am all for annihilation of all Muslims.</p>
</blockquote>
<p>In this way, social media acts as an amplifying <a href="https://policy.bristoluniversitypress.co.uk/islamophobia">echo chamber</a> for such hateful rhetoric and racist views. It makes the way some people imagine the world seem more real. And it reinforces how they see the internet as a place where it’s acceptable to post comments with racially motivated language, often with the caveat that they are not racist but simply hate an ideology.</p>
<p>This can be seen as a form of <a href="https://www.routledge.com/Islamophobia-in-Cyberspace-Hate-Crimes-Go-Viral/Awan/p/book/9781472458094">social creativity</a> where people shape their online behaviour to try to position their in-group (a social group with which they identify) as dominant in society. Another phrase I would use to describe them is the “virtual cyber mob”.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/344039/original/file-20200625-33538-91cck9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/344039/original/file-20200625-33538-91cck9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/344039/original/file-20200625-33538-91cck9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/344039/original/file-20200625-33538-91cck9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/344039/original/file-20200625-33538-91cck9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/344039/original/file-20200625-33538-91cck9.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/344039/original/file-20200625-33538-91cck9.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">You don’t have to be a stereotypical neo-Nazi to make bigoted comments online.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/diverse-black-white-people-sitting-row-1064028812">fizkes/Shutterstock</a></span>
</figcaption>
</figure>
<p>As I’ve continued to research social media over the years, I’ve seen how such behaviour has become normalised even as its focus has changed. <a href="https://www.gov.uk/government/publications/extremism-online-analysis-of-extremist-material-on-social-media">In 2019</a>, I led an independent government-commissioned research project to see how people were using social platforms to spread racist views. This time we noted many posts centred around the media, fake news and conspiracy theories.</p>
<p>As part of the study, we collected hundreds of tweets posted in response to the 2019 <a href="https://www.bbc.com/news/world-asia-47578798">terrorist attack</a> against a mosque in Christchurch, New Zealand. Many portrayed it in terms of media bias against victims of other attacks. For example:</p>
<blockquote>
<p>A few dead muslims compared to millions of slaughtered innocents at the hands of islamic barbarians. #islamisevil #NewZealandTerroristAttack [sic]</p>
<p>Let us not forget the thousands upon thousands of victims killed by the real ‘terrorists’, propagating the Islamic ideology. #AntiIslamic #IslamIsEvil #EndIslam #Muslims</p>
</blockquote>
<p>It’s important to recognise that these comments on social media reflect wider attitudes that are endemic in the offline world. Social media can appear to act as a megaphone for racists, but these opinions are much more mainstream than you think. As a society we need to grapple with how these ideas have become normalised, and challenge and expose them.</p>
<p>Social media companies including <a href="https://www.theguardian.com/world/2018/mar/14/facebook-bans-britain-first-and-its-leaders">Facebook</a>, <a href="https://www.bbc.co.uk/news/technology-43572168">Twitter</a> and now <a href="https://english.alaraby.co.uk/english/news/2020/4/25/tiktok-bans-tommy-robinson-over-hate-speech-violations">TikTok</a> have taken active steps to block and remove those people clearly linked with the far right. But this is only a starting point. More needs to be done to identify other individuals who are less obviously spreading hatred, often under the protection of anonymity. Only then can we try to effectively change attitudes and reduce social media’s significant capacity for harm.</p>
<p class="fine-print"><em><span>Imran Awan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The likes of Katie Hopkins may be slowly disappearing from social media sites but less extreme and obvious racism is still widespread.Imran Awan, Professor of Criminology, Birmingham City UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1383892020-06-09T22:53:08Z2020-06-09T22:53:08ZZoom-bombings disrupt online events with racist and misogynist attacks<figure><img src="https://images.theconversation.com/files/340493/original/file-20200609-165349-xwloeg.jpg?ixlib=rb-1.1.0&rect=58%2C0%2C6530%2C4350&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The zoom-bombing of online meetings, classes and social events reflect a disturbing trend.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>As COVID-19 circulated the globe in March, reports emerged of another new, viral threat: “Zoom-bombing.” </p>
<p>The term derives from photo-bombing, which is defined as appearing “<a href="https://dictionary.cambridge.org/dictionary/english/photobomb">behind or in front of someone when their photograph is being taken, usually doing something silly as a joke</a>.” However, for many Zoom online meeting hosts, participants and computing infrastructure managers, Zoom-bombing was no joke. </p>
<p>The cancellation of in-person school and university classes prompted a stock market surge for Zoom, along with considerable scrutiny of <a href="https://www.vice.com/en_us/article/k7e599/zoom-ios-app-sends-data-to-facebook-even-if-you-dont-have-a-facebook-account">the video conferencing company’s startlingly weak privacy and security protocols</a>. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/JEESnmEudkE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">CBC News: The National takes a look at Zoom-bombing.</span></figcaption>
</figure>
<p>And yet, it has been the Zoom-bomb — the interruption of Zoom meetings — that has led to <a href="https://techcrunch.com/2020/03/17/zoombombing/">considerable news media attention since mid-March</a>. While the company sought to <a href="https://blog.zoom.us/wordpress/2020/03/20/keep-uninvited-guests-out-of-your-zoom-ev">communicate best practices to prevent Zoom-bombing</a>, it continued to proliferate, leading users and shareholders alike to <a href="https://campaigns.organizefor.org/petitions/demand-that-zoom-immediately-create-a-solution-to-protect-its-users-from-racist-cyber-attacks">organize an online petition</a> and <a href="https://www.classaction.org/media/johnston-v-zoom-video-communications-inc.pdf">threaten class-action lawsuits</a>.</p>
<p>Zoom-bombing gradually began to subside after the FBI issued a statement on March 30, <a href="https://www.fbi.gov/contact-us/field-offices/boston/news/press-releases/fbi-warns-of-teleconferencing-and-online-classroom-hijacking-during-covid-19-pandemic">characterizing it as a cybercrime that should be reported to law enforcement agencies</a>.</p>
<h2>Disrupting targets</h2>
<p>Given the fear, disruption and anxiety produced by the COVID-19 pandemic, the intentional disruption of online work and education raises some obvious questions. </p>
<p>First, what would motivate someone to cause such a disruption during an unprecedented global pandemic? Is this the work of isolated individuals or a coherent co-ordinated campaign, targeting democratic institutions and processes? What is the goal of such disruptions, and who has been targeted? </p>
<p>Our team of researchers at Ryerson University’s <a href="https://www.disinformnet.ca/">Infoscape Research Lab</a> set out to answer these questions by studying three popular social media platforms: Twitter, Reddit and YouTube. We anticipated that Zoom-bombing would take on different characteristics on each of these platforms, since each is designed to facilitate a different form of communication. </p>
<p>At the outset of our research, we employed digital humanities methods to track the language associated with Zoom-bombing on each of the platforms. Tracking keywords enabled our research to cast a wide net and collect as much user-generated content as possible related to Zoom-bombing on the three platforms.</p>
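<p>The exact collection pipeline isn’t reproduced in this article, but keyword tracking of this kind can be sketched as a simple case-insensitive filter. The keyword list below is hypothetical, assembled only from terms mentioned in this piece:</p>

```python
import re

# Hypothetical tracking terms; the study's full keyword list is not published here
KEYWORDS = ["zoombomb", "zoom-bomb", "zoom bomb", "zoompranks", "onlineclassraid"]
PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

def matches_tracking_terms(post_text: str) -> bool:
    """Return True if the post contains any tracked Zoom-bombing term."""
    return bool(PATTERN.search(post_text))

posts = [
    "Anyone up for a ZoomBomb of first period?",
    "Tips to secure your online class",
]
collected = [p for p in posts if matches_tracking_terms(p)]
```

<p>Matching on substrings (so “Zoombombing” also matches the tracked term “zoombomb”) is what lets such a filter cast a wide net, at the cost of some false positives.</p>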
<h2>Broader concerns</h2>
<p>From April 3-28, 2020, our study analyzed a random sample of 1,000 tweets that contained Zoom-bomb related terms. Over half of the tweets sought to organize and co-ordinate Zoom-bombing, often by sharing Zoom access codes, or posted information and advice on how to avoid such online disruptions. </p>
<p>Tweets often reflected broader social concerns over continuing work during COVID-19, the security of online meetings and the emerging challenges to online learning. A significant percentage of tweets (15.9 per cent) specifically named online targets, including Holocaust memorials, Asian community groups, Alcoholics Anonymous meetings and various religious services. </p>
<p>Some tweets (9.4 per cent) featured students sharing Zoom access ID codes for their own classes to target specific teachers. We found that Twitter users also used the occasion to post and comment on humorous content related to the Zoom-bombing phenomenon. </p>
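<p>The sampling step described above amounts to a simple random draw without replacement. A minimal sketch over a synthetic corpus (the real tweet dataset is not reproduced here):</p>

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Synthetic stand-in for the corpus of collected Zoom-bomb-related tweets
corpus = [f"tweet {i}" for i in range(25_000)]

# Draw a simple random sample of 1,000 tweets without replacement,
# mirroring the sampling step the study describes
SAMPLE_SIZE = 1_000
sample = random.sample(corpus, SAMPLE_SIZE)

print(f"Sampled {len(sample)} of {len(corpus)} tweets")
```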
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=569&fit=crop&dpr=1 600w, https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=569&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=569&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=715&fit=crop&dpr=1 754w, https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=715&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=715&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A chart showing the content breakdown posted on Twitter.</span>
<span class="attribution"><span class="source">(Greg Elmer, Anthony Glyn Burton, Stephen Neville)</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>In contrast, the prevalence of popular keywords on Reddit such as “zoompranks” and “OnlineClassRaid” suggested that the platform was largely being used as a staging ground for the Zoom-bombing of online classes and meetings. An analysis of 300 random posts from Zoom-bombing subreddits (online discussion groups dedicated to specific topics) confirmed our suspicions.</p>
<p>Nearly 70 per cent of all posts served to co-ordinate Zoom-bombing, either by sharing practical advice, Zoom meeting ID access codes or other logistical information. If we include posts that offer short affective outbursts or reactions to Zoom-bombing, then this figure approaches 90 per cent of all posts. </p>
<p>By contrast, a mere 1.3 per cent of posts admonished those in the subreddits for launching such attacks. While the vast majority of Reddit posts sought to facilitate Zoom-bombing, we also found a small percentage (6.4 per cent) that targeted particular groups, including an LGBTQ social meeting and a breastfeeding support class. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=530&fit=crop&dpr=1 600w, https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=530&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=530&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=666&fit=crop&dpr=1 754w, https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=666&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=666&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A chart showing the content breakdown posted on Reddit.</span>
<span class="attribution"><span class="source">(Greg Elmer, Anthony Glyn Burton, Stephen Neville)</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>Of all the platforms we studied, YouTube offered the most jarring view of Zoom-bombing. This highlighted the popular and controversial role that the platform played in offering videos of the “funniest moments” in Zoom-bombing.</p>
<p>Following a similar method to our study of Reddit and Twitter, we analyzed a sample of 60 of the most viewed videos on YouTube. The large majority of these videos (85 per cent) were roughly 10-minute compilations of multiple clips of Zoom-bombs, many of which were initially shared on TikTok, a popular video-sharing platform. </p>
<p>The remaining videos, also compilations, include commentaries throughout by YouTube “influencers,” individuals with large numbers of online followers. While the viewer sees these micro-celebrities laugh throughout the video, Zoom-bomb disruptions are hard to watch: many Zoom meeting hosts and participants were confused, irritated or shocked by the actions and words of Zoom-bombers. Teachers of younger children looked traumatized. </p>
<p>Seventy-two per cent of our sample videos included mob-like interruptions, with multiple voices, screams, profanities and other sounds occurring at the same time. Most troubling, however, was the objectionable language and images used by Zoom-bombers.</p>
<p>While we found that a few Zoom-bombs included light-hearted pranks that bemused some Zoom meeting participants, nearly 87 per cent of YouTube compilations also contained racist, misogynist, homophobic and other objectionable content. Much of this content was directed against female teachers in Zoom classroom meetings. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=569&fit=crop&dpr=1 600w, https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=569&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=569&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=715&fit=crop&dpr=1 754w, https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=715&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=715&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A chart showing the content breakdown posted on YouTube.</span>
<span class="attribution"><span class="source">(Greg Elmer, Anthony Glyn Burton, Stephen Neville)</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<h2>Viral threats</h2>
<p>We can draw a number of conclusions based on our research to date. </p>
<p>Zoom’s weak security and the rapid transition to online learning created an environment ripe for disruption and abuse by computer-savvy, overwhelmingly male high school students. </p>
<p>Zoom-bombing should remind us of the technological divide between the highly skilled and creative generations that live much of their lives online and older generations that struggle with platform settings, protocols and practices. </p>
<p>But such a generational divide should <a href="https://www.nytimes.com/2020/03/20/style/zoombombing-zoom-trolling.html">not mask the most troubling aspects of Zoom-bombing</a>: the intentional disruption of important work and the abusive targeting of women and people of colour. </p>
<p>Such toxic practices of course pre-exist <a href="https://www.vox.com/culture/2020/1/20/20808875/gamergate-lessons-cultural-impact-changes-harassment-laws">internet videoconferencing</a> and will unfortunately persist long after the end of Zoom-bombing. We may all be experiencing the pandemic together, but Zoom-bombing has also reminded us that viral threats require social solutions.</p>
<p class="fine-print"><em><span>Greg Elmer receives funding from Heritage Canada, SSHRC and the Bell Media Research Chair, Ryerson University. </span></em></p><p class="fine-print"><em><span>Anthony Glyn Burton receives funding from the Social Sciences and Humanities Research Council of Canada. </span></em></p><p class="fine-print"><em><span>Stephen J. Neville receives funding from the Social Sciences and Humanities Research Council of Canada.</span></em></p>Zoom-bombing disrupts people’s use of the Zoom platform for work, study and socializing. Zoom-bombing events have included racist and misogynist attacks on users.Greg Elmer, Professor, Professional Communication, Toronto Metropolitan UniversityAnthony Glyn Burton, Master's student, Joseph-Armand Bombardier SSHRC Scholar, Toronto Metropolitan UniversityStephen J. Neville, PhD Student of Communication & Culture, York University, CanadaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1353112020-04-03T05:03:43Z2020-04-03T05:03:43Z‘Zoombombers’ want to troll your online meetings. Here’s how to stop them<figure><img src="https://images.theconversation.com/files/324883/original/file-20200402-74904-xa7m2f.jpg?ixlib=rb-1.1.0&rect=14%2C28%2C1876%2C1043&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">StanWilliams/Pixabay</span>, <span class="license">Author provided</span></span></figcaption></figure><p>“<a href="https://www.nytimes.com/2020/03/20/style/zoombombing-zoom-trolling.html">Zoombombing</a>”, in case you haven’t heard, is the unsavoury practice of posting distressing comments, pictures or videos after gatecrashing virtual meetings hosted by the videoconferencing app <a href="https://zoom.us/">Zoom</a>. </p>
<p>With hundreds of millions around the world now reliant on the app for work, this unfortunate trend is becoming more common, <a href="https://www.cbsnews.com/news/zoom-bombing-calls-hacked-racial-slurs-pornography/">often involving a bombardment of pornographic imagery</a>.</p>
<p>In some cases, online trolls have crashed alcohol support group meetings held via the app. “Alcohol is soooo good,” <a href="https://thehill.com/policy/technology/490467-zoom-deeply-upset-after-online-trolls-interrupt-virtual-aa-meetings">the trolls reportedly said</a> to one group of recovering alcoholics. </p>
<p>In another incident, a Massachusetts-based high school teacher conducting an <a href="https://www.fbi.gov/contact-us/field-offices/boston/news/press-releases/fbi-warns-of-teleconferencing-and-online-classroom-hijacking-during-covid-19-pandemic">online class</a> had someone enter the virtual classroom and shout profanities, before revealing the teacher’s home address. </p>
<h2>Easy targets</h2>
<p>The problem is that, by default, Zoom meetings lack password protection. Joining one simply requires a standard Zoom URL, with an automatically generated nine-digit code at the end. A Zoom URL looks something like this: https://zoom.us/j/xxxxxxxxx</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/working-from-home-risks-online-security-and-privacy-how-to-stay-protected-134599">Working from home risks online security and privacy – how to stay protected</a>
</strong>
</em>
</p>
<hr>
<p>Gatecrashers may only have to try a handful of code combinations before successfully landing a victim. The meeting’s host doesn’t need to grant permission for others to join. And while hosts can disable the screen share function, they’d have to be quick. Too slow, and the damage is done. </p>
<p>Last week, Zoom upgraded security on its <a href="https://support.zoom.us/hc/en-us/articles/360041591671-March-2020-Update-to-sharing-settings-for-Education-accounts">default settings</a>, but only for education accounts. The rest of the world needs to do this manually.</p>
<h2>Video conferencing is incredibly valuable</h2>
<p>Video conferencing technology has matured in recent years, driven by <a href="https://www.theverge.com/2020/4/1/21202584/zoom-security-privacy-issues-video-conferencing-software-coronavirus-demand-response">massive demand</a> even before COVID-19. </p>
<p>With social distancing restrictions, virtual meetings are now the norm everywhere. Platforms like Zoom, Microsoft’s Skype and <a href="https://www.uctoday.com/collaboration/video-conferencing/top-10-video-conferencing-providers-2019-whos-king-of-collaboration/">others</a> have stepped up to meet demand.</p>
<p>Zoom is a <a href="https://azure.microsoft.com/en-us/overview/what-is-cloud-computing/">cloud-based</a> service that allows users to freely talk to and share video (if bandwidth allows) with others online. Notes, images and diagrams can also be shared to collaborate on projects. And meetings can have up to <a href="https://zoom.us/pricing">hundreds, even thousands, of participants</a>.</p>
<h2>How to stop the trolls</h2>
<p>Zoom is primarily a corporate tool designed to let people collaborate without hindrance. Unlike social media platforms, it was not a service that had to engineer ways to manage the bad behaviour of users – until now.</p>
<p>In January, Zoom <a href="https://support.zoom.us/hc/en-us/articles/201361953-New-Updates-for-Windows">issued a raft of security patches</a> to fix some problems.
If you get a prompt from Zoom to install updates, you should – but only if these updates are from Zoom’s own app and website, or via updates from Google Play or Apple’s App Store. Third-party downloads may contain malware (software designed to cause harm).</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/coronavirus-could-spark-a-revolution-in-working-from-home-are-we-ready-133070">Coronavirus could spark a revolution in working from home. Are we ready?</a>
</strong>
</em>
</p>
<hr>
<p>While up-to-date software is your first line of defence, another is to keep your meeting URL away from public forums such as Twitter. Anyone with the meeting’s URL can join, after which they’re free to post comments, pictures and videos at will. If you’re hosting a meeting that gets Zoombombed, disable the “screen sharing” option as quickly as possible. </p>
<p>Another option for more security is to use the “waiting room” function. This makes people wanting to join visible to the host, but keeps them out of the main meeting until they’re allowed in. This option is turned off by default. You can enable it by signing in to your Zoom account at <em><a href="https://zoom.us/">https://zoom.us/</a></em> and clicking “Settings”. </p>
<p>Other tips:</p>
<ul>
<li><p>ensure screen sharing is possible for the host only</p></li>
<li><p>turn off the function that allows file transfer</p></li>
<li><p>turn off the “allow removed participants to rejoin” setting</p></li>
<li><p>turn off the “join before host” setting</p></li>
<li><p>turn on the “require a password” setting for meetings.</p></li>
</ul>
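<p>The checklist above can be turned into a quick audit of a meeting’s configuration. The setting names here are illustrative placeholders, not Zoom’s actual configuration keys:</p>

```python
# Recommended values from the checklist; the key names are hypothetical
RECOMMENDED = {
    "screen_share_host_only": True,
    "file_transfer_enabled": False,
    "allow_removed_to_rejoin": False,
    "join_before_host": False,
    "password_required": True,
}

def audit_settings(settings: dict) -> list:
    """Return the names of settings that deviate from the recommendations."""
    return [name for name, want in RECOMMENDED.items()
            if settings.get(name) != want]

risky = {
    "screen_share_host_only": False,  # anyone can share their screen
    "file_transfer_enabled": False,
    "allow_removed_to_rejoin": False,
    "join_before_host": True,         # meeting can start without the host
    "password_required": True,
}
print(audit_settings(risky))  # flags the two deviating settings
```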
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/XhZW3iyXV9U?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">This video explains the ins and outs of setting up a safe Zoom session.</span></figcaption>
</figure>
<h2>Who are the trolls?</h2>
<p>With many Zoombombing attacks targeting educational institutions, it’s likely a large number of these trolls are simply mischievous students who obtain meeting URLs from other students or chatrooms. </p>
<p>But Zoombombing is by no means restricted to the classroom. With the world in lockdown, extremists of all kinds are finding ways to relieve their confinement frustration. We’ve known for some time that being able to operate anonymously on the web <a href="https://www.psychologicalscience.org/observer/who-is-that-the-study-of-anonymity-and-behavior">does not bring out the best in people</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dark-web-not-dark-alley-why-drug-sellers-see-the-internet-as-a-lucrative-safe-haven-132579">Dark web, not dark alley: why drug sellers see the internet as a lucrative safe haven</a>
</strong>
</em>
</p>
<hr>
<p>At present, it doesn’t appear Zoombombing is an organised criminal activity. That said, it’s probably only a matter of time before someone finds a way to leverage financial reward from the practice. This could take the form of business intelligence gleaned from listening in to the meetings of rivals and competitors, in a similar fashion to planting a “bug” in the room. </p>
<p>Similarly, we could see a black market for Zoom URLs emerge among professional hackers, who would have new incentives to hack various systems to obtain valuable URLs. </p>
<p>Cybersecurity experts, privacy advocates, lawmakers and law enforcement are all <a href="https://www.theverge.com/2020/4/1/21202584/zoom-security-privacy-issues-video-conferencing-software-coronavirus-demand-response">concerned</a> Zoom’s default privacy settings don’t do enough to protect users from malicious actors. </p>
<h2>The bottom line</h2>
<p>As the COVID-19 pandemic pushes people around the world to work online in isolation, the technology that allows this freedom must come under close scrutiny. </p>
<p>Zoombombing is progressing from a student prank to <a href="https://www.adl.org/blog/what-is-zoombombing-and-who-is-behind-it">more serious</a> incidents of <a href="https://www.buzzfeednews.com/article/salvadorhernandez/zoom-coronavirus-racist-zoombombing">racist, sexist</a> and <a href="https://www.huffingtonpost.com.au/entry/nazi-zoombombing-jewish-yeshiva-university_n_5e84f704c5b692780506d519?ri18n=true">antisemitic</a> hate speech.</p>
<p>Fortunately, safeguards aren’t difficult to build into such videoconferencing technologies. This just requires a willingness to do so, and needs to be done as a matter of urgency.</p>
<p class="fine-print"><em><span>David Tuffley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>‘Zoombombing’ trolls have started to infiltrate virtual meetings - bombarding unsuspecting victims with racist and sexist speech and in some cases, pornographic imagery.David Tuffley, Senior Lecturer in Applied Ethics & CyberSecurity, Griffith UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1090722019-01-29T19:08:45Z2019-01-29T19:08:45ZRacism in a networked world: how groups and individuals spread racist hate online<figure><img src="https://images.theconversation.com/files/255530/original/file-20190125-108342-1xg36s8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">We could see even sharper divisions in society in the future if support for racism spreads online. </span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/QozzJpFZ2lg">Markus Spiske/Unsplash</a></span></figcaption></figure><p>Living in a networked world has many advantages. We get our news online almost as soon as it happens, we stay in touch with friends via social media, and we advance our careers through online professional networks.</p>
<p>But there is a darker side to the internet that sees far-right groups exploit these unique features to spread divisive ideas, racial hate and mistrust. Scholars of racism refer to this type of racist communication online as “cyber-racism”. </p>
<p>Even the creators of the internet are aware they may have unleashed a technology that is causing a lot of harm. Since 2017, the inventor of the World Wide Web, Tim Berners-Lee, has repeatedly warned that manipulation of the internet is fuelling the spread of hate speech, <a href="https://www.theguardian.com/technology/2018/nov/05/tim-berners-lee-launches-campaign-to-save-the-web-from-abuse">saying that</a>:</p>
<blockquote>
<p>Humanity connected by technology on the web is functioning in a dystopian way. We have online abuse, prejudice, bias, polarisation, fake news, there are lots of ways in which it is broken. </p>
</blockquote>
<p>Our team conducted a <a href="https://doi.org/10.1016/j.chb.2018.05.026">systematic review</a> of ten years of cyber-racism research to learn how different types of communicators use the internet to spread their views. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-the-use-of-emoji-on-islamophobic-facebook-pages-amplifies-racism-105285">How the use of emoji on Islamophobic Facebook pages amplifies racism</a>
</strong>
</em>
</p>
<hr>
<h2>Racist groups behave differently to individuals</h2>
<p>We found that the internet is indeed a powerful tool used to influence and reinforce divisive ideas. And it’s not only organised racist groups that take advantage of online communication; unaffiliated individuals do it too. </p>
<p>But the way groups and individuals use the internet differs in several important ways. Racist groups are active on different communication channels to individuals, and they have different goals and strategies they use to achieve them. The effects of their communication are also distinctive. </p>
<p>Individuals mostly engage in cyber-racism to hurt others, and to confirm their racist views by connecting with like-minded people (a form of “<a href="https://www.nature.com/articles/srep40391">confirmation bias</a>”). Their preferred communication channels tend to be blogs, forums, news commentary websites, gaming environments and chat rooms. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/253591/original/file-20190114-43532-gap2mr.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/253591/original/file-20190114-43532-gap2mr.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/253591/original/file-20190114-43532-gap2mr.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=487&fit=crop&dpr=1 600w, https://images.theconversation.com/files/253591/original/file-20190114-43532-gap2mr.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=487&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/253591/original/file-20190114-43532-gap2mr.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=487&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/253591/original/file-20190114-43532-gap2mr.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=612&fit=crop&dpr=1 754w, https://images.theconversation.com/files/253591/original/file-20190114-43532-gap2mr.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=612&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/253591/original/file-20190114-43532-gap2mr.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=612&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Channels, goals and strategies used by unaffiliated people when communicating cyber-racism.</span>
</figcaption>
</figure>
<p>Strategies they use include denying or minimising the issue of racism, denigrating “non-whites”, and reframing the meaning of current news stories to support their views. </p>
<p>Groups, on the other hand, prefer to communicate via their own websites. They are also more strategic in what they seek to achieve through online communication. They use websites to gather support for their group and their views <a href="https://spssi.onlinelibrary.wiley.com/doi/abs/10.1111/asap.12159">through racist propaganda</a>. </p>
<p>Racist groups manipulate information and use clever rhetoric to help build a sense of a broader “white” identity, which often goes beyond national borders. They argue that conflict between different ethnicities is unavoidable, and that what most would view as racism is in fact a natural response to the “oppression of white people”. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/253593/original/file-20190114-43517-a840km.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/253593/original/file-20190114-43517-a840km.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/253593/original/file-20190114-43517-a840km.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=416&fit=crop&dpr=1 600w, https://images.theconversation.com/files/253593/original/file-20190114-43517-a840km.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=416&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/253593/original/file-20190114-43517-a840km.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=416&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/253593/original/file-20190114-43517-a840km.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=523&fit=crop&dpr=1 754w, https://images.theconversation.com/files/253593/original/file-20190114-43517-a840km.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=523&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/253593/original/file-20190114-43517-a840km.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=523&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Channels, goals and strategies used by groups when communicating cyber-racism.</span>
</figcaption>
</figure>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-the-alt-right-uses-milk-to-promote-white-supremacy-94854">How the alt-right uses milk to promote white supremacy</a>
</strong>
</em>
</p>
<hr>
<p>Collective cyber-racism has the main effect of undermining the social cohesion of modern multicultural societies. It creates <a href="https://www.palgrave.com/gp/book/9783319643878">division, mistrust and intergroup conflict</a>. </p>
<p>Meanwhile, individual cyber-racism seems to have a more direct effect by negatively affecting the wellbeing of targets. It also contributes to maintaining a hostile racial climate, which may further (indirectly) affect the wellbeing of targets. </p>
<h2>What they have in common</h2>
<p>Despite their differences, groups and individuals both share a high level of sophistication in how they communicate racism online. Our review uncovered the disturbingly creative ways in which new technologies are exploited. </p>
<p>For example, racist groups make themselves attractive to young people by providing interactive games and links to music videos <a href="https://www.humanrights.gov.au/publications/examples-racist-material-internet#1.2">on their websites</a>. And both groups and individuals are highly skilled at manipulating their public image via various narrative strategies, such as humour and the interpretation of current news to fit with their arguments. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/race-cyberbullying-and-intimate-partner-violence-79627">Race, cyberbullying and intimate partner violence</a>
</strong>
</em>
</p>
<hr>
<h2>A worrying trend</h2>
<p>Our findings suggest that if these online strategies are effective, we could see even sharper divisions in society as the mobilisation of support for racism and far-right movements spreads online. </p>
<p>There is also evidence that currently unaffiliated supporters of racism could derive strength through online communication. These individuals might use online channels to validate their beliefs and achieve a sense of belonging in virtual spaces where racist hosts provide an uncontested and hate-supporting community. </p>
<p>This is a worrying trend. We have now seen several examples of violent action perpetrated offline by isolated individuals who became radicalised through white supremacist movements – for example, <a href="https://en.wikipedia.org/wiki/Anders_Behring_Breivik">Anders Breivik</a> in Norway, and more recently <a href="https://en.wikipedia.org/wiki/Pittsburgh_synagogue_shooting">Robert Gregory Bowers</a>, the perpetrator of the Pittsburgh synagogue shooting. </p>
<p>In Australia, unlike most other liberal democracies, there are effectively no government strategies that seek to reduce this avenue for the spread of racism, despite many Australians <a href="https://theconversation.com/australians-believe-18c-protections-should-stay-73049">expressing a desire</a> that this be done.</p>
<p class="fine-print"><em><span>Ana-Maria Bliuc received funding from the Australian Research Council. </span></em></p><p class="fine-print"><em><span>Andrew Jakubowicz receives funding from the Australian research council, the Australian Human Rights Commission and VicHealth. </span></em></p><p class="fine-print"><em><span>Kevin Dunn receives funding from the Australian Research Council and Multicultural NSW.</span></em></p>Both organised groups and unaffiliated individuals spread racist hate online, but they use different channels, have different goals and use different strategies to achieve them.Ana-Maria Bliuc, Senior Lecturer in Social Psychology, Western Sydney UniversityAndrew Jakubowicz, Emeritus Professor of Sociology, University of Technology SydneyKevin Dunn, Dean of the School of Social Science and Psychology, Western Sydney UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/854482017-10-16T00:16:27Z2017-10-16T00:16:27ZHere’s how Australia can act to target racist behaviour online<figure><img src="https://images.theconversation.com/files/190229/original/file-20171014-3527-1vy4jnu.jpg?ixlib=rb-1.1.0&rect=2022%2C167%2C1404%2C1503&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Racists take advantage of social media algorithms to find people with similar beliefs. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/dislike-cloud-word-blue-sky-423666622?src=a6ECOOczQqxoigzMsGSV9g-1-2">from www.shutterstock.com </a></span></figcaption></figure><p>Although racism online feels like an insurmountable problem, there are legal and civil actions we can take right now in Australia to address it. </p>
<p>Racism expressed on social media sites provided by <a href="http://forward.com/news/world/353625/germany-investigates-mark-zuckerberg-and-facebook-over-slow-removal-of-hate/">Facebook</a> and the <a href="https://www.wired.com/2017/03/youtubes-ad-problems-finally-blow-googles-face/">Alphabet stable</a> (which includes Google and YouTube) ranges from advocacy of <a href="https://techcrunch.com/2017/08/16/hatespeech-white-supremacy-nazis-social-networks/">white power</a> and support for the <a href="https://www.technologyreview.com/the-download/608882/facebooks-anti-semitic-ad-targeting-disaster/">extermination of Jews</a> to calls for <a href="https://www.facebook.com/StopMosq/?ref=page_internal">political action against Muslim citizens</a> because of their faith. Increasingly it occurs within the now “private” pages of groups that “like” racism.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/190228/original/file-20171014-3555-1d9nzl8.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/190228/original/file-20171014-3555-1d9nzl8.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=927&fit=crop&dpr=1 600w, https://images.theconversation.com/files/190228/original/file-20171014-3555-1d9nzl8.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=927&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/190228/original/file-20171014-3555-1d9nzl8.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=927&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/190228/original/file-20171014-3555-1d9nzl8.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1165&fit=crop&dpr=1 754w, https://images.theconversation.com/files/190228/original/file-20171014-3555-1d9nzl8.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1165&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/190228/original/file-20171014-3555-1d9nzl8.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1165&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The Simon Wiesenthal Center 2017 Digital Terrorism and Hate Report card.</span>
<span class="attribution"><a class="source" href="http://www.wiesenthal.com/site/apps/nlnet/content.aspx?c=lsKWLbPJLnF&b=8776547&ct=14988437&notoc=1">Simon Wiesenthal Center</a></span>
</figcaption>
</figure>
<p>At the heart of the problem is the clash between the commercial goals of social media companies (based around creating communities, building audiences, and publishing and curating content to sell to advertisers) and the <a href="https://www.facebook.com/zuck">self-ascribed</a> ethical responsibilities of those companies to their users.</p>
<p>Although some platforms show <a href="http://www.thetimes.co.uk/article/youtube-hate-preachers-share-screens-with-household-names-kdmpmkkjk">growing awareness</a> of the need to respond more quickly to complaints, it’s a very slow process to automate.</p>
<p>Australia should focus on laws that protect internet users from overt hate, and civil actions to help balance out power relationships.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/tech-companies-can-distinguish-between-free-speech-and-hate-speech-if-they-want-to-82695">Tech companies can distinguish between free speech and hate speech if they want to</a>
</strong>
</em>
</p>
<hr>
<h2>Three actions on the legal front</h2>
<p>At the global level, Australia could withdraw its reservation to Article 4 of the <a href="http://dfat.gov.au/about-us/publications/Documents/final-cerd-report.pdf">International Convention to Eliminate All Forms of Racial Discrimination</a>. Such a move has been <a href="http://www5.austlii.edu.au/au/journals/MurUEJL/1995/6.html">flagged in the past</a>, but stymied by opposition from an alliance of free speech and social conservative activists and politicians.</p>
<p>The convention is a global agreement to outlaw racism and racial discrimination, and Article 4 committed signatories to criminalise race hate speech. Australia’s reservation reflected conservative governments’ reluctance to use the <a href="https://treaties.un.org/Pages/ViewDetails.aspx?src=IND&mtdsg_no=IV-4&chapter=4&clang=_en">criminal law</a>, similar to the <a href="https://www.theguardian.com/australia-news/2017/mar/21/turnbull-pursue-18c-changes-despite-warning-marginal-seats">civil law debate</a> over section 18C of the Racial Discrimination Act in 2016-17.</p>
<p><a href="http://www.sbs.com.au/news/article/2017/08/24/60-cent-young-australians-have-experienced-race-abuse-online">New data</a> released by the eSafety Commissioner showed young people are subjected to extensive online hate. Amongst other findings, 53% of young Muslims said they had faced harmful online content; Indigenous people and asylum seekers were also frequent targets of online hate. Perhaps this could lead governments and opposition parties to a common cause.</p>
<hr>
<p><em><strong>Read more:</strong> <a href="https://theconversation.com/australians-believe-18c-protections-should-stay-73049">Australians believe 18C protections should stay</a></em> </p>
<hr>
<p>Secondly, while Australian law has adopted the European Convention on Cyber Crime, it could move further and adopt the <a href="https://edoc.coe.int/en/cybercrime/6559-convention-on-cybercrime-protocol-on-xenophobia-and-racism.html">additional protocol</a>. This outlaws racial vilification, and the advocacy of xenophobia and racism.</p>
<p>The impact of these international agreements would be to make serious cases of racial vilification online criminal acts in Australia, and the executive employees of platforms that refused to remove them personally criminally liable. This situation has emerged in Germany where Facebook executives have been threatened with the use of such laws. Mark Zuckerberg <a href="https://www.theguardian.com/technology/2016/feb/26/mark-zuckerberg-hate-speech-germany-facebook-refugee-crisis">visited Germany</a> to pledge opposition to anti-immigrant vilification in 2016.</p>
<p>Finally, Australia could adopt a version of <a href="https://www.netsafe.org.nz/cyberbullyingandonlineharassment/">New Zealand’s approach</a> to harmful digital communication. Here, platforms are held ultimately accountable for the publication of online content that seriously offends, and users can challenge the failure of platforms to take down offensive material in the realm of race hate. Currently complaints via the Australian Human Rights Commission do elicit informal cooperation in some cases, but citizen rights are limited.</p>
<p>Taken together, these elements would mark out to providers and users of internet services that there is a shared responsibility for reasonable civility.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/190230/original/file-20171014-3505-1l48wp6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/190230/original/file-20171014-3505-1l48wp6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/190230/original/file-20171014-3505-1l48wp6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/190230/original/file-20171014-3505-1l48wp6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/190230/original/file-20171014-3505-1l48wp6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/190230/original/file-20171014-3505-1l48wp6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/190230/original/file-20171014-3505-1l48wp6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Digital platforms can allow racist behaviour to be anonymous.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/gang-teenagers-hanging-out-urban-environment-216281416?src=FMqN2k8xfszR8NsA-OGYjg-1-19">from www.shutterstock.com</a></span>
</figcaption>
</figure>
<h2>Civil strategies</h2>
<p>In addition to legal avenues, civil initiatives can empower those who are the targets of hate speech, and disempower those who are the perpetrators of race hate.</p>
<p>People who are targeted by racists need support and affirmation. This approach underpins the eSafety Commissioner’s development of a <a href="https://esafety.gov.au/">Young and Safe portal</a>, which offers stories and scenarios designed to build confidence and grow skills in young people. This is being extended to address the concerns of women and children, racism, and other forms of bullying.</p>
<p>The Online Hate Prevention Institute (<a href="http://ohpi.org.au">OHPI</a>) has become a reservoir of insights and capacities to identify and pursue perpetrators. As proposed by OHPI, a CyberLine could be created for tipping and reporting race hate speech online, for follow up and possible legal action. Such a hotline would also serve as a discussion portal on what racism looks like and what responses are appropriate.</p>
<p>Anti-racism workshops (some have already been run by the eSafety Commissioner) have aimed to push back against hate, and build structures where people can come together online. Modelling and disseminating best practice against race hate speech offers resources to wider communities that can then be replicated elsewhere.</p>
<p>The Point magazine (an online youth-centred publication for the government agency Multicultural New South Wales) reported <a href="http://www.thepointmagazine.com.au/post.php?s=2016-11-30-social-media-platforms-battle-online-haters">two major events</a> where governments sponsored industry/community collaboration to find ways forward against cyber racism.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/PZl5hqKFQhA?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">What makes a diverse Australia?</span></figcaption>
</figure>
<p>The growth of online racism marks the struggle between a dark and destructive social movement that wishes to suppress or minimise the recognition of cultural differences, confronted by an emergent social movement that <a href="http://alltogethernow.org.au/a-solution-to-racism/">treasures cultural differences and egalitarian outcomes</a> in education and wider society.</p>
<p>Advocacy organisations can play a critical role in advancing an agenda of civility and responsibility through the state, the economy and civil society. The social movements of inclusion will ultimately put pressure on the state and in the economy to ensure the <a href="https://www.welcometoaustralia.org.au/">major platforms</a> do in fact accept full responsibilities for the consequences of their actions. If a platform refuses to publish hate speech or acts to remove it when it receives valid complaints, such views remain a private matter for the individual who holds them, not a corrosive undermining of civil society.</p>
<p>We need to rebalance the equation between civil society, government and the internet industry, so that when the population confronts the industry, demonstrating it wants answers, we will begin to see responsibility emerge. </p>
<p>Governments also need to see their role as more strongly ensuring a balance between the right to a civil discourse and the profitability of platforms. Currently the Australian government seems not to accept that it has such a role, even though a number of states have <a href="http://removehatefromthedebate.com/">begun to act</a>.</p>
<hr>
<p><em>The Cyber Racism and Community Resilience Project <a href="https://www.westernsydney.edu.au/__data/assets/pdf_file/0008/1234736/CRaCR_2016_s18C-RDA-submission.pdf">CRaCR</a> explores why cyber racism has grown in Australia and globally, and what concerned communities have done and can do about it. This article summarises the recommendations CRaCR made to industry partners.</em></p>
<p class="fine-print"><em><span>This material is drawn from research for the book “Cyber Racism and Community Resilience” published by Palgrave Macmillan, written by the author and eight colleagues. This research was funded by an ARC Linkage Grant, with partners Australian Human Rights Commission, VicHealth and Federation of Ethnic Communities Councils of Australia (FECCA).</span></em></p>Racism thrives online because of a clash between the commercial goals and ethical responsibilities of social media companies. But Australia can take legal and civil actions right now to address this.Andrew Jakubowicz, Professor of Sociology, University of Technology SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/831852017-08-31T00:06:43Z2017-08-31T00:06:43ZWhat is the online equivalent of a burning cross?<figure><img src="https://images.theconversation.com/files/184062/original/file-20170830-927-1qf7sdh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Online hate isn't always as easy to spot as it might appear.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/word-hate-written-red-keyboard-buttons-328962281">Lukasz Stefanski/Shutterstock.com</a></span></figcaption></figure><p>White supremacy is woven into the tapestry of American culture, online and off – in both physical monuments and online domain names. A band of <a href="https://www.nytimes.com/2017/08/11/us/white-nationalists-rally-charlottesville-virginia.html?mcubz=0">tiki-torch-carrying white nationalists</a> gathered first online, and then at the site of a Jim Crow-era Confederate monument in Charlottesville, Virginia.</p>
<p>Addressing white supremacy is going to take much more than toppling a handful of <a href="http://www.npr.org/sections/thetwo-way/2017/08/19/544678037/duke-university-removes-robert-e-lee-statue-from-chapel-entrance">Robert E. Lee statues</a> or <a href="https://www.nytimes.com/2017/08/21/magazine/how-hate-groups-forced-online-platforms-to-reveal-their-true-nature.html">shutting down a few white nationalist websites</a>, as technology companies have started to do. We must wrestle with what freedom of speech really means, what types of speech go too far, and what kinds of limitations on speech we can endorse.</p>
<p>The First Amendment right to free speech was never meant to protect the kind of hate-filled rhetoric that summoned the mass gathering in Charlottesville, during which <a href="https://www.theguardian.com/us-news/2017/aug/13/woman-killed-at-white-supremacist-rally-in-charlottesville-named">anti-racist demonstrator Heather Heyer</a> was killed. In 2003, <a href="https://www.law.cornell.edu/supct/html/01-1107.ZS.html">the Supreme Court ruled</a>, in <a href="https://en.wikipedia.org/wiki/Virginia_v._Black">Virginia v. Black</a>, that “cross burning done with the intent to intimidate has a long and pernicious history as a signal of impending violence.” In other words, there’s no First Amendment protection because a burning cross is meant to intimidate, not start a dialogue. But what constitutes a burning cross in the digital era?</p>
<h2>Stormfront, the epicenter of hate online</h2>
<p>I’ve been researching white supremacists for more than 20 years, and that work has spanned both sides of the digital revolution. In the 1990s, I explored their movement through printed newsletters culled from the Klanwatch archive at the <a href="https://www.splcenter.org/">Southern Poverty Law Center</a>. As the web grew, my research shifted to the way these groups and their ideas moved onto the internet. My studies have included two white supremacist websites, one decommissioned and the other still active – Stormfront and martinlutherking.org. One is widely viewed as having run afoul of free speech protections; the other, at least as disturbing, has not yet been seen that way.</p>
<p>The Stormfront website, the online progenitor of (as its tagline touted) “white pride worldwide,” launched in 1995. Over more than two decades, Stormfront amassed more than <a href="http://mashable.com/2017/08/28/stormfront-white-supremacist-site-down/">300,000 registered users</a> and offered a haven for hate online. Since 2009, there have been nearly <a href="https://www.splcenter.org/fighting-hate/extremist-files/group/stormfront">100 homicides</a> attributable to registered members of the site, prompting the Southern Poverty Law Center to call it “the <a href="https://www.splcenter.org/20140401/white-homicide-worldwide">murder capital</a> of the internet.” </p>
<p>All that time it was largely ignored by the tech companies that effectively allowed it to exist, by selling server space and offering domain name registration.</p>
<p>From July 2017, the <a href="https://lawyerscommittee.org/mission/">Lawyers’ Committee for Civil Rights Under Law</a>, a civil rights nonprofit founded at the <a href="https://news.google.com/newspapers?id=zFtYAAAAIBAJ&sjid=YPoDAAAAIBAJ&pg=6608,1876816&dq=lawyers+committee+for+civil+rights+under+law&hl=en">suggestion of President John F. Kennedy</a>, had been trying to focus tech companies’ attention on the violent and hateful content on Stormfront. The argument the committee and its allies made was that “Stormfront crossed the line of permissible speech and incited and promoted violence,” <a href="https://www.theguardian.com/technology/2017/aug/29/stormfront-neo-nazi-hate-site-murder-internet-pulled-offline-web-com-civil-rights-action">its executive director told the Guardian</a>. </p>
<p>In the wake of the violence in Charlottesville, that effort gained significant traction, ultimately chasing Stormfront off the internet. First, there was a move to <a href="https://arstechnica.com/tech-policy/2017/08/racist-daily-stormer-goes-down-again-as-cloudflare-drops-support/">boot The Daily Stormer</a>, a different white supremacist site, offline. Then, Network Solutions responded to the Lawyers’ Committee’s requests and <a href="https://techcrunch.com/2017/08/28/another-neo-nazi-site-stormfront-is-shut-down/">revoked Stormfront’s domain name</a>. Without an active domain name, ordinary web users can’t access the site, even though the content still remains on Stormfront’s servers. </p>
<p>(The sites have not been completely silenced: Some of their content is accessible to <a href="https://techcrunch.com/2017/08/24/daily-stormer-has-officially-retreated-to-the-dark-web/">people using the Tor Network</a>, and some <a href="http://www.yesmagazine.org/people-power/what-happens-when-the-internet-tries-to-silence-white-supremacy-20170828">is being posted on the social networking site Gab</a>, which supporters are then distributing on larger social media sites like Twitter and Facebook.)</p>
<p>With its decades-long trail of destruction, Stormfront is certainly a digital-era version of a cross burning. That makes it a soft target for fighting white supremacy online: Of course we should hold its hosting companies accountable and demand that its advocacy of white supremacist terror and violence be taken offline.</p>
<p>But more ominous in some ways, and more difficult to address, are what are called “<a href="https://doi.org/10.1177/1461444809105345">cloaked sites</a>,” those that conceal their authorship to disguise a political agenda – a precursor to today’s “fake news” sites.</p>
<h2>Looking for Dr. King</h2>
<p>At first glance, the martinlutherking.org website appears to be a clumsy tribute to the civil rights leadership of <a href="https://www.nobelprize.org/nobel_prizes/peace/laureates/1964/king-bio.html">Rev. Dr. Martin Luther King Jr</a>. “It looks, you know, just like an individual created it,” said one of the young people <a href="https://rowman.com/ISBN/9780742561588/Cyber-Racism-White-Supremacy-Online-and-the-New-Attack-on-Civil-Rights">I interviewed</a> about their impressions of the site. Only at the very bottom of the page – where most people would never see it – does the page reveal its true source: “Hosted by Stormfront.” </p>
<p>Don Black, an <a href="https://www.splcenter.org/fighting-hate/extremist-files/individual/don-black">ideologically committed white supremacist</a>, <a href="http://www.huffingtonpost.com/keith-thomson/white-supremacist-site-ma_b_809755.html">launched this cloaked site in 1999</a>, a few years after he started Stormfront, and it has been online continuously since then. As of August 30, 2017, the site <a href="https://web.archive.org/web/20170830180836/http://martinlutherking.org/">remained online</a>.</p>
<p>The site’s invitation to “Join the MLK Discussion Forum” might seem innocuous, but the discussion is not only about King himself or racial justice in America. The topics in the forum read like excerpts from the <a href="https://www.nytimes.com/2014/11/16/magazine/what-an-uncensored-letter-to-mlk-reveals.html">FBI’s efforts</a> to <a href="http://www.salon.com/2000/01/24/mlk/">defame King</a>, alleging communism, plagiarism and sexual infidelity. The site is an attempt to undermine hard-won legal, political, social and moral <a href="https://nmaahc.si.edu/explore/initiatives/civil-rights-history-project">victories of the civil rights era</a>. </p>
<h2>The harm of white supremacy</h2>
<p>The fact that Stormfront is offline but martinlutherking.org isn’t suggests that we aren’t very sophisticated yet in our thinking about what kinds of risks white supremacy poses. While Stormfront is an obvious, overt threat to people’s lives, the cloaked site is a more subtle and insidious threat to the underlying moral argument for civil rights. Both are dangers to democracy. </p>
<p>White supremacy is corrosive. <a href="https://theintercept.com/2017/01/02/i-dont-think-were-free-in-america-an-interview-with-bryan-stevenson/">Bryan Stevenson</a>, a legal scholar, activist and a leading critic of our failure to address racism in the U.S., <a href="http://www.truth-out.org/progressivepicks/item/30545-bryan-stevenson-on-mass-incarceration-racial-injustice-we-all-need-mercy-we-all-need-justice">says</a> “the era of slavery created a lasting ideology of white supremacy; a doctrine of ‘otherness’ got assigned to people of color with dreadful consequences. That narrative has never seriously been confronted.” </p>
<p>What is at stake in both the fight over monuments and domain names is the same: our collective decision to perpetuate – or undo – the system of ideas that claims those in the category “white” are more deserving than everyone else of citizenship, voting, jobs, health, safety, of life itself.</p>
<p>If Americans are serious about wanting to dismantle white supremacy (and this remains an open question), then we are going to have to learn to see burning crosses in our midst, and seriously confront how this destructive set of ideas is part of the fabric of our culture. But if we want a society that respects human rights and rejects white supremacy, we can begin, in my view, by refusing to grant platforms for harmful ideas, on white nationalist websites and in monuments to the Confederacy.</p>
<p class="fine-print"><em><span>Jessie Daniels has received funding from The MacArthur Foundation, Mellon Foundation and Ford Foundation. </span></em></p>
Two websites, one taken offline, the other still active, raise hard questions about how prepared Americans are to deal with free speech about white supremacy, in both monuments and domain names.
Jessie Daniels, Professor, City University of New York
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/82566 2017-08-21T10:49:56Z 2017-08-21T10:49:56Z
Over the years, Americans have become increasingly exposed to extremism
<figure><img src="https://images.theconversation.com/files/182639/original/file-20170818-7956-1hx6owg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A man sporting a Nazi tattoo leaves Emancipation Park in Charlottesville, Virginia on Aug. 12, 2017.</span> <span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Confederate-Monuments-Protest/66f774f9480c4ff89080b0e45e958985/376/0">Steve Helber/AP Photo</a></span></figcaption></figure>
<p>Extremism has always been with us, but the internet has allowed ideas that advocate hate and violence to reach more and more people. Whether it’s the deadly “Unite the Right” rally in Charlottesville or the 2015 Charleston church massacre, it’s important to understand the internet and social media’s role in spreading extremism – and what can possibly be done to prevent these views from leading to actual violence.</p>
<p>For six years, I’ve been director of the Center for Peace Studies and Violence Prevention at Virginia Tech, which researches the causes and consequences of violence in society. While I’ve been studying extremist ideologies for over a decade, I’ve focused on their online forms since 2013. From our research, we’ve been able to track the growth of these views on the internet – how they’re spread, who’s being exposed to them and how they’re reinforced. </p>
<h2>The internet’s fertile landscape</h2>
<p>The First Amendment allows us to express any ideas, no matter how extreme. So how should we define extremism? In one sense, it’s like Supreme Court Justice Potter Stewart’s <a href="https://blogs.wsj.com/law/2007/09/27/the-origins-of-justice-stewarts-i-know-it-when-i-see-it/">famous quote about pornography</a> – “I know it when I see it.” </p>
<p>Extremism is generally used to describe ideologies that support terrorism, racism, xenophobia, left- or right-wing political radicalism and religious intolerance. In a way, it’s a political term describing beliefs that don’t reflect dominant social norms and that reject – either formally or informally – tolerance and the existing social order. </p>
<p>Extremist groups went online almost immediately after the internet was developed and their numbers increased dramatically after 2000, <a href="http://www.splcenter.org/get-informed/intelligence-report/browse-all-issues/2011/spring/the-year-in-hate-extremism-2010">reaching over 1,000 by 2010</a>. But the data on organized groups don’t include the sheer number of individuals who maintain websites or make extremist comments on social media platforms. </p>
<p>As the number of sites spewing hate has grown, so has the number of people seeing their messages, with younger people particularly vulnerable. The percentage of people between the ages of 15 and 21 who <a href="https://www.researchgate.net/publication/319142925_Status_Relations_and_the_Changing_Face_of_Extremism_in_the_United_States_Since_1960">saw online extremist messages increased</a> from 58.3 percent in 2013 to 70.2 percent in 2016. While extremism comes in many forms, the growth of racist propaganda has been especially pronounced since 2008: Nearly two-thirds of those who saw extremist messages online said they involved attacking or demeaning a racial minority.</p>
<h2>Bubbles of hate</h2>
<p>In recent years, the proliferation of social media – which gives users the ability to reach millions instantaneously – <a href="https://books.google.com/books?id=P46O984ATb8C&printsec=frontcover&dq=viral+hate&hl=en&sa=X&ved=0ahUKEwjXiJHI1OHVAhWK0YMKHcnsAC0Q6AEIJjAA#v=onepage&q=viral%20hate&f=false">has made it easier to spread extreme views</a>.</p>
<p>But it is in more subtle ways that our online experiences may amplify extremism. It’s now common practice for social networking sites <a href="https://books.google.com/books?hl=en&lr=&id=wcalrOI1YbQC&oi=fnd&pg=PT26&dq=praiser+filter+bubble&ots=I2a4vnKCKv&sig=i1cYW4mXiEODsGJJgaBiP42zQbs#v=onepage&q&f=false">to collect the personal information of users</a>, with search engines and news sites using algorithms to learn about our interests, wants, desires and needs – all of which influences what we see on our screens. This process can create <a href="https://vtechworks.lib.vt.edu/bitstream/handle/10919/46864/rfs_Hawdon_2012.pdf?sequence=1&isAllowed=y">filter bubbles that reinforce our preexisting beliefs</a>, while information that challenges our assumptions or points to alternative perspectives rarely appears. </p>
<p>Every time someone opens a hate group’s website, reads its blogs, adds its members as Facebook friends or views its videos, the individual becomes enmeshed in a network of like-minded people espousing an extreme ideology. In the end, this process can harden worldviews that people become comfortable spreading. </p>
<p>Unfortunately, this seems to be happening. When we began our research in 2013, only 7 percent of respondents admitted to producing online material that others would likely interpret as hateful or extreme. Now, <a href="https://www.researchgate.net/publication/319142925_Status_Relations_and_the_Changing_Face_of_Extremism_in_the_United_States_Since_1960">nearly 16 percent of respondents report producing such materials</a>. </p>
<p>While most people who express extremist ideas do not call for violence, many do. In 2015, <a href="http://www.sciencedirect.com/science/article/pii/S0747563216303600/pdfft?md5=2b2d70c36340a4123775583198e0946d&pid=1-s2.0-S0747563216303600-main.pdf">about 20 percent</a> of the messages people saw online openly called for violence against the targeted group; this number <a href="https://www.researchgate.net/publication/319142925_Status_Relations_and_the_Changing_Face_of_Extremism_in_the_United_States_Since_1960">nearly doubled</a> by 2016. Granted, not everyone who sees these messages will be affected by them. But given that the radicalization process <a href="http://gelfand.umd.edu/KruglanskiGelfand(2014).pdf">often begins with simply being exposed to extremism</a>, government authorities in the U.S. and around the world have been understandably <a href="https://www.state.gov/j/cve/">concerned</a>.</p>
<h2>The role of social control</h2>
<p>While all of this seems bleak, there is hope. </p>
<p>First, companies such as GoDaddy, Facebook and Reddit <a href="http://www.ibtimes.com/charlottesville-attack-facebook-reddit-google-godaddy-shut-down-hate-groups-2579027">are banning accounts associated with hate groups</a>. Perhaps more importantly – as we saw during and after Charlottesville – people are defending diversity and tolerance. <a href="http://www.lifescienceglobal.com/pms/index.php/ijcs/article/viewFile/4409/2522">Over two-thirds of our respondents</a> report that when they see someone advocating hate online, they tell the person to stop or defend the attacked group. Similarly, people are using social media to expose the identities of extremists, which is what happened <a href="https://www.washingtonpost.com/news/food/wp/2017/08/14/charlottesville-white-nationalist-demonstrator-fired-from-libertarian-hot-dog-shop/?utm_term=.9d5069fb746d">to some of those involved in the Charlottesville rally</a>. </p>
<p>Perhaps these acts of online and offline social control can convince extremists that, somewhat ironically, a tolerant society doesn’t tolerate extremist ideologies. This may create a more tolerant virtual world, and, with luck, disrupt the radicalization of the next perpetrator of hate-based violence.</p>
<p class="fine-print"><em><span>James E. Hawdon receives funding from The National Institute of Justice.
This project was supported by Award No. 2014-ZA-BX-0014, awarded by the National Institute of Justice, Office of Justice Programs, U.S. Department of Justice. The opinions, findings, and conclusions or recommendations expressed in this publication/program/exhibition are those of the author(s) and do not necessarily reflect those of the Department of Justice.</span></em></p>
Given recent events, you might have had an inkling that extremist views have been resonating. Researchers from the Center for Peace Studies and Violence Prevention have the hard data to back it up.
James E. Hawdon, Director, Center for Peace Studies and Violence Prevention, Virginia Tech
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/72919 2017-06-12T11:02:22Z 2017-06-12T11:02:22Z
Is there structural racism on the internet?
<figure><img src="https://images.theconversation.com/files/173174/original/file-20170609-4794-1e86c1w.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Do people use the internet in ways that disadvantage nonwhites?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/horizontal-vector-illustration-big-number-people-491897284">magic pictures/shutterstock.com</a></span></figcaption></figure>
<p>The racial inequalities afflicting Americans and our society today are in many ways <a href="http://www.temple.edu/tempress/titles/2089_reg.html">the result of spatial segregation</a>. White people and nonwhite people tend to live in different neighborhoods, go to different schools and have dramatically different economic opportunities based on their race. 
That physical manifestation of structural racism has been <a href="http://www.penguinrandomhouse.com/books/46131/invisible-man-by-ralph-ellison/9780679732761/">true historically in this country</a>, and is <a href="http://www.penguinrandomhouse.com/books/220290/between-the-world-and-me-by-ta-nehisi-coates/9780812993547/">still the case today</a>.</p>
<p>Today’s internet is built on a similar spatial logic. <a href="http://dx.doi.org/10.1111/soc4.12190">People travel from website to website</a> in search of content in the same way they travel from neighborhood to neighborhood looking for stuff to do and people to hang out with. <a href="http://dx.doi.org/10.1007/978-3-319-39426-8_8">Websites accrue and compound value</a> as visitor traffic and site visibility increases.</p>
<p>But there is a crucial difference: Internet users have – more or less – complete freedom to travel where they choose. Websites can’t see the color of a user’s skin and police incoming traffic in the same way human beings can and do in geographical spaces. Therefore, it’s easy to imagine that the internet’s very structure – the social environments it produces and the new economies it births – might not be racially segregated the way the physical world is.</p>
<p>And yet the internet does in fact appear to be segregated along racial lines. <a href="http://dx.doi.org/10.1080/1369118X.2016.1206137">My research</a> demonstrates that websites focusing on racial issues are visited less often, and are less visible in search result rankings, than sites with different, or broader, focuses. This phenomenon is not based on anything that individual website producers do. Rather, it appears to be a product of how users themselves find and share information online, a process mediated mostly by search engines and, increasingly, social media platforms.</p>
<h2>Exploring online racism</h2>
<p>Words like “racist” and “racism” are loaded terms, primarily because people almost always associate them with individualized moral and cognitive failures. In recent years, though, the American public has become increasingly aware that racism can apply to cultures and societies at large. </p>
<p>My work looks for online analogues of this systemic racism, in which <a href="https://www.routledge.com/Racial-Theories-in-Social-Science-A-Systemic-Racism-Critique/Elias-Feagin/p/book/9781138645226">subtle biases permeate society and culture</a> in ways that yield overwhelming advantages for whites, at the expense of nonwhites. Specifically, I am trying to determine whether the online environment, one completely constructed by humans, systematically produces advantages and disadvantages along racial lines – whether intentionally or inadvertently. </p>
<p>This is a difficult question to approach, but I begin by assuming that today’s technological systems have developed within a culture and society that is systemically and structurally racist. This makes it possible – even likely – that existing biases operate in similar ways online.</p>
<p>In addition, the historical geographical configurations that produced and perpetuated racial inequality provide a useful guide to investigating what systemic racism might look like online. The online landscape, and how people travel through it, are both important factors to understand this picture.</p>
<h2>Mapping the online landscape</h2>
<p>First, I wanted to look at the map – how the web itself is structured by website producers. I analyzed what Alexa.com characterizes as the internet’s <a href="http://www.alexa.com/topsites/category/Top/Society/Ethnicity/African/African-American">top 56 African-American sites</a> using a software program called <a href="http://uberlink.com/">Voson</a>. Voson crawls the web to identify what websites the source sites link to, and what sites link to the source sites.</p>
<p>Then I set out to determine the racial content, if any, of each of those thousands of websites, to begin measuring any inequalities that might exist in the online landscape.</p>
<p>Measuring spatial inequality offline typically involves measuring attributes of the people who live in a specific geographic location. For example, ZIP code 65035 designates a “white” neighborhood because <a href="https://factfinder.census.gov/bkmk/table/1.0/en/DEC/10_SF1/QTP3/8600000US65035">99.5 percent of the people residing there</a> (Freeburg, Missouri) are white, according to U.S. census data. By contrast, ZIP code 60619, an area in Chicago, would be considered “nonwhite,” because <a href="https://factfinder.census.gov/bkmk/table/1.0/en/DEC/10_SF1/QTP3/8600000US60619">0.7 percent of its residents are white</a>.</p>
<p>To make this type of distinction between websites, I relied on website metatags – website producers’ descriptions of the site coded to be picked up by and reflected in search engine results. I designated as “racial” websites with metatags including terms such as “african american,” “racism,” “hispanic,” “model minority” and “afro.” Sites without those terms in their metatags I designated “nonracial.” </p>
<p>By using website metatags, I was able to distinguish between racial and nonracial sites (and the segregated traffic between them) based on whether the site’s producers themselves define the site’s identity in racial terms.</p>
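The labeling step described above can be sketched in a few lines of Python. The term list mirrors the examples quoted in the text, but the site data, function name and matching rule here are hypothetical illustrations, not the study’s actual inputs or code.

```python
# Illustrative sketch of metatag-based labeling (hypothetical data and names).
RACIAL_TERMS = {"african american", "racism", "hispanic", "model minority", "afro"}

def classify_site(metatags):
    """Label a site 'racial' if any listed term appears in its metatags."""
    text = " ".join(metatags).lower()
    return "racial" if any(term in text for term in RACIAL_TERMS) else "nonracial"

# Hypothetical sites, stated purely for illustration.
sites = {
    "example-culture.org": ["african american", "history", "culture"],
    "example-news.com": ["breaking news", "politics"],
}
labels = {url: classify_site(tags) for url, tags in sites.items()}
```

The key design point is that the label comes from the producers’ own self-description (the metatags), not from any judgment the researcher makes about the site’s content.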
<h2>Understanding online navigation</h2>
<p>Once I had labeled each site as racial or nonracial, I looked at the links website producers created between them. There were three possible types of links: between two racial sites, between two nonracial sites, or between a racial site and a nonracial one.</p>
<p>The number of each type of link in the data would reveal whether bias influenced website producers’ decisions. If there were no bias, the number of links of each type would be proportional to the number of each type of site in the data set. If there were bias, the numbers would be disproportionately high or low.</p>
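The no-bias baseline described above can be sketched as a goodness-of-fit check: if a fraction p of sites is “racial,” random linking would produce racial-racial links at rate p², mixed links at rate 2p(1 − p), and nonracial-nonracial links at rate (1 − p)². The counts and the 25 percent share below are made up for illustration, and the chi-square statistic is a standard technique for this comparison, not necessarily the author’s exact method.

```python
# Sketch of comparing observed link counts to a no-bias baseline.
def expected_link_shares(p):
    """Expected link-type proportions if links were formed at random."""
    return {"racial-racial": p * p,
            "mixed": 2 * p * (1 - p),
            "nonracial-nonracial": (1 - p) * (1 - p)}

def chi_square(observed, expected_shares):
    """Goodness-of-fit statistic: sum of (O - E)^2 / E over link types."""
    total = sum(observed.values())
    return sum((observed[k] - expected_shares[k] * total) ** 2
               / (expected_shares[k] * total)
               for k in observed)

# Hypothetical counts, not the study's data: assume 25% of sites are "racial".
observed = {"racial-racial": 120, "mixed": 430, "nonracial-nonracial": 450}
stat = chi_square(observed, expected_link_shares(0.25))
```

A large statistic relative to the chi-square distribution with two degrees of freedom would indicate that link formation deviates from the random baseline; a small one would not.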
<p><iframe id="SL6kc" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/SL6kc/4/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>While I found slight differences between the ideal theoretical proportions and the actual number of links, they were not significant enough to indicate that any segregation in people’s internet behavior is caused by web producers. People who travel the web just clicking links on websites at random would not arrive at racial or nonracial sites substantially more or less than they should based on the number of such sites that exist. But people don’t just follow links; they exercise their preferences when navigating the web. </p>
<h2>Seeing segregation</h2>
<p>For my second inquiry, I wanted to find out how people actually move between websites. I looked at the same 56 sites as for the previous analysis, but this time used <a href="https://www.similarweb.com/">Similarweb</a>, a prominent web traffic metrics site. For each site, Similarweb produces data showing what websites people came from and what websites people navigated to next. I characterized those sites, too, as “racial” or “nonracial,” and identified three types of paths people took when clicking: between two racial sites, between two nonracial sites, or between a racial site and a nonracial one.</p>
<p><iframe id="UQjbw" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/UQjbw/2/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>In this analysis, the number of clicks between different types of sites would reveal whether bias influenced users’ decisions. I found significantly more clicks between nonracial sites, and fewer clicks between racial and nonracial sites. That indicates that users are going out of their way to visit nonracial sites.</p>
<h2>Capitalizing on search engines</h2>
<p>This gets us closer to the whole story when it comes to segregated traffic patterns and potential inequalities along racial lines. My data also showed that nonracial sites rank significantly higher in search results, and therefore likely enjoy greater visibility, than racial sites. The racial sites are less visible, get less traffic and therefore likely reap fewer benefits from visibility (such as advertising revenue or higher search engine rankings).</p>
<p>It might be tempting to suggest that this merely reflects user preferences. That could be true if users knew what websites they wanted to go to and navigated directly to them. But usually, users don’t. It’s <a href="https://www.emarketer.com/Article/How-Much-Search-Traffic-Actually-Comes-Googling/1011814">much more likely</a> that people type a word or phrase into a search engine like Google. In fact, direct traffic accounts for only about one-third of the traffic flow to the web’s top sites. To quote a <a href="https://www.brightedge.com/sites/default/files/Cracking%20the%20Content%20Code.pdf">conclusion from search optimization firm Brightedge</a>, “overwhelmingly, organic search trumps other traffic generators.”</p>
<p>While more research is of course necessary, my work so far suggests that in conjunction with users’ preferred choices to navigate to nonracial sites more than racial sites, search engines do something with a similar effect: Nonracial sites rank significantly higher than racial sites. That can give racial sites less traffic and less financial support in the form of advertising revenue. </p>
<p>In both of these situations, people and search engines steer traffic in ways that give advantages to nonracial websites and disadvantages to racial sites. This approximates what, in the offline world, is called systemic, structural racism.</p>
<p class="fine-print"><em><span>Charlton McIlwain does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
The physical world is racially segregated as a result of structural racism. A researcher examines whether similar problems exist online.
Charlton McIlwain, Associate Professor of Media, Culture, and Communication, New York University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/74353 2017-03-16T02:22:29Z 2017-03-16T02:22:29Z
How online hate infiltrates social media and politics
<figure><img src="https://images.theconversation.com/files/160794/original/image-20170314-10763-ytfp4l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">From person to person, the spread of online hate can be rapid.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-illustration/crowd-small-symbolic-3d-figures-network-36074200">Connections via shutterstock.com</a></span></figcaption></figure>
<p>In late February, the headline of a news commentary website that receives more than 2.8 million monthly visitors announced, “<a href="https://web.archive.org/web/20170309200734/http://www.dailystormer.com/philly-jews-destroy-another-one-of-their-own-graveyards-to-blame-trump/">Jews Destroy Another One of Their Own Graveyards to Blame Trump</a>.” The story, inspired by the <a href="https://www.washingtonpost.com/news/post-nation/wp/2017/02/26/dozens-of-headstones-vandalized-at-philadelphia-jewish-cemetery/">recent desecration of a Jewish cemetery in Philadelphia</a>, was the seething fantasy of an anti-Semitic website known as the Daily Stormer. With only a headline, this site can achieve something no hate group could have accomplished 20 years ago: It can connect with a massive audience.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/160367/original/image-20170310-19247-x8cvg1.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/160367/original/image-20170310-19247-x8cvg1.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/160367/original/image-20170310-19247-x8cvg1.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=298&fit=crop&dpr=1 600w, https://images.theconversation.com/files/160367/original/image-20170310-19247-x8cvg1.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=298&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/160367/original/image-20170310-19247-x8cvg1.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=298&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/160367/original/image-20170310-19247-x8cvg1.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=375&fit=crop&dpr=1 754w, https://images.theconversation.com/files/160367/original/image-20170310-19247-x8cvg1.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=375&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/160367/original/image-20170310-19247-x8cvg1.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=375&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Hate speech moves rapidly from the fringe to the mainstream.</span>
<span class="attribution"><a class="source" href="https://web.archive.org/web/20170309200734/http://www.dailystormer.com/philly-jews-destroy-another-one-of-their-own-graveyards-to-blame-trump/">Screenshot of DailyStormer.com</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>To whom, and how many, this latest conspiracy may travel is, in part, the story of “<a href="https://theconversation.com/us/topics/fake-news-33438">fake news</a>,” the phenomenon in which biased propaganda is disseminated as if it were objective journalism in an attempt to corrupt public opinion. My recent book on digital hate culture, “<a href="http://www.palgrave.com/us/book/9783319514239">Fanaticism, Racism, and Rage Online</a>,” explores the online underworld from which many of those false narratives originate. I investigate the lesser-known source of all this hate-laced “news” simmering in our public debates, helping to cultivate a distorted reality for its ardent believers and a fractured polity for the rest of us.</p>
<p>Looking at the most-visited websites of what were once <a href="http://www.starnewsonline.com/news/20020805/skinhead-rally-is-laughable-in-southern-town">diminished movements</a> – white supremacists, xenophobic militants and Holocaust deniers, to name a few – reveals a much-revitalized online culture. For example, according to <a href="https://www.similarweb.com/website/stormfront.org">SimilarWeb analytics</a>, Stormfront, the longest-standing white supremacist site, receives more than two million monthly visitors. That is half a million more than the <a href="http://www.naacp.org/">NAACP</a>, <a href="http://www.glaad.org/">GLAAD</a>, the <a href="http://www.adl.org/">Anti-Defamation League</a> and <a href="http://www.nclr.org/">National Council of La Raza</a> websites, combined.</p>
<p>But size and scope alone do not account for the unprecedented reach that these websites have found in the digital age. Their ascent mirrors the improbable rise of former KKK Imperial Wizard <a href="https://www.splcenter.org/fighting-hate/extremist-files/individual/david-duke">David Duke</a>, who shed his Klan robes for an eventual seat in the Louisiana House of Representatives. Today’s radical right is also remaking its profile, swapping swastikas and white-power rock for political blogs and news forums. The trappings may have changed, but the bigotry remains.</p>
<h2>Looking the part</h2>
<p>The American Renaissance hate site opens with a quote from Thomas Jefferson and an offering of timely news articles. These include borrowed headlines from The New York Times about looming deportation policies and Associated Press stories on Texas voter ID laws. But there is an ever-present fixation on nationality and race, as in original commentaries like “How I Saw the Light About Race.” Weaving together real news with racist views, the site stealthily positions the fringe ideas as aligned with the mainstream.</p>
<p>On the Occidental Observer (tagline: “White Identity, Interests, and Culture”), white nationalist contributors and a few former scholars speculate on forum topics like “The Holocaust Industry,” “Jewish Influence” and the “Racialization of America.” The Observer looks much like the homepage of any policy think tank, except for the conspiracy-driving anti-Semitic subtexts. </p>
<p>For online hate groups like this, perception is reality. The common emphasis on news and politics reflects a shift in the messages racist groups promote. Many no longer focus on white supremacy, but rather take the more accessible position of white victimization.</p>
<p>The headlines emanating from websites like the Daily Stormer allow contemporary racists to imagine they are now a minority race under siege. These narratives include an imagined onslaught of illegal immigrants, a fear of <a href="https://web.archive.org/web/20170315192111/http://www.dailystormer.com/section/race-war/">black-on-white crime</a>, an equal rights movement that somehow infringes on religious freedom and a <a href="https://web.archive.org/web/20170315192142/http://www.dailystormer.com/?s=The+Jews+behind+the+pro-Globalist+Super+Bowl+Ads">Jewish globalist machine</a> supposedly behind it all.</p>
<p>Hate rhetoric repackaged as politics and housed in websites that look just like any other online blog can attract, or even persuade, more moderate ideologues to wade into extremist waters. This “user-friendly” hate community is joining forces in a way that could never happen in the offline world. Thanks in part to this connectedness, these poisoned narratives are now spreading well beyond racist websites. </p>
<h2>How it travels</h2>
<p>The speed with which online hate travels is breathtaking. Two days after a Daily Stormer story on “Jews Destroying Their Own Graveyards,” David Duke discussed “<a href="https://web.archive.org/web/20170315192214/http://davidduke.com/dr-duke-and-andrew-anglin-expose-the-jewish-media-falso-flag-psych-war-to-stifle-criticism-of-jewish-media-domination/">the likelihood that the recent string of ‘anti-Semitic hate incidents’ are in fact false flag hoaxes</a>” on his podcast.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;835136526048714752&quot;}"></div></p>
<p>The conspiracy had also begun to echo around Twitter, where Duke was sharing a link to his podcast and spreading a new hashtag: <a href="https://twitter.com/hashtag/fakehatecrimes">#fakehatecrimes</a>. More people joined in, including followers tweeting “This is a hoax” and “Question the local rabbis.” A senior adviser to President Trump took to Twitter to advance his theory that ongoing threats to Jewish community centers could be <a href="http://www.rawstory.com/2017/02/trump-adviser-suggests-democrats-are-threatening-jewish-centers-to-make-conservatives-look-bad/">linked to the Democrats</a>.</p>
<p>This is but one example of how, despite recent efforts to <a href="http://www.usatoday.com/story/tech/news/2016/11/15/twitter-suspends-alt-right-accounts/93943194/">limit fanatical voices</a>, <a href="https://www.theatlantic.com/news/archive/2016/05/europe-hate-speech-social-media/484913/">social networks</a> have become incubators of toxic conspiracies. The topic of “hate crime hoaxes,” for example, has long been circulating through <a href="https://www.reddit.com/r/HateCrimeHoaxes/">Reddit</a>, <a href="https://www.youtube.com/results?search_query=hate+crime+hoax">YouTube</a> and even <a href="https://www.facebook.com/myiannopoulos/videos/744338082370756/">Facebook</a>. Meanwhile, in the far-right blogosphere, sites like <a href="http://www.breitbart.com/milo/2016/05/02/hate-crime-hoaxes-growing-epidemic/">Breitbart</a>, <a href="https://www.infowars.com/man-arrested-for-jewish-center-bomb-threats-is-an-anti-trump-muslim-convert/">InfoWars</a> and <a href="http://www.wnd.com/2016/11/big-spike-in-hate-crimes-not-so-fast/">WorldNetDaily</a> dedicate more space to obsessively “debunking” hate crimes than actually reporting on them. These two worlds seamlessly come together on Twitter, where conspiracies intermix with <a href="https://twitter.com/ReturnOfTheOrb/status/836704421576912898">political diatribes</a>. </p>
<p>For hate groups, this is an unprecedented opportunity to finally plug their fringe movements into a mainstream circuit. As false narratives flow through the internet’s popular networks, they intermingle with legitimate information and gradually become washed of their radical origins in the process. It’s the same trajectory that drove the <a href="http://www.politico.com/story/2011/04/birtherism-where-it-all-began-053563">birther conspiracy</a>. Questions about President Obama’s “true birthplace” began on the fringes of the web, found support in more traditional right-wing blogs like Free Republic, and then made their way onto television.</p>
<p>Technology columnist Farhad Manjoo <a href="https://www.pbs.org/newshour/amp/bb/does-the-internet-help-or-hurt-democracy">described this phenomenon</a>, which we’ve now seen morph into fake news:</p>
<blockquote>
<p>“The extreme points of views that we’re getting that couldn’t have been introduced into national discussion in the past are being introduced now by this sort of entry mechanism … people put it on blogs, and then it gets picked up by cable news, and then it becomes a national discussion.”</p>
</blockquote>
<h2>Opportunistic politicians lend credibility</h2>
<p>There is little doubt that a key reason so much bad information has spilled over into today’s national discourse is politicians who embrace and perpetuate these narratives. Of course, doing so only gives the authors of conspiracy the very exposure they seek. </p>
<p>When, a year before the 2016 election, Donald Trump <a href="http://www.politifact.com/truth-o-meter/statements/2015/nov/23/donald-trump/trump-tweet-blacks-white-homicide-victims/">tweeted false statistics</a> about the number of “whites killed by blacks” in America, white nationalists were listening. The evidence could be seen in the celebratory headlines that followed on websites like Stormfront and the Daily Stormer.</p>
<p>Credibility has always been an ultimate but elusive goal for extremists. But online, they’re learning how to dilute the message of bigotry with heavy doses of political conspiracy for which there is apparently a welcoming audience. They achieve victory simply by injecting enough fake news into the system to produce doubt and discord around our most critical cultural debates.</p>
<p>When he was asked about the recent anti-Semitic threats and vandalism, President Trump told the Pennsylvania attorney general the incident was “reprehensible.” But he then went on to speculate that it might have been committed “<a href="https://www.washingtonpost.com/news/the-fix/wp/2017/02/28/trump-is-reportedly-hinting-that-anti-semitic-incidents-are-false-flags-it-wouldnt-be-the-first-time/">to make others look bad</a>.” That feeds the very doubt that extremist groups thrive on. And the cycle continues.</p>
<p class="fine-print"><em><span>Adam G. Klein does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Today’s radical right is remaking its profile, using online communications to spread its message farther and deeper into our society than ever possible before.Adam G. Klein, Assistant Professor of Communication Studies, Pace University Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/621052016-07-13T02:16:31Z2016-07-13T02:16:31ZHow apps and other online tools are challenging racist attacks<figure><img src="https://images.theconversation.com/files/130169/original/image-20160712-9289-zgug4n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Online and offline activism are merging, as recognised by this protest against the 2014 police shooting of Michael Brown in Ferguson, Missouri. </span> <span class="attribution"><a class="source" href="https://twitter.com/THESOURCE/status/500653210114490368/photo/1">Twitter</a></span></figcaption></figure><p>In the aftermath of <a href="https://theconversation.com/au/topics/brexit">Brexit in the UK</a> and the <a href="https://theconversation.com/defiant-hanson-will-test-a-coalition-government-61985">success of Pauline Hanson</a> in the Australian Senate elections, racism seems to be a more present threat than ever. </p>
<p>As First Nations people and people of colour in Australia well know, racial violence never went away. But, for others, recent events may serve as a needed reminder that racist attacks and abuses of police power also happen outside the US. </p>
<p>The Brexit fallout has included <a href="http://mashable.com/2016/06/27/facebook-brexit-incidents-hate-crime-london-britain/#ufAyyr7biOqL">a sharp rise in racist attacks</a> on people of colour and migrants, including eastern Europeans. Anti-racists in the UK have quickly responded. The <a href="http://www.istreetwatch.co.uk/">iStreetWatch</a> website now allows users to report and map racist incidents across the UK. </p>
<p>People are increasingly using online spaces and digital tools such as anti-racism apps to strategise, challenge racist views and strengthen anti-racist solidarity. </p>
<p>The post-Brexit Twitter handle <a href="https://twitter.com/postrefracism">@PostRefRacism</a> has nearly 10,000 followers. It encourages users “to document the increase in racism in the UK following the vote for Brexit”.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;748111885388636160&quot;}"></div></p>
<p>But as <a href="https://twitter.com/prerefracism">@PreRefRacism</a> observes, far from being new, racism has merely become more visible to white people since Brexit.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;748289789053509633&quot;}"></div></p>
<h2>Defining, discussing and countering racism</h2>
<p>Activists and scholars have always argued that race is a complex formation that needs to be set in historical context. However, the popular view is that racism is simply a matter of bad attitudes that anyone can hold.</p>
<p>In online discussions, reductive approaches to racism can be challenged in real time. It is due to the prominence of many black feminists on Twitter, for example, that the term <a href="http://everydayfeminism.com/2015/01/why-our-feminism-must-be-intersectional/">intersectionality</a> has become more widely understood.</p>
<p>Social media provide an important space in which racism is being defined, discussed and countered. These are <a href="http://www.sl.nsw.gov.au/events/scholarly-musings-indigenous-activism-and-social-media-spaces">key sites</a> for observing how discussions of race take shape.</p>
<p>However, as media scholar Gavan Titley notes, this has also led to racism becoming “<a href="https://raster.fi/2016/02/17/the-debatability-of-racism-networked-participative-media-and-postracialism/">debatable</a>” – to the detriment of a clear delineation of what racism is and is not. </p>
<p>While “cyber-racism” is important to challenge, the persistence of street violence and the intertwining of “offline” and “online” worlds call for new methods for opposing racism in public. </p>
<p>Mobile apps for anti-racism interventions and education have been around for a number of years and several more are in development. As <a href="http://www.uws.edu.au/ics/research/projects/anti-racism_apps">our research on apps</a> in Australia, the UK and France has shown, they have diverse functions: to report racist incidents; to educate; and as news sources for racialised communities. </p>
<p>The “phone in your pocket”, with its built-in geolocative and image-capturing capabilities, can be a powerful anti-racism tool, enabling immediate reactions to racist events. As with the recent <a href="http://www.bbc.com/news/world-us-canada-36732908">police shooting of Philando Castile</a>, mobile video live-streamed online can generate almost immediate widespread condemnation and reaction.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;751074188786630657&quot;}"></div></p>
<h2>Tracking Islamophobic abuse</h2>
<p>The Australian <a href="http://islamophobiawatch.com.au/">Islamophobia Watch</a> is a reporting app modelled on one developed by the French anti-Islamophobia association, the <a href="https://fr.wikipedia.org/wiki/Collectif_contre_l%27islamophobie_en_France">CCIF</a>. The app was launched in reaction to the <a href="https://theconversation.com/mosques-muslims-and-myths-overcoming-fear-in-our-suburbs-31822">2014 police raids on Muslim homes</a> and subsequent <a href="http://www.smh.com.au/national/dozens-of-antimuslim-attacks-as-islamic-leaders-warn-of-community-fear-20141009-113tmk.html">attacks on Muslim people in public</a>, women in particular. </p>
<p>Like iStreetWatch, the app allows users to report incidents of Islamophobic abuse. A <a href="http://islamophobiawatch.com.au/map/main">map is created</a> to visualise these incidents by category, such as physical or verbal aggression, discrimination and vandalism. This representation of racial violence is itself a primary purpose of these apps.</p>
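<p>The report-and-map workflow these apps share can be sketched in a few lines of code. The sketch below is purely illustrative: the class, field and category names are hypothetical, not the actual schema of Islamophobia Watch or the CCIF app. It simply shows the core idea of grouping geotagged user reports by incident category so that each category can be drawn as its own map layer.</p>

```python
from dataclasses import dataclass

# Hypothetical data model for an incident-reporting app of this kind.
# Category labels mirror those described in the article (physical or
# verbal aggression, discrimination, vandalism) but are illustrative only.

@dataclass
class IncidentReport:
    category: str   # e.g. "verbal", "physical", "discrimination", "vandalism"
    lat: float      # latitude of the reported incident
    lon: float      # longitude of the reported incident

def map_layers(reports):
    """Group reports by category so each category becomes its own map layer."""
    layers = {}
    for r in reports:
        layers.setdefault(r.category, []).append((r.lat, r.lon))
    return layers

# Sample reports with illustrative coordinates (Sydney and Melbourne).
reports = [
    IncidentReport("verbal", -33.87, 151.21),
    IncidentReport("vandalism", -37.81, 144.96),
    IncidentReport("verbal", -33.92, 151.18),
]
layers = map_layers(reports)
print({k: len(v) for k, v in layers.items()})  # {'verbal': 2, 'vandalism': 1}
```

<p>In a real app, each layer would be handed to a mapping library for rendering, and the aggregated counts per category are what give the map its documentary force.</p>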
<p>The CCIF spokespeople in Paris told us that, in addition to enabling the reporting of racist events, the app-generated data draw attention to the existence of Islamophobia as a category of racism, which is highly contested in France. By cataloguing abusive events, CCIF makes the point that Islamophobia cannot go ignored. </p>
<p>The app includes a feed that provides an alternative news source for an embattled community. Against a backdrop of increased state-sanctioned Islamophobia – bans on hijabs and burqas, the imposition of pork on school canteen menus and heightened policing of Muslims in a hyper-securitised landscape – the resource generates community solidarity. </p>
<p>In this way, users may experience the app as a more concrete response to racism than fleeting online hashtag campaigns.</p>
<h2>What are the risks of these apps?</h2>
<p>Our research will now turn to the US and Canada where app development has focused on <a href="http://www.cbc.ca/news/politics/shootings-police-race-america-1.3670654">police violence against the black community</a>. Tools such as the <a href="http://www.nyclu.org/app">NYCLU Stop and Frisk app</a> allow users to film police violence, report incidents and alert users when others are being stopped and frisked in their area. </p>
<p>While such apps purport to put the power in the hands of those on the receiving end, the rise of formalised digital platforms that capture and store data and evidence of racism also raises legitimate concerns: </p>
<ul>
<li><p>As our research shows, the conduit between the reporting of incidents, the police and the courts necessarily appeals to the same systems in which institutionalised racism so often plays out. </p></li>
<li><p>Despite the apps we studied providing confidential and anonymised reporting, the real and perceived risks of the technology being used (in the wrong hands) to profile, locate and track individual reporters and activists are a genuine concern. These risks may act as a barrier to take-up and use. </p></li>
<li><p>The ease with which incidents can be filmed and uploaded online, while certainly raising awareness, runs the risk of causing people to switch off. </p></li>
</ul>
<p>Digital technology can have the dual effect of informing about and banalising racism. As comic <a href="https://twitter.com/harikondabolu">Hari Kondabolu</a> tweeted following the US police <a href="https://theconversation.com/how-a-live-streamed-police-killing-revealed-the-power-of-representation-62238">shootings on successive days</a> of two black men, Alton Sterling and Philando Castile:</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;751135764436713472&quot;}"></div></p>
<p>As more apps are developed, more questions will emerge. What is clear is that these tools will be central players in the fight against racism as it morphs and spreads into online and mobile-mediated everyday spaces.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Racial abuse and violence and the intertwining of ‘offline’ and ‘online’ worlds call for new methods for opposing racism in public.Alana Lentin, Associate Professor in Cultural and Social Analysis, Western Sydney UniversityJustine Humphry, Lecturer in Cultural and Social Analysis, Western Sydney UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/242802014-03-16T19:37:24Z2014-03-16T19:37:24ZWhat do Australian internet users think about racial vilification?<figure><img src="https://images.theconversation.com/files/43838/original/q4ry3fff-1394709346.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Federal attorney-general George Brandis argues that the current debate on racial vilification laws centres on the regulation of free speech.</span> <span class="attribution"><span class="source">AAP/Daniel Munoz</span></span></figcaption></figure><p>Some time in the near future, federal attorney-general George Brandis will <a href="http://www.sbs.com.au/news/article/2014/02/24/brandis-defends-planned-changes-racial-discrimination-act">take a proposal</a> to cabinet to amend or repeal the racial vilifications provisions (Sections <a href="http://www.austlii.edu.au/au/legis/cth/consol_act/rda1975202/s18c.html">18C</a> and <a href="http://www.austlii.edu.au/au/legis/cth/consol_act/rda1975202/s18d.html">18D</a>) of the <a href="http://www.austlii.edu.au/au/legis/cth/consol_act/rda1975202/">Racial Discrimination Act</a>. </p>
<p>Brandis does not believe that debate on the laws is about racial vilification. Instead, he <a href="http://www.abc.net.au/tv/qanda/txt/s3946770.htm">argues</a> that the issue is about the regulation of free speech. He claims to be concerned about vilification, but intimates vilification should be limited to words that would cause a reasonable person to undertake a criminal act against the targeted group.</p>
<p>Our new study of internet users has direct relevance to the possible changes. In 2012-13, the Australian Human Rights Commission <a href="https://www.humanrights.gov.au/sites/default/files/document/publication/ahrc_annual_report_2012-13.pdf">received</a> 192 complaints of racial hatred, of which 79 (about 41%) were about internet hate – all lodged under Section 18C. </p>
<p>The first results of our online survey of more than 2,100 Australian internet users, which is yet to be published, reveal a place where racism is rampant, according to our respondents. The findings present some challenging data for those who propose to weaken the laws that prohibit the offending, insulting, humiliating and intimidating of people on the basis of race. </p>
<p>According to our survey, only 10% support making it lawful to offend without a legitimate defence (as provided for by Section 18D). Only 5% believe people should be totally free to intimidate up to the edge of criminality. This latter position is in line with the <a href="http://www.ipa.org.au/publications/1985/coalition's-free-speech-reform-welcome-but-needs-to-go-futher-ipa-launches-repeal-18c-campaign">views espoused</a> by influential free-market think-tank the Institute of Public Affairs (IPA) and News Corp columnist Andrew Bolt, who was <a href="http://www.abc.net.au/news/2011-09-28/bolt-found-guilty-of-breaching-discrimination-act/3025918">found to be in breach</a> of Section 18C in 2011. </p>
<p>Nearly 80% support laws against racial vilification. Close to 70% support laws against religious vilification.</p>
<p>These are not anti-free speech ideologues, as can be seen in the table below. Nearly half of those surveyed (47%) believe that freedom to speak your mind is more important than freedom from hate speech (neutral 32%; disagree 21%). An overwhelming majority, 73%, put the onus on websites such as Facebook and YouTube to report the complaints they receive about racism to the relevant authorities.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/43985/original/ngpvnw42-1394842668.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/43985/original/ngpvnw42-1394842668.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/43985/original/ngpvnw42-1394842668.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=447&fit=crop&dpr=1 600w, https://images.theconversation.com/files/43985/original/ngpvnw42-1394842668.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=447&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/43985/original/ngpvnw42-1394842668.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=447&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/43985/original/ngpvnw42-1394842668.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=562&fit=crop&dpr=1 754w, https://images.theconversation.com/files/43985/original/ngpvnw42-1394842668.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=562&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/43985/original/ngpvnw42-1394842668.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=562&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Authors/The Conversation</span></span>
</figcaption>
</figure>
<p>Our sample proved to be not dissimilar in its broader views and attitudes on race, ethnicity and racism to those revealed in <a href="http://www.uws.edu.au/ssap/ssap/research/challenging_racism">previous studies</a> of Australians undertaken by two of our research team, Kevin Dunn and Yin Paradies.</p>
<p>The responses on whether those surveyed supported or opposed retaining the four criteria that underpin Section 18C as unlawful were overwhelming. We had expected a more even spread of opinion, given the claims made by those who wish to repeal 18C about community attitudes that support the move. In 2011, Brandis <a href="http://www.theaustralian.com.au/national-affairs/opinion/section-18c-has-no-place-in-a-society-that-values-freedom-of-expression/story-e6frgd0x-1226152196836">claimed</a> that 18C:</p>
<blockquote>
<p>… as presently worded, has no place in a society that values freedom of expression and democratic governance.</p>
</blockquote>
<p>Currently, there is <a href="http://www.smh.com.au/federal-politics/political-news/rightwing-think-tank-ipa-says-george-brandis-is-backtracking-on-race-hate-laws-20140311-34kbu.html">strong lobbying</a> from the IPA and Bolt to remove the provisions that make certain forms of racial vilification unlawful: a civil, not a criminal matter. Brandis’ intentions – there are no proposals yet – have been <a href="http://www.smh.com.au/federal-politics/political-news/indigenous-ethnic-groups-unite-against-law-changes-20131120-2xvwe.html#ixzz2lFDcfls0">met with opposition</a> from a broad coalition of ethnic, religious and human rights groups. It appears Australians are comfortable with the current legislation, if our study is anything to go by.</p>
<p>Both sides of the debate argue from principle over whether forms of racial vilification that are currently unlawful should be made lawful. The arguments are best encapsulated by the public positions taken on the question by Human Rights Commissioner <a href="http://www.timwilson.com.au/articles/foundations-must-be-principles-not-worthy-aspirations">Tim Wilson</a>, who argues that freedom of speech should trump all other freedoms. Race Discrimination Commissioner <a href="https://www.humanrights.gov.au/news/stories/too-much-heat-not-enough-light-free-speech-debate">Tim Soutphommasane</a> posits that some protection from hate speech should trump absolute freedom of speech.</p>
<p>These are directly opposed principles. Yet, according to our research, Australians can cope with apparently contradictory positions so long as these are not pushed to an irrational extreme. </p>
<p>They like the idea that people have a fair amount of freedom to say what they think, to express their opinions. But they also indicate real concern for the more vulnerable and most often targeted groups in the community.</p>
<p>Australians distrust big websites to protect the interests of these vulnerable people, and they like the idea of some contemporary and focused regulation that would control vicious speech. The government has <a href="http://www.theaustralian.com.au/national-affairs/tony-abbott-stands-up-to-cyber-giants-on-bullying/story-fn59niix-1226832114621">already indicated</a> it plans to do exactly that in relation to cyber-bullying.</p>
<p>Do most Australians want one principle or a balance of principles to prevail? Can we reasonably determine a shared ethics of civility on language and cultural differences – and, if so, what might that entail?</p>
<p>Unfortunately, the federal government has so far given no sign that it is interested in pursuing a balanced way forward.</p>
<p class="fine-print"><em><span>The project on which this article is based receives financial support under the Australian Research Council Linkage Scheme. Community partners on the project include VicHealth (<a href="http://vichealth.vic.gov.au">http://vichealth.vic.gov.au</a>), and the Federation of Ethnic Communities Councils of Australia (<a href="http://fecca.org.au">http://fecca.org.au</a>), in conjunction with the Online Hate Prevention Institute (<a href="http://ohpi.org.au">http://ohpi.org.au</a>) and Alltogether Now (<a href="http://alltogethernow.org.au">http://alltogethernow.org.au</a>). Andrew Jakubowicz does not receive any support from organisations that might financially benefit from this article.</span></em></p><p class="fine-print"><em><span>Kevin Dunn, Rosalie Atie, and Yin Paradies do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Some time in the near future, federal attorney-general George Brandis will take a proposal to cabinet to amend or repeal the racial vilifications provisions (Sections 18C and 18D) of the Racial Discrimination…Andrew Jakubowicz, Professor of Sociology and Codirector of Cosmopolitan Civil Societies Research Centre, University of Technology SydneyKevin Dunn, Dean of the School of Social Science and Psychology, Western Sydney UniversityRosalie Atie, Research Assistant, Challenging Racism Project, School of Social Sciences and Psychology, Western Sydney UniversityYin Paradies, Professor of Race Relations, Deakin UniversityLicensed as Creative Commons – attribution, no derivatives.