tag:theconversation.com,2011:/us/topics/trolling-3815/articlesTrolling – The Conversation2024-01-19T00:43:22Ztag:theconversation.com,2011:article/2214002024-01-19T00:43:22Z2024-01-19T00:43:22ZGolriz Ghahraman’s exit from politics shows the toll of online bullying on female MPs<p>The high-stress nature of working in politics is increasingly <a href="https://www.rnz.co.nz/news/political/494224/parlimentary-workplace-culture-improved-significantly-since-damning-2019-review-report">taking a toll on staff and politicians</a>. But an additional threat to the personal wellbeing and safety of politicians resides outside Parliament, and the threat is ubiquitous: online violence against women MPs. </p>
<p>Since her election in 2017, Green Party MP Golriz Ghahraman has been subject to <a href="https://www.1news.co.nz/2024/01/16/ghahraman-faced-continuous-sexual-physical-threats-shaw/">persistent online violence</a>. </p>
<p>Ghahraman’s <a href="https://www.greens.org.nz/statement_from_golriz_ghahraman">resignation</a> following allegations of shoplifting exposes the toll sustained online violence can have on a person’s mental health. In an <a href="https://www.vice.com/en/article/zm9gn8/biography-as-a-battleground-what-it-means-to-be-new-zealands-first-refugee-mp">interview with Vice</a> in 2018, Ghahraman expressed how the online abuse was overwhelming and questioned how long she would continue in Parliament. </p>
<p>Resigning in 2024, Ghahraman said <a href="https://www.greens.org.nz/statement_from_golriz_ghahraman">in a statement</a>:</p>
<blockquote>
<p>it is clear to me that my mental health is being badly affected by the stresses relating to my work</p>
</blockquote>
<p>and</p>
<blockquote>
<p>the best thing for my mental health is to resign as a Member of Parliament. </p>
</blockquote>
<p>Ghahraman is not alone in receiving torrents of online abuse. Many other women MPs have also been targeted, including former Prime Minister <a href="https://www.auckland.ac.nz/en/news/2023/01/24/data-shines-a-light-on-the-online-hatred-for-jacinda-ardern.html">Jacinda Ardern</a>, Green Party co-leader <a href="https://www.rnz.co.nz/news/national/361341/green-party-co-leader-receives-rape-and-death-threats-on-social-media">Marama Davidson</a>, National MP <a href="https://www.rnz.co.nz/national/programmes/lately/audio/2018836535/female-politicians-face-sexist-abuse-online">Nicola Willis</a> and Te Pāti Māori co-leader <a href="https://www.rnz.co.nz/national/programmes/lately/audio/2018836535/female-politicians-face-sexist-abuse-online">Debbie Ngarewa-Packer</a>.</p>
<p>Words can not only hurt but seriously endanger a person’s wellbeing.</p>
<p>Online violence against women MPs, particularly against women of colour, is a concerning global trend. In <a href="https://www.tandfonline.com/doi/full/10.1080/13218719.2022.2142975">an Australian study</a>, women MPs were found to be disproportionately targeted by public threats, particularly facing higher rates of online threats involving sexual violence and racist remarks. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/new-zealand-newsrooms-saw-the-rise-of-mob-censorship-in-2023-as-journalists-faced-a-barrage-of-abuse-219583">New Zealand newsrooms saw the rise of 'mob censorship' in 2023, as journalists faced a barrage of abuse</a>
</strong>
</em>
</p>
<hr>
<p>Similar online threats face women MPs in the <a href="https://www.theguardian.com/politics/2023/feb/17/how-female-mps-cope-with-misogynistic-abuse">United Kingdom</a>. Studies show that women of colour receive <a href="https://www.amnesty.org.uk/online-violence-women-mps">more intense abuse</a>.</p>
<p>Male politicians are also subject to online violence. But when directed at women the violence frequently exhibits <a href="https://www.tandfonline.com/doi/full/10.1080/14680777.2023.2181136">a misogynistic character</a>, encompassing derogatory gender-specific language and menacing sexualised threats, constituting <a href="https://www.unwomen.org/en/what-we-do/ending-violence-against-women/faqs/tech-facilitated-gender-based-violence">gender-based violence</a>. </p>
<h2>Our legal framework is not enough</h2>
<p>New Zealand’s current legal framework is not well equipped to respond to the kind of online violence experienced by women MPs like Ghahraman. </p>
<p>The <a href="https://www.legislation.govt.nz/act/public/2015/0063/latest/whole.html">Harmful Digital Communications Act 2015</a> is designed to address online harassment by a single known perpetrator. But the most distressing kind of abuse comes from the sheer number of violent commentators, most of whom are unknown to the victim or <a href="https://www.compassioninpolitics.com/three_quarters_of_those_experiencing_online_abuse_say_it_comes_from_anonymous_accounts">intentionally anonymous</a>. This includes “<a href="https://rm.coe.int/the-relevance-of-the-ic-and-the-budapest-convention-on-cybercrime-in-a/1680a5eba3">mob style</a>” attacks, where large numbers of perpetrators coordinate efforts to harass, threaten, or intimidate their target.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/analysis-shows-horrifying-extent-of-abuse-sent-to-women-mps-via-twitter-126166">Analysis shows horrifying extent of abuse sent to women MPs via Twitter</a>
</strong>
</em>
</p>
<hr>
<p>Without legal recourse, women MPs have two options – tolerate the torrent of abuse, or resign. Both of these options <a href="https://www.cigionline.org/articles/when-women-are-silenced-online-democracy-suffers/">endanger</a> representative democracy. </p>
<p>Putting up with abuse may mean serious impacts on mental health and personal safety. It may also have a <a href="https://www.theguardian.com/technology/2016/jun/18/vile-online-abuse-against-women-mps-needs-to-be-challenged-now">chilling effect</a> on what topics women MPs choose to speak about publicly. Resigning means losing important representation of diverse perspectives, especially from minorities.</p>
<p>Having to tolerate the abuse is a breach of the right <a href="https://www.ohchr.org/en/documents/general-comments-and-recommendations/general-recommendation-no-35-2017-gender-based">to be free from gender-based violence</a>. Being forced to resign because of it also breaches women’s rights to <a href="https://www.ohchr.org/en/instruments-mechanisms/instruments/convention-elimination-all-forms-discrimination-against-women">participate in politics</a>. The government therefore has duties under international human rights law to prevent, respond to and redress online violence against women. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1745702227761664002&quot;}"></div></p>
<h2>Steps the government can take</h2>
<p>United Nations human rights bodies provide <a href="https://www.ohchr.org/en/documents/general-comments-and-recommendations/general-recommendation-no-35-2017-gender-based">some guidance</a> on measures the government could implement to fulfil its obligations and safeguard women’s human rights online. </p>
<p>As one of the drivers of online violence against women MPs is prevailing patriarchal attitudes, the government’s first step should be to correctly label the behaviour: gender-based violence. </p>
<p>Calling online harassment “trolling” or “cyberbullying” downplays the harm and risks normalising the behaviour. “Gender-based violence” reflects the systemic nature of the abuse.</p>
<p>Secondly, the government should urgently review the Harmful Digital Communications Act. The legislation is now nine years old and should be updated to reflect the harmful online behaviour of the 2020s, such as targeted mob-style attacks. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-misogyny-narcissism-and-a-desperate-need-for-power-make-men-abuse-women-online-95054">How misogyny, narcissism and a desperate need for power make men abuse women online</a>
</strong>
</em>
</p>
<hr>
<p>New Zealand is also now out of step with other countries. <a href="https://www.austlii.edu.au/cgi-bin/viewdb/au/legis/cth/consol_act/osa2021154/">Australia</a>, <a href="https://www.legislation.gov.uk/ukpga/2023/50/enacted">the UK</a> and the <a href="https://www.eu-digital-services-act.com/">European Union</a> have all recently strengthened their laws to tackle harmful online content. </p>
<p>These new laws focus on holding big tech companies accountable and encourage cooperation between the government, online platforms and civil society. Greater collaboration, alongside enforcement mechanisms, <a href="https://www.unwomen.org/en/digital-library/publications/2022/08/intensification-of-efforts-to-eliminate-all-forms-of-violence-against-women-report-of-the-secretary-general-2022#:%7E:text=Pursuant%20to%20UN%20General%20Assembly,as%20on%20broader%20efforts%20to">is essential</a> to address systemic issues like gender-based violence. </p>
<p>Thirdly, given the <a href="https://newsroom.co.nz/2022/07/12/digital-harm-soaring-year-on-year">increasing scale</a> of online violence, the government should ensure adequate resourcing for police to investigate serious incidents. Resources should also be made available for social media moderation among all MPs and training in online safety. </p>
<p>More than ever, words have the power to break people <a href="https://theconversation.com/disinformation-campaigns-are-undermining-democracy-heres-how-we-can-fight-back-217539">and democracies</a>. It is now the urgent task of the government to fulfil its legal obligations toward women MPs.</p>
<p class="fine-print"><em><span>Cassandra Mudgway does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Representative democracy is under threat as females – particularly from minority groups – leave or choose not to enter politics. Many say the mental toll of online abuse has become overwhelming.Cassandra Mudgway, Senior Lecturer in Law, University of CanterburyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2108742023-11-06T21:04:35Z2023-11-06T21:04:35ZTrolling and doxxing: Graduate students sharing their research online speak out about hate
<p>An <a href="https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/">increasingly volatile online environment</a> is affecting our society, including members of the academic community and research they pursue.</p>
<p>Graduate students are especially vulnerable to online hate, because cultivating a visible social media presence is <a href="https://www.universityaffairs.ca/career-advice/from-phd-to-life/guest-post-grad-students-need-social-media/">considered essential</a> for mobilizing their research, gaining credibility and finding opportunities as they prepare to compete in an <a href="https://www.universityaffairs.ca/news/news-article/the-mismatch-continues-between-phd-holders-and-their-career-prospects/">over-saturated job market</a>. </p>
<p>Our research <a href="https://bearingwitness.site">has examined the experiences of graduate students</a> who have encountered online hate while conducting their research or disseminating it online, as well as the wider landscape of university protocols and policies.</p>
<p>This research suggests faculty supervisors and university staff responsible for students’ development and well-being are often ill-prepared to support students through online harassment experiences. This means graduate students are left frightened, discouraged and with nowhere to turn for help.</p>
<figure>
<iframe src="https://player.vimeo.com/video/876457075" width="500" height="281" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
<figcaption><span class="caption">Documentary ‘Bearing Witness: Hate, Harassment and Online Public Scholarship.’</span></figcaption>
</figure>
<h2>New policies needed to support researchers</h2>
<p>Research by communications scholars George Veletsianos and Jaigris Hodson, who are part of the <a href="https://harassment.thedlrgroup.com/team/">Public Scholarship and Online Abuse</a> research group, finds that scholars online may be targeted for a range of reasons, but “<a href="https://www.insidehighered.com/views/2018/05/29/dealing-social-media-harassment-opinion">women in particular are harassed partly because they happen to be women who dare to be public online</a>.”</p>
<p>Online hatred <a href="https://www.coe.int/en/web/cyberviolence/cyberviolence-against-women">disproportionately affects</a> women, <a href="https://www.ohchr.org/en/stories/2021/03/report-online-hate-increasing-against-minorities-says-expert">Black, Indigenous, racialized</a>, <a href="https://abcnews.go.com/US/lgbtq-community-facing-increased-social-media-bias-author/story?id=85463533">queer, trans and</a> other marginalized scholars.</p>
<p>New frameworks and policies are required that protect and care <a href="https://theconversation.com/free-speech-on-campus-means-universities-must-protect-the-dignity-of-all-students-124526">for increasingly diverse academic communities</a> to foster equity and diversity.</p>
<h2>Impacts and inadequate support</h2>
<p>Nearly any discipline or research topic can become a target for harassment: from <a href="https://www.universityaffairs.ca/features/feature-article/the-growing-problem-of-online-harassment-in-academe/">English literature to game studies</a> to <a href="https://www.bbc.co.uk/programmes/w3ct369y">virology</a> and <a href="https://www.vice.com/en/article/g5ybw3/climate-scientists-online-abuse">climate science</a>. </p>
<p>Online harassment restricts which research projects are able to proceed and who is able to pursue them. It affects <a href="https://doi.org/10.1080/17439884.2021.1878218">not only researchers’ well-being</a> and career prospects but, by extension, their fields of study and the members of the public those fields serve.</p>
<p>Institutions have yet to develop adequate supports for both faculty and students, even as the <a href="https://blogs.lse.ac.uk/impactofsocialsciences/2023/06/13/its-as-if-it-didnt-exist-is-cyberbullying-of-university-professors-taken-seriously/">pervasiveness of online harassment in academic life</a> has begun to receive greater attention. </p>
<p>Research by Hodson and Veletsianos with Chandell Gosse finds university policies designed to protect community members <a href="https://theconversation.com/post-secondary-workplace-harassment-policies-need-to-adapt-to-digital-life-161325">have not evolved to address the complex forms of harassment that unfold via social media</a>. </p>
<h2>Lack of clear and accessible structures, procedures</h2>
<p>Research from 2020 by Alex Ketchum of McGill University’s Institute for Gender, Sexuality, and Feminist Studies on <a href="https://publicscholarshipandmediawork.blogspot.com/p/report.html">resources provided by media relations offices at Canadian universities</a> indicates that universities’ publicly accessible information about doxxing, trolling and scholarship is scarce. Ketchum addresses challenges related to public scholarship in her book <em><a href="https://www.concordia.ca/press/engage.html#order">Engage in Public Scholarship!: A Guidebook on Feminist and Accessible Communication</a></em>.</p>
<p>Without clear structures and procedures for reporting harassment and supporting community members at an institutional level, universities treat harassment as a series of isolated incidents, without grasping the scale of the issue.</p>
<h2>‘Bearing Witness’</h2>
<p>We have facilitated a number of <a href="https://www.yorku.ca/laps/events/laps-research-to-impact-workshop-confronting-online-hate-and-harassment-of-academic-researchers">workshops</a> and <a href="https://www.yorku.ca/research/robarts/events/emerging-scholar-online/?fbclid=IwAR0rlJdnD-2um6XWzQzWpC5vvnJMvHHMW-DFZwbJwEx0v5LxoOJqMWbk0Y4">events</a> that foreground experiences of online harassment among graduate students. This work has been done with support from the <a href="https://irdl.info.yorku.ca/">Institute for Research on Digital Literacies</a>, under the direction of Natalie Coulter. </p>
<p>As part of a multi-stage project titled <a href="https://bearingwitness.site/">Bearing Witness</a>, we conducted one-on-one interviews with seven York University students who have encountered hatred in response to sharing or conducting their research online. </p>
<p>To protect participants from further harassment, we invited student artist-researchers to interpret the anonymized interview transcripts and create original artworks that reflected upon and echoed the stories of their peers. </p>
<p>These stories formed the basis of an exhibition and panel discussion at <a href="https://www.federationhss.ca/en/congress/bearing-witness-hate-harassment-and-public-scholarship">Congress 2023</a>, a national conference of academic researchers held at the end of May and beginning of June 2023, and will inform <a href="https://bearingwitness.site/symposium/">a symposium</a> on Nov. 7 and <a href="https://irdl.info.yorku.ca/events/">a pop-up exhibition</a> in the Media Creation Lab in the Scott Library at York University.</p>
<h2>Researcher experiences of harassment</h2>
<p>In our study, participants described receiving threats of physical and sexual violence, directed not only towards them, but to their families and research participants. These encounters severely impacted students’ mental health and led them to fear for their physical well-being on campus and at conferences. </p>
<p>Each student we spoke with described feeling under-supported by the university, in particular <a href="https://education.macleans.ca/feature/inside-the-mental-health-crisis-at-canadian-universities/">struggling to access mental-health services</a>. Participants also said research methods seminars, research ethics board certification courses and conversations with supervisory committees had not addressed the possibility of encountering online harassment.</p>
<p>The online harassment students encountered also derailed or significantly curtailed their research projects. Students reported that the effects of the harassment forced them to drastically alter, if not entirely halt, their course of study and degree progress.</p>
<h2>Resources to help protect from harassment</h2>
<p>There are many online resources graduate students can consult to protect themselves from online harassment. Resources <a href="https://onlineharassmentfieldmanual.pen.org">from PEN America</a> and <a href="https://gameshotline.org/online-free-safety-guide">gaming communities</a> provide cybersecurity tips to prevent doxxing, assess threats and report harassment to platforms and law enforcement. </p>
<p>However, universities must take steps to lessen the burden for individual victims.</p>
<p>Media relations and knowledge-mobilization offices must develop clear protocols for protecting community members and supporting them in the wake of encountering hatred online. It is equally essential that these policies are readily available and easy to locate for scholars in distress.</p>
<h2>Important work begins with witness</h2>
<p>Faculty must be made aware of the realities of online harassment and available university resources — including campus security, legal clinics and mental health services. </p>
<p><a href="https://datasociety.net/pubs/res/Best_Practices_for_Conducting_Risky_Research-Oct-2016.pdf">Supervisors should be prepared</a> to have frank discussions with graduate students about the potential risks associated with their research and develop a pre-emptive action plan that can be implemented quickly.</p>
<p>This important work must begin with institutions bearing witness to graduate students’ experiences. University staff and faculty must listen to individual voices so that the issue of online harassment can be understood in its full scale and complexity.</p>
<p class="fine-print"><em><span>Alex Borkowski receives funding from SSHRC. </span></em></p><p class="fine-print"><em><span>Natalie Coulter receives funding from SSHRC, as well as from internal grants at York University.</span></em></p><p class="fine-print"><em><span>Marion Tempest Grant does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>To inform university responses to online harassment affecting graduate students, artist-researchers created original artworks in response to interviews with their peers who experienced online hate.Alex Borkowski, PhD Candidate, Communication & Culture, York University, CanadaMarion Tempest Grant, PhD Candidate, Communication & Culture, York University, CanadaNatalie Coulter, Associate Professor of Communication Studies, and Director of the Institute for Research on Digital Literacies, York University, CanadaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2148282023-10-06T01:57:14Z2023-10-06T01:57:14ZCampaign trail threats and abuse reinforce the need to protect NZ’s women politicians – before they quit for good<p>A female candidate slapped after a public debate, another whose home was vandalised, a man trespassed for entering the same house, shouts and jeers directed at another woman candidate for using te reo Māori – the 2023 election has certainly had its <a href="https://www.theguardian.com/world/2023/oct/03/racism-threats-and-home-invasions-candidates-face-abuse-on-new-zealands-campaign-trail">uglier moments</a>.</p>
<p>But reports of abuse, threats and violence on the campaign trail shouldn’t surprise anyone. Over the past five years, female politicians have consistently spoken about the often violent and sexist harassment they receive online. </p>
<p>A recent <a href="https://www.icfj.org/sites/default/files/2023-02/ICFJ%20Unesco_TheChilling_OnlineViolence.pdf">United Nations study</a> examining the experiences of female journalists established a clear link between online and real-world violence, particularly stalking. Another <a href="https://decoders.amnesty.org/projects/troll-patrol/findings">study</a> found female politicians and journalists in Britain and the United States are abused on Twitter (now X) every 30 seconds.</p>
<p>This is backed up by local politicians’ experiences. Green Party MPs <a href="https://www.rnz.co.nz/news/national/361341/green-party-co-leader-receives-rape-and-death-threats-on-social-media">Marama Davidson</a> and Golriz Ghahraman have both spoken about the serious abuse they receive online. Ghahraman needed a <a href="https://www.stuff.co.nz/national/politics/112882626/security-escort-for-green-mp-golrizghahraman-after-acts-david-seymour-called-her-a-menace">security escort</a> following a series of death threats.</p>
<p>In 2021, Christchurch city councillor Sara Templeton and other female leaders, including mayor Lianne Dalziel and Labour MPs Sarah Pallet and Megan Woods, were subjected to a <a href="https://www.stuff.co.nz/the-press/news/125676849/enough-is-enough-christchurch-city-councillor-calls-out-online-bullying">relentless campaign</a> of online harassment and increasingly gendered abuse.</p>
<p>Similar experiences have been shared by National MPs <a href="https://www.rnz.co.nz/national/programmes/lately/audio/2018836535/female-politicians-face-sexist-abuse-online">Nicola Willis</a> and <a href="https://www.nzherald.co.nz/nz/paula-bennett-why-i-didnt-put-my-hand-up-to-be-the-mayor-of-auckland/RSXOVPXZZWMYNM4GRSKTDJ46EE/">Paula Bennett</a>. Former prime minister Jacinda Ardern also had to tolerate high levels of <a href="https://www.auckland.ac.nz/en/news/2023/01/24/data-shines-a-light-on-the-online-hatred-for-jacinda-ardern.html">online vitriol</a>. What has happened during the election campaign is part of a clear trend.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1683576144723861510&quot;}"></div></p>
<h2>Normalised gender-based violence</h2>
<p>The <a href="https://www.tandfonline.com/doi/full/10.1080/14680777.2023.2181136">often misogynistic</a> nature of online abuse, from sexist name-calling to threats of rape and death, makes it a form of <a href="https://www.unwomen.org/en/what-we-do/ending-violence-against-women/faqs/tech-facilitated-gender-based-violence">gender-based violence</a>. And the New Zealand government has made international and domestic commitments to create a safe political environment for women.</p>
<p>But this would require the development of a concrete plan to address online violence – something most political parties have been largely silent about during the election campaign.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/online-abuse-could-drive-women-out-of-political-life-the-time-to-act-is-now-214301">Online abuse could drive women out of political life – the time to act is now</a>
</strong>
</em>
</p>
<hr>
<p>And it’s not a new issue. The <a href="https://www.parliament.nz/en/visit-and-learn/how-parliament-works/office-of-the-speaker/corporate-documents/independent-external-review-into-bullying-and-harassment-in-the-new-zealand-parliamentary-workplace-final-report/">independent review</a> into bullying and harassment in parliament was released in 2019. It found online harassment and abuse of MPs by members of the public, including sexist and violent threats, was increasingly common and even accepted as par for the course.</p>
<p>Since then, there have been <a href="https://www.rnz.co.nz/news/political/494224/parlimentary-workplace-culture-improved-significantly-since-damning-2019-review-report">significant improvements</a> to combat workplace bullying, but essentially nothing has been done about online abuse.</p>
<p>This is especially concerning given the way violent online behaviour <a href="https://www.unwomen.org/en/digital-library/publications/2022/08/intensification-of-efforts-to-eliminate-all-forms-of-violence-against-women-report-of-the-secretary-general-2022">may embolden</a> some people to act out such behaviours in real life.</p>
<h2>A weak legal framework</h2>
<p>That said, there are some rules governing online abuse. The current legal framework includes the <a href="https://www.legislation.govt.nz/act/public/2015/0063/latest/whole.html">Harmful Digital Communications Act</a>, which was designed to address harmful online communication such as cyberbullying, harassment and threats. It established legal mechanisms for reporting and prosecuting harmful digital content.</p>
<p>But the law has two key weaknesses when it comes to gender-based violence. </p>
<p>Firstly, to prove a criminal offence, the harmful content must cause “serious emotional distress” to the victim. This may be difficult to prove from a single comment from a single person, because the real harm lies in the barrage of abusive comments from numerous people all at once. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-misogyny-narcissism-and-a-desperate-need-for-power-make-men-abuse-women-online-95054">How misogyny, narcissism and a desperate need for power make men abuse women online</a>
</strong>
</em>
</p>
<hr>
<p>It must also be proved that the content would cause “serious emotional distress” to an “ordinary reasonable person”. So the law does not fully consider the gendered nature of online abuse, and may not account for the specific ways in which women are targeted.</p>
<p>Secondly, the normalisation of online abuse against female politicians means they often do not report the abuse. This leaves perpetrators to continue with impunity. Overall, the law seems to have failed to deter people from engaging in online gender-based violence.</p>
<p>In turn, this puts New Zealand offside with its responsibilities as a signatory to important United Nations human rights conventions. Online abuse violates women’s <a href="https://www.ohchr.org/en/documents/general-comments-and-recommendations/general-recommendation-no-35-2017-gender-based">right to be free from violence</a> and the right of women to <a href="https://www.ohchr.org/en/instruments-mechanisms/instruments/convention-elimination-all-forms-discrimination-against-women">participate in political and public life</a>.</p>
<h2>Public education needed</h2>
<p>Although some political leaders have expressed deep concern about online abuse <a href="https://www.rnz.co.nz/news/political/464375/national-launches-troll-hunt-online-abuse-unacceptable">in the past</a>, the issue is not currently a priority for any major party. The risk is that women will simply leave the political arena, something already <a href="https://www.cigionline.org/articles/when-women-are-silenced-online-democracy-suffers/">observed overseas</a>. </p>
<p>Whichever party or coalition forms the next government should act urgently to address gender-based violence, both online and offline. It needs to review the legal framework to allow better protection for women, and find ways to enlist the general public’s support in making such abuse socially unacceptable. </p>
<p>This will require a comprehensive plan involving public education, schools, law enforcement, the judiciary and parliamentarians. But without more urgent action, the likelihood of online violence spilling over into the real world only increases.</p>
<p class="fine-print"><em><span>Cassandra Mudgway does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Ugly incidents in the run-up to the election mirror the rise of online violence against women in politics. The next government needs a plan to tackle the problem before it’s too late.Cassandra Mudgway, Senior Lecturer in Law, University of CanterburyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2008212023-03-01T19:05:30Z2023-03-01T19:05:30ZInterviews with journalists can seem daunting – but new research shows 80% of subjects report a positive experience<figure><img src="https://images.theconversation.com/files/512823/original/file-20230301-24-nrewd.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C3840%2C2160&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><blockquote>
<p>Every journalist who is not too stupid or too full of himself to notice what is going on knows that what he does is morally indefensible. He is a kind of confidence man, preying on people’s vanity, ignorance, or loneliness, gaining their trust, and betraying them without remorse.</p>
</blockquote>
<p>So begins Janet Malcolm’s renowned book, <a href="https://www.penguinrandomhouse.com/books/106480/the-journalist-and-the-murderer-by-janet-malcolm/">The Journalist and the Murderer</a>. It was written more than 30 years ago, yet this negative notion has endured.</p>
<p>Journalists are still frequently condemned for how they interact with the people they interview. Indeed, with the advent of televised press conferences, journalists are facing more scrutiny and criticism than ever about their interviewing techniques.</p>
<p>It’s a perception that’s rarely challenged, even by journalists. But our <a href="https://giwl.anu.edu.au/research/publications/going-record-gendered-experiences-media-engagement">new research</a> suggests giving news interviews is generally a positive experience.</p>
<h2>What we found</h2>
<p>With colleagues from the Global Institute for Women’s Leadership at ANU, we surveyed 220 Australian adults who had given news interviews or who have the potential to do so.</p>
<p>Some were subject experts. Others were spokespeople for organisations or communities. We asked them about their willingness to speak to the news media and what may influence that decision. We also asked open-ended questions about what makes for a positive or negative interview.</p>
<p>More than 80% of participants reported their overall experience of giving news interviews was positive. Only 6% reported an overall negative experience. A female university expert said</p>
<blockquote>
<p>I’ve had a really positive experience with news media, which is not something I would have expected as someone who is actually quite shy and introverted.</p>
</blockquote>
<p>And a male community spokesperson said</p>
<blockquote>
<p>99% of my media experiences have been very positive and rewarding.</p>
</blockquote>
<p>While most people also reported some issues such as rude journalists or rushed interviews, these tended to be the exception rather than the norm.</p>
<p>There’s little research about the attitudes of “sources” or “talents” who are approached by journalists to provide news interviews. Most of it has focused on people who frequently engage with the media, such as politicians.</p>
<p>The limited other research that considers <a href="https://www.researchgate.net/publication/5225936_Interactions_with_the_Mass_Media">subject experts</a> and “<a href="https://journals.sagepub.com/doi/10.1177/1464884916636125">ordinary people</a>” who engage with the news media aligns with our findings. Even though they may have found inaccuracies in the reporting, the sources considered the overall experience to be positive and beneficial.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/albanese-wants-to-change-the-way-politics-is-done-this-means-the-way-politics-is-reported-will-have-to-change-too-187778">Albanese wants to change the way politics is done. This means the way politics is reported will have to change too</a>
</strong>
</em>
</p>
<hr>
<h2>Women are just as willing</h2>
<p>When I <a href="https://journals.sagepub.com/doi/full/10.1177/14648849211007038">interviewed 30 female academics</a> about their attitudes towards engaging with the media a few years ago, 90% described their overall experience as positive. All but one said they were willing to give news interviews.</p>
<p>This finding was replicated in our new research. More than 80% of people surveyed were willing to give news interviews. Women were just as willing as men.</p>
<p>This is significant because numerous studies from around the world have found news coverage is dominated by the voices of men. Around <a href="https://waccglobal.org/our-work/global-media-monitoring-project-gmmp/">75% of people quoted, heard or seen in the news are men</a>, according to research by the Global Media Monitoring Project.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1390542643738779649&quot;}"></div></p>
<p>Some argue this is because women are less willing to do media interviews. Our research refutes this argument, but it does highlight some notable gender differences in experiences and attitudes.</p>
<p>Women reported significantly lower confidence than men. Only 5% were “very confident”, compared to 20% of men. Women were more likely to refuse an interview request due to concerns about their appearance, a perceived lack of expertise, and fear of online harassment.</p>
<p>Concerns about online harassment were legitimate, with 38% of participants saying they had experienced trolling in response to giving a media interview. Men and women were both targeted, but women were more likely to receive sexist abuse.</p>
<h2>Generally a valuable experience</h2>
<p>Despite these issues and reservations, the participants were generally willing to speak to the media, which makes sense – people usually welcome the opportunity to talk about their area of expertise or share their experience. Inclusion in the news signals credibility and authority. Yes, there are risks to speaking out, but there are significant benefits too. </p>
<p>And there are certain ways journalists can approach a prospective source and carry out interviews to make them feel more comfortable and confident. Our research outlines some of these strategies and techniques, based on feedback from our participants. For example, when you approach a source for an interview:</p>
<ul>
<li><p>be clear about what you are seeking from the source and why you want to speak to them</p></li>
<li><p>demonstrate that you’ve done your research</p></li>
<li><p>provide a quick run-through of what to expect</p></li>
<li><p>be courteous and flexible regarding timing</p></li>
<li><p>provide a few questions beforehand.</p></li>
</ul>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/is-it-ever-okay-for-journalists-to-lie-to-get-a-story-196358">Is it ever okay for journalists to lie to get a story?</a>
</strong>
</em>
</p>
<hr>
<p>I’m looking forward to sharing these findings with my journalism students, who tend to believe that asking someone to give an interview is always a major imposition. This research is good news for established journalists too, who rarely get direct feedback about the interview experience.</p>
<p>But perhaps more importantly, it’s encouraging for people who engage with the media or have the potential to do so. The way journalists interact with politicians (who, they would argue, typically avoid answering questions) during press conferences is not reflective of the usual interview experience.</p>
<p>It might be intimidating to speak to the news media but our research suggests it’s generally a good and valuable experience.</p>
<p class="fine-print"><em><span>Kathryn Shine does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Concerns about online harassment were legitimate: 38% of participants said they had experienced trolling in response to giving a media interview.Kathryn Shine, Associate professor, Curtin UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1945032022-11-15T20:11:08Z2022-11-15T20:11:08ZImpersonation and parody: Shitposters satirically mock Elon Musk’s chaotic Twitter takeover<figure><img src="https://images.theconversation.com/files/495170/original/file-20221114-22-1p6e2m.jpg?ixlib=rb-1.1.0&rect=10%2C41%2C6979%2C4610&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Twitter users have been shitposting on the social media site to challenge Elon Musk's takeover of the platform.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Posting on Twitter has changed since Elon Musk <a href="https://www.nytimes.com/2022/10/27/technology/elon-musk-twitter-deal-complete.html">finalized his $44 billion takeover</a> of the micro-blogging platform. </p>
<p>One of Musk’s first orders as CEO: Adding <a href="https://help.twitter.com/en/managing-your-account/about-twitter-verified-accounts">opt-in paid verification</a> to the social networking platform’s Twitter Blue program. Previously, account verification was used to credibly identify people or organizations of public interest and did not require payment. </p>
<p>Musk’s changes allowed anyone on Twitter to get a blue check on their account for a monthly fee. Musk claimed the change would <a href="https://www.cnbc.com/2022/11/11/how-many-twitter-blue-subscribers-elon-musk-needs-to-make-up-losses.html">support Twitter’s revenue</a>. </p>
<p>However, the <a href="https://mashable.com/article/twitter-blue-elon-musk-subscriber-numbers">opposite appears to have taken place</a>. Within weeks of Musk’s takeover, verified users from <a href="https://www.forbes.com/sites/gustavlundbergtoresson/2022/11/14/as-twitters-changes-shakes-creators-and-brands-balenciaga-just-left-the-group-chat/?sh=418c9e0b2d15">luxury fashion house Balenciaga</a> to <a href="https://www.hollywoodreporter.com/tv/tv-news/whoopi-goldberg-quits-twitter-elon-musk-1235256829/">Whoopi Goldberg</a>, <a href="https://deadline.com/2022/11/stephen-fry-leaves-twitter-2022-goodbye-scrabble-1235167470/">Stephen Fry</a> and <a href="https://twitter.com/shondarhimes/status/1586399694896390147">showrunner Shonda Rhimes</a> announced their departures. Meanwhile, several <a href="https://fortune.com/2022/11/10/advertisers-unconvinced-after-musk-tries-to-reassure-them-that-twitter-chaos-wont-hurt-them/">major brands have paused advertising on the platform</a>. Their reasoning? Concerns around Musk permitting a “<a href="https://www.newsweek.com/leaving-elon-musk-twitter-star-trek-1758710">cesspool of hate speech</a>” to proliferate on the platform.</p>
<h2>Parody chaos</h2>
<p>Shortly after Musk introduced the new blue check program, a tweet purporting to be from American pharmaceutical giant Eli Lilly <a href="https://www.washingtonpost.com/technology/2022/11/14/twitter-fake-eli-lilly/">announced that insulin would be free</a>. Several users rejoiced in the comments, excited for the promise of accessible health care. But the tweet was never sent out by Eli Lilly, <a href="https://twitter.com/lillypad/status/1590813806275469333">whose official account is @LillyPad</a>. Instead, the tweet came from an account that registered with Twitter Blue’s paid verification program. </p>
<p>Though the fake Eli Lilly account was removed, the pharmaceutical company lost billions in value and the company’s <a href="https://www.forbes.com/sites/brucelee/2022/11/12/fake-eli-lilly-twitter-account-claims-insulin-is-free-stock-falls-43/">stock fell 4.37 per cent</a> within days.</p>
<p>Musk’s companies Tesla and SpaceX were also parodied, <a href="https://www.newsweek.com/fake-spacex-tweets-troll-elon-musk-over-government-subsidies-penis-size-1759117">with numerous tweets directly mocking Musk</a>.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1590813806275469333&quot;}"></div></p>
<p><a href="https://www.forbes.com/sites/brianbushard/2022/11/11/kari-lake-lockheed-martin-and-eli-lilly-here-are-the-companies-celebrities-and-politicians-impersonated-in-twitter-blue-chaos/?sh=3d90aac83871">Banana producer Chiquita</a>, <a href="https://mashable.com/article/twitter-fake-verified-posts-worse-elon-musk">American Girl</a>, <a href="https://economictimes.indiatimes.com/markets/stocks/news/billions-of-dollars-lost-how-twitter-blue-troubled-investors-on-wall-street/articleshow/95474009.cms">Lockheed Martin</a> and other corporations also found themselves satirized by Twitter Blue accounts.</p>
<p>In response to the impersonations, Musk <a href="https://www.cnbc.com/2022/11/11/twitter-blue-subscription-disappears-from-app.html">paused the paid verification program</a>. Musk has also <a href="https://mashable.com/article/twitter-gray-check-back">been inconsistent about the new gray “official” check verification</a>. The gray check was likewise introduced to deal with the impersonations, but Musk soon tweeted that he’d “<a href="https://twitter.com/elonmusk/status/1590383366213611522">killed it</a>” after the announcement instigated more trolling. It has since returned to some verified accounts.</p>
<h2>What is shitposting?</h2>
<p>These playful impersonations aren’t coincidental: they are a form of dissent against Musk’s leadership. In response to Musk becoming CEO, users have turned the platform itself into a space to challenge dominant ideas about capitalism and power. </p>
<p>The fake verified accounts are <a href="https://www.urbandictionary.com/define.php?term=Shitposting">forms of shitposting</a>: a crass, provocative style of digital communication. Relying on parody and mockery, shitposting <a href="https://techcrunch.com/2016/09/23/papa-whats-a-shitpost/">attempts to disturb and derail</a> typical ways of posting on social media platforms.</p>
<p>Shitposting traces its roots to <a href="https://medium.com/swlh/a-brief-history-of-internet-culture-and-how-everything-became-absurd-6af862e71c94">early 2000s internet cultures</a>. It’s often associated with <a href="https://doi.org/10.5204/mcj.2786">trolling and other forms of hate speech</a> circulating on message board platforms like 8chan. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1590817833503842306&quot;}"></div></p>
<p>However, shitposting can also be a form of digital protest. Communication scholar <a href="https://doi.org/10.1080/10462930500382708">Josh Gunn</a> explains that “shitTexts” are rhetorical practices that use irony and detraction to catalyze conversations about power and capitalism. Similarly, shitposts can help us blow off steam about political events we have little control over.</p>
<p>Likened to <a href="https://www.polygon.com/2018/12/17/18142124/shitposting-memes-dada-art-history">the Dadaist art movement</a>, shitposts also use play, absurdity and irony to challenge grand narratives about art and economic life. </p>
<h2>Digital public square</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/495219/original/file-20221114-24-87a1zb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Book cover showing a cartoon drawing on a young Black person with purple hair looking at a virtual red screen in their hand." src="https://images.theconversation.com/files/495219/original/file-20221114-24-87a1zb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/495219/original/file-20221114-24-87a1zb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=899&fit=crop&dpr=1 600w, https://images.theconversation.com/files/495219/original/file-20221114-24-87a1zb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=899&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/495219/original/file-20221114-24-87a1zb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=899&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/495219/original/file-20221114-24-87a1zb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1130&fit=crop&dpr=1 754w, https://images.theconversation.com/files/495219/original/file-20221114-24-87a1zb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1130&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/495219/original/file-20221114-24-87a1zb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1130&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Distributed Blackness, African American Cybercultures by André Brock, Jr.</span>
<span class="attribution"><a class="source" href="https://nyupress.org/9781479829965/distributed-blackness/">(NYU Press)</a></span>
</figcaption>
</figure>
<p>Since its launch in 2006, Twitter has been a <a href="https://www.technologyreview.com/2022/11/11/1063162/twitters-imminent-collapse-could-wipe-out-vast-records-of-recent-human-history/">digital public square</a> for its <a href="https://financesonline.com/number-of-twitter-users/">330 million monthly users</a>. Users build community in different <a href="https://doi.org/10.1080/19376529.2015.1083373">enclaves</a>, groups organized around shared identities or common interests.</p>
<p>In his groundbreaking work on Black Twitter, media scholar <a href="https://nyupress.org/9781479829965/distributed-blackness/">André Brock, Jr.</a> explains how the platform’s longevity is sustained by ordinary users whose playful use of Twitter gives them power and agency in ways offline spaces can’t. </p>
<p>Twitter, host to digital movements like <a href="https://disabilityvisibilityproject.com/2017/01/23/dvp-interview-gregg-beratan-andrew-pulrang-alice-wong/">#CripTheVote</a>, amplifies important conversations that don’t always get attention in mainstream media. </p>
<h2>A chaotic takeover and the many Musks</h2>
<p>Many dramatic changes accompanied Musk’s arrival. The self-proclaimed “<a href="https://mashable.com/article/chief-twit-elon-musk-twitter-layoffs">Chief Twit</a>” dismissed nearly <a href="https://www.washingtonpost.com/business/chief-twit-elon-musk-makesa-mostly-disastrous-start/2022/10/31/2b32490a-5943-11ed-bc40-b5a130f95ee7_story.html">half of Twitter’s employees</a>. Content moderation and harassment issues quickly rose, <a href="https://www.hrw.org/news/2022/11/12/musk-chaos-raises-serious-rights-concerns-over-twitter">threatening safety and security</a> for marginalized users. </p>
<p>But Twitter users are not always concerned with reproducing offline hierarchies of power — even public-facing personas regularly interact with everyday users.</p>
<p>The limited character count of a tweet means all users rely on creative strategies to communicate their messages. For instance, on pre-Musk Twitter, verified users had the option to edit their display names. The name field is a playful space, used to creatively conceal an identity or to temporarily partake in a viral platform trend (<a href="https://www.theverge.com/2017/10/5/16430496/twitter-halloween-names-best-memes">like spooky Halloween names</a>).</p>
<p>In response to Musk’s verification changes, many users including <a href="https://www.cbc.ca/radio/day6/elon-musk-twitter-impersonations-1.6648071">cartoonist Jeph Jacques</a> and <a href="https://www.theguardian.com/technology/2022/nov/07/twitter-will-ban-permanently-suspend-impersonator-accounts-elon-musk-says-as-users-take-his-name">comedian Kathy Griffin</a> changed their display names to “Elon Musk,” tweeting out provocative and offensive statements impersonating the CEO. <a href="https://www.theverge.com/2022/11/7/23446171/screen-name-twitter-musk-parody-whoops">Many other users joined them</a>. Jacques and Griffin’s accounts have both been suspended. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1592330798740668419&quot;}"></div></p>
<p>After assuming leadership, Musk, a self-proclaimed <a href="https://www.theguardian.com/technology/2022/apr/14/how-free-speech-absolutist-elon-musk-would-transform-twitter">free-speech absolutist</a>, publicly announced that “<a href="https://twitter.com/elonmusk/status/1586104694421659648?lang=en">comedy is now legal on Twitter</a>.” </p>
<p>But his desire for the platform to be a space for free speech was short lived. On Nov. 6, Musk, <a href="https://twitter.com/elonmusk/status/1590884973535711232">who previously warned users against parodying</a> his likeness, announced <a href="https://twitter.com/elonmusk/status/1589401231545741312">verified users would lose their blue checks</a> if they attempted to change their display name. Verified users are now <a href="https://www.cnet.com/news/social-media/twitter-disables-ability-to-change-account-names-remove-blue-checkmarks/">unable to change their display names</a>.</p>
<p>Singer Doja Cat, <a href="https://www.insider.com/doja-cat-begs-elon-musk-change-twitter-name-back-2022-11">whose name was stuck as “Christmas,”</a> publicly tweeted at Musk for assistance. When he permitted the change, Doja Cat <a href="https://www.rollingstone.com/music/music-news/doja-cat-stuck-as-christmas-twitter-elon-muck-1234628574/">changed her display name to “fart”</a> and thanked Musk.</p>
<h2>Twitter’s future</h2>
<p>The platform’s future remains uncertain. Some claim Musk is <a href="https://www.pbs.org/newshour/show/twitter-faces-uncertain-future-after-tumultuous-start-to-elon-musks-ownership">driving Twitter into the ground</a>, while others fear it will become <a href="https://www.revolt.tv/article/2022-10-28/248845/black-twitter-is-stunned-by-racist-tweets-taking-over-app-after-elon-musk-purchase/">yet another space for white supremacist hate</a>. </p>
<p>Things aren’t just chaotic online. Musk’s warning about <a href="https://www.npr.org/2022/11/12/1136205315/musk-twitter-bankruptcy-how-likely">Twitter’s looming bankruptcy</a> compounds the waves of impersonation and hate speech that have followed his takeover. </p>
<p>Is shitposting the most pragmatic way to engage in public dissent? Probably not. However, through small acts of play, satire and parody, Twitter shitposters demonstrate the platform’s unique potential to spark cultural conversations about power — with a twist of provocation. </p>
<p class="fine-print"><em><span>Jess Rauchberg previously received funding from the Natural Sciences and Engineering Research Council (NSERC).</span></em></p>Elon Musk’s new paid verification program resulted in the widespread use of parody accounts by shitposters on the social media platform.Jess Rauchberg, Doctoral Candidate, Communication Studies and Media Arts, McMaster UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1879482022-08-02T10:36:09Z2022-08-02T10:36:09ZLove Island: the psychological challenges contestants – and viewers – could face after the show is over<figure><img src="https://images.theconversation.com/files/477120/original/file-20220802-24-ufavuu.jpg?ixlib=rb-1.1.0&rect=126%2C14%2C1790%2C1060&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Love Island winners Ekin-Su and Davide will leave the villa £50,000 richer.</span> <span class="attribution"><a class="source" href="https://www.itv.com/presscentre/itvpictures/galleries/love-island-ep57-week-31-2022-sat-30-jul-fri-05-aug">ITV Plc</a></span></figcaption></figure><p>The finale of ITV’s Love Island was watched by millions of fans, many commenting live on social media as Ekin-Su Cülcüloğlu and Davide Sanclimenti were <a href="https://www.theguardian.com/tv-and-radio/2022/aug/01/ekin-su-culculoglu-and-davide-sanclimenti-voted-love-island-winners">awarded the £50,000 prize</a>. The four couples who made the final will now leave the Majorca villa where they’ve kissed, cried and cracked on for the past eight weeks. When they enter the outside world, they will be met with massive amounts of attention. </p>
<p>Some of this is positive – lucrative business opportunities, partnerships with popular brands and thousands of new followers on social media. Other attention will be in the form of online abuse and trolling from viewers. </p>
<p>Love Island (and indeed, all reality television) is an interesting case study in psychology, from the social experiment of isolating people in one house for a period of time, to the relationship between audience and contestant. The blurred line between reality and fiction creates a strong fan attachment to the show, but also contributes to mental health issues for contestants themselves.</p>
<hr>
<figure class="align-right ">
<img alt="Quarter life, a series by The Conversation" src="https://images.theconversation.com/files/451343/original/file-20220310-13-1bj6csd.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/451343/original/file-20220310-13-1bj6csd.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/451343/original/file-20220310-13-1bj6csd.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/451343/original/file-20220310-13-1bj6csd.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/451343/original/file-20220310-13-1bj6csd.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/451343/original/file-20220310-13-1bj6csd.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/451343/original/file-20220310-13-1bj6csd.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
</figcaption>
</figure>
<p><em><strong><a href="https://theconversation.com/uk/topics/quarter-life-117947?utm_source=TCUK&utm_medium=linkback&utm_campaign=UK+YP2022&utm_content=InArticleTop">This article is part of Quarter Life</a></strong>, a series about issues affecting those of us in our twenties and thirties. From the challenges of beginning a career and taking care of our mental health, to the excitement of starting a family, adopting a pet or just making friends as an adult. The articles in this series explore the questions and bring answers as we navigate this turbulent period of life.</em></p>
<p><em>You may be interested in:</em></p>
<p><em><a href="https://theconversation.com/how-your-spanish-holiday-could-be-quite-different-this-year-and-why-that-matters-186073?utm_source=TCUK&utm_medium=linkback&utm_campaign=UK+YP2022&utm_content=InArticleTop">How your Spanish holiday could be quite different this year – and why that matters</a></em></p>
<p><em><a href="https://theconversation.com/five-dating-tips-from-the-georgian-era-186847?utm_source=TCUK&utm_medium=linkback&utm_campaign=UK+YP2022&utm_content=InArticleTop">Five dating tips from the Georgian era</a></em></p>
<p><em><a href="https://theconversation.com/three-ways-to-tackle-the-sunday-scaries-the-anxiety-and-dread-many-people-feel-at-the-end-of-the-weekend-187313?utm_source=TCUK&utm_medium=linkback&utm_campaign=UK+YP2022&utm_content=InArticleTop">Three ways to tackle the ‘Sunday scaries’, the anxiety and dread many people feel at the end of the weekend</a></em></p>
<hr>
<p>Like soap operas, reality shows are made up of storylines that follow characters (though they may be real people). Viewers watching hours of these programmes can develop attachments to the characters, where they feel they are “one” with the people on screen. </p>
<p>Psychologists describe this as a parasocial relationship, a one-sided, unreciprocated friendship or connection to a person they only know through a screen. <a href="https://reader.elsevier.com/reader/sd/pii/S2352250X22000082?token=3767850506D9AC64E15AB5191E2D26520C40804612BFC33289C69D2CE5549CCEE2D198DBA31467619B1E49B83F79C047&originRegion=eu-west-1&originCreation=20220729141227">Research has found</a> that following celebrities and media figures on social media platforms may blur the lines between social and parasocial relationships. Our interaction and engagement with social media posts no longer significantly differs between close friends or famous people.</p>
<p>Viewers’ previous experiences shape what they think of a character, creating either empathy or disdain. In a parasocial relationship, a viewer may feel a closeness and connection with a person who does not know they exist, based solely on the storyline of a television show.</p>
<p>Soap actors have <a href="https://www.pressreader.com/uk/daily-star-sunday/20190825/283175790156683">discussed</a> being shouted at in the street by “fans” because of their characters’ behaviour on a scripted, fictional show. EastEnders star Louisa Lytton said the abuse is <a href="https://www.digitalspy.com/soaps/eastenders/a34542557/eastenders-louisa-lytton-fan-abuse-ruby-stacey-storyline/">a daily occurrence</a>.</p>
<p>Love Island contestants enter the villa as relative unknowns and come out to a barrage of messages from viewers, all responding to the show’s editing, the full extent of which the contestants themselves may not know. This exponential rise in awareness of them as a person, a character and a celebrity creates a dramatic and fundamental shift in their lives. Psychological support is paramount to successfully navigating their newfound fame.</p>
<p>ITV <a href="https://www.itv.com/presscentre/press-releases/love-island-confirms-duty-care-protocols">provides mental health support</a> and other resources to contestants during the filming process. As of 2022, this includes giving islanders training on “the impacts of social media and handling potential negativity”. </p>
<h2>The psychology of trolls</h2>
<p>Love Island has a long history of mental health challenges, including the deaths by suicide of <a href="https://www.vanityfair.com/style/2022/06/how-love-island-became-a-tv-reality-of-sex-fame-and-sometimes-tragedy">two former contestants</a> and former host Caroline Flack. Alex George, an ex-islander, has become the government’s first <a href="https://www.varsity.co.uk/interviews/23850">youth mental health ambassador</a>.</p>
<p>Many of the psychological challenges associated with Love Island have been linked to the social media barrage directed at contestants. Former islanders Kem Cetinay and Amber Gill now host a <a href="https://inews.co.uk/culture/television/love-island-2021-mental-health-social-media-kem-cetinay-amber-gill-interview-the-full-treatment-itv-1088557">mental health series</a>, The Full Treatment, where they discuss the abuse that comes from tweets and forums during and after the show airs. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/jJyHA7GBfDs?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Psychologists <a href="https://www.themckeownclinic.co.uk/the-psychology-of-an-online-troll/">define</a> so-called keyboard warriors or trolls as individuals in emotional turmoil who use the perceived power of anonymity to belittle others as a way of easing their internal crisis. Recent <a href="https://www.researchgate.net/profile/Delroy-Paulhus/publication/260105036_Trolls_just_want_to_have_fun/links/59e3389b0f7e9b97fbeacaf1/Trolls-just-want-to-have-fun.pdf">research</a> found keyboard warriors have personality traits associated with the dark triad of personality: narcissism, Machiavellianism and psychopathy.</p>
<p>The apparent safety of the keyboard allows people to deliver emotional and verbal abuse that would be socially unacceptable face-to-face, without fear of repercussions. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/love-islands-tasha-is-the-shows-first-deaf-contestant-heres-what-you-should-know-about-deaf-accents-187109">Love Island's Tasha is the show's first deaf contestant – here's what you should know about deaf accents</a>
</strong>
</em>
</p>
<hr>
<p>Love Island is about contestants looking for love, but it is also about looking for public approval in the form of votes to ultimately win the £50,000 prize. This thrusts contestants straight into the path of viewers’ unfiltered thoughts and comments, filled with envy, admiration and vitriol. This need for public attention makes <a href="https://www.vice.com/en/article/g5gb5b/you-cant-fix-online-troll-culture-until-you-fix-reality-tv">reality shows and their aftermath</a> a psychological minefield for participants.</p>
<h2>Responsible viewing</h2>
<p>Love Island is on six nights a week for eight weeks straight. This might also cause mental health issues for regular viewers. <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7908146/">Research</a> suggests that people who binge-watch shows become so invested in the characters’ lives and storylines that when it’s over, they can face feelings of depression, emptiness, anxiety and even loneliness. </p>
<p>But due to the 24/7 world of social media, Love Island never truly ends. Fans have ample opportunity to comment on the show and its contestants on social media. The show itself encourages this, sponsoring a forum on Reddit. </p>
<p>The contestants’ social profiles are also kept up to date by friends and family while they are in the villa, further blurring the lines between the contestants’ lives before, during and after the show. </p>
<p>It’s perfectly fine to watch the show and discuss it with friends (and strangers) online. But viewers of Love Island (or any reality programme) must remember when commenting that islanders are human too.</p><img src="https://counter.theconversation.com/content/187948/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Rachael Molitor does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Reality shows can be a psychological minefield for both participants and fans.Rachael Molitor, Behavioural Psychologist, Coventry UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1781482022-05-12T02:10:20Z2022-05-12T02:10:20ZMorrison says his anti-trolling bill is a top priority if he’s re-elected – this is why it won’t work<figure><img src="https://images.theconversation.com/files/462208/original/file-20220510-12-hrc0wj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Mick Tsikas/AAP</span></span></figcaption></figure><p>Prime Minister Scott Morrison <a href="https://www.liberal.org.au/latest-news/2022/05/01/prime-minister-transcript-press-conference-parramatta-nsw">says</a> one of his “great missions” is to make social media a safer place for young people. </p>
<p>If the Coalition is re-elected, Morrison says one of the <a href="https://www.smh.com.au/politics/federal/coalition-sets-ultimatum-for-big-tech-over-online-safety-20220501-p5ahgy.html">first pieces of legislation</a> will be an anti-trolling bill, after it was introduced but not passed in the last parliament. </p>
<p>In March, Labor said the bill needed “significant amendments”. </p>
<p>To understand if this bill will be effective in targeting trolling, we need to understand why people troll. I have been researching the psychology of internet trolls for more than seven years – this is what I have found.</p>
<h2>What does the bill propose?</h2>
<p>Last September, the High Court <a href="https://www.gtlaw.com.au/knowledge/vollers-case-high-court-affirms-media-are-liable-third-party-comments">ruled</a> Australians with a social media page can be liable for defamatory posts other people make on their page – even if they are not aware of the posts.</p>
<figure class="align-center ">
<img alt="The front entrance of the High Court in Canberra." src="https://images.theconversation.com/files/462593/original/file-20220511-22-rwga4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/462593/original/file-20220511-22-rwga4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/462593/original/file-20220511-22-rwga4q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/462593/original/file-20220511-22-rwga4q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/462593/original/file-20220511-22-rwga4q.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/462593/original/file-20220511-22-rwga4q.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/462593/original/file-20220511-22-rwga4q.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The High Court made the so-called ‘Voller’ decision in September 2021.</span>
<span class="attribution"><span class="source">Mick Tsikas/AAP</span></span>
</figcaption>
</figure>
<p>In response, the anti-trolling bill was introduced. The bill aims to <a href="https://www.ag.gov.au/legal-system/social-media-anti-trolling-bill">make it easier</a> to obtain contact details of anonymous social media users and “unmask” them. However, the online safety commissioner <a href="https://www.abc.net.au/news/2022-03-10/esafety-commissioner-anti-trolling-laws-confusion/100899130">has questioned</a> whether the bill will actually target trolling. Lawyers <a href="https://www.smh.com.au/national/top-defamation-judge-says-proposed-anti-troll-laws-a-recipe-for-disaster-20211208-p59fws.html">have also warned</a> the bill could increase legal costs and waste court time.</p>
<p>My research shows trolls have complex motivations for their behaviour, which are not addressed by the bill. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/high-court-rules-media-are-liable-for-facebook-comments-on-their-stories-heres-what-that-means-for-your-favourite-facebook-pages-167435">High Court rules media are liable for Facebook comments on their stories. Here's what that means for your favourite Facebook pages</a>
</strong>
</em>
</p>
<hr>
<h2>Who are the trolls?</h2>
<p>Today, trolling is understood to be a <a href="https://doi.org/10.1089/cyber.2019.0652">malicious, antisocial act</a> where the “troll” seeks to cause their target distress or harm. Commonly, it is a <a href="https://doi.org/10.1177/2056305120928512">form of online harassment</a>. In my research, I describe this as “malevolent trolling”.</p>
<p>In our Australian-first 2016 <a href="https://doi.org/10.1016/j.paid.2016.06.043">study</a>, we found people who engage in more trolling behaviours, such as disrupting comment sections and upsetting people, were more likely to be callous, lack guilt and personal responsibility for their actions, and enjoy harming others. That is, they had higher scores on the personality traits of <a href="https://www.sciencedirect.com/topics/neuroscience/psychopathy">psychopathy</a> and <a href="https://www.sciencedirect.com/topics/psychology/sadism">sadism</a>. We also found trolls were more likely to feel rewarded when engaging in antisocial behaviour, and enjoyed being cruel to others and creating a sense of social mayhem. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dont-feed-the-trolls-really-is-good-advice-heres-the-evidence-63657">'Don't feed the trolls' really is good advice – here's the evidence</a>
</strong>
</em>
</p>
<hr>
<p>We have <a href="https://doi.org/10.1016/j.paid.2017.06.038">also shown</a> that people who troll have lower <a href="https://link.springer.com/article/10.1007/s42761-021-00062-w">affective empathy</a> – the ability to share the emotions of others. We expected people who troll to also have low <a href="https://link.springer.com/article/10.1007/s42761-021-00062-w">cognitive empathy</a> – the ability to analytically understand the emotions of others. </p>
<p>However, we found people with high cognitive empathy combined with high psychopathy were more likely to troll. This paints a rather dangerous, malevolent portrait of the internet troll – they know what can hurt you but are less likely to experience guilt about their behaviour. </p>
<figure class="align-center ">
<img alt="Young woman looking worried on a phone." src="https://images.theconversation.com/files/462595/original/file-20220512-26-k5n5fe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/462595/original/file-20220512-26-k5n5fe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/462595/original/file-20220512-26-k5n5fe.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/462595/original/file-20220512-26-k5n5fe.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/462595/original/file-20220512-26-k5n5fe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/462595/original/file-20220512-26-k5n5fe.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/462595/original/file-20220512-26-k5n5fe.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Trolls are not likely to feel guilty for hurting others.</span>
<span class="attribution"><span class="source">www.shutterstock.com</span></span>
</figcaption>
</figure>
<p>We <a href="https://doi.org/10.1089/cyber.2019.0652">have also found</a> self-esteem is unrelated to trolling. Interestingly (and concerningly), we found self-esteem interacts with sadism – the higher an individual’s level of sadism and the higher their self-esteem, the more likely they are to troll. So, the more someone enjoys harming others and the greater their sense of self-worth, the more likely they are to troll. </p>
<p>Taken together, our findings suggest people who troll are callous, enjoy harming others, lack the ability to share the emotional pain they inflict on their targets, have a good understanding of what will hurt their targets and do not have low self-worth. </p>
<p>Based on these findings, we suggest “don’t feed the trolls” could be good advice, because letting trolls know they have caused harm likely reinforces their behaviour. </p>
<h2>Why do people troll?</h2>
<p>We can also understand trolling by applying theoretical frameworks. </p>
<p>According to <a href="https://doi.org/10.1007/978-1-4614-5690-2_218">General Strain Theory</a>, when we experience something stressful we may have an aggressive response. So trolling could be seen as a response to experiencing stress. Indeed, during the 2020 COVID lockdowns in Australia there was a <a href="https://securitybrief.com.au/story/almost-300-increase-in-harmful-online-content-cases-reported-during-pandemic">300% increase in cyber abuse reports</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/new-research-shows-trolls-dont-just-enjoy-hurting-others-they-also-feel-good-about-themselves-145931">New research shows trolls don't just enjoy hurting others, they also feel good about themselves</a>
</strong>
</em>
</p>
<hr>
<p>The <a href="https://www.abc.net.au/radionational/programs/scienceshow/understanding-internet-trolls/13277198">Broken Windows Theory</a> is also helpful here. According to this theory, the more antisocial behaviour we see, the more likely we are to engage in the behaviour ourselves. Simply, the behaviour becomes normalised. </p>
<p>In combination, General Strain Theory and Broken Windows Theory suggest people who are stressed and exposed to more instances of trolling are more likely to troll. This, in turn, normalises the behaviour, leading to even more trolling. </p>
<p>This effect can be seen in an <a href="https://doi.org/10.1145/2998181.2998213">experiment</a> by researchers from Stanford and Cornell universities. The researchers primed participants to be in a good or bad mood and then had them read online discussion forums, some with primarily negative comments. Participants were then asked to post their own comment on the discussion forum. Those who were primed to be in a bad mood and who then viewed trolling were more likely to troll themselves. </p>
<h2>What does this mean for the bill?</h2>
<p>The anti-trolling bill dangerously fails to address the complexity of the issue. Equating trolling with just defamation means the many other behaviours associated with trolling – harassment, disruption, intention to harm – would remain unlegislated. </p>
<p>But perhaps most concerning is the apparent ongoing lack of an evidence-based approach to targeting this harmful online behaviour. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-governments-planned-anti-troll-laws-wont-help-most-victims-of-online-trolling-172743">The government's planned 'anti-troll' laws won't help most victims of online trolling</a>
</strong>
</em>
</p>
<hr>
<p>An evidence-based approach would include <a href="https://theconversation.com/how-empathy-can-make-or-break-a-troll-80680">more empathy training</a> throughout schools, with a particular focus on <a href="https://edtechnology.co.uk/comments/how-to-teach-digital-empathy/">digital empathy</a>. Developing digital empathy includes increasing understanding of how the online environment can impair empathy and connection, and what strategies you can employ to overcome this. This knowledge and skill development could be embedded in all digital school curricula. </p>
<p>Cyber abuse, such as trolling and cyberbullying, has remained unchecked for too long. There is an urgent need to address and manage these harmful behaviours in a meaningful way.</p><img src="https://counter.theconversation.com/content/178148/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Evita March does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A psychologist who has been researching internet trolling for seven years explains why people troll.Evita March, Senior Lecturer in Psychology, Federation University AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1779432022-03-07T10:05:26Z2022-03-07T10:05:26ZSix things social media users and businesses can do to combat hate online<figure><img src="https://images.theconversation.com/files/449784/original/file-20220303-11-1il281n.jpg?ixlib=rb-1.1.0&rect=34%2C5%2C3888%2C2578&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/qANvvc543Tg">Lala Azizli/Unsplash</a></span></figcaption></figure><p>Online hostility has become a bigger problem over recent years, particularly with people spending more time on <a href="https://www.statista.com/topics/7863/social-media-use-during-coronavirus-covid-19-worldwide/">social media</a> during the COVID-19 pandemic. A US survey found <a href="https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/">four in ten Americans</a> have experienced harassment online – with three-quarters reporting that the most recent abuse happened on social media.</p>
<p>When online hostility happens on a continued basis it can be classified into a range of behaviours such as <a href="https://www.emerald.com/insight/content/doi/10.1108/INTR-08-2020-0462/full/html">trolling</a>, <a href="https://www.tandfonline.com/doi/full/10.1080/01972243.2021.1981507">bullying</a> and <a href="https://link.springer.com/article/10.1007/s10551-013-1806-z">harassment</a>.</p>
<p>More severe forms of online hostility can have <a href="https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/">real-world consequences</a> for those affected, such as mental and emotional distress.</p>
<p>Debates about who should be responsible for the management of online hostility have been taking place over the last decade, but with little agreement. I would argue that three different sectors need to be involved: social media platforms, the companies that host business pages on social media, and users themselves.</p>
<p>The foundation of online hostility moderation lies with social media platforms. They must continuously update their processes and features to minimise the problem. We regularly hear that social media platforms are not doing enough <a href="https://www.bbc.co.uk/news/technology-60264178">to counter online hostility</a>, and this may be true. In particular, I believe platforms could do more to educate companies and people about the available features designed to address hostility, and how to implement these appropriately. </p>
<h2>What you can do</h2>
<p>While social media platforms and businesses each play crucial roles in moderation, it’s social media users who experience hostility first-hand, either as observers or victims. </p>
<p>There is no one-size-fits-all approach to responding to online hostility, but here are three courses of action you might consider.</p>
<p><strong>1. Defend the victims</strong></p>
<p>Providing support to the victims of hostility by challenging the aggressor and asking them to stop could be a viable option in less severe instances of online hostility. Recent <a href="https://kar.kent.ac.uk/92684/">research</a> has shown that this can make the victim feel satisfied with the online brand community (for example, the Facebook fanpage) where the hostility occurred.</p>
<p>While this can be an effective way to combat hostility, and can make the victim feel supported, there’s also a risk that it can escalate the situation, with the aggressor continuing to attack the victim, or attacking you. In this case, the two options below may be better.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/social-media-helps-reveal-peoples-racist-views-so-why-dont-tech-firms-do-more-to-stop-hate-speech-140997">Social media helps reveal people's racist views – so why don't tech firms do more to stop hate speech?</a>
</strong>
</em>
</p>
<hr>
<p><strong>2. Hide, mute or block hostile content</strong></p>
<p>Hiding, muting or blocking <a href="https://www.facebook.com/help/207042374708584/?helpref=related_articles">hostile content</a> or users could be appropriate where users feel less comfortable responding, but don’t want to continue to be exposed to harmful content. </p>
<p>This isn’t just for victims. We know harassment doesn’t have to be <a href="https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/">experienced directly</a> to be upsetting. This option puts the user in control of the situation and allows them to either temporarily or permanently block hostility (depending on whether it’s a one-off or happening frequently).</p>
<p><strong>3. Report hostile content</strong></p>
<p>In instances of severe and repeated hostility, <a href="https://help.twitter.com/en/safety-and-security/report-abusive-behavior">reporting content and users</a> to companies or platforms is a suitable option. This requires the user to describe the incident and type of hostility that has occurred.</p>
<figure class="align-center ">
<img alt="A woman looks at her smartphone, appears unhappy." src="https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=405&fit=crop&dpr=1 600w, https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=405&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=405&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=509&fit=crop&dpr=1 754w, https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=509&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=509&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Experiences of online hostility can affect a person’s mental wellbeing.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/unhappy-african-american-lady-looking-her-1823290100">Prostock-studio/Shutterstock</a></span>
</figcaption>
</figure>
<h2>What businesses can do</h2>
<p>Companies that manage social media pages can also block and report content and users, but they have other tools at their disposal, too.</p>
<p>For example, social media platforms enable companies to self-moderate their business pages by blocking offensive words from appearing. Businesses and brands that manage <a href="https://www.facebook.com/help/1182883832161405">a Facebook page</a> can choose up to 1,000 keywords to block in any language (these can include words, phrases and even emojis). If a user posts a comment containing one of the blocked words, their post will not be shown unless the page’s administrator chooses to publish it.</p>
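The keyword-blocking mechanism described above can be illustrated with a short sketch. This is a simplified illustration, not Facebook’s actual implementation – the blocked words and sample comments below are invented for the example.

```python
# Sketch of keyword-based comment moderation: a comment containing any
# blocked word is held back until an administrator approves it.
# The word list and comments are hypothetical examples.

BLOCKED_WORDS = {"idiot", "loser"}  # illustrative moderation list

def is_held_for_review(comment: str) -> bool:
    """Return True if the comment contains a blocked word and should be
    hidden pending administrator approval."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return not words.isdisjoint(BLOCKED_WORDS)

comments = [
    "Great article, thanks for sharing!",
    "You absolute idiot, this is wrong.",
]
# Only comments that pass the filter are shown publicly.
visible = [c for c in comments if not is_held_for_review(c)]
```

As the article notes, exact word matching like this is easy to evade with misspellings or sarcasm, which is why platforms also rely on human moderators.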
<p>While these tools may help to a degree, automated platform features alone are not enough. Technology is increasingly sophisticated, but it’s difficult for machines to determine whether a particular comment or post is appropriate or not, regardless of the language used. Platforms also rely on human moderators, but these are <a href="https://fortune.com/2018/03/22/human-moderators-facebook-youtube-twitter/">a finite resource</a>.</p>
<p>As part of my research into hostility moderation, I have looked at the <a href="https://www.tandfonline.com/doi/abs/10.1080/0267257X.2017.1329225">different strategies</a> which companies and brands are choosing to adopt. These include:</p>
<ol>
<li><p><strong>Impartial or neutral strategies</strong> mean the companies do not take a particular side during incidents, but provide further information on the topic at the root of the hostility.</p></li>
<li><p><strong>Cooperative moderation strategies</strong> involve reinforcing positive comments and interactions by acknowledging those users who support others during incidents of hostility. </p></li>
<li><p><strong>Authoritative strategies</strong> focus on moderating hostility by referring to the business page engagement rules and, in more extreme instances, by temporarily or permanently blocking users from posting comments. </p></li>
</ol>
<p>My <a href="https://www.sciencedirect.com/science/article/pii/S1094996820301006">research</a> has also found that an authoritative approach to moderation, in requesting users to interact in a more civil manner, generates the most positive attitudes towards the company, and a perception that it has a level of social responsibility.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-facebook-isnt-telling-us-about-its-fight-against-online-abuse-96818">What Facebook isn't telling us about its fight against online abuse</a>
</strong>
</em>
</p>
<hr>
<p>Ultimately, we all have a role to play to address hostility online. Social media platforms are not perfect, but they have made moderation tools widely available, and we should use them where it’s warranted.</p><img src="https://counter.theconversation.com/content/177943/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Denitsa Dineva does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>We all have a role to play to address hostility online.Denitsa Dineva, Lecturer in Marketing and Strategy, Cardiff UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1745692022-01-28T13:18:24Z2022-01-28T13:18:24ZOnline abuse in sport: why athletes are targeted and how they can end up winning<p>For some sports stars, a certain level of adulation is just part of the job. But many are also now subjected to abuse and malicious campaigns on social media. It recently emerged that Liverpool FC have <a href="https://www.dailymail.co.uk/sport/football/article-10430879/Liverpool-hire-therapist-help-players-deal-vile-online-trolling.html">hired a therapist</a> to help players deal with the effects of online trolling.</p>
<p><a href="https://www.tandfonline.com/doi/full/10.1080/23750472.2021.2004210">Our research</a> examined the triggers for this kind of abuse, and how to deal with it. We looked at messages received on Twitter by the likes of Roger Federer and Cristiano Ronaldo, as well as other big names from a variety of sports. </p>
<p>Using a technique called “<a href="https://www.cs.uic.edu/%7Eliub/FBS/NLP-handbook-sentiment-analysis.pdf">sentiment analysis</a>” to rank posts from negative to positive, we found almost all the athletes were subjected to some level of abuse, involving criticism of their professional performances as well as personal insults. </p>
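The simplest form of sentiment analysis scores text against a lexicon of positive and negative words. The toy sketch below shows the general idea of ranking posts from negative to positive; the word scores are invented for illustration, and real studies use far richer lexicons and statistical models.

```python
# Toy lexicon-based sentiment scorer: sum per-word scores, then rank posts.
# The lexicon values here are hypothetical, for illustration only.

LEXICON = {"lazy": -1, "overrated": -1, "bad": -1,
           "great": 1, "brilliant": 1}

def sentiment_score(post: str) -> int:
    """Sum the lexicon scores of the words in a post (unknown words score 0)."""
    tokens = [w.strip(".,!?").lower() for w in post.split()]
    return sum(LEXICON.get(t, 0) for t in tokens)

posts = ["So lazy and overrated", "What a brilliant performance"]
ranked = sorted(posts, key=sentiment_score)  # most negative first
```

Ranking posts this way makes it possible to flag the most negative messages for closer qualitative analysis, as in the study described above.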
<p>At the milder end, a tweet might describe the target as “lazy and overrated”, or as someone who “never fails to disappoint in being so bad”. There were of course far more aggressive posts, many of which were laden with expletives. </p>
<p>Our study found that two events appeared to spark a peak in online abuse, both of which related to sporting performance. The first was a poor performance, and the second was a good performance – from supporters of a defeated rival team or player. </p>
<p>Our research also suggests that trolls are most likely to engage in abusive behaviour during or immediately after high stakes sporting competitions which provoke feelings of high anxiety or stress. But even away from the spotlight of sporting duties, athletes’ personal lives attract abuse, rumour and innuendo, with comments often targeted at a family member or friend. One athlete received abuse directed at their children. </p>
<p>Attempts have been made by various bodies to address this. In football for example,
a large number of clubs and players took part in a four-day <a href="https://www.bbc.co.uk/sport/football/56872469">social media boycott</a> in April 2021. The aim was to force social media companies to do more to tackle online abuse, with stronger preventative measures as well as greater accountability and consequences. </p>
<p>Yet just a few months later, following England’s defeat to Italy in the Euro 2020 final, a <a href="https://www.theguardian.com/football/2021/jul/15/englands-bukayo-saka-urges-social-media-platforms-to-act-after-racial-abuse">torrent of racist abuse</a> was directed online towards Marcus Rashford, Jadon Sancho and Bukayo Saka after they missed penalties. This in turn led to shows of support for those players both online and <a href="https://www.skysports.com/football/news/11661/12376666/saka-applauded-by-spurs-fans-nuno-hopeful-of-kane-talks">in football stadiums</a>. But our research suggests some pre-emptive measures may also help.</p>
<h2>A new training regime</h2>
<p>Athletes, particularly upcoming stars, would benefit from specific training on the challenges posed by social media platforms and how to manage their online profiles. These could be based around the Professional Footballers Association’s <a href="https://www.thepfa.com/players/union-support/social-media">own recommendations</a>, which include advice on how to avoid getting drawn into online confrontations and how content may be misreported or misinterpreted.</p>
<p>Another solution has recently been <a href="https://www.smrfoundation.org/blog/">proposed by</a> the <a href="https://www.smrfoundation.org/">Social Media Research Foundation</a>, which argues that the power of moderation should be put into the hands of all users. This would allow athletes to select their own content moderators, so abusive messages could be hidden from view. A similar process exists on some online forums which have engaged members or employees who moderate and filter comments. </p>
<p>Athletes and governing bodies have <a href="https://www.skysports.com/football/news/11095/12290747/online-hate-pfa-data-raises-significant-concern-over-twitter-response-to-social-media-abuse-of-players">raised concerns</a> about the negative psychological effects of Twitter for professional athletes. And although a lot of attention has been focused on the roles of social media platforms, little has changed. So perhaps it is time to offer expertise and guidance directly to the potential victims.</p>
<p>Athletes need help to manage their personal profiles and media exposure to lessen the potential for abuse and reduce its impact. This doesn’t have to mean more boycotts. Our research suggests that better engagement with fans, and a sense of personal connection, can help counter fans’ perception that there are no consequences for their conduct towards athletes online.</p>
<p>Nor should it be forgotten that social media platforms also provide <a href="https://theconversation.com/tyson-fury-defeated-deontay-wilder-in-the-social-media-fight-as-well-as-in-the-ring-132548">incredible opportunities</a> for <a href="http://alexfenton.co.uk/ronaldo-returns-to-united-breaking-social-media-records/">individual sports stars</a>, who can attract huge numbers of followers, opening up lucrative sponsorship opportunities. They can help foster connections with grassroots sports and inspire young athletes. And, as one of the most high-profile victims of online abuse, Marcus Rashford, has shown, they can <a href="https://www.bbc.co.uk/news/education-58632162">raise awareness of political causes</a>.</p><img src="https://counter.theconversation.com/content/174569/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>How clubs and support teams can help deflect the negative impact of social media.Wasim Ahmed, Senior Lecturer in Digital Business, University of StirlingJenny Meggs, Lecturer in Sports Psychology, Lancaster UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1728782021-11-30T12:12:22Z2021-11-30T12:12:22ZParliamentary inquiry to put behaviour of ‘big tech’ under scrutiny<p>The Morrison government is setting up a parliamentary inquiry to put big tech companies “under the microscope” over dangers posed to people’s wellbeing by toxic material on their sites. </p>
<p>In its latest strike against big tech, Scott Morrison said the move built on the weekend announcement that the government would legislate “to unmask anonymous online trolls”. </p>
<p>“Mums and dads are rightly concerned about whether big tech is doing enough to keep their kids safe online,” he said.</p>
<p>He said big tech “has big questions to answer. But we also want to hear from Australians – parents, teachers, athletes, small businesses and more – about their experience, and what needs to change.”</p>
<p>The House of Representatives select committee on social media and online safety will be chaired by Lucy Wicks, MP for the NSW seat of Robertson. It will begin hearings in December and report in mid-February.</p>
<p>In political terms, the government believes it is tapping into strong community concerns about the conduct of big tech and the risks posed to children.</p>
<p>The inquiry is expected to invite evidence from Tayla Harris, Adam Goodes and Erin Molan, who have been victims of trolling. </p>
<p>Facebook whistleblower Frances Haugen will also be invited. A former employee of Facebook, Haugen earlier this year left the company taking a massive trove of documents, including research reports, which she provided to the media. She later outed herself as the source. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/scanlon-survey-shows-community-fears-about-covid-can-spike-quickly-as-governments-face-omicron-172769">Scanlon survey shows community fears about COVID can spike quickly, as governments face Omicron</a>
</strong>
</em>
</p>
<hr>
<p>She said: “The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimise for its own interests, like making more money.”</p>
<p>She has provided material to official bodies in the US, and given evidence to British and European parliamentary hearings as well as congressional hearings.</p>
<p>Communications Minister Paul Fletcher said, “The troubling revelations from a Facebook whistleblower have amplified existing concerns in the community”. </p>
<p>He said organisations and individuals would have an opportunity through the inquiry “to air their concerns” and big tech would have the opportunity “to account for its own conduct”. Facebook, Instagram, Twitter, and TikTok will be asked to take part. </p>
<p>Australia had led the world in regulating social media, Fletcher said. It had established the world’s first dedicated online safety watchdog in 2015. This year the Online Safety Act had been passed to give the eSafety Commissioner stronger powers to direct the removal of online abuse. </p>
<p>Assistant Minister to the Prime Minister for Mental Health and Suicide Prevention, David Coleman, accused social media platforms of putting profits ahead of children’s safety. </p>
<p>He said even before COVID, in Australia there had been an increase in signs of distress and mental ill health among young people. Social media was part of this problem. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/who-decides-when-parliament-sits-and-what-happens-if-it-doesnt-172861">Who decides when parliament sits and what happens if it doesn't?</a>
</strong>
</em>
</p>
<hr>
<p>“In a 2018 Headspace survey of over 4000 young people aged 12 to 25, social media was nominated as the main reason youth mental health is getting worse. And the recent leak of Facebook’s own internal research demonstrates the impact social media platforms can have on body image and the mental health of young people.</p>
<p>"We know that we can’t trust social media companies to act in the best interests of children, so we’re going to force them to,” Coleman said.</p>
<p>Under its terms of reference the inquiry will look at the range of harms that may be faced by Australians on social media and other online platforms, including harmful content and harmful conduct. </p>
<p>It will investigate the potential harm to mental health and wellbeing, and the extent to which algorithms used by platforms permit, increase or reduce harm. </p>
<p>Also under scrutiny will be identity verification and age assurance policies, and the effectiveness and takeup of industry measures to keep people, especially children, safe online, and to give parents the tools for protecting their children. </p>
<p>An exposure draft of the anti-trolling bill will be released on Wednesday. </p>
<p>Under this legislation, social media platforms would have to reveal the identity of those posting defamatory material anonymously.</p>
<p>Morrison said earlier this week, “The online world should not be a Wild West where bots and bigots and trolls and others are anonymously going around and can harm people and hurt people”.</p>
<p class="fine-print"><em><span>Michelle Grattan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The Morrison government is setting up a parliamentary inquiry to put big tech companies “under the microscope” over dangers posed to people’s wellbeing by toxic material on their sites.Michelle Grattan, Professorial Fellow, University of CanberraLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1727432021-11-29T04:10:53Z2021-11-29T04:10:53ZThe government’s planned ‘anti-troll’ laws won’t help most victims of online trolling<figure><img src="https://images.theconversation.com/files/434348/original/file-20211129-21-1cyuuci.jpeg?ixlib=rb-1.1.0&rect=0%2C7%2C5000%2C3315&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Yesterday, Prime Minister Scott Morrison and Attorney-General Michaelia Cash <a href="https://www.attorneygeneral.gov.au/media/media-releases/combatting-online-trolls-and-strengthening-defamation-laws-28-november-2021">announced</a> proposed new legislation aimed at making online “trolls” accountable for their actions. </p>
<p>Over the past few weeks, we’ve heard Morrison decry trolls as “cowardly” and “un-Australian”, language that made it into the talking points at yesterday’s media conference. But is his new-found concern about trolling all it’s cracked up to be?</p>
<p>The proposed new legislation would give courts the power to force social media companies to pass on to people the details of their trolls, so they can pursue defamation action against them. </p>
<p>This decision is largely a reaction to the High Court’s <a href="https://theconversation.com/high-court-rules-media-are-liable-for-facebook-comments-on-their-stories-heres-what-that-means-for-your-favourite-facebook-pages-167435">upholding</a> of the ruling in the Dylan Voller case, which now holds media companies responsible for defamatory comments posted on their social media pages. But there are some things that we need to be wary of in this legislation.</p>
<h2>Defamation isn’t the same as trolling</h2>
<p>Speaking to the media yesterday, Morrison argued this legislation is a necessary means to curb online trolling. But the policy proposal largely deals with issues of defamation, which isn’t necessarily the same thing. </p>
<p>As I have <a href="https://theconversation.com/the-media-dangerously-misuses-the-word-trolling-79999">previously pointed out</a>, trolling is a grossly overused term that encompasses a range of activities. Defamation, meanwhile, is far more specific and legally defined. To prove defamation, one has to prove the content posted has damaged the victim’s reputation. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/high-court-rules-media-are-liable-for-facebook-comments-on-their-stories-heres-what-that-means-for-your-favourite-facebook-pages-167435">High Court rules media are liable for Facebook comments on their stories. Here's what that means for your favourite Facebook pages</a>
</strong>
</em>
</p>
<hr>
<p>Framing this announcement in the context of the very real harms of targeted online bullying and harassment is, I believe, disingenuous. I say this because those who suffer this kind of harassment aren’t likely to be bringing defamation suits. In short, this legislation won’t necessarily help them.</p>
<p>What’s more, a version of the newly announced powers already exists anyway. The recent <a href="https://www.esafety.gov.au/sites/default/files/2021-07/Online%20Safety%20Act%20-%20Fact%20sheet.pdf">Online Safety Act 2021</a> allows the e-Safety Commissioner to order social media companies to remove bullying or harassing content within 24 hours, or face a A$555,000 fine. Crucially, it also gives the commissioner powers to demand information about the owners of anonymous accounts who engage in online abuse.</p>
<p>Where social media companies fail to provide information about the offending poster, the newly announced laws would see them held accountable for the defamatory content. But that assumes they know this information in the first place.</p>
<p>Social media companies already collect users’ details on sign-up, including their name, email address, country of residence and, increasingly, telephone number. But for many social media platforms, there is nothing to stop users setting up an account with a fake name, using a throwaway email address or a “burner” phone, and then ditching all of that but maintaining the account once the information has been initially verified.</p>
<p>Even if the information provided is correct, it doesn’t mean the person will necessarily answer their phone or respond to an email. As one journalist asked yesterday, should social media companies be held accountable in that instance? The standard <a href="https://community.hrdaily.com.au/profiles/blogs/putting-the-reasonable-person-to-the-test">“reasonable person” assessment in law</a> would likely find not, meaning any defamation action brought against the company itself would likely fail.</p>
<h2>Social media ID laws by stealth</h2>
<p>My main concern with this proposed legislation is that it will prompt social media companies to collect enough information on their users so they become readily identifiable upon request. This seems a very similar concept to the government’s suggestion earlier this year that Australians who set up social media accounts should have to provide 100 points of identification. </p>
<p>That proposal was met with a <a href="https://www.smh.com.au/politics/federal/it-s-a-long-bow-social-media-id-push-dubbed-a-privacy-risk-20210402-p57g7d.html">barrage of criticism</a>, both for reasons of simple privacy, and because some experts, including myself, believe removing anonymity <a href="https://theconversation.com/ending-online-anonymity-wont-make-social-media-less-toxic-172228">won’t fix online toxicity anyway</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ending-online-anonymity-wont-make-social-media-less-toxic-172228">Ending online anonymity won't make social media less toxic</a>
</strong>
</em>
</p>
<hr>
<p>The other real issue, ironically enough, is one of user safety. Yes, online anonymity gives trolls a mask to hide behind, but it also allows people to access support for addiction or mental health issues, for example, or for a young LGBTQI+ person in fear of real-world violence or disapproval to find a community online. Online anonymity can be a crucial shield for victims of domestic violence who want to avoid being found by their abusers.</p>
<p>Forcing social media companies to provide users’ details to a court also opens up the possibility of “abuse of process”. This is where the legal process itself is used as a form of intimidation and bullying or, worse, for an abuser to gain access to their victim. The government has assured us the policy will contain safeguards against this, but has provided no detail so far on how this will be achieved.</p>
<p>Finally, it’s worth noting that several of the highest-profile current plaintiffs in Australian defamation cases involving social media are to be found among the government itself. So while it might sound cynical, we’re entitled to wonder whom this policy is really designed to help.</p>
<p class="fine-print"><em><span>Jennifer Beckett does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The government’s plan to make social media companies hand over trolls’ details aims to make it easier for victims to sue their harassers for defamation. But this conflates two very different concepts.Jennifer Beckett, Lecturer in Media and Communications, The University of MelbourneLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1713742021-11-17T15:33:31Z2021-11-17T15:33:31ZOnline anonymity: study found ‘stable pseudonyms’ created a more civil environment than real user names <figure><img src="https://images.theconversation.com/files/430598/original/file-20211106-757-1w7ad09.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C1000%2C437&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>The ability to remain anonymous when commenting online is a double-edged sword. It is valuable because it enables people to speak without fear of social and legal discrimination. But this is also what makes it dangerous. Someone from a repressive religious community can use anonymity to talk about their sexuality, for example. But someone else can use anonymity to hurl abuse at them with impunity. </p>
<p>Many people focus on the dangers of online anonymity. Back in 2011, Randi Zuckerberg, sister of Mark and (then) marketing director of Facebook, said that for safety’s sake, <a href="https://www.huffingtonpost.co.uk/entry/randi-zuckerberg-anonymity-online_n_910892">“anonymity on the internet has to go away”</a>. Such calls appear <a href="https://www.theguardian.com/society/2018/jun/11/labour-mp-jess-phillips-calls-for-end-to-online-anonymity-after-600-threats">again</a> and <a href="https://www.politico.eu/article/ending-anonymity-is-not-easy-for-uk-ministers/">again</a>. Behind them is a common intuition: that debate would be more civil and constructive if people used their real names.</p>
<p>But my research with colleagues suggests that anonymity – under certain conditions – can actually make for more civil and productive online discussion. This surprising result came out of a <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/jopp.12149">study</a> looking at the deliberative quality of comments on online news articles under a range of different identity rules. </p>
<p>We built a data set of 45 million comments on news articles on the Huffington Post website between January 2013 and February 2015. During this period, the site moved from a regime of easy anonymity to registered pseudonyms and finally to outsourcing their comments to Facebook. This created three distinct phases. </p>
<p>In the initial phase users could easily set up multiple accounts. The comment space was, at that time, a troll’s paradise. People could read an article, quickly create a username, and post whatever they wanted. If moderators blocked that username for abusive behaviour, the person (or even bot) behind it could just make another, and then another, and so on. This led to a space that was unpleasant for users. So the website <a href="https://www.huffpost.com/entry/why-is-huffpost-ending-an_b_3817979?guccounter=1">began to make changes</a>.</p>
<p>In the second phase, users had to authenticate their accounts, but did not have to use their real name with their comments. That meant they could be anonymous to other users but could be identified by the platform. If they behaved badly and were blocked, they couldn’t just make a new account and carry on – at least, not without creating a new authenticating account on Facebook. This made personas on this commenting space less disposable. They became “stable pseudonyms”. </p>
<p>In the third phase, the commenting system was outsourced to Facebook. Huffington Post usernames were replaced with users’ Facebook names and avatars. Depending on settings, comments might appear on users’ Facebook feeds. While not everyone has their own face on their profile picture, and not everyone even uses their real name on their account, many users do. This third phase therefore roughly approximates a real-name environment. </p>
<h2>Keeping it friendly</h2>
<p>We looked initially at the use of swear words and offensive terms – a crude measure of civility. We found that after the first change the use of these words dropped significantly. This was not just because some of the worst offenders left the site. Among those who stayed, language was cleaner after the change than before. We describe this as a sort of “broken-windows” effect, after the famous theory that cleaning up a neighbourhood can help reduce crime. Here, a cleaner environment <a href="https://dl.acm.org/doi/pdf/10.1145/2786451.2786459">improves everyone’s behaviour</a>.</p>
<p>We then looked across all three phases at other features of individual comments, including the length of words, causation words (for example, “because”), words indicating tentative conclusions (for example, “perhaps”), and more. We were able to automate this analysis and use it to construct a measure of the “cognitive complexity” of comments. This method has been tested on the <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/spsr.12179">deliberations of the Swiss parliament</a> and shown to be a good proxy for deliberative quality. We could not, of course, see the context and meaning of each individual comment, but using this method at least allowed us to do the analysis at a very large scale. </p>
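The word-category approach described above can be sketched roughly as follows. This is an illustrative toy, not the study's actual method: the category lists below are invented stand-ins for the LIWC-style dictionaries such research typically uses, and the equal weighting is an arbitrary choice.

```python
# Toy proxy for the "cognitive complexity" of a comment, combining
# average word length with the rate of causation and tentative words.
# The word lists are illustrative, not the study's real dictionaries.

CAUSATION = {"because", "therefore", "hence", "consequently", "since"}
TENTATIVE = {"perhaps", "maybe", "possibly", "seems", "appears"}

def complexity_score(comment: str) -> float:
    """Return a crude complexity score for one comment."""
    words = [w.strip(".,!?;:\"'()").lower() for w in comment.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    avg_len = sum(len(w) for w in words) / len(words)
    causation_rate = sum(w in CAUSATION for w in words) / len(words)
    tentative_rate = sum(w in TENTATIVE for w in words) / len(words)
    # Weighting the rates by 10 simply puts them on a scale comparable
    # to average word length; the choice is arbitrary.
    return avg_len + 10 * (causation_rate + tentative_rate)
```

A scorer like this can be run over millions of comments cheaply, which is what makes the large-scale comparison across the three phases feasible.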
<p>Our results suggest that the quality of comments was highest in the middle phase. There was a great improvement after the shift from easy or disposable anonymity to what we call “durable pseudonyms”. But instead of improving further after the shift to the real-name phase, the quality of comments actually got worse – not as bad as in the first phase, but still worse <a href="https://journals.sagepub.com/doi/full/10.1177/0032321719891385">by our measure</a>.</p>
<h2>A surprise finding</h2>
<p>This complicates the common assumption that people behave better with their real names on display. We don’t know exactly what explains our results, but one possibility is that under durable pseudonyms the users orient their comments primarily at their fellow commentators as an audience. They then perhaps develop a concern for their own reputation within that forum, as has been <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1430-9134.2001.00173.x">suggested elsewhere</a>. It’s possible that a real-name environment shifts the dynamic. When you make comments that can be seen not only by other Huffington Post readers but also by your Facebook friends, it seems plausible that you might speak differently.</p>
<p>What matters, it seems, is not so much whether you are commenting anonymously, but whether you are invested in your persona and accountable for its behaviour in that particular forum. There seems to be <a href="https://demos.co.uk/project/whats-in-a-name/">value</a> in enabling people to speak on forums without their comments being connected, via their real names, to other contexts. The online comment management company Disqus, in a similar vein, found that comments made under conditions of durable pseudonymity were rated by other users as having the <a href="https://disqus.com/research/pseudonyms/">highest quality</a>. </p>
<p>There is obviously more to online discussion spaces than just their identity rules. But we can at least say that calls to end anonymity online by forcing people to reveal their real identities might not have the effects people expect – even if it appears to be the most obvious answer.</p>
<p class="fine-print"><em><span>Alfred Moore does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A higher quality discussion emerged among commenters allowed to use personas instead of their real names.Alfred Moore, Lecturer in Political Theory, University of YorkLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1618832021-07-25T19:57:04Z2021-07-25T19:57:04Z‘Girls please stay in the kitchen’ — as skateboarding debuts at the Olympics, beware of the lurking misogyny<figure><img src="https://images.theconversation.com/files/412578/original/file-20210722-25-cil5nl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Keith Birmingham/AP/AAP</span></span></figcaption></figure><p>Skateboarding will make its Olympic debut this year at the Tokyo Games. </p>
<p>The women’s and men’s competitions will both involve park and street events. In each, athletes perform optional skill sets within a time limit and are judged based on the combined difficulty and execution shown, similar to diving or gymnastics. </p>
<p>Skateboarding has been included <a href="https://olympics.com/en/sports/skateboarding/">at Tokyo</a> for the first time as part of a bid to make the games “more youthful, more urban [and] include more women”.</p>
<p>But gender equality in sports is not as simple as just scheduling a women’s competition. </p>
<p>My research suggests female athletes in Tokyo are likely to cop sexist abuse online, especially if they are competing in traditionally male events. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/alt-goes-mainstream-how-surfing-skateboarding-bmx-and-sport-climbing-became-olympic-events-164158">Alt goes mainstream: how surfing, skateboarding, BMX and sport climbing became Olympic events</a>
</strong>
</em>
</p>
<hr>
<h2>My research</h2>
<p>In my recently published <a href="https://journals.sagepub.com/eprint/XA3MGPJWNHQXHGA6ZAQD/full">research</a>, I examined nearly 4,000 comments posted to YouTube about women’s skateboarding competitions. The comments were collected from 14 competition live streams, from 2017 to the end of 2019. The competitions selected were high-profile skateboarding events with large prizes. </p>
<p>Given that YouTube comments can be added, edited or removed at any time, all comments were extracted at the beginning of the study to create a stable data set.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/PpDJVGx0yqE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">How Olympic skateboarding works.</span></figcaption>
</figure>
<h2>‘Welcome to womanhoodsville’</h2>
<p>Of the comments examined, 17% of those made on street skating competitions contained misogyny or abuse. While <a href="https://www.tandfonline.com/doi/abs/10.1080/16138171.2018.1452870">recent studies</a> have found sportswomen to be individual targets of online abuse, I also found frequent gender discrimination targeting women skaters collectively. This was often expressed through gendered gate-keeping of both skateboarding and sport. </p>
<blockquote>
<p>Girls, please stay in the kitchen.</p>
</blockquote>
<p>Many comments used aggressive language that dehumanised and sexualised women.</p>
<blockquote>
<p>Give the bitches armor [sic] so they don’t skate like pussies.</p>
</blockquote>
<p>There were also frequent anti-feminist sentiments posted, suggesting women were being granted a free ride for the sake of equality. </p>
<blockquote>
<p>Welcome to womanhoodsville, where you get 1000x the attention with a 1000th of the effort.</p>
</blockquote>
<p>Interestingly — and disturbingly — some of the abusive comments we observed seemed to suggest women’s inclusion comes at the cost of men. </p>
<blockquote>
<p>These hoe’s [sic] should be greatful [sic] that men did all the work so they can just go around doing flatground kickflips and missing 5050s for $20,000.</p>
</blockquote>
<h2>Dude culture</h2>
<p>Despite women’s sustained participation, skateboarding has long been perceived as a “dude” culture. The new TV series <a href="https://www.theguardian.com/tv-and-radio/2020/apr/29/betty-review-hbo-female-skateboarders-freewheeling-comedy">Betty</a>, based on its actors’ real-life experiences, highlights the macho monopolisation of skate spaces. As creator Crystal Moselle <a href="https://www.npr.org/2021/06/19/1008304555/hbos-betty-highlights-the-lives-of-women-skateboarders-during-the-pandemic">explains</a>: </p>
<blockquote>
<p>[…] skateboarding for so long has been set up as a male sport. So even just, like, going to the store to set up a board is intimidating. It’s a lot of intimidation.</p>
</blockquote>
<figure class="align-center ">
<img alt="A woman competes in a pre-Olympic skateboarding competition." src="https://images.theconversation.com/files/412579/original/file-20210722-13-1jbjsvp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/412579/original/file-20210722-13-1jbjsvp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=386&fit=crop&dpr=1 600w, https://images.theconversation.com/files/412579/original/file-20210722-13-1jbjsvp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=386&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/412579/original/file-20210722-13-1jbjsvp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=386&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/412579/original/file-20210722-13-1jbjsvp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=485&fit=crop&dpr=1 754w, https://images.theconversation.com/files/412579/original/file-20210722-13-1jbjsvp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=485&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/412579/original/file-20210722-13-1jbjsvp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=485&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Women have had to fight to be included in elite skateboarding events.</span>
<span class="attribution"><span class="source">Riccardo Antimiani /EPA/AAP</span></span>
</figcaption>
</figure>
<p>Women have also had to fight for competitive opportunities, including the sport’s “<a href="https://www.theguardian.com/world/2017/nov/12/billie-jean-king-tennis-equality-battle-of-the-sexes">Billie Jean King moment</a>”, when women threatened to boycott the 2005 X Games to gain better access to practice time, coverage and prize money. </p>
<p>Meanwhile, some major skate events have only recently included <a href="https://www.prnewswire.com/news-releases/dew-tour-adds-womens-skateboard-street-and-park-competitions-to-its-summer-event-300639546.html">full women’s programs</a> in the course of becoming Olympic qualifying competitions.</p>
<h2>Beyond skateboarding</h2>
<p>This is not just a skateboarding problem, unfortunately. There is a wider problem with misogyny in sport. The uninhibited online abuse we observed is similar to the explosion of sexist commentary that occurred around the formation of the women’s AFL league. </p>
<p>In 2019, <a href="https://www.abc.net.au/news/2019-03-20/tayla-harris-felt-sexually-abused-aflw-photo-trolls-seven/10919008">trolls</a> flocked to an image of AFL player Tayla Harris kicking a football. The following year, the Herald Sun <a href="https://www.heraldsun.com.au/sport/afl/aflw/the-herald-sun-explains-its-decision-to-close-comments-on-afl-womens-stories/news-story/8a07790b3de1e21eba7b2325ea7a0371">attributed</a> their decision to close comments on their coverage to “constant trolling, harassment and disgraceful commentary”.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1107933757976117249&quot;}"></div></p>
<p>And of course, sadly, it’s not just athletes — women working in sports journalism <a href="https://www.genvic.org.au/media-releases/the-horrendous-online-abuse-a-female-sports-journalist-received-highlights-dangers-of-media-that-must-change/">face this, too</a>. This year, American sports writer Julie DiCaro published a book, <a href="https://www.chicagomag.com/chicago-magazine/april-2021/julie-dicaro-sidelined/">Sidelined</a>, about the online vitriol experienced by women working in the field. </p>
<h2>Online abuse is everywhere</h2>
<p>Since this research was undertaken, the vilest comments have been slowly removed from the streams. But this is not enough — online abuse of women is <a href="https://www.plan.org.au/media-centre/social-media-new-frontier-for-gendered-violence-as/">ubiquitous</a>. </p>
<p>And while moderation can remove comments calling women skaters “a bunch of broken dishwashers” or a viewer’s bucket list of sexual acts they’d like an athlete to perform, it can’t change attitudes to women’s participation.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-tokyo-olympics-are-supposed-to-be-a-landmark-in-gender-equality-are-the-games-really-a-win-for-women-164234">The Tokyo Olympics are supposed to be a 'landmark in gender equality' — are the Games really a win for women?</a>
</strong>
</em>
</p>
<hr>
<p>Abusive, sexist language posted on online spaces where the sport is now consumed by global audiences may also shape perceptions of skateboarding as neither inclusive nor safe for women. And this occurs at a moment when women skaters are poised to become more visible than ever, providing opportunity to inspire further growth at the grassroots level.</p>
<p>My research is yet another example of how social media can reveal the deep entrenchment of misogyny in a society where women are still seen as interlopers and threats to certain areas of public life.</p>
<p class="fine-print"><em><span>Brigid McCarthy does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Skateboarding will make its Olympic debut in Tokyo but research shows female athletes are likely to cop abuse online when the competition starts.Brigid McCarthy, Lecturer in Journalism, La Trobe UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1611052021-05-20T15:40:29Z2021-05-20T15:40:29ZTwitter to ask users to rethink abusive messages – a promising step towards ‘slowcial media’<figure><img src="https://images.theconversation.com/files/401853/original/file-20210520-23-6xpysu.jpeg?ixlib=rb-1.1.0&rect=0%2C12%2C4193%2C2785&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/kharkov-ukraine-june-25-2020-smartphone-1897437988">somemeans/Shutterstock</a></span></figcaption></figure><p>In an effort to reverse the flood of abuse on the platform, Twitter is rolling out a new feature which will <a href="https://www.theguardian.com/technology/2021/may/06/twitter-launches-prompt-in-bid-to-reduce-abusive-language">show a self-moderation prompt</a> to users who compose replies that the platform’s algorithms recognise to be abusive. The prompt effectively asks users to <a href="https://www.bbc.co.uk/news/business-57004794">think twice</a> before posting an abusive message.</p>
<p>Because it compels users to rethink and reflect on abusive tweets, Twitter’s new self-moderation prompt could be a promising step away from fast and furious social media posting and towards a more considered <a href="http://en.slow-media.net/manifesto/comment-page-2">slow media</a> – or “<a href="https://www.jakedugard.com/blog/slowcial-media">slowcial media</a>”.</p>
<h2>Twitter’s new feature</h2>
<p>This isn’t the first time Twitter has trialled and released “<a href="https://warwick.ac.uk/newsandevents/knowledgecentre/health/public-health/healthnudges">nudges</a>” aimed at addressing poor behaviour on the platform. In 2020, Twitter added <a href="https://www.independent.co.uk/life-style/gadgets-and-tech/twitter-misinformation-warning-like-tweets-b1760851.html">misinformation labels</a> to tweets in response to COVID-19 conspiracies circulating on the platform, which they say reduced the number of tweets quoting misleading information by 29%. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/misinformation-tech-companies-are-removing-harmful-coronavirus-content-but-who-decides-what-that-means-144534">Misinformation: tech companies are removing 'harmful' coronavirus content – but who decides what that means?</a>
</strong>
</em>
</p>
<hr>
<p>But online abuse continues to be a highly contentious issue for Twitter, with reports of <a href="https://theconversation.com/search/result?sg=53f72403-673c-42cb-b911-a6b2a23132a3&sp=1&sr=1&url=%2Fwhy-is-celebrity-abuse-on-twitter-so-bad-it-might-be-a-problem-with-our-empathy-154970">celebrity abuse</a>, aimed particularly at <a href="https://theconversation.com/analysis-shows-horrifying-extent-of-abuse-sent-to-women-mps-via-twitter-126166">women</a>, commonplace on the platform. Twitter’s new self-moderation prompt aims to address this problem. </p>
<p>The new prompt has been <a href="https://twitter.com/twittersupport/status/1292883957164281861?lang=en">trialled</a> for select accounts and regions since May 2020. Twitter shared the results of this trial in a <a href="https://blog.twitter.com/en_us/topics/product/2021/tweeting-with-consideration.html">recent blog post</a>, announcing that 34% of people who encountered the prompt revised their initial reply – or deleted it altogether. They also claim those who’d been prompted once composed an average of 11% fewer offensive tweets in the future.</p>
<div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1257717113705414658&quot;}"></div>
<p>To understand why Twitter’s new nudge – adding a little friction to the instantaneous process of posting a tweet – appears to be reducing abusive replies, we can look at what existing studies tell us about the sources of online abuse.</p>
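As an illustration of the general idea only (this is not Twitter's actual implementation; the word list, threshold and function names below are invented placeholders), a "think twice" flow amounts to scoring a draft reply and routing flagged drafts through a revision step before anything is posted:

```python
def toxicity_score(text: str) -> float:
    """Toy stand-in for an abuse classifier; a real system uses a trained model."""
    flagged = {"idiot", "stupid", "hate"}
    words = text.lower().split()
    return sum(w.strip(".,!?") in flagged for w in words) / max(len(words), 1)

def compose_reply(text, revise):
    """Nudge flow: if a draft looks abusive, pause and offer a rethink.

    `revise` stands in for the user's choice at the prompt: it may return
    a revised draft, the original text (send anyway), or None (delete).
    """
    THRESHOLD = 0.2  # arbitrary cut-off for this sketch
    if toxicity_score(text) >= THRESHOLD:
        return revise(text)
    return text  # below threshold: post immediately, no prompt

# A user who softens their draft when prompted:
sent = compose_reply("you idiot", lambda draft: "I disagree with you")
```

The friction lives entirely in the extra `revise` step: drafts that clear the threshold post instantly, while flagged ones force a deliberate choice.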
<h2>Why the abuse?</h2>
<p>Online behaviour is often characterised by a tendency to act in a less inhibited way than one might act offline, as when users <a href="https://theconversation.com/twitter-tries-to-tackle-abuse-as-research-shows-that-most-of-us-can-be-trolls-online-72698">post abuse</a> they’d not necessarily share in a face-to-face context. Research suggests this disinhibition stems from our feeling of <a href="https://theconversation.com/using-real-names-is-just-one-way-of-cleaning-up-online-comments-24796">anonymity</a> and <a href="https://pubmed.ncbi.nlm.nih.gov/15257832/">invisibility</a> online – and the absence of any perceived authority to prevent us from misbehaving. </p>
<p>I’ve previously been involved in studies that investigated the different ways in which people <a href="https://www.psychologytoday.com/gb/blog/love-digitally/201603/do-you-crave-facebook-likes">seek validation</a> from posting on social media. We found that people were often prepared to manipulate posts to increase the degree of attention they received in the form of likes. They even reported blindly posting about issues they didn’t necessarily agree with, explaining that they did this to boost their spirits or self-esteem.</p>
<figure class="align-center ">
<img alt="A group of people looking at their phones" src="https://images.theconversation.com/files/401905/original/file-20210520-19-pmmd3h.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/401905/original/file-20210520-19-pmmd3h.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/401905/original/file-20210520-19-pmmd3h.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/401905/original/file-20210520-19-pmmd3h.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/401905/original/file-20210520-19-pmmd3h.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/401905/original/file-20210520-19-pmmd3h.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/401905/original/file-20210520-19-pmmd3h.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Social media users are often motivated to post for self-serving reasons.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/young-men-women-look-their-smart-1614521743">agil73/Shutterstock</a></span>
</figcaption>
</figure>
<p>All this seems to suggest that social media platforms are a unique environment where individuals post with little prior consideration as to whether that post could offend or upset others.</p>
<h2>Slowcial media</h2>
<p>Twitter’s move to extend the time period we use to consider rushed and sometimes abusive replies ties in with the work of psychologist Daniel Kahneman, whose book <a href="https://www.theguardian.com/books/2011/dec/13/thinking-fast-slow-daniel-kahneman">Thinking, Fast and Slow</a> argues we think in two different ways. Fast thinking requires little to no effort and takes place with a minimal degree of control, while slow thinking is more thoughtful and reflective, and is associated with higher levels of concentration. </p>
<p>It’s clear that both ways of thinking might determine what we post on social media. When we follow the impulse to post quickly, we’re thinking fast and with less consideration. But when Twitter’s algorithm makes us pause to stop and think, it may bring slow thinking into play. </p>
<p>Since slow thinking is responsible for overseeing a person’s behaviour, its activation in the sometimes frenzied environment of social media may prevent us from instantly venting our anger via fast thinking – even if we feel justifiably aggrieved. </p>
<h2>Convenience over concentration</h2>
<p>Having said all of this, as humans, we do tend to seek the <a href="https://www.spectator.co.uk/article/not-so-fast">easiest and most economical route</a> to our needs and wants. Therefore, it’s possible that we may be reluctant to activate slow thinking – often the case when we <a href="https://www.eff.org/wp/clicks-bind-ways-users-agree-online-terms-service">unthinkingly click through</a> terms and conditions prompts. </p>
<p>Whether Twitter’s “stop and think” prompt will work may also depend to some extent on <a href="https://www.sciencedirect.com/topics/psychology/impulsivity">how impulsive</a> you are. Impulsiveness is characterised by a tendency to act without thinking too closely about one’s actions, and can be measured using an <a href="https://journalbipolardisorders.springeropen.com/articles/10.1186/s40345-016-0057-1">impulsiveness test</a>. </p>
<p>Finally, regardless of any new “stop and think” function on Twitter, other personality factors also drive people’s desire to use social media in a toxic way, a behaviour often referred to as <a href="https://theconversation.com/new-research-shows-trolls-dont-just-enjoy-hurting-others-they-also-feel-good-about-themselves-145931">trolling</a>. Typically, trolls show <a href="https://www.sciencedirect.com/science/article/abs/pii/S0191886917300260">a disregard for any pain or suffering</a> inflicted on other people, which is often characteristic of psychopathic and sadistic personality types.</p>
<p>So while granting users a second chance to rethink their abusive tweets might reduce online abuse, it’s unlikely to be enough. There will still be those who won’t take the chance to slow down and reflect, and others who press on with their abusive messages anyway – even after engaging their slow, measured system of thinking.</p><img src="https://counter.theconversation.com/content/161105/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Martin Graff does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A gentle nudge to rethink our social media posting could significantly reduce online abuse.Martin Graff, Senior Lecturer in Psychology of Relationships, University of South WalesLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1549702021-03-04T15:13:37Z2021-03-04T15:13:37ZWhy is celebrity abuse on Twitter so bad? It might be a problem with our empathy<figure><img src="https://images.theconversation.com/files/387770/original/file-20210304-23-1972yvh.jpg?ixlib=rb-1.1.0&rect=46%2C0%2C5184%2C3453&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/july-3-2020-brazil-this-photo-1768944725">rafapress/Shutterstock</a></span></figcaption></figure><p>Stories of celebrities suffering abuse and harassment on Twitter are a constant feature in today’s news. <a href="https://www.bbc.co.uk/news/uk-northern-ireland-foyle-west-56086818">Footballers</a> and <a href="https://www.rugbypass.com/news/twitter-trolls-abuse-of-owen-farrell-fueled-by-jealously/">rugby players</a> are ritually abused if their teams lose. Politicians and journalists are abused for doing their jobs – and that abuse is <a href="https://www.amnesty.org.uk/press-releases/women-abused-twitter-every-30-seconds-new-study">far worse for women</a>. Even <a href="https://www.theguardian.com/society/2021/feb/17/captain-sir-tom-moore-daughter-trolling-broken-heart">Captain Sir Tom Moore</a> was subjected to a torrent of abuse on Twitter.</p>
<p>This abuse has led many <a href="https://www.bbc.co.uk/news/entertainment-arts-39743552">high-profile celebrities</a> to quit social media altogether, most notably <a href="https://www.theguardian.com/uk-news/2021/jan/10/harry-meghan-quit-social-media">Harry and Meghan</a>, the Duke and Duchess of Sussex. And Twitter has been singled out as a <a href="https://eprints.ncl.ac.uk/file_store/production/263649/562427B9-55E3-4946-AA27-9279B5E3E5B2.pdf">particularly uncivil social media platform</a>, where abuse and harassment are common.</p>
<p>My research team and I recently set out to understand the forces behind celebrity Twitter abuse. <a href="https://doi.org/10.1016/j.chbr.2021.100056">Our experiments</a> found that people victim-blame celebrities for the abuse they suffer on Twitter. This finding suggests that people struggle to empathise with abused celebrities, which may affect the level of harassment the rich and famous experience online.</p>
<h2>Wall to wall</h2>
<p>In a previous <a href="https://researchonline.gcu.ac.uk/en/publications/the-volume-and-source-of-cyberabuse-influences-victim-blame-and-p">research article</a>, we looked at abuse on Facebook. When we analysed the abusive posts we found there, we realised that many victims of abuse were shown little sympathy by other users. In fact, we found <a href="https://researchonline.gcu.ac.uk/en/publications/rape-myth-acceptance-victim-blame-attribution-and-just-world-beli">victim-blaming</a> in many of the posts and comments we observed.</p>
<p>Victim-blaming occurs when people assign the blame for an abusive episode to victims rather than the abusers themselves. It’s sometimes seen in criminal cases and <a href="https://www.cambridge.org/core/journals/industrial-and-organizational-psychology/article/beyond-blaming-the-victim-toward-a-more-progressive-understanding-of-workplace-mistreatment/C2933BAAB0A5BD4FC406A4A36EBDD4AF">workplace tribunals</a>. On social media, victim-blaming suggests that users may struggle to empathise with victims of abuse. </p>
<p>We decided to investigate this phenomenon further, this time on Twitter. We wanted to see if the victim-blaming of celebrities on Twitter differed in any way from the victim-blaming of non-celebrities.</p>
<figure class="align-center ">
<img alt="A hand holds a phone with the Twitter app on the screen" src="https://images.theconversation.com/files/385009/original/file-20210218-21-dg3a0y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/385009/original/file-20210218-21-dg3a0y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/385009/original/file-20210218-21-dg3a0y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/385009/original/file-20210218-21-dg3a0y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/385009/original/file-20210218-21-dg3a0y.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/385009/original/file-20210218-21-dg3a0y.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/385009/original/file-20210218-21-dg3a0y.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">We used the Twitter app to investigate the victim-blaming of celebrities online.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/chiang-mai-thailand-nov-10-2017-752831710">Jirapong Manustrong/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Abusive tweets</h2>
<p>To investigate this, we set up two studies. The methodology in both was exactly the same, except in one study the abuse victims were <a href="https://researchonline.gcu.ac.uk/en/publications/celebrity-abuse-on-twitter-the-impact-of-tweet-valence-volume-of-">celebrities</a>, and in the other, they were <a href="https://doi.org/10.1016/j.chbr.2021.100056">non-celebrities</a>. All our abuse victims were white men, to control for other abuse factors like race and gender.</p>
<p>We showed people a tweet on a live Twitter feed, with six replies beneath the original tweet. We showed three types of initial tweet: something positive (a show of gratitude or a compliment), something neutral (a mundane, everyday tweet), or something negative (a complaint or criticism). We expected that our participants would be more likely to victim-blame people who’d initially tweeted something negative.</p>
<p>We also varied the six replies. Some tweets featured two neutral replies and four abusive ones; others featured four neutral replies and only two abusive ones. We expected people would register the abuse as more severe when they saw four abusive replies rather than two.</p>
<p>After reading all of this, we asked our participants who they thought was to blame for the abuse, and how severe they thought the abuse was. By comparing responses in our two studies, we’d know if celebrity victims of abuse truly are treated with less empathy.</p>
<h2>One rule for them</h2>
<p>We found that celebrity victims were overwhelmingly blamed for their own abuse if they’d tweeted something negative to begin with. People even tended to blame the celebrity victim for the abuse they received after a neutral tweet, which they didn’t do as much with non-celebrity victims.</p>
<p>Our studies found that unless the celebrities were tweeting something positive, they were often blamed for whatever abuse came next. Participants also saw the abuse of celebrities as less severe than the abuse of non-celebrities, even if the abusive comments themselves were identical in both cases.</p>
<figure class="align-center ">
<img alt="Harry and Meghan are photographed on a smartphone while speaking to a crowd" src="https://images.theconversation.com/files/387750/original/file-20210304-23-18kzeo1.jpg?ixlib=rb-1.1.0&rect=249%2C47%2C2396%2C1866&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/387750/original/file-20210304-23-18kzeo1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/387750/original/file-20210304-23-18kzeo1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/387750/original/file-20210304-23-18kzeo1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/387750/original/file-20210304-23-18kzeo1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/387750/original/file-20210304-23-18kzeo1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/387750/original/file-20210304-23-18kzeo1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Harry and Meghan, the Duke and Duchess of Sussex, are frequent victims of social media abuse.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/prince-harry-meghan-markle-birmingham-uk-1610159707">MattKeeble.com/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Behind the abuse</h2>
<p>Victim-blaming is often driven by people’s belief that <a href="https://link.springer.com/chapter/10.1007%2F978-1-4899-0448-5_2">we live in a just world</a>: that bad things happen to bad people who deserve their misfortune. But our finding – that people believe celebrities deserve more misfortune than non-celebrities – touches upon an empathy gap that further research may do well to explore.</p>
<p>And, while it may be tempting to blame social media companies for the abuse that appears on their platforms, this would be to once again avoid blaming the abusers themselves.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/online-abuse-on-facebook-and-twitter-cant-be-solved-by-regulation-alone-89270">Online abuse on Facebook and Twitter can't be solved by regulation alone</a>
</strong>
</em>
</p>
<hr>
<p>Some celebrities have argued that <a href="https://metro.co.uk/2020/07/28/vas-morgan-creates-petition-ban-anonymous-social-media-accounts-rita-ora-support-13051715/">anonymity should be banned on Twitter</a>, in the belief that it’s the user’s sense of impunity that encourages their abuse. Others, <a href="https://news.sky.com/story/marcus-rashford-says-people-behind-racist-abuse-should-have-social-media-accounts-deleted-immediately-12216435">like footballer Marcus Rashford</a>, think abusive accounts should be deleted immediately.</p>
<p>Irrespective of policing policies, the sad truth is that we will continue to see celebrities being routinely abused on social media. But learning more about why this is the case – and whether psychological factors such as empathy influence abuse on social media – can go a long way towards improving what is currently a deeply unpleasant situation.</p><img src="https://counter.theconversation.com/content/154970/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Christopher Hand does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>People appear to victim-blame celebrities for the abuse they suffer on Twitter.Christopher Hand, Lecturer, Psychology, Glasgow Caledonian UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1459312020-09-16T05:17:59Z2020-09-16T05:17:59ZNew research shows trolls don’t just enjoy hurting others, they also feel good about themselves<figure><img src="https://images.theconversation.com/files/358070/original/file-20200915-24-jraxua.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">www.shutterstock.com</span></span></figcaption></figure><p>There is an urgent need to understand why people troll. </p>
<p>Recent Australian estimates show about <a href="https://www.sbs.com.au/news/one-in-three-australians-say-they-ve-been-trolled-online">one in three</a> internet users have experienced online harassment. </p>
<p>Across several <a href="https://www.sciencedirect.com/science/article/abs/pii/S0191886917304270?via%3Dihub">research studies</a>, I have attempted to <a href="https://www.sciencedirect.com/science/article/abs/pii/S0191886919300017?via%3Dihub">construct the psychological profile</a> of those who troll to harm others. </p>
<p>In my <a href="https://www.liebertpub.com/doi/10.1089/cyber.2019.0652">most recent study</a>, conducted with Genevieve Steele, I wanted to see if trolling could be linked to self-esteem. Do people troll because they have low self-worth? </p>
<h2>What is trolling?</h2>
<p>In <a href="https://www.sciencedirect.com/science/article/abs/pii/S0747563220303010?via%3Dihub">scientific literature</a>, internet trolling is defined as a malicious online behaviour, characterised by aggressive and deliberate provocation of others. “Trolls” seek to provoke, upset and harm others via inflammatory messages and posts. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/online-trolling-used-to-be-funny-but-now-the-term-refers-to-something-far-more-sinister-110272">Online trolling used to be funny, but now the term refers to something far more sinister</a>
</strong>
</em>
</p>
<hr>
<p>Trolling can refer to a <a href="https://www.liebertpub.com/doi/10.1089/cyber.2018.0210">variety of online behaviour</a>. In some circumstances, the intent of the trolling behaviour may even be to amuse and entertain. However, in my research, I have explored trolling as a malevolent behaviour, where the troll wants to hurt their online victim. </p>
<h2>Why is trolling a problem?</h2>
<p>Trolling can cause significant harm and distress. It is associated with serious <a href="https://theconversation.com/how-empathy-can-make-or-break-a-troll-80680">physical and psychological effects</a>, including disrupted sleep, lowered self-esteem, depression, <a href="https://www.sciencedirect.com/science/article/abs/pii/S0747563216301285?via%3Dihub">self-harm, suicidal ideation,</a> and in some cases, even <a href="https://www.smh.com.au/entertainment/celebrity/charlotte-dawson-death-twitter-criticised-for-failing-to-act-against-trolls-20140222-338z6.html">suicide</a>. </p>
<figure class="align-center ">
<img alt="Woman looking at her phone with serious expression." src="https://images.theconversation.com/files/358286/original/file-20200916-22-1t5dy6n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/358286/original/file-20200916-22-1t5dy6n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/358286/original/file-20200916-22-1t5dy6n.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/358286/original/file-20200916-22-1t5dy6n.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/358286/original/file-20200916-22-1t5dy6n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/358286/original/file-20200916-22-1t5dy6n.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/358286/original/file-20200916-22-1t5dy6n.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Trolling can lead to sleep loss and mental health issues.</span>
<span class="attribution"><span class="source">www.shutterstock.com</span></span>
</figcaption>
</figure>
<p>In 2019, <a href="https://www.tai.org.au/content/online-harassment-and-cyberhate-costs-australians-37b">The Australia Institute estimated</a> trolling and online abuse had cost the Australian economy up to $3.7 billion in health costs and lost income. </p>
<p>Alarmingly, it is <a href="https://www.sbs.com.au/news/one-in-three-australians-say-they-ve-been-trolled-online">extremely common</a> to experience trolling. Combined with the psychological and economic costs of trolling, this demonstrates the urgency of understanding why people troll. </p>
<p>If we can understand why people troll, this can inform management and prevention. </p>
<h2>Researching trolls</h2>
<p>In my latest study, I explored gender, psychopathy, sadism and self-esteem as predictors of engaging in malevolent trolling. </p>
<p><a href="https://www.sciencedirect.com/science/article/abs/pii/S1359178902000988?via%3Dihub">Psychopathy</a> is characterised by callousness, deceitfulness and a lack of personal responsibility. <a href="https://journals.sagepub.com/doi/10.1177/0956797613490749">Sadism</a> is characterised by enjoyment of physically and/or psychologically harming other people. </p>
<p>The study recruited 400 participants via social media advertisements. Almost 68% of participants were women, 43% were Australian, and the average age was 25. They completed an anonymous, confidential online questionnaire, which assessed personality and self-esteem. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/women-troll-on-dating-apps-just-as-often-as-men-72736">Women troll on dating apps just as often as men</a>
</strong>
</em>
</p>
<hr>
<p>The study also measured the extent to which participants displayed troll-like behaviours. For example:</p>
<blockquote>
<p>I enjoy upsetting people I do not personally know on the internet</p>
<p>Although some people think my posts are offensive, I think they are funny. </p>
</blockquote>
<h2>What the study found</h2>
<p>Results showed that gender, psychopathy, and sadism were all significant independent predictors of malevolent trolling. That is, if you are male, or have high levels of psychopathy or sadism, you are more likely to troll. </p>
<p>The most powerful predictor of trolling was sadism. The more someone enjoys hurting others, the more likely it is they will troll. </p>
<figure class="align-center ">
<img alt="Profile of man looking at blurred computer screen." src="https://images.theconversation.com/files/358258/original/file-20200916-14-pr74zr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/358258/original/file-20200916-14-pr74zr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/358258/original/file-20200916-14-pr74zr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/358258/original/file-20200916-14-pr74zr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/358258/original/file-20200916-14-pr74zr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/358258/original/file-20200916-14-pr74zr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/358258/original/file-20200916-14-pr74zr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Men are more likely to be trolls than women.</span>
<span class="attribution"><span class="source">www.shutterstock.com</span></span>
</figcaption>
</figure>
<p>Self-esteem was not an independent predictor of trolling. </p>
<p>However, we found self-esteem interacts with sadism. So, if a person had high levels of sadism and high self-esteem, they were more likely to troll. This result was unexpected because low self-esteem has predicted other antisocial online behaviour, such as <a href="https://www.sciencedirect.com/science/article/abs/pii/S0747563214002878?via%3Dihub">cyberbullying</a>.</p>
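The interaction finding can be made concrete with a small simulation. The sketch below is illustrative only: the data are randomly generated to mimic the reported pattern rather than taken from the study, and the variable names and effect sizes are invented. It shows how an interaction term (the product of two predictors) enters a regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400  # same sample size as the study; the data here are simulated

sadism = rng.normal(size=n)
self_esteem = rng.normal(size=n)
# Simulate the reported pattern: trolling rises with sadism, and the
# effect is amplified when self-esteem is high as well (the interaction).
trolling = 0.5 * sadism + 0.4 * sadism * self_esteem + rng.normal(scale=0.3, size=n)

# Design matrix: intercept, two main effects, and their product.
X = np.column_stack([np.ones(n), sadism, self_esteem, sadism * self_esteem])
coef, *_ = np.linalg.lstsq(X, trolling, rcond=None)
# coef[3] recovers the interaction effect (about 0.4 in this simulation)
```

A significant coefficient on the product term is what "self-esteem interacts with sadism" means statistically: the slope of trolling on sadism grows as self-esteem increases.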
<h2>What does this mean?</h2>
<p>These results have important implications for how we manage and respond to trolling. </p>
<p>First, based on the results of psychopathy and sadism, we understand the internet troll as someone who is callous, lacks a sense of personal responsibility and enjoys causing others harm. </p>
<p>The significance of psychopathy in the results also indicates trolls have an <a href="https://theconversation.com/how-empathy-can-make-or-break-a-troll-80680">empathy deficit</a>, particularly when it comes to their ability to experience and internalise other people’s emotions.</p>
<p>On top of this, the interaction between high sadism and high self-esteem suggests trolls are not trolling because they have low self-worth. In fact, quite the opposite is true: the more someone enjoys hurting others and the better they feel about themselves, the more likely they are to troll. </p>
<h2>So, how can we use this information?</h2>
<p>Unfortunately, the psychological profile of an internet troll means you will not get far appealing to their sense of humanity. And don’t just brush off the troll as someone who has low self-worth. Their character is far more complex, which makes managing the behaviour all the more challenging.</p>
<p><a href="https://www.sciencedirect.com/science/article/abs/pii/S0191886916307930?via%3Dihub">Previous research has found</a> showing the troll they have upset you may only reinforce their behaviour.</p>
<figure class="align-center ">
<img alt="Woman holding phone, looking out a window." src="https://images.theconversation.com/files/358069/original/file-20200915-18-1xsf67e.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/358069/original/file-20200915-18-1xsf67e.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/358069/original/file-20200915-18-1xsf67e.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/358069/original/file-20200915-18-1xsf67e.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/358069/original/file-20200915-18-1xsf67e.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/358069/original/file-20200915-18-1xsf67e.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/358069/original/file-20200915-18-1xsf67e.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Don’t show trolls they upset you.</span>
<span class="attribution"><span class="source">www.shutterstock.com</span></span>
</figcaption>
</figure>
<p>It appears the popular refrain is correct: <a href="https://theconversation.com/dont-feed-the-trolls-really-is-good-advice-heres-the-evidence-63657">don’t feed the trolls</a> by giving them the hurt or angry response they are looking for. </p>
<p>This does not mean we should just ignore this behaviour. People who commit this type of cyber abuse should still be held accountable for their actions. </p>
<p>I propose we change the narrative. Trolls are not to be feared — their power lies in the reactions they cause. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dont-feed-the-trolls-really-is-good-advice-heres-the-evidence-63657">'Don't feed the trolls' really is good advice – here's the evidence</a>
</strong>
</em>
</p>
<hr>
<p>One way we can start is to become <a href="https://bullyingnoway.gov.au/NationalDay/ForSchools/LessonPlans/Pages/Stand-Together-2013.aspx">active bystanders</a>. Bystanders are those who witness the trolling. Active bystanders intervene and say “this is not okay”. </p>
<p>Don’t fight fire with fire. Respond with outward indifference and zero tolerance. Let’s work together to dismantle trolls’ power and take back the internet from their influence. </p>
<p>It is not only up to the person experiencing the trolling to respond and manage the behaviour. We all need to take responsibility for our online environment.</p>
<p class="fine-print"><em><span>Evita March does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print">A new Australian study shows that people with high levels of both sadism and self-esteem are more likely to troll. Evita March, Senior Lecturer in Psychology, Federation University Australia. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Zoom-bombings disrupt online events with racist and misogynist attacks</h1>
<p class="fine-print">Published 2020-06-09.</p>
<figure><img src="https://images.theconversation.com/files/340493/original/file-20200609-165349-xwloeg.jpg?ixlib=rb-1.1.0&rect=58%2C0%2C6530%2C4350&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The zoom-bombing of online meetings, classes and social events reflects a disturbing trend.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>As COVID-19 circulated the globe in March, reports emerged of another new, viral threat: “Zoom-bombing.” </p>
<p>The term derives from photo-bombing, which is defined as appearing “<a href="https://dictionary.cambridge.org/dictionary/english/photobomb">behind or in front of someone when their photograph is being taken, usually doing something silly as a joke</a>.” However, for many Zoom online meeting hosts, participants and computing infrastructure managers, Zoom-bombing was no joke. </p>
<p>The cancellation of in-person school and university classes prompted a stock market surge for Zoom, along with considerable scrutiny of <a href="https://www.vice.com/en_us/article/k7e599/zoom-ios-app-sends-data-to-facebook-even-if-you-dont-have-a-facebook-account">the video conferencing company’s startlingly weak privacy and security protocols</a>. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/JEESnmEudkE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">CBC News: The National takes a look at Zoom-bombing.</span></figcaption>
</figure>
<p>And yet, it has been the Zoom-bomb — the interruption of Zoom meetings — that has led to <a href="https://techcrunch.com/2020/03/17/zoombombing/">considerable news media attention since mid-March</a>. While the company sought to <a href="https://blog.zoom.us/wordpress/2020/03/20/keep-uninvited-guests-out-of-your-zoom-ev">communicate best practices to prevent Zoom-bombing</a>, it continued to proliferate, leading users and shareholders alike to <a href="https://campaigns.organizefor.org/petitions/demand-that-zoom-immediately-create-a-solution-to-protect-its-users-from-racist-cyber-attacks">organize an online petition</a> and <a href="https://www.classaction.org/media/johnston-v-zoom-video-communications-inc.pdf">threaten class-action lawsuits</a>.</p>
<p>Zoom-bombing gradually began to subside after the FBI issued a statement on March 30, <a href="https://www.fbi.gov/contact-us/field-offices/boston/news/press-releases/fbi-warns-of-teleconferencing-and-online-classroom-hijacking-during-covid-19-pandemic">characterizing it as a cybercrime that should be reported to law enforcement agencies</a>.</p>
<h2>Disrupting targets</h2>
<p>Given the fear, disruption and anxiety produced by the COVID-19 pandemic, the intentional disruption of online work and education raises some obvious questions. </p>
<p>First, what would motivate someone to cause such a disruption during an unprecedented global pandemic? Is this the work of isolated individuals or a coherent co-ordinated campaign, targeting democratic institutions and processes? What is the goal of such disruptions, and who has been targeted? </p>
<p>Our team of researchers at Ryerson University’s <a href="https://www.disinformnet.ca/">Infoscape Research Lab</a> set out to answer these questions by studying three popular social media platforms: Twitter, Reddit and YouTube. We anticipated that Zoom-bombing would take on different characteristics on each of these platforms, since each is designed to facilitate a different form of communication. </p>
<p>At the outset of our research, we employed digital humanities methods to track the language associated with Zoom-bombing on each of the platforms. Tracking keywords enabled our research to cast a wide net and collect as much user-generated content as possible related to Zoom-bombing on the three platforms.</p>
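<p>Keyword tracking of this kind can be sketched in a few lines of Python. The keyword list and sample posts below are illustrative stand-ins, not the study’s actual search terms or data:</p>

```python
# Minimal sketch of keyword-based collection. The keywords and posts
# here are illustrative, not the study's actual terms or corpus.
KEYWORDS = {"zoombombing", "zoom-bombing", "zoombomb", "onlineclassraid"}

def matches_keywords(text, keywords):
    """Return True if any tracked keyword appears in the post text."""
    lowered = text.lower()
    return any(kw in lowered for kw in keywords)

posts = [
    "Our class got hit by a zoombombing today",
    "Lovely weather in Toronto",
    "Join the #OnlineClassRaid at 10am",
]

# Keep only posts that mention a tracked keyword.
collected = [p for p in posts if matches_keywords(p, KEYWORDS)]
```

<p>In practice, such a filter would run over content pulled from each platform’s search or streaming interface rather than a hard-coded list.</p>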
<h2>Broader concerns</h2>
<p>From April 3-28, 2020, our study analyzed a random sample of 1,000 tweets that contained Zoom-bomb related terms. Over half of the tweets sought to organize and co-ordinate Zoom-bombing, often by sharing Zoom access codes, or posted information and advice on how to avoid such online disruptions. </p>
<p>Tweets often reflected broader social concerns over continuing work during COVID-19, the security of online meetings and the emerging challenges to online learning. A significant percentage of tweets (15.9 per cent) specifically named online targets, including Holocaust memorials, Asian community groups, Alcoholics Anonymous meetings and various religious services. </p>
<p>Some tweets (9.4 per cent) featured students sharing Zoom access ID codes for their own classes to target specific teachers. We found that Twitter users also used the occasion to post and comment on humorous content related to the Zoom-bombing phenomenon. </p>
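<p>Percentages like those above come from manually coding each sampled tweet into a category and tallying the shares. A minimal sketch, using made-up labels rather than the study’s actual codebook:</p>

```python
from collections import Counter

# Hypothetical coded labels for a small sample of tweets; the real study
# used categories such as co-ordination, advice, named targets and humour.
labels = ["coordinate", "advice", "target", "humour", "coordinate",
          "coordinate", "advice", "target", "humour", "coordinate"]

def category_shares(labels):
    """Return each category's share of the sample as a percentage."""
    counts = Counter(labels)
    total = len(labels)
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}

shares = category_shares(labels)
```

<p>With a sample of 1,000 coded tweets, the same tally yields figures like the 15.9 and 9.4 per cent reported above.</p>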
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=569&fit=crop&dpr=1 600w, https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=569&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=569&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=715&fit=crop&dpr=1 754w, https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=715&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/340756/original/file-20200609-21182-15qy6mj.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=715&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A chart showing the content breakdown posted on Twitter.</span>
<span class="attribution"><span class="source">(Greg Elmer, Anthony Glyn Burton, Stephen Neville)</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>In contrast, the prevalence of popular keywords on Reddit such as “zoompranks” and “OnlineClassRaid” suggested that the platform was largely being used as a staging ground for the Zoom-bombing of online classes and meetings. An analysis of 300 random posts from Zoom-bombing subreddits (online discussion groups dedicated to specific topics) confirmed our suspicions.</p>
<p>Nearly 70 per cent of all posts served to co-ordinate Zoom-bombing, either by sharing practical advice, Zoom meeting ID access codes or other logistical information. If we include posts that offer short affective outbursts or reactions to Zoom-bombing, then this figure approaches 90 per cent of all posts. </p>
<p>By contrast, a mere 1.3 per cent of posts admonished those in the subreddits for launching such attacks. While the vast majority of Reddit posts sought to facilitate Zoom-bombing, we also found a small percentage (6.4 per cent) that targeted particular groups, including an LGBTQ social meeting and a breastfeeding support class. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=530&fit=crop&dpr=1 600w, https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=530&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=530&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=666&fit=crop&dpr=1 754w, https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=666&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/340758/original/file-20200609-21214-131pym2.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=666&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A chart showing the content breakdown posted on Reddit.</span>
<span class="attribution"><span class="source">(Greg Elmer, Anthony Glyn Burton, Stephen Neville)</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>Of all the platforms we studied, YouTube offered the most jarring view of Zoom-bombing, highlighting the popular and controversial role the platform played in hosting videos of its “funniest moments”.</p>
<p>Following a similar method to our study of Reddit and Twitter, we analyzed a sample of 60 of the most viewed videos on YouTube. The large majority of these videos (85 per cent) were roughly 10-minute compilations of multiple clips of Zoom-bombs, many of which were initially shared on TikTok, a popular video-sharing platform. </p>
<p>The remaining videos, also compilations, included commentaries throughout by YouTube “influencers,” individuals with large numbers of online followers. While the viewer sees these micro-celebrities laugh throughout the video, Zoom-bomb disruptions are hard to watch: many Zoom meeting hosts and participants were confused, irritated or shocked by the actions and words of Zoom-bombers. Teachers of younger children looked traumatized. </p>
<p>Seventy-two per cent of our sample videos included mob-like interruptions, with multiple voices, screams, profanities and other sounds occurring at the same time. Most troubling, however, was the objectionable language and images used by Zoom-bombers.</p>
<p>While we found that a few Zoom-bombs included light-hearted pranks that bemused some Zoom meeting participants, nearly 87 per cent of YouTube compilations also contained racist, misogynist, homophobic and other objectionable content. Much of this content was directed against female teachers in Zoom classroom meetings. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=569&fit=crop&dpr=1 600w, https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=569&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=569&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=715&fit=crop&dpr=1 754w, https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=715&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/340759/original/file-20200609-21230-1tpxbbj.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=715&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A chart showing the content breakdown posted on YouTube.</span>
<span class="attribution"><span class="source">(Greg Elmer, Anthony Glyn Burton, Stephen Neville)</span>, <span class="license">Author provided</span></span>
</figcaption>
</figure>
<h2>Viral threats</h2>
<p>We can draw a number of conclusions based on our research to date. </p>
<p>The weak security of Zoom and the quick transition to online learning created an environment ripe for disruption and abuse by computer-savvy, overwhelmingly male high school students. </p>
<p>Zoom-bombing should remind us of the technological divide between the highly skilled and creative generations that live much of their lives online and older generations that struggle with platform settings, protocols and practices. </p>
<p>But such a generational divide should <a href="https://www.nytimes.com/2020/03/20/style/zoombombing-zoom-trolling.html">not mask the most troubling aspects of Zoom-bombing</a>: the intentional disruption of important work and the abusive targeting of women and people of colour. </p>
<p>Such toxic practices of course pre-exist <a href="https://www.vox.com/culture/2020/1/20/20808875/gamergate-lessons-cultural-impact-changes-harassment-laws">internet videoconferencing</a> and will unfortunately persist long after the end of Zoom-bombing. We may all be experiencing the pandemic together, but Zoom-bombing has also reminded us that viral threats require social solutions.</p>
<p class="fine-print"><em><span>Greg Elmer receives funding from Heritage Canada, SSHRC and the Bell Media Research Chair, Ryerson University. </span></em></p><p class="fine-print"><em><span>Anthony Glyn Burton receives funding from the Social Sciences and Humanities Research Council of Canada. </span></em></p><p class="fine-print"><em><span>Stephen J. Neville receives funding from the Social Sciences and Humanities Research Council of Canada.</span></em></p>
<p class="fine-print">Zoom-bombing disrupts people’s use of the Zoom platform for work, study and socializing. Zoom-bombing events have included racist and misogynist attacks on users. Greg Elmer, Professor, Professional Communication, Toronto Metropolitan University; Anthony Glyn Burton, Master's student, Joseph-Armand Bombardier SSHRC Scholar, Toronto Metropolitan University; Stephen J. Neville, PhD Student of Communication & Culture, York University, Canada. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Racism’s rise in football demands harsher sanctions and better mental health support</h1>
<p class="fine-print">Published 2020-02-25.</p>
<figure><img src="https://images.theconversation.com/files/315786/original/file-20200217-11044-4zf0he.jpg?ixlib=rb-1.1.0&rect=18%2C22%2C2968%2C1839&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Danny Rose.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/london-england-09-mar-2019-danny-1365018053">Mitch Gunn/Shutterstock</a></span></figcaption></figure><p>The <a href="https://www.theguardian.com/football/2018/jun/06/danny-rose-tells-family-not-travel-world-cup-player-racism-fears-abuse-england-football-team">English defender Danny Rose</a> first experienced depression after a lengthy break from action following delayed knee surgery. His depression was deepened by family tragedy and racist abuse. Fearing similar racism at the 2018 World Cup in Russia, Rose told his family to avoid the event. This fear for their safety at the tournament caused him great distress.</p>
<p>Racist abuse in football increased sharply in 2019. There were more than 150 incidents reported to police last season, representing a rise of more than 50% compared with the
<a href="https://www.theguardian.com/football/2020/jan/30/football-related-racist-incidents-sharp-rise-police-kick-it-out">season before</a>. </p>
<p>This worrying rise has been seen across every level of competition – from international matches to amateur leagues. For fans and football organisations this is an alarming trend. While moves have been made to penalise those responsible for incidents, they have been deemed insufficient. There has also been a lack of mental health support for players experiencing such abuse. </p>
<h2>Poor sanctions</h2>
<p>One of the worst recent incidents of racism took place in October 2019, during the build-up and fallout from England’s away match in Bulgaria. The game was a qualifier for the 2020 European Championships. Bulgaria was already paying the penalty for <a href="https://en.wikipedia.org/wiki/Racism_in_association_football#Bulgaria">past incidents of racist abuse</a>, and <a href="https://www.bbc.co.uk/sport/football/50036774">5,000 seats were kept empty</a>. However, throughout the game Bulgaria’s fans gave Nazi salutes and hurled persistent racist abuse (<a href="https://www.bbc.co.uk/news/world-europe-50060759">including chants and monkey noises</a>) at England’s Tyrone Mings and Raheem Sterling. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/315944/original/file-20200218-10995-mhwttr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/315944/original/file-20200218-10995-mhwttr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/315944/original/file-20200218-10995-mhwttr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/315944/original/file-20200218-10995-mhwttr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/315944/original/file-20200218-10995-mhwttr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/315944/original/file-20200218-10995-mhwttr.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/315944/original/file-20200218-10995-mhwttr.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Raheem Sterling received racist abuse while playing in Bulgaria.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/sofia-bulgaria-14-october-2019-raheem-1543352753">Belish/Shutterstock</a></span>
</figcaption>
</figure>
<p>Uefa sanctioned Bulgaria, fining them €75,000 and ordering them to play behind closed doors after the racist abuse. This was <a href="https://www.independent.co.uk/sport/football/international/bulgaria-punishment-racism-england-football-kick-it-out-statement-uefa-aleksander-ceferin-a9177096.html">a missed opportunity</a> to come down hard on racism with Danny Rose, among others, calling the disciplinary action a <a href="https://www.dailymail.co.uk/sport/football/article-7678307/Danny-Rose-admits-felt-sickened-England-team-mates-subjected-racist-abuse-Bulgaria.html">“farce”</a>. </p>
<p>This raises the question of what adequate and effective sanctions might include. With the increased availability and improvement of <a href="https://www.kickitout.org/forms/online-reporting-form">reporting mechanisms</a> and video footage (from CCTV and fans’ mobile phones), identifying abusive individuals has become easier, and <a href="https://www.itv.com/news/wales/2020-02-11/newport-county-fan-handed-lifetime-football-ban-for-racist-abuse">targeted stadium bans</a> have increasingly been used.</p>
<p>And while such moves work to deal with the issue at source, they do little to handle any mental health fallout. </p>
<h2>Lack of support</h2>
<p>The conversation around mental health in elite sport has changed dramatically in recent years. In particular, there has been a significant increase in players’ willingness to <a href="https://www.tandfonline.com/doi/abs/10.1080/2159676X.2018.1564691">publicly disclose</a> their mental health diagnoses and campaign for better fan awareness. </p>
<p>This positive change has led to a greater understanding of the common causes of mental health issues in professional sports players – including competitive pressures, job insecurity, long-term injury and retirement. It has also helped to shed light on the stigma attached to seeking help, which can lead players to delay or not deal with their mental health. </p>
<p>While this is all good, the link between mental wellbeing and racism – particularly the long-term impacts of such abuse – is still sorely overlooked. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/-3nA-68fa3k?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Renée Hector, a defender for Tottenham’s women’s team, candidly disclosed the events that led to her depression in a <a href="https://www.bbc.co.uk/sport/football/49345402">BBC Sport interview</a>. The initial racist incident came from an opposition player during a championship match in January 2019. The abusive player was banned for five games and fined £200 but denied the allegations. This set in motion the decline in Hector’s mental health. While the incident itself was deeply upsetting, it was the vicious and relentless online abuse that followed that made her fall into a depression. She has since called for harsher punishments for racist abuse in football and for more help for players experiencing such incidents. </p>
<p>Hector’s revelations make plain how footballers are left exposed and unsupported by sporting organisations and their employers. </p>
<p>When considering mental health issues in the sport, the question is: who has a duty of care for the player and other football club staff? Is it their club, the national Football Association (FA), the Professional Footballers Association (PFA), Uefa, Fifa or their personal agent? Clearly, national associations and governing bodies are failing to adequately punish and prevent racism. They are also compounding the problem by failing to provide pastoral support to those who experience it. These shortcomings could have lasting effects, and until effective change occurs <a href="https://www.bbc.co.uk/sport/football/51526210">players can and will continue to walk off the pitch</a>.</p>
<p class="fine-print"><em><span>Christopher Elsey does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print">Incidents of racism have risen sharply, but football institutions are failing to address the issue properly. Christopher Elsey, Lecturer in Health and Well Being in Society, De Montfort University. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Bushfires, bots and arson claims: Australia flung in the global disinformation spotlight</h1>
<p class="fine-print">Published 2020-01-10.</p>
<p>In the first week of 2020, the hashtag #ArsonEmergency became the focal point of a new online narrative surrounding the bushfire crisis. </p>
<p>The message: the cause is arson, not climate change.</p>
<p>Police and bushfire services (and some <a href="https://twitter.com/BBCRosAtkins/status/1215034651489820673">journalists</a>) have contradicted this <a href="https://www.theguardian.com/australia-news/2020/jan/08/police-contradict-claims-spread-online-exaggerating-arsons-role-in-australian-bushfires">claim</a>.</p>
<p>We <a href="https://www.zdnet.com/article/twitter-bots-and-trolls-promote-conspiracy-theories-about-australian-bushfires/">studied</a> about 300 Twitter accounts driving the #ArsonEmergency hashtag to identify inauthentic behaviour. We found many accounts using #ArsonEmergency were behaving “suspiciously”, compared to those using #AustraliaFire and #BushfireAustralia. </p>
<p>Accounts peddling #ArsonEmergency carried out activity similar to what we’ve witnessed in past disinformation campaigns, such as the coordinated behaviour of <a href="https://www.nytimes.com/2018/02/18/world/europe/russia-troll-factory.html">Russian trolls during the 2016 US presidential election</a>. </p>
<h2>Bots, trolls and trollbots</h2>
<p>The most effective disinformation campaigns use bot and troll accounts to infiltrate genuine political discussion, and shift it towards a different “<a href="https://www.nature.com/articles/d41586-019-02235-x">master narrative</a>”.</p>
<p>Bots and trolls have been a thorn in the side of fruitful political debate since Twitter’s early days. They mimic genuine opinions, akin to what a concerned citizen might display, with a goal of persuading others and gaining attention. </p>
<p><a href="https://www.tandfonline.com/doi/full/10.1080/10584609.2018.1526238">Bots</a> are usually automated (acting without constant human oversight) and perform simple functions, such as retweeting or repeatedly pushing one type of content. </p>
<p>Troll accounts are controlled by humans. They try to stir controversy, hinder healthy debate and simulate fake grassroots movements. They aim to persuade, deceive and cause conflict.</p>
<p>We’ve observed both troll and bot accounts spouting disinformation regarding the bushfires on Twitter. We were able to distinguish these accounts as being inauthentic for two reasons. </p>
<p>First, we used sophisticated software tools including <a href="https://github.com/mkearney/tweetbotornot">tweetbotornot</a>, <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/hbe2.115">Botometer</a>, and <a href="https://botsentinel.com/">Bot Sentinel</a>. </p>
<p>There are various definitions for the word “bot” or “troll”. Bot Sentinel says:</p>
<blockquote>
<p>Propaganda bots are pieces of code that utilize Twitter API to automatically follow, tweet, or retweet other accounts bolstering a political agenda. Propaganda bots are designed to be polarizing and often promote content intended to be deceptive… Trollbot is a classification we created to describe human controlled accounts who exhibit troll-like behavior. </p>
<p>Some of these accounts frequently retweet known propaganda and fake news accounts, and they engage in repetitive bot-like activity. Other trollbot accounts target and harass specific Twitter accounts as part of a coordinated harassment campaign. Ideology, political affiliation, religious beliefs, and geographic location are not factors when determining the classification of a Twitter account.</p>
</blockquote>
<p>These machine learning tools compared the behaviour of known bots and trolls with the accounts tweeting the hashtags #ArsonEmergency, #AustraliaFire, and #BushfireAustralia. From this, they provided a “score” for each account suggesting how likely it was to be a bot or troll account. </p>
<p>We also manually analysed the Twitter activity of suspicious accounts and the characteristics of their profiles, to validate the origins of #ArsonEmergency, as well as the potential motivations of the accounts spreading the hashtag.</p>
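<p>The scoring step can be illustrated with a toy aggregation. The per-account scores and the 0.8 threshold below are hypothetical, and do not reflect the actual output scales of tweetbotornot, Botometer or Bot Sentinel:</p>

```python
from statistics import mean

# Hypothetical per-account scores from three detection tools
# (0 = human-like, 1 = bot-like). Real tools use their own scales
# and feature sets; these numbers are illustrative only.
tool_scores = {
    "account_a": [0.91, 0.88, 0.95],
    "account_b": [0.12, 0.20, 0.05],
    "account_c": [0.85, 0.79, 0.90],
}

def flag_suspicious(scores, threshold=0.8):
    """Flag accounts whose mean score across tools exceeds the threshold."""
    return sorted(acct for acct, vals in scores.items()
                  if mean(vals) > threshold)

suspicious = flag_suspicious(tool_scores)
```

<p>Automated flags like these are then checked manually, as described above, since no single tool’s score is definitive.</p>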
<h2>Who to blame?</h2>
<p>Unfortunately, we don’t know who is behind these accounts, <a href="https://www.tandfonline.com/doi/full/10.1080/1369118X.2019.1637447">as we can only access trace data such as tweet text and basic account information</a>. </p>
<p>This graph shows how many times #ArsonEmergency was tweeted between December 31, 2019 and January 8, 2020:</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/309369/original/file-20200109-80153-7kubgj.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/309369/original/file-20200109-80153-7kubgj.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=382&fit=crop&dpr=1 600w, https://images.theconversation.com/files/309369/original/file-20200109-80153-7kubgj.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=382&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/309369/original/file-20200109-80153-7kubgj.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=382&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/309369/original/file-20200109-80153-7kubgj.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=480&fit=crop&dpr=1 754w, https://images.theconversation.com/files/309369/original/file-20200109-80153-7kubgj.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=480&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/309369/original/file-20200109-80153-7kubgj.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=480&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">On the vertical axis is the number of tweets over time which featured #ArsonEmergency. On January 7, there were 4726 tweets.</span>
<span class="attribution"><span class="source">Author provided</span></span>
</figcaption>
</figure>
<p>Previous bot and troll campaigns have been thought to be the work of <a href="https://link.springer.com/article/10.1007/s42001-019-00051-x">foreign interference, such as Russian trolls</a>, or <a href="http://www.abc.net.au/news/2019-11-08/topham-guerins-boomer-meme-industrial-complex/11682116?pfmredir=sm&sf223191298=1">PR firms hired to distract and manipulate voters</a>. </p>
<p><a href="https://www.nytimes.com/2020/01/08/world/australia/fires-murdoch-disinformation.html">The New York Times has also</a> reported on perceptions that media magnate Rupert Murdoch is influencing Australia’s bushfire debate.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/weather-bureau-says-hottest-driest-year-on-record-led-to-extreme-bushfire-season-129447">Weather bureau says hottest, driest year on record led to extreme bushfire season</a>
</strong>
</em>
</p>
<hr>
<h2>Weeding out inauthentic behaviour</h2>
<p>In late November, some Twitter accounts began using #ArsonEmergency to counter <a href="https://www.climatecouncil.org.au/not-normal-climate-change-bushfire-web/">evidence</a> that climate change is linked to the severity of the bushfire crisis.</p>
<p>Below is one of the earliest examples of an attempt to replace #ClimateEmergency with #ArsonEmergency. The accounts tried to get #ArsonEmergency trending to drown out dialogue acknowledging the link between climate change and bushfires.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/309228/original/file-20200109-80144-ino2th.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/309228/original/file-20200109-80144-ino2th.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=643&fit=crop&dpr=1 600w, https://images.theconversation.com/files/309228/original/file-20200109-80144-ino2th.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=643&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/309228/original/file-20200109-80144-ino2th.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=643&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/309228/original/file-20200109-80144-ino2th.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=808&fit=crop&dpr=1 754w, https://images.theconversation.com/files/309228/original/file-20200109-80144-ino2th.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=808&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/309228/original/file-20200109-80144-ino2th.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=808&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">We suspect the origins of the #ArsonEmergency debacle can be traced back to a few accounts.</span>
<span class="attribution"><span class="source">Author provided</span></span>
</figcaption>
</figure>
<p>The hashtag was only tweeted a few times in 2019, but gained traction this year in a sustained effort by about 300 accounts.</p>
<p><a href="https://www.zdnet.com/article/twitter-bots-and-trolls-promote-conspiracy-theories-about-australian-bushfires/">A much larger proportion of bot- and troll-like accounts</a> pushed #ArsonEmergency than pushed #AustraliaFire or #BushfireAustralia. </p>
<p>The narrative was then adopted by genuine accounts who furthered its spread. </p>
<p>On multiple occasions, we noticed suspicious accounts countering expert opinions while using the #ArsonEmergency hashtag.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/309229/original/file-20200109-80132-nbxowa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/309229/original/file-20200109-80132-nbxowa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=764&fit=crop&dpr=1 600w, https://images.theconversation.com/files/309229/original/file-20200109-80132-nbxowa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=764&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/309229/original/file-20200109-80132-nbxowa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=764&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/309229/original/file-20200109-80132-nbxowa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=960&fit=crop&dpr=1 754w, https://images.theconversation.com/files/309229/original/file-20200109-80132-nbxowa.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=960&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/309229/original/file-20200109-80132-nbxowa.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=960&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The inauthentic accounts engaged with genuine users in an effort to persuade them.</span>
<span class="attribution"><span class="source">Author provided</span></span>
</figcaption>
</figure>
<h2>Bad publicity</h2>
<p>Since media coverage has shone a light on the disinformation campaign, #ArsonEmergency has gained even more prominence, but in a different way. </p>
<p>Some journalists are acknowledging the role of disinformation in the bushfire crisis – and countering the narrative that Australia has an arson emergency. However, the campaign does indicate Australia has a climate denial problem. </p>
<p>What’s clear to me is that Australia has been propelled into the global disinformation battlefield. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/watching-our-politicians-fumble-through-the-bushfire-crisis-im-overwhelmed-by-deja-vu-129338">Watching our politicians fumble through the bushfire crisis, I'm overwhelmed by déjà vu</a>
</strong>
</em>
</p>
<hr>
<h2>Keep your eyes peeled</h2>
<p>It’s difficult to debunk disinformation, as it often contains a grain of truth. In many cases, it leverages people’s previously held beliefs and biases. </p>
<p>Humans are particularly vulnerable to disinformation in times of emergency, or when addressing contentious issues like climate change.</p>
<p>Online users, especially journalists, need to stay on their toes. </p>
<p>The accounts we come across on social media may not represent genuine citizens and their concerns. A trending hashtag may be trying to mislead the public.</p>
<p>Right now, it’s more important than ever for us to prioritise factual news from reliable sources – and identify and combat disinformation. The Earth’s future could depend on it.</p><img src="https://counter.theconversation.com/content/129556/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Timothy Graham receives funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Tobias R. Keller receives funding from the Swiss National Science Foundation. </span></em></p>We found about 300 suspicious Twitter accounts, which we suspect included a high proportion of bots and trolls pushing the #ArsonEmergency narrative.Timothy Graham, Senior Lecturer, Queensland University of TechnologyTobias R. Keller, Visiting Postdoc, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1230142019-09-11T00:26:09Z2019-09-11T00:26:09ZVictoria’s new anti-vilification bill strikes the right balance in targeting online abuse<figure><img src="https://images.theconversation.com/files/291670/original/file-20190910-109927-jtv38t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Victorian MP Fiona Patten has introduced a new anti-vilification bill to parliament that would extend protections to women, the disabled and the LGBT community.</span> <span class="attribution"><span class="source">James Ross/AAP</span></span></figcaption></figure><p>Two weeks ago, Victorian Reason Party MP Fiona Patten <a href="https://fionapatten.com.au/news/media-release-patten-launches-anti-trolling-bill/">introduced a new anti-vilification bill</a> to the state parliament.</p>
<p>In the midst of <a href="https://theconversation.com/the-government-has-released-its-draft-religious-discrimination-bill-how-will-it-work-122618">heated public debate</a> over the federal government’s draft Religious Discrimination Bill, Patten’s bill has been given far less attention.</p>
<p>The <a href="http://www.legislation.vic.gov.au/domino/Web_Notes/LDMS/PubPDocs.nsf/ee665e366dcb6cb0ca256da400837f6b/5427bc7c551a2a6aca258463001eb278/$FILE/591PM60bi1.pdf">Racial and Religious Tolerance Amendment Bill</a> is <a href="https://10daily.com.au/news/crime/a190828lihky/new-anti-trolling-laws-to-stop-online-abuse-and-harassment-20190828">described by Patten</a> as an “Australia-first” attempt to target hate speech and trolling on social media, particularly against women.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-australias-anti-vilification-laws-matter-106615">Why Australia's anti-vilification laws matter</a>
</strong>
</em>
</p>
<hr>
<p>Others have <a href="https://www.theaustralian.com.au/nation/politics/fiona-patten-defends-antivilification-bill/news-story/590cca0772e2f411d596b3421f3f7f29">warned of unintended consequences</a> that may arise related to free speech, likening the bill to <a href="http://www5.austlii.edu.au/au/legis/cth/consol_act/rda1975202/s18c.html">section 18C of the Racial Discrimination Act</a>.</p>
<p>But in reality, Patten’s bill provides a common sense approach to protecting Victorians from harmful verbal abuse at a time when such protections are being <a href="https://theconversation.com/religious-discrimination-bill-is-a-mess-that-risks-privileging-people-of-faith-above-all-others-122631">eroded at the federal level</a>. </p>
<p>Parliamentary debate on the bill continues this week.</p>
<h2>What is the current Victorian law?</h2>
<p>Victoria currently prohibits vilification through the <a href="http://www8.austlii.edu.au/cgi-bin/viewdb/au/legis/vic/consol_act/rarta2001265">Racial and Religious Tolerance Act</a>.</p>
<p>This prohibits a person from engaging in conduct against another person or group <a href="http://www8.austlii.edu.au/cgi-bin/download.cgi/cgi-bin/download.cgi/download/au/legis/vic/consol_act/rarta2001265.pdf">in public</a> that incites:</p>
<ul>
<li>hatred</li>
<li>serious contempt</li>
<li>revulsion, or</li>
<li>severe ridicule</li>
</ul>
<p>Any of these four standards can be used to meet the threshold test for vilification. If this is met, a claimant can lodge a complaint through the Victorian Equal Opportunity and Human Rights Commission (VEOHRC) or the Victorian Civil and Administrative Tribunal (VCAT).</p>
<p>However, this conduct is only prohibited when it is on the basis of another person or group’s <a href="http://www8.austlii.edu.au/cgi-bin/viewdoc/au/legis/vic/consol_act/rarta2001265/s7.html">race</a> or <a href="http://www8.austlii.edu.au/cgi-bin/viewdoc/au/legis/vic/consol_act/rarta2001265/s8.html">religion</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/metoo-must-also-tackle-online-abuse-93000">#MeToo must also tackle online abuse</a>
</strong>
</em>
</p>
<hr>
<p>There are some <a href="http://www8.austlii.edu.au/cgi-bin/viewdoc/au/legis/vic/consol_act/rarta2001265/s11.html">exceptions</a> where reasonable statements made in good faith are rendered lawful. This includes artistic works, academic and scientific works, fair media reporting, and statements made “in the public interest”.</p>
<p>These exceptions ensure that the right to freedom from discrimination and vilification is balanced with the right to free speech. If anything, the exceptions err on the side of protecting free speech.</p>
<p>It is also considered a <a href="http://www8.austlii.edu.au/cgi-bin/viewdoc/au/legis/vic/consol_act/rarta2001265/s24.html">more serious criminal offence</a> when a person “intentionally” engages in conduct that the person “knows” is likely to vilify.</p>
<h2>What does the Patten bill do?</h2>
<p>The Patten bill seeks to extend vilification protections to other attributes, namely gender, sexual orientation, gender identity, sex characteristics and disability.</p>
<p>The bill also grants new powers to the VEOHRC to be able to request information from any relevant person or business to identify online “trolls” after a vilification complaint has been made. </p>
<p>Such requests could require, for instance, social media companies to hand over information on individuals who engage in online abuse through “anonymous” accounts. This request only applies to existing complaints, though. It does not provide a <em>carte blanche</em> right for authorities to search through social media to identify offenders, and is subject to VCAT approval.</p>
<p>The bill also contains appropriate confidentiality and privacy controls to prevent unreasonable disclosure of this information. </p>
<p>The bill also does not change the threshold test for vilification, except that “incitement” is replaced with “likely to incite”. As such, conduct would be prohibited if it was “likely to” incite hatred, serious contempt, revulsion, or severe ridicule on the basis of the above attributes (including race or religion). </p>
<p>Further, consistent with other Australian jurisdictions, the bill would deem “reckless” vilification a criminal offence. This would cover dangerous acts of vilification in which an offender has wilfully disregarded the harm caused to other people, even if they may not have “intended” the outcome.</p>
<p>It would also change the subjective test for criminal vilification – that an offender “knows” their conduct is vilification – to an objective test. </p>
<p>This ensures judges do not have to subjectively “go inside” the heads of offenders to ascertain their knowledge at the time of an offence, which can lead to unpredictable results.</p>
<h2>Why are the changes needed?</h2>
<p>The Patten bill would ensure that women, the LGBTI+ community and people with disabilities are afforded the same protections from harmful abuse as those granted for race and religion.</p>
<p><a href="https://www.twenty10.org.au/wp-content/uploads/2016/04/Robinson-et-al.-2014-Growing-up-Queer.pdf">Over 64% of LGBTI+ people</a> between the ages of 16 and 27 have been subject to verbal abuse on the basis of their sexual orientation, gender identity or intersex status. Thoughts of self-harm are <a href="https://www.acon.org.au/wp-content/uploads/2015/04/Writing-Themselves-In-3-2010.pdf">almost twice as likely to occur</a> for LGBTI+ young people after they have been subject to such abuse.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/marriage-equality-was-momentous-but-there-is-still-much-to-do-to-progress-lgbti-rights-in-australia-110786">Marriage equality was momentous, but there is still much to do to progress LGBTI+ rights in Australia</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://www.amnesty.org.au/australia-poll-reveals-alarming-impact-online-abuse-women/">Thirty percent of all women</a> have experienced online abuse or harassment, with a third of these reporting fears for their physical safety as a result.</p>
<p>Further, <a href="https://www.theguardian.com/australia-news/2019/aug/29/religious-discrimination-bill-coalition-accused-of-weakening-state-human-rights-law">a significant proportion</a> of complaints under Tasmania’s strong anti-vilification laws are from people with disabilities. </p>
<p>All of these groups deserve protection.</p>
<p>The Patten bill would also modernise a nearly 20-year-old law that is largely ill-equipped to deal with online abuse. </p>
<p>The bill would particularly help the VEOHRC in responding to complaints over online gendered hate speech and bullying, such as that <a href="https://www.abc.net.au/news/2019-03-20/tayla-harris-felt-sexually-abused-aflw-photo-trolls-seven/10919008">directed at AFLW player Tayla Harris</a> earlier this year.</p>
<h2>What about concerns about the bill’s reach?</h2>
<p>Despite <a href="https://www.theaustralian.com.au/nation/politics/new-victorian-antidiscrimination-law-section-18c-on-steroids/news-story/e5d2b72877102ce19f36eed6c7491e88">reports to the contrary</a>, the bill is <em>not</em> “section 18C on steroids”.</p>
<p><a href="http://www5.austlii.edu.au/au/legis/cth/consol_act/rda1975202/s18c.html">Section 18C</a> of the Racial Discrimination Act prohibits acts that are “reasonably likely” to “offend, insult, humiliate or intimidate” a person or group. These standards are lower than the threshold test for racial and religious vilification under Victorian law. </p>
<p>The Victorian test would be largely unchanged by the Patten bill. As such, even with the Patten bill changes, it would be <em>harder</em> to prove vilification under Victorian law than under section 18C.</p>
<p>Indeed, complaints under the current Victorian law are hardly voluminous; <a href="https://www.humanrightscommission.vic.gov.au/home/our-resources-and-publications/annual-reports/item/1745-victorian-equal-opportunity-and-human-rights-commission-annual-report-2017-18-dec-2018">only 26 were lodged</a> in the past two years.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/section-18c-is-an-important-part-of-a-civilised-society-and-no-threat-to-free-speech-64801">Section 18C is an important part of a civilised society and no threat to free speech</a>
</strong>
</em>
</p>
<hr>
<p>Suggestions that the <a href="https://www.theaustralian.com.au/nation/politics/fiona-patten-defends-antivilification-bill/news-story/590cca0772e2f411d596b3421f3f7f29">bill reaches far beyond</a> other Australian anti-vilification laws and uniquely restricts free speech are also blatantly incorrect.</p>
<p><a href="http://www8.austlii.edu.au/cgi-bin/viewdoc/au/legis/tas/consol_act/aa1998204/s17.html">Tasmania</a> provides far stronger protection for vilification, prohibiting conduct that a reasonable person would anticipate would offend, insult, ridicule, humiliate or intimidate another person. It also protects a more expansive range of attributes that include age, relationship status and pregnancy.</p>
<p>Existing laws in <a href="http://classic.austlii.edu.au/au/legis/qld/consol_act/aa1991204/s124a.html">Queensland</a>, <a href="http://www8.austlii.edu.au/cgi-bin/viewdoc/au/legis/nsw/consol_act/aa1977204/s49zt.html">NSW</a> and <a href="http://www8.austlii.edu.au/cgi-bin/viewdoc/au/legis/act/consol_act/da1991164/s67a.html">ACT</a> contain almost identical provisions to the Patten bill. </p>
<p>The main difference is that those three jurisdictions do not protect gender, while the Patten bill (and Tasmania) does. </p>
<p>Considering the prevalence of online gendered abuse, the protection of women in anti-vilification laws is entirely appropriate. Indeed, the unique aspect of Patten’s bill is its granting of important additional powers to address largely gendered online abuse and trolling.</p>
<p>The Patten bill is not a radical attack on free speech. Rather, it represents a sound, appropriate and balanced approach to protecting vulnerable groups from harmful abuse, particularly in our social media age.</p>
<p>In a period where the federal government is undermining and overriding existing discrimination and vilification protections, this bill provides some much-needed hope and reason.</p><img src="https://counter.theconversation.com/content/123014/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Liam Elphick does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The proposed amendments would provide much-needed updates to Victoria’s vilification laws and bring the state in line with NSW, Queensland, Tasmania and the ACT.Liam Elphick, Adjunct Research Fellow, Law School, The University of Western AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1152972019-05-08T20:06:42Z2019-05-08T20:06:42ZWe need to do more about cyberbullying against Indigenous Australians<figure><img src="https://images.theconversation.com/files/273223/original/file-20190508-183096-1mnsoki.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Detail from a poster designed by the Indigenous creative agency Iscariot Media, which highlights the problem of cyberbullying. </span> <span class="attribution"><span class="source">Author provided</span></span></figcaption></figure><p>As part of his re-election pitch, Prime Minister Scott Morrison has promised to <a href="https://www.sbs.com.au/news/scott-morrison-declares-war-on-social-media-trolls">crack down on online trolls</a>, increasing the penalties for online harassment.</p>
<p>The Melbourne Demons, meanwhile, have <a href="https://thewest.com.au/sport/melbourne-demons/tear-the-abuse-apart-melbourne-demons-powerful-stand-against-social-media-bullying-ng-b881157241z">announced a new campaign targeting online bullying</a>. They recently opened a game by running through a banner featuring hateful tweets directed at AFL players, including <a href="https://www.abc.net.au/news/2019-04-04/online-racist-trolls-to-be-torn-down-by-melbourne-football-club/10969640">Noongar man and Demons defender Neville Jetta</a>. </p>
<p>It is good to see the issue of online abuse in the spotlight. However, researchers and policy-makers alike need to be aware that Indigenous peoples may experience social media and online abuse differently to other social groups.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/273210/original/file-20190508-73133-ehut20.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/273210/original/file-20190508-73133-ehut20.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/273210/original/file-20190508-73133-ehut20.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=409&fit=crop&dpr=1 600w, https://images.theconversation.com/files/273210/original/file-20190508-73133-ehut20.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=409&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/273210/original/file-20190508-73133-ehut20.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=409&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/273210/original/file-20190508-73133-ehut20.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=515&fit=crop&dpr=1 754w, https://images.theconversation.com/files/273210/original/file-20190508-73133-ehut20.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=515&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/273210/original/file-20190508-73133-ehut20.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=515&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Melbourne Demons players run through a banner highlighting the problem of online abuse at the MCG in Melbourne on April 5.</span>
<span class="attribution"><span class="source">Julian Smith/AAP</span></span>
</figcaption>
</figure>
<p>A recent cluster of child suicides has brought closer scrutiny to the relationship between cyberbullying and race. <a href="https://www.sbs.com.au/nitv/article/2019/01/15/indigenous-youth-suicide-crisis-point?cid=inbody:i-don%E2%80%99t-know-what-to-do-older-sister%E2%80%99s-heartache-after-the-death-of-younger-sibling">Five Aboriginal girls, aged as young as 12, committed suicide in the first two weeks of 2019</a>.</p>
<p>One 14-year-old wrote on Facebook on <a href="https://www.sbs.com.au/nitv/nitv-the-point/article/2019/03/27/i-dont-know-what-do-older-sisters-heartache-after-death-younger-sibling">the day before her death</a>, “Once I’m gone the bullying and the racism will stop”. This week, Opposition Leader Bill Shorten described the problem of Indigenous suicide as “<a href="https://www.sbs.com.au/nitv/nitv-news/article/2019/05/07/labor-party-says-indigenous-suicide-national-emergency">a national disaster</a>”.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/273213/original/file-20190508-73137-14niscl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/273213/original/file-20190508-73137-14niscl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/273213/original/file-20190508-73137-14niscl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=899&fit=crop&dpr=1 600w, https://images.theconversation.com/files/273213/original/file-20190508-73137-14niscl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=899&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/273213/original/file-20190508-73137-14niscl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=899&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/273213/original/file-20190508-73137-14niscl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1130&fit=crop&dpr=1 754w, https://images.theconversation.com/files/273213/original/file-20190508-73137-14niscl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1130&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/273213/original/file-20190508-73137-14niscl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1130&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Bill Shorten has described Indigenous suicide as a ‘national emergency’.</span>
<span class="attribution"><span class="source">Lukas Coch/AAP</span></span>
</figcaption>
</figure>
<p>It seems increasingly clear that there is a link between cyberbullying, anti-Indigenous racism, and mental ill-health. But what do we actually know about Indigenous people’s experiences of online bullying?</p>
<p>More than a third of the participants in <a href="https://research-management.mq.edu.au/ws/portalfiles/portal/85013179/MQU_SocialMediaMob_report_Carlson_Frazer.pdf">our recent national research project looking at Indigenous social media use</a> reported that they had personally been subjected to racism online.</p>
<p>Twenty-one percent had received direct threats from other social media users, and 17% indicated these had impacted their “offline” lives, in the form of physical violence or mental ill-health. But this is only a snapshot of an issue that deserves much greater attention.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/indigenous-voices-are-speaking-loudly-on-social-media-but-racism-endures-94287">Indigenous voices are speaking loudly on social media but racism endures</a>
</strong>
</em>
</p>
<hr>
<p>We also recently reviewed <a href="https://research-management.mq.edu.au/ws/portalfiles/portal/92634728/MQU_Cyberbullying_Report_Carlson_Frazer.pdf">literature on cyberbullying against Indigenous Australians</a> and found that there was insufficient research into the problem. There is an urgent need to engage Indigenous communities, elders and youth in conversations about online bullying and <a href="https://theconversation.com/aboriginal-communities-embrace-technology-but-they-have-unique-cyber-safety-challenges-69344">safety</a>. It is only through engaging with cyberbullying as a phenomenon that affects different social groups differently that its causes, effects and mitigating factors might be understood. </p>
<h2>A crisis online</h2>
<p>Indigenous peoples are enthusiastic social media users. These technologies have brought <a href="https://research-management.mq.edu.au/ws/portalfiles/portal/85013179/MQU_SocialMediaMob_report_Carlson_Frazer.pdf">many benefits</a>. They help Indigenous families and communities connect intimately across vast distances, allowing users to share and maintain cultural knowledge, fulfil cultural protocols such as Sorry Business, and engage in political activism. </p>
<p>But they have also brought negative consequences, and research has been slow in keeping up with recent shifts in online practices. Social media facilitates racist abuse against Indigenous peoples and the perpetration of widespread cyberbullying. In recent high-profile cases, abuse has been directed at <a href="https://www.theguardian.com/sport/2019/mar/25/liam-ryan-racist-abuse-west-coast-eagles-want-keyboard-cowards-punished-afl">West Coast Eagles player Liam Ryan</a> and Arrernte union organiser and freelance writer Celeste Liddle:</p>
<blockquote><p>I love it when people tell me to just ignore racists. Like, have you ever flicked through my Twitter feed? Just this afternoon I’ve been called ‘Abo’, ‘oogabooga’ and been reduced to a percentage several times. How do you ignore what you cannot escape?</p>— Celeste Liddle (@Utopiana) <a href="https://twitter.com/Utopiana/status/1114153903006728192?ref_src=twsrc%5Etfw">April 5, 2019</a></blockquote>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/273214/original/file-20190508-73133-1ssg43a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/273214/original/file-20190508-73133-1ssg43a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/273214/original/file-20190508-73133-1ssg43a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/273214/original/file-20190508-73133-1ssg43a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/273214/original/file-20190508-73133-1ssg43a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/273214/original/file-20190508-73133-1ssg43a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/273214/original/file-20190508-73133-1ssg43a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/273214/original/file-20190508-73133-1ssg43a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">West Coast Eagles player Liam Ryan has been the target of racist abuse online.</span>
<span class="attribution"><span class="source">Richard Wainwright/AAP</span></span>
</figcaption>
</figure>
<h2>More research needed</h2>
<p>Cyberbullying affects somewhere between <a href="https://www.researchgate.net/publication/260151324_Bullying_in_the_Digital_Age_A_Critical_Review_and_Meta-Analysis_of_Cyberbullying_Research_Among_Youth">10% and 40% of young social media users in Australia</a>. A recent <a href="http://www.tai.org.au/sites/default/files/P530%20Trolls%20and%20polls%20-%20surveying%20economic%20costs%20of%20cyberhate%20%255bWEB%255d_0.pdf">Australia Institute survey</a> found 39% of Australians have experienced some form of cyber-hatred and violence, and that it has cost the economy around $3.7 billion. </p>
<p>Victims of cyberbullying <a href="https://apo.org.au/node/40772">are significantly more likely to experience psychological ill-health</a>, most seriously in the forms of depression, anxiety, and thoughts of suicide. There are also <a href="https://www.sciencedirect.com/science/article/pii/S1054139X15001664">significant social consequences</a>, with victims and perpetrators being more likely to truant school, take leave from employment, and experience social isolation more generally.</p>
<p><a href="https://www.liebertpub.com/doi/10.1089/cyber.2018.0339">International research</a> suggests that culture and ethnicity are significant factors in the occurrence of cyberbullying. Perpetrators of cyberbullying often target <a href="https://link.springer.com/article/10.1007/s40653-018-0207-y">markers of social difference</a>, such as being Indigenous.</p>
<p>Despite being identified as a significant public health concern, however, cyberbullying against Indigenous Australians has largely escaped the attention of researchers. Indigenous peoples constitute a distinct social group in Australia. Yet this has rarely been factored into sustained studies of cyberbullying. <a href="https://link.springer.com/article/10.1007/s40653-017-0163-y">Research has tended to assume a normalised “white” subject</a>, failing to differentiate participants along these potentially significant demographic lines. <a href="https://researchers.mq.edu.au/en/publications/social-media-mob-being-indigenous-online">But it is well established</a> that different social groups use and experience social media differently. </p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/272698/original/file-20190506-103045-15hxr34.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/272698/original/file-20190506-103045-15hxr34.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/272698/original/file-20190506-103045-15hxr34.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=424&fit=crop&dpr=1 600w, https://images.theconversation.com/files/272698/original/file-20190506-103045-15hxr34.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=424&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/272698/original/file-20190506-103045-15hxr34.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=424&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/272698/original/file-20190506-103045-15hxr34.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=533&fit=crop&dpr=1 754w, https://images.theconversation.com/files/272698/original/file-20190506-103045-15hxr34.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=533&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/272698/original/file-20190506-103045-15hxr34.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=533&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A new series of posters aims to better inform Indigenous communities about cyberbullying.</span>
<span class="attribution"><span class="license">Author provided</span></span>
</figcaption>
</figure>
<p>The <a href="https://www.mq.edu.au/about/about-the-university/faculties-and-departments/faculty-of-arts/departments-and-centres/department-of-indigenous-studies/our-research">Department of Indigenous Studies</a> at Macquarie University recently partnered with the <a href="http://www.ahmrc.org.au">Aboriginal Health and Medical Research Council</a> to produce a range of resources to better inform Indigenous communities about cyberbullying.</p>
<p>This includes a series of posters designed by the Indigenous creative agency <a href="https://www.iscariotmedia.com">Iscariot Media</a>, which can be shared on social media or printed and displayed in places like community centres and schools. </p>
<p>Treating all online abuse as the same risks ignoring the different rates, causes, and consequences of online violence. By paying more attention, we can build a better understanding of how cyberbullying is related to racism and the legacy of colonisation in Australia. </p>
<hr>
<p><em>If you or anyone you know needs help or is having suicidal thoughts, contact Lifeline on 13 11 14 or beyondblue on 1300 22 4636.</em></p><img src="https://counter.theconversation.com/content/115297/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Bronwyn Carlson receives funding from the Australian Research Council Discovery Indigenous grant ID: 160100049</span></em></p><p class="fine-print"><em><span>Ryan Frazer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Online abuse has been in the spotlight during this election campaign and AFL season. But researchers and policy-makers alike need to do more to understand cyberbullying against Indigenous Australians.Bronwyn Carlson, Professor, Indigenous Studies, Macquarie UniversityRyan Frazer, Associate Research Fellow, Macquarie UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1142932019-03-28T18:10:16Z2019-03-28T18:10:16ZThe AFL and its clubs must continue to expose and sanction online trolls, it’s the law<p>Recent weeks have brought what at times has felt like an epidemic of online trolling of AFL players. Some of the trolling has taken the form of sexual abuse, such as that directed at <a href="https://www.abc.net.au/news/2019-03-21/tayla-harris-trolls-arent-only-problem/10921784">Tayla Harris</a>. Other instances have been racist, such as those directed at <a href="https://www.abc.net.au/news/2019-02-24/eddie-betts-racially-abused-on-social-media-crows-condemn-it/10844012">Eddie Betts</a> and <a href="https://www.abc.net.au/news/2019-03-25/afl-west-coast-eagles-forward-liam-ryan-cops-racist-comments/10936038">Liam Ryan</a>.</p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1107933757976117249&quot;}"></div></p>
<p>It’s beyond doubt that online trolling is a serious issue, given the significant, and potentially long-term, impacts cyberbullying can have on the mental health and well-being of its targets, and their families. </p>
<p>Eddie Betts, for example, has <a href="https://www.abc.net.au/news/2017-04-13/enough-is-enough-betts-says-racism-is-wrecking-afl/8442562">described</a> racist comments from fans as “wrecking” his enjoyment of the game and bringing his wife to tears. And Tayla Harris has similarly <a href="https://thewest.com.au/opinion/tayla-harris-conversation-could-be-a-turning-point-for-australian-women-in-sport-ng-b881145134z">described</a> how vulgar sexist comments make her feel “uncomfortable in my workplace”, not knowing whether the people making them would show up at the football on the weekend.</p>
<p>What to do about online trolling, however, is far less clear. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/fighting-online-abuse-shouldnt-be-up-to-the-victims-87426">Fighting online abuse shouldn't be up to the victims</a>
</strong>
</em>
</p>
<hr>
<h2>Some forms of online trolling are illegal</h2>
<p>Some laws already operate to criminalise the behaviour in certain defined circumstances. For instance, under the Australian Commonwealth <a href="http://www5.austlii.edu.au/au/legis/cth/consol_act/cca1995115/sch1.html">Criminal Code</a>, it’s an offence for a person to use the internet, including social media:</p>
<blockquote>
<p>in a way that reasonable persons would regard as being, in all the circumstances, menacing, harassing or offensive. </p>
</blockquote>
<p>And according to the Victorian <a href="http://classic.austlii.edu.au/au/legis/vic/consol_act/ca195882/s21a.html">Crimes Act</a>, it’s an offence to publish on the internet a statement or other material relating to the victim:</p>
<blockquote>
<p>with the intention of causing physical or mental harm to the victim […] or of arousing apprehension or fear in the victim for his or her own safety or that of any other person. </p>
</blockquote>
<p>Trolling also may constitute unlawful racial hatred under the <a href="https://www.legislation.gov.au/Details/C2014C00014">Commonwealth Racial Discrimination Act</a> if done to offend, insult, humiliate or intimidate another person, or a group of people, on the basis of their race, colour or national or ethnic origin.</p>
<p>And trolling may be defamatory. Defamation generally occurs when a person intentionally publishes – including through social media – information about another person or group of people that damages their reputation, or can make others think less of them. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dont-be-a-bystander-five-steps-to-fight-cyberbullying-91440">Don't be a bystander: Five steps to fight cyberbullying</a>
</strong>
</em>
</p>
<hr>
<h2>But the law is hard to enforce</h2>
<p>While declaring trolling to be a criminal offence (and defamatory) is strong on symbolism, enforcement can be slow and costly. And proving intent is difficult. It’s also a reactive, after-the-event remedy. The damage is done well before prosecutorial action is taken.</p>
<p>For this reason, people have been searching for more proactive legal remedies. After all, prevention is better than cure.</p>
<p>This has led to calls for legislation requiring social media companies to act more quickly to identify and remove sexist and racist comments from their sites. Some, including the West Australian Premier Mark McGowan and AFL Players Association Chief Executive Paul Marsh, have even <a href="https://www.abc.net.au/news/2019-03-25/afl-west-coast-eagles-forward-liam-ryan-cops-racist-comments/10936038">called for</a> legislation to force people to use their real names on social media, removing the false bravado that comes from anonymity.</p>
<p>But it’s unclear what more social media companies should do to protect AFL players – especially in a political environment in which governments are calling on them to do more to address <a href="https://www.smh.com.au/politics/federal/social-media-giants-face-child-safety-law-overhaul-20190215-p50y5j.html">child sexual abuse material</a> and <a href="https://www.sbs.com.au/news/australia-will-punish-social-media-giants-for-leaving-extremist-content-up">extremist content</a>. </p>
<p>Each social media company already has its own rules about what is and is not allowed on their platforms, and the way users are expected to behave towards one another. See, for example, Facebook’s <a href="https://www.facebook.com/communitystandards/">Community Standards Policy</a>, Google’s <a href="https://www.google.com/intl/en-US/+/policy/content.html">User Content and Conduct Policy</a>, and Instagram’s <a href="https://help.instagram.com/477434105621119/">Community Guidelines</a>. </p>
<p>These rules generally prohibit sexist and racist content that purposefully targets individuals with the intention of bullying, harassing or degrading them. And each company employs thousands of people to monitor and remove content that breaches its rules. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/can-facebook-use-ai-to-fight-online-abuse-95203">Can Facebook use AI to fight online abuse?</a>
</strong>
</em>
</p>
<hr>
<h2>It’s about work health and safety</h2>
<p>This then leaves us with the AFL itself, and its clubs. Their immediate response has been positive. The AFL has taken the initiative to expose and sanction the trolls behind recent racist and sexist comments, and, where appropriate, to refer the trolls to the police for investigation. </p>
<p>In the case of Liam Ryan, the AFL was able, within a short period of time, to trace the comments back to a member of the Richmond Football Club, which then imposed a two-year ban on the perpetrator attending games. </p>
<p>But the AFL has also signalled there are limits to what it can do. AFL Chief Executive Gillon McLachlan <a href="https://wwos.nine.com.au/afl/tayla-harris-picture-aflw-afl-to-ban-trolls/4296724f-efd9-4eb9-ba6f-4fc47631d286">said</a>: </p>
<blockquote>
<p>It’s a big wide world out there and you can’t do it for all of them.</p>
</blockquote>
<p>But this is exactly what the AFL and its clubs must aim to do. Why? Because online trolling is a workplace health and safety issue.</p>
<p>Under <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2654578">work health and safety laws</a>, AFL clubs must, so far as is reasonably practicable, provide and maintain for their players a working environment that is safe and without risks to health. And the AFL, as the sport’s governing body, must ensure, so far as is reasonably practicable, that players are not exposed to risks to their health or safety arising from the conduct of the AFL competition. </p>
<p>We already have observed that online trolling is a risk to mental health. And as Tayla Harris’s comments suggest, it can also make a workplace unsafe. </p>
<p>Through their actions, the AFL and its clubs have demonstrated they are able to expose and sanction trolls. This leans heavily in favour of the measure being reasonably practicable. The courts have <a href="https://prod.wsvdigital.com.au/sites/default/files/2018-06/ISBN-Reasonably-practicable-how-WorkSafe-applies-the-law-2007-11.pdf">made clear</a> that once the availability and suitability of a relevant safety measure is established:</p>
<blockquote>
<p>that safety measure should be implemented unless the cost of doing so is so disproportionate to the benefit (in terms of reducing the severity of the hazard or risk) that it would be clearly unreasonable to justify the expenditure.</p>
</blockquote>
<p>For the AFL to now do less would not only be morally debatable, it also would be legally questionable.</p><img src="https://counter.theconversation.com/content/114293/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Eric Windholz worked with WorkSafe Victoria from 2001 to 2009, including as General Counsel and General Manager, Strategic Programs and Support.</span></em></p>Online trolling is a workplace health and safety issue. The AFL must expose and sanction those responsible – anything less would not only be morally debatable, but also legally questionable.Eric Windholz, Senior Lecturer and Associate, Monash Centre for Commercial Law and Regulatory Studies, Monash UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1102722019-02-03T19:21:06Z2019-02-03T19:21:06ZOnline trolling used to be funny, but now the term refers to something far more sinister<figure><img src="https://images.theconversation.com/files/256550/original/file-20190131-108351-w5ujdy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The definition of "trolling" has changed a lot over the last 15 years.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/download/confirm/722420158?size=huge_jpg">Shutterstock</a></span></figcaption></figure><p>It seems like internet trolling happens everywhere online these days – and it’s showing no signs of slowing down. </p>
<p>This week, the British press and Kensington Palace officials have <a href="https://www.abc.net.au/news/2019-01-30/british-press-urges-end-to-abuse-of-duchesses-meghan-and-kate/10760822">called for</a> an end to the merciless online trolling of Duchesses Kate Middleton and Meghan Markle, which reportedly includes racist and sexist content, and even threats.</p>
<p>But what exactly is internet trolling? How do trolls “behave”? Do they intend to harm, or amuse?</p>
<p>To find out how people define trolling, <a href="https://home.liebertpub.com/publications/cyberpsychology-behavior-brand-social-networking/10/overview">we conducted a survey</a> with 379 participants. The results suggest there is a difference in the way the media, the research community and the general public understand trolling. </p>
<p>If we want to reduce abusive online behaviour, let’s start by getting the definition right.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-empathy-can-make-or-break-a-troll-80680">How empathy can make or break a troll</a>
</strong>
</em>
</p>
<hr>
<h2>Which of these cases is trolling?</h2>
<p>Consider the comments that appear in the image below:</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=734&fit=crop&dpr=1 600w, https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=734&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=734&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=922&fit=crop&dpr=1 754w, https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=922&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=922&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Screenshot</span></span>
</figcaption>
</figure>
<p>Without providing any definitions, we asked whether this was an example of internet trolling: 44% of participants said yes, 41% said no and 15% were unsure.</p>
<p>Now consider this next image:</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=394&fit=crop&dpr=1 600w, https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=394&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=394&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=495&fit=crop&dpr=1 754w, https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=495&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=495&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Screenshot</span></span>
</figcaption>
</figure>
<p>For this image, 69% of participants said it was an example of internet trolling, 16% said no, and 15% were unsure.</p>
<p>These two images depict very different online behaviour. The first image depicts mischievous and comical behaviour, where the author perhaps intended to amuse the audience. The second image depicts malicious and antisocial behaviour, where the author may have intended to cause harm.</p>
<p>There was more consensus among participants that the second image depicted trolling. That aligns with a more common definition of internet trolling <a href="https://scottbarrykaufman.com/wp-content/uploads/2014/02/trolls-just-want-to-have-fun.pdf">as destructive and disruptive online behaviour</a> that causes harm to others. </p>
<p>But this definition has only really evolved in more recent years. Previously, internet trolling was defined very differently.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/we-researched-russian-trolls-and-figured-out-exactly-how-they-neutralise-certain-news-100994">We researched Russian trolls and figured out exactly how they neutralise certain news</a>
</strong>
</em>
</p>
<hr>
<h2>A shifting definition</h2>
<p>In 2002, one of the earliest definitions of internet “trolling” <a href="https://www.tandfonline.com/doi/pdf/10.1080/01972240290108186">described the behaviour as</a>: </p>
<blockquote>
<p>luring others online (commonly on discussion forums) into pointless and time-consuming activities. </p>
</blockquote>
<p>Trolling often started with a message that was intentionally incorrect, but not overly controversial. By contrast, internet “flaming” <a href="https://www.sciencedirect.com/science/article/pii/S0167923602001902">described online behaviour with hostile intentions</a>, characterised by profanity, obscenity, and insults that inflict harm to a person or an organisation. </p>
<p>So modern-day definitions of internet trolling seem more consistent with the definition of flaming than with the initial definition of trolling. </p>
<p>To highlight this intention to amuse compared to the intention to harm, communication researcher <a href="https://www.researchgate.net/profile/Jonathan_Bishop4/publication/259229799_Representations_of_'trolls'_in_mass_media_communication_A_review_of_media-texts_and_moral_panics_relating_to_'internet_trolling'/links/0046352a85d257a299000000/Representations-of-trolls-in-mass-media-communication-A-review-of-media-texts-and-moral-panics-relating-to-internet-trolling.pdf">Jonathan Bishop suggested</a> we differentiate between “kudos trolling” to describe trolling for mutual enjoyment and entertainment, and “flame trolling” to describe trolling that is abusive and not intended to be humorous. </p>
<h2>How people in our study defined trolling</h2>
<p>In our study, which has been accepted for publication in the journal <a href="https://home.liebertpub.com/publications/cyberpsychology-behavior-brand-social-networking/10/overview">Cyberpsychology, Behavior, and Social Networking</a>, we recruited 379 participants (60% women) to answer an anonymous online questionnaire, giving short written responses to the following questions:</p>
<ul>
<li><p>how do you define internet trolling? </p></li>
<li><p>what kind of behaviours constitute internet trolling?</p></li>
</ul>
<p>Here are some examples of how participants responded:</p>
<blockquote>
<p>Where an individual online verbally attacks another individual with intention of offending the other (female, 27)</p>
<p>People saying intentionally provocative things on social media with the intent of attacking / causing discomfort or offence (female, 26)</p>
<p>Teasing, bullying, joking or making fun of something, someone or a group (male, 29)</p>
<p>Deliberately commenting on a post to elicit a desired response, or to purely gratify oneself by emotionally manipulating another (male, 35)</p>
</blockquote>
<p>Based on participant responses, we suggest that internet trolling is now more commonly seen as an intentional, malicious online behaviour, rather than a harmless activity for mutual enjoyment. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=556&fit=crop&dpr=1 600w, https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=556&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=556&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=698&fit=crop&dpr=1 754w, https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=698&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=698&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A word cloud representing how survey participants described trolling behaviours.</span>
</figcaption>
</figure>
<h2>Researchers use ‘trolling’ as a catch-all</h2>
<p>Clearly there are discrepancies in the definition of internet trolling, and this is a problem.</p>
<p>Research does not differentiate between kudos trolling and flame trolling. Some members of the public might still view trolling as a kudos behaviour. For example, one participant in our study said:</p>
<blockquote>
<p>Depends which definition you mean. The common definition now, especially as used by the media and within academia, is essentially just a synonym to “asshole”. The better, and classic, definition is someone who speaks from outside the shared paradigm of a community in order to disrupt presuppositions and try to trigger critical thought and awareness (male, 41)</p>
</blockquote>
<p>Not only does the definition of trolling differ from researcher to researcher, but there can also be a discrepancy between researchers and the public. </p>
<p>As a term, internet trolling has deviated significantly from its early 2002 definition and become a catch-all for all antisocial online behaviours. The lack of a uniform definition of internet trolling leaves all research on trolling open to validity concerns, which could leave the behaviour largely unchecked.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/our-experiments-taught-us-why-people-troll-72798">Our experiments taught us why people troll</a>
</strong>
</em>
</p>
<hr>
<h2>We need to agree on the terminology</h2>
<p>We propose replacing the catch-all term of trolling with “cyberabuse”.</p>
<p>Cyberbullying, cyberhate and cyberaggression are all different online behaviours with different definitions, but they are often referred to uniformly as “trolling”. </p>
<p>It is time to move away from the term trolling to describe these serious instances of cyberabuse. While it may have been empowering for the public to picture these internet “trolls” as ugly creatures living under a bridge, this imagery may have begun to downplay the seriousness of their online behaviour. </p>
<p>Continuing to use the term trolling, a term that initially described a behaviour that was not intended to harm, could have serious consequences for managing and preventing the behaviour.</p><img src="https://counter.theconversation.com/content/110272/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Evita March does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Some people still think “trolling” refers to harmless fun. If we want to reduce abusive online behaviour, let’s start by getting our definitions right.Evita March, Senior Lecturer in Psychology, Federation University AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1037972018-10-03T08:58:05Z2018-10-03T08:58:05ZRegulate social media? It’s a bit more complicated than that<figure><img src="https://images.theconversation.com/files/238930/original/file-20181002-85614-1p04eni.jpg?ixlib=rb-1.1.0&rect=35%2C323%2C4000%2C2335&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/censorship-freedom-speech-traffic-sign-two-445529134?src=0PMkzDJpWDOHdm_0E4QmGw-1-10">M-SUR/Shutterstock</a></span></figcaption></figure><p>Free speech is a key aspect of the internet, but it has become increasingly obvious that many people online will push that freedom to extremes, leaving website comment sections, Twitter feeds and Facebook groups awash with racist, sexist, homophobic or otherwise unpalatable opinions, vitriolic views, and obscene or shocking images or videos. </p>
<p>The borderless nature of the internet, where a website may be hosted in one country, operated by staff in another, with comments left by readers in a third, poses a thorny problem for website operators and government agencies seeking to tackle the issue. </p>
<p>In Britain, the telecommunications regulator Ofcom recently issued a <a href="https://www.ofcom.org.uk/phones-telecoms-and-internet/information-for-industry/internet-policy/addressing-harmful-online-content">report</a> discussing the issues around online harm and potential ways forward. A UK government <a href="https://www.gov.uk/government/news/new-laws-to-make-social-media-safer">white paper</a> on the subject is also expected this autumn, and health secretary Matt Hancock announced at the Conservative Party conference that he would direct the UK’s chief medical officer to <a href="https://www.theguardian.com/media/2018/sep/29/health-chief-set-social-media-time-limits-young-people">draw up guidelines</a> about social media use among children and teenagers amid growing concerns over potential harm.</p>
<p>Most forms of online content are not subject to the <a href="https://www.ofcom.org.uk/__data/assets/pdf_file/0005/100103/broadcast-code-april-2017.pdf">Ofcom Broadcasting Code</a> with which radio and television services based in the UK must comply. In fact, Ofcom highlights that a wide range of popular online content – including videos uploaded to YouTube, or content posted on social media, sent through messaging services, or which appears on many online news sites, and also political advertising – is subject to little or no specific UK regulation. These different platforms are subject to different rules, which means the same content shared on each would be treated differently depending on how it was accessed. </p>
<p>Ofcom sees this “different screen, different rules” approach as arbitrary and problematic, providing no clear level of protection for viewers. While addressing this disparity is a legitimate aim, is it even possible to subject all content accessible by UK audiences via the internet or online services to UK law, regardless of where in the world it originates? </p>
<h2>Rule of territory vs access</h2>
<p>Under international law, one of the primary means for states to exercise their jurisdiction is the <a href="https://unijuris.sites.uu.nl/wp-content/uploads/sites/9/2014/12/The-Concept-of-Jurisdiction-in-International-Law.pdf">territorial principle</a>, the right to regulate acts that occur within their territory. UK law would apply to online content hosted on servers located in the UK, for example, or to an internet user uploading content online from the UK.</p>
<p>But of course internet users can access content created and hosted from all over the world, and it is not always possible to tell where it has come from or where it is hosted. This limits the territorial principle, and makes establishing the existence of a territorial connection with <a href="https://www.yalelawjournal.org/article/the-un-territoriality-of-data">“un-territorial data”</a> a key requirement. Unfortunately, there is no international agreement on how to do so.</p>
<p>Instead, states have interpreted the principle quite broadly to argue that the <a href="https://labs.ripe.net/Members/sara_solmone/establishing-jurisdiction-online">mere accessibility</a> of online content from within their territory is deemed sufficient. For example, in court cases against <a href="https://www.cps.gov.uk/legal-guidance/obscene-publications">Perrin</a> and <a href="https://www.unodc.org/cld/case-law-doc/cybercrimecrimetype/fra/2000/uejf_and_licra_v_yahoo_inc_and_yahoo_france.html">Yahoo</a>, UK and French courts respectively applied their national laws to online content accessible in their countries, even though it had been uploaded from and was hosted in the US. The act of publishing content online, the courts argued, is equal to physically acting or producing adverse effects within their territory irrespective of its origin.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/238935/original/file-20181002-85632-ed01fl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/238935/original/file-20181002-85632-ed01fl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=420&fit=crop&dpr=1 600w, https://images.theconversation.com/files/238935/original/file-20181002-85632-ed01fl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=420&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/238935/original/file-20181002-85632-ed01fl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=420&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/238935/original/file-20181002-85632-ed01fl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=528&fit=crop&dpr=1 754w, https://images.theconversation.com/files/238935/original/file-20181002-85632-ed01fl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=528&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/238935/original/file-20181002-85632-ed01fl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=528&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Who decides what people can and cannot say online? Many accept there should be limits. Few agree on details.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/censored-dialog-bubble-text-metaphor-meaning-498512638">M-SUR/Shutterstock</a></span>
</figcaption>
</figure>
<h2>The too-long arm of the law?</h2>
<p>So we increasingly see states tending to impose measures that go well beyond their borders. For example, in a case regarding “the right to be forgotten”, the <a href="https://www.cnil.fr/fr/node/15790">French Data Protection Authority</a> ordered Google to remove search results not just from its European versions, but from all its geographical extensions in order to make the search results inaccessible worldwide. This case shows a national court using the fact that the US-based company Google conducts business in France to impose the global application of its domestic laws.</p>
<p>This is problematic because it inevitably runs up against the rights and freedoms of foreign citizens abroad, who should in theory only need to comply with the local laws in their own country. Currently, the French case is <a href="http://curia.europa.eu/juris/document/document.jsf?text=&docid=195494&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=38822">pending</a> before the European Court of Justice. The “right to be forgotten”, which is protected by EU law, does not have universal application. So while internet users in the EU might have a right to have some personal information removed, internet users in foreign countries where that information is legal have a right to access it.</p>
<p>But if the global delisting order were enforced, internet users in those other countries would see their freedom to access information violated by the decision of a foreign authority, in a foreign jurisdiction, based on foreign law. If all states adopted this approach, it would only be a matter of time before internet users in Britain found their own right to freedom of information on the line.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/238931/original/file-20181002-85602-qwga6v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/238931/original/file-20181002-85602-qwga6v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/238931/original/file-20181002-85602-qwga6v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/238931/original/file-20181002-85602-qwga6v.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/238931/original/file-20181002-85602-qwga6v.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/238931/original/file-20181002-85602-qwga6v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/238931/original/file-20181002-85602-qwga6v.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/238931/original/file-20181002-85602-qwga6v.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Error 451, indicating online content that has been removed for legal reasons, is named after Ray Bradbury’s dystopian novel Fahrenheit 451.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/http-error-451-unavailable-legal-reasons-431331046">M-SUR/Shutterstock</a></span>
</figcaption>
</figure>
<h2>Defining and regulating ‘harm’</h2>
<p>On the other hand, states have a right to regulate to protect citizens from harm. Indeed, the actions of foreign-based corporations and the way in which they manage online content can negatively affect internet users’ rights worldwide. As <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3096330">has been pointed out</a>, a multinational company’s choice of where to host its data, the country in which it is based (and consequently the laws with which it must comply), and what it includes in or excludes from its terms of service all significantly affect the rights to privacy and freedom of expression of internet users worldwide. </p>
<p>Some companies <a href="https://www.internetjurisdiction.net/uploads/pdfs/Papers/Content-Jurisdiction-Policy-Options-Document.pdf">voluntarily perform global takedowns</a> of content at the request of governments or users, based on their own terms of service. But international cooperation is needed, and is preferable to unilateral actions by courts that have very broad extraterritorial effects. A number of international multistakeholder groups working on internet governance are exploring possible solutions, including the <a href="https://www.itu.int/net/wsis/index.html">World Summit on the Information Society</a>, the <a href="https://www.intgovforum.org/multilingual/">Internet Governance Forum</a>, meetings organised by <a href="https://www.icann.org/resources/pages/welcome-2012-02-25-en">ICANN</a> and the <a href="https://www.nro.net/about-the-nro/regional-internet-registries/">Regional Internet Registries</a>, and international conferences organised by the <a href="https://www.internetjurisdiction.net/event/3rd-global-conference-of-the-internet-jurisdiction-policy-network-june-3-5-2019">Internet & Jurisdiction Policy Network</a>. </p>
<p>But there are many missing elements that make progress difficult. When it comes to <a href="https://www.internetjurisdiction.net/uploads/pdfs/Papers/Content-Jurisdiction-Policy-Options-Document.pdf">regulating online content</a>, there is no international agreement on how states should exercise their jurisdiction, on what kind of content should be considered abusive, on when or whether internet companies should be responsible for their users’ content at all, on how or if they should remove content considered harmful or abusive, or on whether such removals should be global or limited in scope.</p>
<p>These problems are so intertwined with the sovereignty of each state that a single international agreement managing all the issues and acceptable to all is unrealistic. But agreed common international guidelines on how to address these issues are needed, and drafting them will require input from nation states, internet companies, the internet technical community, and voices representing users’ rights and civil society.</p>
<p class="fine-print"><em><span>Sara Solmone is a member of the UK England Chapter of the Internet Society and an observer and part of the Membership Committee at GigaNet.
Sara was also a RIPE Academic Cooperation Initiative (RACI) fellow at Réseaux IP Européens 75 (RIPE).</span></em></p>
<p class="fine-print"><em>The borderless nature of the internet makes it hard to pull the plug on social media talk that crosses the line.</em></p>
<p class="fine-print">Sara Solmone, Postgraduate Teaching Assistant, University of East London. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Propaganda-spewing Russian trolls act differently online from regular people</h1>
<figure><img src="https://images.theconversation.com/files/233936/original/file-20180828-86120-e6l4m6.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C9000%2C5995&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">They may look similar, but online trolls act differently.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/another-day-office-online-troll-1044083503">Daren Woodward/Shutterstock.com</a></span></figcaption></figure>
<p>As information warfare becomes more common, <a href="https://theconversation.com/how-the-russian-government-used-disinformation-and-cyber-warfare-in-2016-election-an-ethical-hacker-explains-99989">agents of various governments are manipulating social media</a> – and therefore <a href="https://theconversation.com/weaponized-information-seeks-a-new-target-in-cyberspace-users-minds-100069">people’s thinking</a>, <a href="https://www.theguardian.com/world/2017/nov/14/how-400-russia-run-fake-accounts-posted-bogus-brexit-tweets">political actions</a> and <a href="https://www.cfr.org/backgrounder/russia-trump-and-2016-us-election">democracy</a>. Regular people need to know a lot more about what information warriors are doing and <a href="https://fivethirtyeight.com/features/why-were-sharing-3-million-russian-troll-tweets/">how they exert their influence</a>. </p>
<p>One group, a Russian government-sponsored troll farm called the Internet Research Agency, was the subject of a <a href="https://www.justice.gov/file/1035477/download">federal indictment issued in February</a>, stemming from Special Counsel <a href="https://theconversation.com/us/topics/russia-investigation-40039">Robert Mueller’s investigation into Russian activities</a> aimed at influencing the 2016 U.S. presidential election.</p>
<p>Our recent <a href="https://arxiv.org/abs/1801.09288">study of that group’s activities</a> reveals that there are some behaviors that might help identify propaganda-spewing trolls and tell them apart from regular internet users.</p>
<h2>Targeted tweeting</h2>
<p>We looked at 27,291 tweets posted by 1,024 Twitter accounts controlled by the Internet Research Agency, <a href="https://web.archive.org/web/20180731150149/https://democrats-intelligence.house.gov/uploadedfiles/exhibit_b.pdf">based on a list</a> released by <a href="https://web.archive.org/web/20180731150121/https://democrats-intelligence.house.gov/social-media-content/">congressional investigators</a>. We found that these Russian government-sponsored accounts focused on tweeting about specific world events like the Charlottesville protests, specific organizations like ISIS, and political topics related to Donald Trump and Hillary Clinton.</p>
<p>That finding fits with other research showing that Internet Research Agency trolls <a href="https://faculty.washington.edu/kstarbi/examining-trolls-polarization.pdf">infiltrated and exerted influence in online communities</a> with both left- and right-leaning political views. That helped them muddy the waters on both sides, stirring discord across the political spectrum.</p>
<h2>Distinctive behavior</h2>
<p>We also found that these troll-farm accounts behaved differently from regular people online. For example, when declaring their locations, they listed a country, but not any particular city in that country. That’s unusual: Most Twitter users tend to be more specific, listing a state or town, as we found when we sampled 1,024 Twitter accounts at random. The most common location designation for Russian troll accounts was “U.S.,” followed by “Moscow,” “St. Petersburg,” and “Russia.”</p>
<p><iframe id="tO1s0" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/tO1s0/5/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p><iframe id="I0D3y" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/I0D3y/2/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>In addition, the troll accounts were more likely to tweet using Twitter’s own website on a desktop computer – labeled in tweets as “<a href="https://www.dmnews.com/channel-marketing/social/article/13036318/the-surprising-popularity-of-brands-using-twitter-web-client">Twitter Web Client</a>.” By contrast, we found regular Twitter users are much more likely to use Twitter’s mobile apps for iPhone or Android, or specialized apps for managing social media, like TweetDeck.</p>
<p><iframe id="hu2aQ" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/hu2aQ/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
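<p>Behavioral signals like these – country-only location strings and the “Twitter Web Client” source label – can be tallied straight from tweet metadata. The sketch below is only an illustration of that kind of tally: the records are invented, and the study’s actual code is not shown in this article.</p>

```python
from collections import Counter

# Invented example records; in the study, each tweet carried the account's
# declared profile location and a "source" field naming the client app.
tweets = [
    {"location": "U.S.", "source": "Twitter Web Client"},
    {"location": "Moscow", "source": "Twitter Web Client"},
    {"location": "Austin, TX", "source": "Twitter for iPhone"},
    {"location": "Leeds, UK", "source": "TweetDeck"},
]

def field_distribution(records, field):
    """Tally how often each value of a metadata field appears."""
    return Counter(r[field] for r in records)

locations = field_distribution(tweets, "location")
sources = field_distribution(tweets, "source")
# Troll accounts skewed toward country-only locations and the
# "Twitter Web Client" source; regular accounts skewed toward
# city-level locations and mobile or third-party apps.
```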
<h2>Working many angles</h2>
<p>Looking at the Internet Research Agency accounts over 21 months, between January 2016 and September 2017, we found that they frequently reset their online personas by changing account information like their name and description and by mass-deleting past tweets. In this way, the same account – still retaining its followers – could be repurposed to advocate a different position or target a different demographic of users.</p>
<p>For instance, on May 15, 2016, the troll account with the Twitter ID number 4224912857 was calling itself “Pen_Air” with a profile description reading “National American news.” This particular troll account tweeted 326 times, while its followers rose from 1,296 to 4,308 between May 15, 2016, and July 22, 2016. </p>
<p>But as the U.S. presidential elections approached, it changed: On September 8, 2016, the account changed its name to “Blacks4DTrump” and its profile description to “African-Americans stand with Trump to make America Great Again!” Over the next 11 months, it tweeted nearly 600 times – far more often than its previous identity had. This activity no doubt helped increase the account’s follower count to nearly 9,000.</p>
<p>The activity didn’t stop after the election. Around August 18, 2017, the account was repurposed again. Almost all of its previous tweets were deleted – leaving just 35. And its name became “southlonestar2,” with a description as “Proud American and TEXAN patriot! Stop ISLAM and PC. Don’t mess with Texas.”</p>
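<p>A repurposing like this leaves a machine-detectable trace: the screen name and description change and the visible tweet count collapses after a mass deletion, while the account ID and its followers persist. The rough heuristic below is our illustration of that idea only – the field names, the 50% threshold, and the snapshot values (loosely echoing the “Blacks4DTrump” to “southlonestar2” switch) are assumptions, not the study’s actual method.</p>

```python
# Two snapshots of the same account ID at different times; the values are
# illustrative, loosely based on the repurposing example described above.
before = {"name": "Blacks4DTrump", "tweet_count": 900, "followers": 9000}
after = {"name": "southlonestar2", "tweet_count": 35, "followers": 9100}

def looks_repurposed(earlier, later):
    """Flag a likely persona reset: the display name changed AND the
    visible tweet count dropped sharply (suggesting mass deletion),
    even though the account and its follower base persisted."""
    name_changed = earlier["name"] != later["name"]
    mass_deletion = later["tweet_count"] < 0.5 * earlier["tweet_count"]
    return name_changed and mass_deletion

print(looks_repurposed(before, after))  # True for this pair of snapshots
```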
<p>In all three incarnations the account’s tweets focused on right-wing political topics, using hashtags like #NObama and #NeverHillary and retweeting <a href="https://web.archive.org/web/20180731150149/https://democrats-intelligence.house.gov/uploadedfiles/exhibit_b.pdf">other troll accounts, like TEN_GOP and tpartynews</a>. </p>
<p>These troll accounts also often tweeted links to posts from <a href="https://www.washingtonpost.com/news/posteverything/wp/2017/09/20/rt-wants-to-spread-moscows-propaganda-here-lets-treat-it-that-way/">Russian government-sponsored organizations purporting to be news</a>. </p>
<h2>Fighting trolling</h2>
<p>Though our research focused on Twitter, the Internet Research Agency’s activity wasn’t limited to Twitter – or even to Facebook: In early 2018, Reddit announced that <a href="https://www.reddit.com/r/announcements/comments/8bb85p/reddits_2017_transparency_report_and_suspect/">Russian trolls had likely operated</a> on its site as well. That report highlights the fact that the companies hosting social media and online discussion sites are the best informed about what’s happening on their systems. As a result, in our view, the platform companies should provide technical solutions, analyzing activity and taking action to safeguard users from secret influence campaigns by government agents.</p>
<p>Yet even if the large platforms like Twitter, Facebook, and Reddit were somehow able to completely eradicate trolling by professional government agents, there are many smaller communities online that may remain vulnerable. Some of our previous work has shown how <a href="https://arxiv.org/abs/1705.06947">ideas that first emerge on fringe sites</a> like 4chan can rapidly make it to mainstream discussions online and in the real world. Russian trolls could take advantage of that tendency to infiltrate these smaller sites, like Gab or Minds, influencing real people who also use those systems – and getting them to spread propaganda and disinformation more widely.</p>
<p>It’s clear to us that <a href="https://theconversation.com/can-facebook-use-ai-to-fight-online-abuse-95203">technological solutions on their own</a> cannot solve the problem of government-sponsored trolling online. The trolls’ efforts take advantage of <a href="https://theconversation.com/how-the-russian-government-used-disinformation-and-cyber-warfare-in-2016-election-an-ethical-hacker-explains-99989">weaknesses in society</a>; the only fix for that is for people, individually and collectively, to think more critically about online information, especially before sharing it.</p>
<p class="fine-print"><em><span>Savvas Zannettou receives funding from EU project "ENCASE" Grant Agreement number 691025. </span></em></p><p class="fine-print"><em><span>Jeremy Blackburn does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em>Some behaviors might help tell propaganda-spewing trolls apart from regular internet users, but the main protection is for people to think more critically about online information.</em></p>
<p class="fine-print">Savvas Zannettou, Ph.D. Student in Electrical Engineering, Computer Engineering and Informatics, Technological University of Cyprus. Jeremy Blackburn, Assistant Professor of Computer Science, University of Alabama at Birmingham. Licensed as Creative Commons – attribution, no derivatives.</p>