<p class="fine-print"><em>Online harassment – The Conversation</em></p>
<h1>Online ‘likes’ for toxic social media posts prompt more − and more hateful − messages</h1>
<p class="fine-print"><em>Published 2023-12-04</em></p>
<figure><img src="https://images.theconversation.com/files/562566/original/file-20231129-15-f1jgk5.jpg?ixlib=rb-1.1.0&rect=355%2C0%2C4540%2C3442&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Posting a hateful message online can have a lot to do with how like-minded bigots will respond.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/concept-of-social-media-communication-and-digital-royalty-free-image/1647673589">Thitima Uthaiburom/iStock via Getty Images Plus</a></span></figcaption></figure><p>The rampant <a href="https://www.nytimes.com/2023/11/15/technology/hate-speech-israel-gaza-internet.html">increase of hate messages</a> on social media is a scourge in today’s technology-infused society. Racism, homophobia, xenophobia and even personal attacks on <a href="https://www.pewresearch.org/politics/2019/10/10/partisan-antipathy-more-intense-more-personal/">people who have the audacity to disagree</a> with someone else’s political opinion – these and other forms of online hate present an ugly side of humanity.</p>
<p>The derision on social media appears in vile and profane terms for all to see. Obviously, the sole purpose of posting online hate is to harass and harm one’s victims, right?</p>
<p>Not necessarily, according to recent studies about hate messaging in social media. Although seeing hate comments is unquestionably upsetting, new research suggests there’s a different reason people post hate: to <a href="https://doi.org/10.1016/j.copsyc.2021.12.010">get attention and garner social approval</a> from like-minded social media users. It’s a social activity. It’s exhilarating to be the nastiest or snarkiest and to get lots of thumbs-ups or hearts. Anecdotal evidence makes a good case for the social basis of online hate, and new empirical research backs it up.</p>
<p>In <a href="https://scholar.google.com/citations?hl=en&user=LJNfe3cAAAAJ">over 30 years of research about online interaction</a>, I’ve documented how people make friends and form relationships online. It now appears that the same dynamics that can make some online relationships intensely positive can also fuel friendly feelings among those who join together online in expressing enmity toward identity groups and individual targets. It’s a “<a href="https://www.adl.org/resources/report/hate-parties-sharing-links-fringe-platforms-drives-antisemitic-comments-youtube">hate party</a>,” more or less.</p>
<h2>Online hate is a social phenomenon</h2>
<p>When you look at online hate messages, you start to notice clues that suggest, more often than not, that hatemongers are posting messages to each other, not to those their messages implicate and denigrate.</p>
<p>For instance, white supremacists and neo-Nazis often include codes and symbols that have shared meaning for the in-group but are opaque to outsiders, including the very people their messages vilify. Including “88” in one’s message, hashtag or handle is one such code; the <a href="https://www.adl.org/resources/hate-symbols/search">Anti-Defamation League’s lexicon of hate symbols</a> explains that the eighth letter of the alphabet is H, so 88 stands for HH, or “Heil Hitler.”</p>
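<p>The letter-position cipher the ADL describes is simple enough to sketch in a few lines of Python. This is a toy illustration, not anything from the ADL's materials; the function names are invented:</p>

```python
# Toy sketch of the letter-position code described above: each digit is
# read as a 1-based position in the alphabet, so 8 -> "H" and "88" -> "HH".
def letter_at(position: int) -> str:
    """Return the alphabet letter at a 1-based position (8 -> 'H')."""
    return chr(ord("A") + position - 1)

def decode_digits(code: str) -> str:
    """Decode a string of single digits into letters by position."""
    return "".join(letter_at(int(d)) for d in code)

print(decode_digits("88"))  # prints "HH" -- the veiled "Heil Hitler"
```

<p>The point of such codes is exactly this asymmetry: trivial for insiders who know the scheme, invisible to everyone else scrolling past.</p>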
<p>Another clue that hate is for haters is the way it has shifted somewhat from mainstream social media to fringe sites that have gotten so hateful and disturbing that it’s hard to imagine any member of a targeted group wanting to peruse those spaces. The fringe sites <a href="https://www.politico.eu/article/fringe-social-media-telegram-extremism-far-right/">say they promote unfettered free speech online</a>. But in doing so, they attract users who write posts that are widely unacceptable and wouldn’t last a minute on mainstream sites with community standards and content moderation.</p>
<p>The kinds of messages that would quickly be flagged as hate speech in any offline setting come to dominate the threads and discussions in some of these spaces. Users curate meme repositories – for instance, the anti-Jewish, anti-LGBTQ and “new (n-word)” collections – that are hideous to most people but funny to those who partake in these secluded virtual backrooms. They’re not spaces where the targets of these epithets are likely to wander.</p>
<h2>Ganging up builds community</h2>
<p>Further research lends credence to the hypothesis that haters are in it for social approval from one another. Internet researchers <a href="https://scholar.google.com/citations?user=fM-s2vQAAAAJ&hl=en&oi=ao">Gianluca Stringhini</a>, <a href="https://scholar.google.com/citations?user=W_ApnIUAAAAJ&hl=en&oi=ao">Jeremy Blackburn</a> and their colleagues have been tracking what they call cross-platform “raids” for a decade.</p>
<p>Here’s how it works. A user on one platform recruits other users to target and harass someone on another platform – the creator of a specific video over on YouTube, for instance. The originator’s post contains a link to the YouTube video and a description of some race or gender issue to prey on, instilling the urge to act among prospective accomplices. Followers <a href="https://doi.org/10.1145/3359309">head to YouTube and pile on</a>, filling the comments section with hate messages.</p>
<p>At first glance, the attack’s purpose appears to be to antagonize a victim rather than to build ties among the antagonists. And, of course, the <a href="https://doi.org/10.1377/hpb20200929.601434">effects on the targeted person</a> can be devastating.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/562567/original/file-20231129-27-br46au.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="man smiling looking at mobile phone" src="https://images.theconversation.com/files/562567/original/file-20231129-27-br46au.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/562567/original/file-20231129-27-br46au.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/562567/original/file-20231129-27-br46au.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/562567/original/file-20231129-27-br46au.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/562567/original/file-20231129-27-br46au.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/562567/original/file-20231129-27-br46au.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/562567/original/file-20231129-27-br46au.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Online posters cheer on the toxic messages of their peers – and savor their social approval.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/mullet-royalty-free-image/1304467498">ianmcdonnell/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>But backstage, the attackers circle back to the platform where the plot was organized. They boast to one another about what they did. They post screen grabs from the YouTube page to show off their denigrating deeds. They congratulate each other. The raid was for getting attention and approval after all, consistent with the social approval theory of online hate.</p>
<h2>Social approval eggs users on to greater extremes</h2>
<p>More direct evidence of the effect of social approval on hate messaging is also emerging. Online behavior researcher <a href="https://scholar.google.com/citations?user=DwYC8vkAAAAJ&hl=en&oi=ao">Yotam Shmargad</a> and his collaborators have studied newspapers’ online discussion websites. When people get “upvotes” on antisocial comments they’ve posted, they become <a href="https://doi.org/10.1177/0894439320985527">more likely to post additional antisocial comments</a>. </p>
<p>A recent study by my colleagues <a href="https://scholar.google.com/citations?user=xlNV12MAAAAJ&hl=en&oi=ao">Julie Jiang</a>, <a href="https://scholar.google.com/citations?hl=en&user=veoVwKwAAAAJ&view_op=list_works">Luca Luceri</a> and <a href="https://scholar.google.com/citations?user=0r7Syh0AAAAJ&hl=en&oi=ao">Emilio Ferrara</a> looked at users of X, the platform formerly known as Twitter, and what happened when they received signs of social approval for their xenophobic tweets. When posters’ toxic tweets got an unusually high number of “likes” from other users, <a href="https://doi.org/10.48550/arXiv.2310.07779">their subsequent messages were even more toxic</a>. The more their messages were retweeted by others, the more posters doubled down with more extreme hate.</p>
<p>These findings do nothing to diminish the real hurt and anger that justifiably arise when people see themselves or their identity groups disparaged online.</p>
<p>The social approval theory of online hate doesn’t explain how people come to hate others or become bigoted in the first place. It does provide a new account for the expression of hate on social media, though, and how social gratifications encourage the ebb and flow of this problematic practice.</p>
<p class="fine-print"><em><span>Joseph B. Walther receives funding from the Institute for Rebooting Social Media, Berkman Klein Center for Internet & Society at Harvard University, and donations for online hate research and the Center for Information Technology and Society at the University of California, Santa Barbara.</span></em></p>
<p><em>Hate is for the haters. Much of the thrill of posting toxic messages can come from the attention and social approval a poster gets from like-minded people.</em></p>
<p class="fine-print">Joseph B. Walther, Visiting Scholar at Harvard University; Distinguished Professor of Communication, University of California, Santa Barbara. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Trolling and doxxing: Graduate students sharing their research online speak out about hate</h1>
<p class="fine-print"><em>Published 2023-11-06</em></p>
<p>An <a href="https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/">increasingly volatile online environment</a> is affecting our society, including members of the academic community and research they pursue.</p>
<p>Graduate students are especially vulnerable to online hate, because cultivating a visible social media presence is <a href="https://www.universityaffairs.ca/career-advice/from-phd-to-life/guest-post-grad-students-need-social-media/">considered essential</a> for mobilizing their research, gaining credibility and finding opportunities as they prepare to compete in an <a href="https://www.universityaffairs.ca/news/news-article/the-mismatch-continues-between-phd-holders-and-their-career-prospects/">over-saturated job market</a>. </p>
<p>Our research <a href="https://bearingwitness.site">has examined the experiences of graduate students</a> who have encountered online hate while conducting their research or disseminating it online, as well as the wider landscape of university protocols and policies.</p>
<p>This research suggests faculty supervisors and university staff responsible for students’ development and well-being are often ill-prepared to support students through online harassment experiences. This means graduate students are left frightened, discouraged and with nowhere to turn for help.</p>
<figure>
<iframe src="https://player.vimeo.com/video/876457075" width="500" height="281" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
<figcaption><span class="caption">Documentary ‘Bearing Witness: Hate, Harassment and Online Public Scholarship.’</span></figcaption>
</figure>
<h2>New policies needed to support researchers</h2>
<p>Research by communications scholars George Veletsianos and Jaigris Hodson, who are part of the <a href="https://harassment.thedlrgroup.com/team/">Public Scholarship and Online Abuse</a> research group, finds that scholars online may be targeted for a range of reasons, but “<a href="https://www.insidehighered.com/views/2018/05/29/dealing-social-media-harassment-opinion">women in particular are harassed partly because they happen to be women who dare to be public online</a>.”</p>
<p>Online hatred <a href="https://www.coe.int/en/web/cyberviolence/cyberviolence-against-women">disproportionately affects</a> women, <a href="https://www.ohchr.org/en/stories/2021/03/report-online-hate-increasing-against-minorities-says-expert">Black, Indigenous, racialized</a>, <a href="https://abcnews.go.com/US/lgbtq-community-facing-increased-social-media-bias-author/story?id=85463533">queer, trans and</a> other marginalized scholars.</p>
<p>New frameworks and policies are required that protect and care <a href="https://theconversation.com/free-speech-on-campus-means-universities-must-protect-the-dignity-of-all-students-124526">for increasingly diverse academic communities</a> to foster equity and diversity.</p>
<h2>Impacts and inadequate support</h2>
<p>Nearly any discipline or research topic can become a target for harassment: from <a href="https://www.universityaffairs.ca/features/feature-article/the-growing-problem-of-online-harassment-in-academe/">English literature to game studies</a> to <a href="https://www.bbc.co.uk/programmes/w3ct369y">virology</a> and <a href="https://www.vice.com/en/article/g5ybw3/climate-scientists-online-abuse">climate science</a>. </p>
<p>Online harassment restricts which research projects are able to proceed and who is able to pursue them. It affects <a href="https://doi.org/10.1080/17439884.2021.1878218">not only researchers’ well-being</a> and career prospects but, by extension, their fields of study and the members of the public those fields serve.</p>
<p>Institutions have yet to develop adequate supports for both faculty and students, even as the <a href="https://blogs.lse.ac.uk/impactofsocialsciences/2023/06/13/its-as-if-it-didnt-exist-is-cyberbullying-of-university-professors-taken-seriously/">pervasiveness of online harassment in academic life</a> has begun to receive greater attention. </p>
<p>Research by Hodson and Veletsianos with Chandell Gosse finds university policies designed to protect community members <a href="https://theconversation.com/post-secondary-workplace-harassment-policies-need-to-adapt-to-digital-life-161325">have not evolved to address the complex forms of harassment that unfold via social media</a>. </p>
<h2>Lack of clear and accessible structures, procedures</h2>
<p>Research from 2020 by Alex Ketchum of McGill University’s Institute for Gender, Sexuality, and Feminist Studies on <a href="https://publicscholarshipandmediawork.blogspot.com/p/report.html">resources provided by media relations offices at Canadian universities</a> indicates that universities’ publicly accessible information about doxxing, trolling and scholarship is scarce. Ketchum addresses challenges related to public scholarship in her book <em><a href="https://www.concordia.ca/press/engage.html#order">Engage in Public Scholarship!: A Guidebook on Feminist and Accessible Communication</a></em>.</p>
<p>Without clear structures and procedures for reporting harassment and supporting community members at an institutional level, universities treat harassment as a series of isolated incidents and fail to grasp the scale of the issue.</p>
<h2>‘Bearing Witness’</h2>
<p>We have facilitated a number of <a href="https://www.yorku.ca/laps/events/laps-research-to-impact-workshop-confronting-online-hate-and-harassment-of-academic-researchers">workshops</a> and <a href="https://www.yorku.ca/research/robarts/events/emerging-scholar-online/?fbclid=IwAR0rlJdnD-2um6XWzQzWpC5vvnJMvHHMW-DFZwbJwEx0v5LxoOJqMWbk0Y4">events</a> that foreground experiences of online harassment among graduate students. This work has been done with support from the <a href="https://irdl.info.yorku.ca/">Institute for Research on Digital Literacies</a>, under the direction of Natalie Coulter. </p>
<p>As part of a multi-stage project titled <a href="https://bearingwitness.site/">Bearing Witness</a>, we conducted one-on-one interviews with seven York University students who have encountered hatred in response to sharing or conducting their research online. </p>
<p>To protect participants from further harassment, we invited student artist-researchers to interpret the anonymized interview transcripts and create original artworks that reflected upon and echoed the stories of their peers. </p>
<p>These stories formed the basis of an exhibition and panel discussion at <a href="https://www.federationhss.ca/en/congress/bearing-witness-hate-harassment-and-public-scholarship">Congress 2023</a>, a national conference of academic researchers held at the end of May and beginning of June 2023, and will inform <a href="https://bearingwitness.site/symposium/">a symposium</a> on Nov. 7 and <a href="https://irdl.info.yorku.ca/events/">a pop-up exhibition</a> in the Media Creation Lab in the Scott Library at York University.</p>
<h2>Researcher experiences of harassment</h2>
<p>In our study, participants described receiving threats of physical and sexual violence, directed not only towards them, but to their families and research participants. These encounters severely impacted students’ mental health and led them to fear for their physical well-being on campus and at conferences. </p>
<p>Each student we spoke with described feeling under-supported by the university, in particular <a href="https://education.macleans.ca/feature/inside-the-mental-health-crisis-at-canadian-universities/">struggling to access mental-health services</a>. Participants also said research methods seminars, research ethics board certification courses and conversations with supervisory committees had not addressed the possibility of encountering online harassment.</p>
<p>The online harassment students encountered also derailed or significantly curtailed their research projects. Students reported that the effects of the harassment forced them to drastically alter, if not entirely halt, their course of study and degree progress.</p>
<h2>Resources to help protect from harassment</h2>
<p>There are many online resources graduate students can consult to protect themselves from online harassment. Resources <a href="https://onlineharassmentfieldmanual.pen.org">from PEN America</a> and <a href="https://gameshotline.org/online-free-safety-guide">gaming communities</a> provide cybersecurity tips to prevent doxxing, assess threats and report harassment to platforms and law enforcement. </p>
<p>However, universities must take steps to lessen the burden for individual victims.</p>
<p>Media relations and knowledge-mobilization offices must develop clear protocols for protecting community members and supporting them in the wake of encountering hatred online. It is equally essential that these policies are readily available and easy to locate for scholars in distress.</p>
<h2>Important work begins with bearing witness</h2>
<p>Faculty must be made aware of the realities of online harassment and available university resources — including campus security, legal clinics and mental health services. </p>
<p><a href="https://datasociety.net/pubs/res/Best_Practices_for_Conducting_Risky_Research-Oct-2016.pdf">Supervisors should be prepared</a> to have frank discussions with graduate students about the potential risks associated with their research and develop a pre-emptive action plan that can be implemented quickly.</p>
<p>This important work must begin with institutions bearing witness to graduate students’ experiences. University staff and faculty must listen to individual voices so that the issue of online harassment can be understood in its full scale and complexity.</p>
<p class="fine-print"><em><span>Alex Borkowski receives funding from SSHRC.</span></em></p><p class="fine-print"><em><span>Natalie Coulter receives funding from SSHRC, as well as from internal grants at York University.</span></em></p><p class="fine-print"><em><span>Marion Tempest Grant does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>To inform university responses to online harassment affecting graduate students, artist-researchers created original artworks in response to interviews with their peers who experienced online hate.</em></p>
<p class="fine-print">Alex Borkowski, PhD Candidate, Communication & Culture, York University, Canada; Marion Tempest Grant, PhD Candidate, Communication & Culture, York University, Canada; Natalie Coulter, Associate Professor of Communication Studies, and Director of the Institute for Research on Digital Literacies, York University, Canada. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>The 23andMe data breach reveals the vulnerabilities of our interconnected data</h1>
<p class="fine-print"><em>Published 2023-10-22</em></p>
<figure><img src="https://images.theconversation.com/files/554853/original/file-20231019-20-wrblg3.jpg?ixlib=rb-1.1.0&rect=13%2C0%2C3040%2C1964&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Users' genetic information was accessed during a hacker attack on 23andMe's user databases.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>On Oct. 6, news broke that 23andMe, the genomics company that collects genetic material from thousands of people for ancestry and genetic predisposition tests, <a href="https://www.wired.com/story/23andme-credential-stuffing-data-stolen/">had a massive data breach</a>.</p>
<p>But as it turns out, the company’s servers were not hacked. Rather, hackers targeted hundreds of individual user accounts — allegedly those protected by <a href="https://blog.23andme.com/articles/addressing-data-security-concerns">passwords reused</a> on other sites. After gaining access to the accounts, hackers could leverage the “<a href="https://customercare.23andme.com/hc/en-us/articles/115004659068-DNA-Relatives-The-Genetic-Relative-Basics">DNA relatives matches</a>” function of 23andMe to get information about thousands of other people.</p>
<p>This data breach challenges how we think about privacy, data security and corporate accountability in the information economy.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/3c7YIo97Zs0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Hackers targeted user passwords to access 23andMe’s user data.</span></figcaption>
</figure>
<h2>Shared information</h2>
<p>Genetic information databases have a notable feature: anyone’s DNA data also reveals information about others who share part of their genetic code with them. When someone sends a sample to 23andMe, the company has genetic information about that person <em>and</em> their relatives even if those relatives didn’t send a sample or consent to any data collection. Their data is inevitably intertwined.</p>
<p>This isn’t just a characteristic of genetic data. Most data is about more than one person because data often describes shared features between people.</p>
<p>The ramifications of overlooking how personal data affects others <a href="https://www.forbes.com/advisor/business/what-is-data-breach/">extend to the entire information economy</a>. Every individual choice about personal data has spillover effects on others. People are exposed to consequences — ranging from financial loss to discrimination — stemming from data practices that depend not only on information about themselves, but also on information about others. </p>
<p>User data-collection agreements can lead to indirect harm to third parties. For example, the negative impacts of the <a href="https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html">Cambridge Analytica scandal extended</a> far beyond those whose data the company collected.</p>
<p>This predicament underscores the collective impact of individual data decisions. </p>
<h2>Data analytics</h2>
<p>Algorithms powered by artificial intelligence draw inferences by analyzing the relationships between data points. AI algorithms rely on databases containing information about multiple people to learn things about a particular person or a particular group. </p>
<p>Companies draw conclusions about people by analyzing data collected from others, making probabilistic assessments based on personal characteristics and relationships. Companies continue to add information about people to their datasets daily. And, the more people a dataset like the one built by 23andMe includes, the less someone’s choice not to be part of it matters.</p>
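<p>The point that one person’s opt-out matters less as a dataset grows can be made concrete with a deliberately simplified sketch. Everything below is invented for illustration and is not drawn from 23andMe’s actual methods: even if someone never submits a sample, records about relatives who did support a crude probabilistic guess about them.</p>

```python
# Hypothetical illustration: a non-participant's trait is estimated from
# close relatives who did join the dataset. All data here is invented.
relatives_with_trait = {"sibling": True, "parent": True, "cousin": False}

# Crude estimate: the share of enrolled close relatives showing the trait.
estimate = sum(relatives_with_trait.values()) / len(relatives_with_trait)
print(f"Estimated probability the non-participant has the trait: {estimate:.2f}")
```

<p>A real system would weight relatives by their degree of genetic relatedness, but even this toy version shows why an estimate about the non-participant exists at all, without that person’s consent.</p>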
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/554855/original/file-20231019-18-a4ar1v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a thumb presses a heart button on a smartphone screen" src="https://images.theconversation.com/files/554855/original/file-20231019-18-a4ar1v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/554855/original/file-20231019-18-a4ar1v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=391&fit=crop&dpr=1 600w, https://images.theconversation.com/files/554855/original/file-20231019-18-a4ar1v.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=391&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/554855/original/file-20231019-18-a4ar1v.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=391&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/554855/original/file-20231019-18-a4ar1v.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=492&fit=crop&dpr=1 754w, https://images.theconversation.com/files/554855/original/file-20231019-18-a4ar1v.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=492&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/554855/original/file-20231019-18-a4ar1v.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=492&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">AI-powered algorithms analyze user information and the connections and relationships with other people’s data.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>Similarly, every time a user agrees to the collection, processing or sharing of personal information, it also affects others who share similarities with the user. These collective assessments make data processing profitable, such as through marketing, data sales and business decisions based on consumer behaviour. </p>
<h2>Equity issues</h2>
<p>The interconnected nature of data isn’t a coincidence — it’s at the core of how businesses operate in the information economy. This also creates equity issues.</p>
<p>In the 23andMe case, hackers are offering the assembled genetic information for sale, <a href="https://www.reuters.com/technology/hackers-advertise-sale-23andme-data-leaked-data-forum-2023-10-06/">with lists that include thousands of people</a>. Hackers reportedly assembled and put up for sale <a href="https://nationalpost.com/news/hacker-puts-millions-of-23andme-user-data-up-for-sale-on-the-internet">lists of people with Ashkenazi Jewish ancestry</a>. </p>
<p>Individuals on the list now face an increased risk of discrimination or harassment, as the leaked data includes names and locations. Other information the company holds would allow hackers to do the same for people with a propensity for Type 2 diabetes, Parkinson’s disease or dementia — all of which 23andMe measures — putting them at risk of other harms, from raised insurance premiums to employment discrimination.</p>
<h2>Data’s collective risks</h2>
<p>We often fail to acknowledge the interconnected nature of data because we’re fixated on each individual. As a consequence, companies can exploit one person’s agreement to legitimize data practices involving others. Companies’ legal obligations to obtain individual agreements for data collection fail to recognize broader interests beyond those of the person who agreed. </p>
<p>We need privacy laws attuned to how the information economy works. Providing consent on behalf of others, as 23andMe users did when they clicked “I agree,” would be illegitimate under any meaningful notion of consent. To contain group data harms like those this hack produced, we need substantive rules about what companies can and can’t do. </p>
<p>Prohibitions on indiscriminate data collection and risky data uses avoid leaving unsuspecting individuals as collateral damage. Because corporate data practices can impact <em>everyone</em>, their safety obligations should too.</p>
<p><em>This is a corrected version of a story originally published on Oct. 22, 2023. The earlier story incorrectly said that 23andMe was owned by Google, that the data breach was the result of weak rather than reused passwords, and that it affected the public at large, rather than users of 23andMe’s services who opted in to the “DNA relatives matches” feature.</em></p>
<p class="fine-print"><em><span>Ignacio Cofone does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>Our online data is inevitably intertwined with the data of others. Current protections are ill-equipped to address this reality and manage the far-ranging impacts of data breaches.</em></p>
<p class="fine-print">Ignacio Cofone, Associate professor, Law, McGill University. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Online abuse could drive women out of political life – the time to act is now</h1>
<p class="fine-print"><em>Published 2023-09-26</em></p>
<figure><img src="https://images.theconversation.com/files/550072/original/file-20230925-25-y5lqv6.jpg?ixlib=rb-1.1.0&rect=98%2C44%2C5892%2C3943&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock/vchal</span></span></figcaption></figure><p>It is becoming increasingly evident that life in modern politics is presenting women with a stark choice – endure almost constant online threats and abuse or get out of public life. </p>
<p><a href="https://theconversation.com/jacinda-arderns-resignation-gender-and-the-toll-of-strong-compassionate-leadership-198152">Jacinda Ardern, the former prime minister of New Zealand,</a> and Sanna Marin, the former prime minister of Finland, are the two highest profile cases, but the problem is widespread. </p>
<p>Elected representatives have always faced criticism and public scrutiny. Some would argue this is par for the course. But the social media era has normalised ever more aggressive forms of abuse. Politicians can now expect insults, intimidation, cyberbullying and trolling as a regular part of their daily online interactions.</p>
<p>Women in politics can expect even worse. Everything from sexist comments to hate speech, cyberstalking, body shaming and even threats of assault, rape and death, all create a toxic virtual environment that poses a real risk to their participation in public life – and the health of democracy.</p>
<p>The problem is global. Research by the <a href="https://www.ipu.org/news/news-in-brief/2022-11/violence-against-women-parliamentarians-causes-effects-solutions-0">Inter-Parliamentary Union</a>, an organisation that seeks to represent parliaments around the world, revealed that four out of five women parliamentarians have been subjected to psychological violence such as bullying, intimidation, verbal abuse or harassment. </p>
<p>Two thirds have been targeted with humiliating sexual or sexist remarks and more than two out of five have received threats of assault, sexual violence or death.</p>
<p>The abuse against Ardern has been so intense that even in retirement <a href="https://time.com/6250008/jacinda-ardern-ongoing-security-threats/">she’s expected to have extra police protection</a>. Research in the <a href="https://acleddata.com/2021/12/08/violence-targeting-women-in-politics-trends-in-targets-types-and-perpetrators-of-political-violence/">US and Canada</a>, <a href="https://decoders.blob.core.windows.net/troll-patrol-india-findings/Amnesty_International_India_Troll_Patrol_India_Findings_2020.pdf">India</a>, the <a href="https://www.amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-1-1/">UK</a>, <a href="https://www.unwomen.org/sites/default/files/Headquarters/Attachments/Sections/Library/Publications/2014/Violence%20Against%20Women%20in%20Politics-Report.pdf">South East Asia</a>, across <a href="https://ogbv.pollicy.org/report.pdf">Africa</a> and in <a href="https://www.ipu.org/resources/publications/issue-briefs/2018-10/sexism-harassment-and-violence-against-women-in-parliaments-in-europe">Europe</a> reveals broadly similar findings.</p>
<p>Ongoing research at the <a href="https://www.universityofgalway.ie/about-us/news-and-events/news-archive/2020/october/online-abuse-and-threats-of-violence-against-female-politicians-on-the-rise.html">University of Galway</a> on the experiences of female politicians in Ireland – from local councillors to former government ministers – paints a similarly worrying picture. In qualitative interviews conducted with colleagues as part of this research, we’ve found that more than nine out of ten respondents reported receiving abusive messages. </p>
<p>These ranged from foul language to hateful comments about their appearance and intelligence. Almost three quarters said they had experienced threats of physical violence on social media and 38% said they had received threats of rape or sexual violence – all criminal offences under Irish law.</p>
<p>Joan Burton, the former tánaiste (deputy prime minister) of Ireland, previously revealed she had been <a href="https://www.irishmirror.ie/news/irish-news/politics/ex-tanaiste-joan-burton-reveals-12269260">threatened with an acid attack</a>, and had received death threats from internet trolls. Intersectional cyberabuse is also commonplace, according to a study published by the <a href="https://www.europarl.europa.eu/RegData/etudes/STUD/2021/662621/EPRS_STU(2021)662621_EN.pdf">European Parliament</a>. Women politicians who belong to minority racial or ethnic backgrounds, or identify as LGBTQI+, are frequent targets.</p>
<p>And of course it is not just politicians who are at risk. The <a href="https://onlineviolencewomen.eiu.com/">Economist Intelligence Unit</a> has reported that more than one in three women have experienced online violence.</p>
<h2>Driving women out</h2>
<p>All this risks having a chilling effect on the participation and engagement of women in civic and political life – not just as politicians but as participants in the online debates that now drive so much of political culture. A global survey by Washington-based non-profit <a href="https://www.ndi.org/tweets-that-chill">National Democratic Institute</a> found that more than half of young women who posted political opinions online were attacked for their views.</p>
<p>This abuse isn’t just a collection of isolated incidents – it’s a systemic problem that erodes our democratic values. One in five Irish female politicians who responded to our study said they have considered quitting politics because of the online harassment they have received. Safety concerns for themselves, their staff and their families further deter participation. Some respondents also said they didn’t feel safe going to public meetings.</p>
<p>A 2021 report by <a href="https://stratcomcoe.org/publications/abuse-of-power-coordinated-online-harassment-of-finnish-government-ministers/5">Nato</a> tracked abuse received by Finnish female government ministers, including Marin, on X (formerly Twitter) and found large volumes of hostile, gendered attacks. The report uncovered routine use of terms like “lipstick government”, “feminist quintet” and “tampax team” to refer to the government. </p>
<p>A key point in the Nato report is that these attacks were coordinated by those actively seeking to disrupt democracy. This amounts to compelling evidence that the problem runs deep, illustrating that people attempting to undermine a government have recognised attacking women as a winning strategy. </p>
<p>The examples highlighted in the report don’t merely revolve around hatred towards these women. They underscore that those seeking to oppose a government understand this form of hatred is an effective means to achieve their goals. This suggests a disconcerting indifference on the part of the attackers but also a perception that nothing can or will be done to counter their attacks.</p>
<p>After years of progress on increasing female participation in political life, democracies around the world are now in real danger of regressing if women are driven out of politics.</p>
<h2>We know the problem, we know the solutions</h2>
<p>Tackling cyber-violence against women in politics is complicated but that doesn’t mean we cannot take action. Laws already exist that are supposed to protect women from this kind of abuse but they are not being vigorously enforced.</p>
<p>It’s also time to rein in the tech platforms and hold them legally accountable for the toxic content they host, pushed out by their algorithms. A collective international effort is needed to advocate for tough sanctions. </p>
<p>That should include, for example, an online safety tsar with the power to force these monoliths to take down abusive content and stop it from spreading. Tech companies that consistently allow hate to spread should face massive fines. </p>
<p>Public awareness and education campaigns should target boys and men, emphasising respectful online behaviour and critical thinking to encourage them to question harmful stereotypes and biases. They should be taught digital literacy to better understand the consequences of their actions online. Meanwhile, robust support systems are needed for women politicians facing abuse.</p>
<p>The impact of online abuse on <a href="https://www.liebertpub.com/doi/full/10.1089/cyber.2020.0253?casa_token=mYmW6o8HwcIAAAAA%3ANXbKJZWbltb-16jPYgMWXhLy52DYXNFzDF9qeUnGGT8Jz5QTpnKns32rgqzREOB8mB6Fyqs0J6-cdw">female politicians is significant</a>. And if the issue isn’t addressed, it could lead to dire consequences for democracy as women retreat from positions of power.</p>
<p class="fine-print"><em><span>Tom Felle does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Inaction on gendered abuse is making it an even more effective tool for discouraging women from taking public office.Tom Felle, Associate Professor of Journalism, University of GalwayLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2092082023-08-07T12:06:45Z2023-08-07T12:06:45ZYoung people need more support coping with online sexual harms<figure><img src="https://images.theconversation.com/files/540255/original/file-20230731-157556-npzrog.jpg?ixlib=rb-1.1.0&rect=15%2C0%2C5309%2C2985&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Motivating young people to think critically about online risks helps them understand how stereotypes, inequalities and sexist double standards impact people online.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure>
<p>Digital technologies and the internet have become a part of daily life for many young people in Canada and worldwide. While that increased connectivity brings many benefits, it can also open youth up to online harm and abuse. It is important that meaningful supports are in place to protect young people from sexual harm. </p>
<p>In 2020, humanitarian organization Plan International <a href="https://www.planinternational.nl/uploaded/2020/09/SOTWGR2020-CommsReport-EN.pdf?x10967">surveyed just over 14,000 young girls and women</a> aged 15-25 in 22 countries, including Canada. Fifty-eight per cent of participants reported having personally experienced some form of online harassment, including sexual harassment. </p>
<p>People who have experienced these problems report <a href="https://www.cigionline.org/publications/supporting-safer-digital-spaces/">significant adverse effects</a> on their well-being, including <a href="https://webfoundation.org/2020/11/the-impact-of-online-gender-based-violence-on-women-in-public-life/">lower self-esteem, increased anxiety, stress</a> and even <a href="http://www.bwss.org/wp-content/uploads/2014/05/CyberVAWReportJessicaWest.pdf">attempts at self-harm</a>.</p>
<p>Further, research has shown that rates of sexual harm have increased among people with one or more marginalized identities related to race, <a href="https://www.cigionline.org/static/documents/SaferInternet_Special_Report.pdf">sexual orientation</a> or disability. </p>
<p>Young people who <a href="https://mediasmarts.ca/sites/default/files/2023-07/report_ycwwiv_trends_recommendations.pdf">experience this kind of discrimination</a> can face a higher risk of significant mental health problems.</p>
<p>Despite the severity of these harms, much of Canadian education, social supports and laws do not provide young people with the tools and protection they want and need. </p>
<p>Parents, teachers, technology companies, civil society organizations and governments are grappling with how to support young people in these cases. So, where are we going wrong?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/540262/original/file-20230731-104526-v5p4rm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A young woman looks at a phone with an upset look." src="https://images.theconversation.com/files/540262/original/file-20230731-104526-v5p4rm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/540262/original/file-20230731-104526-v5p4rm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/540262/original/file-20230731-104526-v5p4rm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/540262/original/file-20230731-104526-v5p4rm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/540262/original/file-20230731-104526-v5p4rm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=501&fit=crop&dpr=1 754w, https://images.theconversation.com/files/540262/original/file-20230731-104526-v5p4rm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=501&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/540262/original/file-20230731-104526-v5p4rm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=501&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Online harassment and abuse can negatively impact a young person’s mental health and self-esteem.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>We need to use the right words</h2>
<p><a href="https://1332d589-88d9-46fd-b342-d3eba2ef6889.usrfiles.com/ugd/1332d5_0b255967851a48c580f8a3c23e786399.pdf">Our research shows</a> that terms like “cyberbullying” no longer capture the scope of harms young people experience in digital spaces. Using this term can downplay the seriousness of the issue because it evokes an idea of schoolyard teasing rather than some of the more serious forms of sexual harms that youth can experience.</p>
<p>These digital harms can include <a href="https://doi.org/10.1007/978-3-030-83734-1_31">receiving unsolicited explicit images</a>, sexual harassment, exploitative sexual extortion and non-consensual distribution of intimate images. Many of these behaviours fall outside of what the average person would imagine when they think of cyberbullying and require new terminology that accurately describes what youth are experiencing.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/be-careful-with-photos-talk-about-sex-how-to-protect-your-kids-from-online-sexual-abuse-139971">Be careful with photos, talk about sex: how to protect your kids from online sexual abuse</a>
</strong>
</em>
</p>
<hr>
<p>As a group of leading scholars studying the unique challenges of navigating relationships and sexual experiences online, we have adopted the term “technology-facilitated sexual violence” to describe the sexual harms young people experience in digital spaces.</p>
<p>Our website offers a <a href="https://www.diydigitalsafety.ca/resources">hub of resources</a> to help support young people and address technology-facilitated sexual violence.</p>
<p>Through our five-year research project, <a href="https://www.diydigitalsafety.ca/">Digitally Informed Youth (DIY) Digital Safety</a>, we will engage with young people and the adults who support them. This is the first research project in Canada to specifically examine technology-facilitated sexual violence among young people aged 13 to 18. We aim to understand their challenges, how they cope and their ideas for solutions.</p>
<p><a href="https://www.diydigitalsafety.ca/publications">Our research</a> has emphasized that tackling this problem requires acknowledging young people’s integrated digital and physical lives and recognizing that technology can both facilitate harm and be harnessed to combat it. </p>
<h2>Lack of Canadian research</h2>
<p>Educators and policymakers must understand the problem within the unique context of Canadian society. Although there is a growing body of Canadian research on technology-facilitated sexual violence, most research on this topic has been conducted in countries like the United States or Australia.</p>
<p>Specifically, there is little research on what young people in Canada are experiencing online, what terminology we should use to identify these harms and what supports young people find effective. Additionally, some young people in Canada face challenges because they live in remote communities or have less access to supportive resources.</p>
<p>It is essential to have contextual evidence-based research so that educators can talk to young people about their rights, understand what behaviour is harmful and know how young people should respond to abusive sexual behaviours online. Youth voices and perspectives must be included in this analysis.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/540270/original/file-20230731-227785-volgbk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="One person placing their hands around another's." src="https://images.theconversation.com/files/540270/original/file-20230731-227785-volgbk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/540270/original/file-20230731-227785-volgbk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/540270/original/file-20230731-227785-volgbk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/540270/original/file-20230731-227785-volgbk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/540270/original/file-20230731-227785-volgbk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/540270/original/file-20230731-227785-volgbk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/540270/original/file-20230731-227785-volgbk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Supporting young people means creating solutions based on trust and open dialogue.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Consistent and accessible support</h2>
<p>As technology has evolved, the Canadian legal system has introduced laws to address sexual harms against young people and adults, such as criminal laws against <a href="https://laws-lois.justice.gc.ca/eng/acts/c-46/section-163.1.html">child pornography</a>, <a href="https://laws.justice.gc.ca/eng/AnnualStatutes/2007_20/FullText.html">child luring</a>, <a href="https://laws-lois.justice.gc.ca/eng/acts/c-46/section-162.HTML">voyeurism</a> and <a href="https://www.justice.gc.ca/eng/rp-pr/other-autre/cndii-cdncii/p6.html">non-consensual distribution of intimate images</a>.</p>
<p>However, young people still receive <a href="https://doi.org/10.1177/0964663917724866">confusing messages</a> about how these laws apply to them and which sexual behaviours are harmful. For example, many young people receive inaccurate <a href="https://needhelpnow.ca/app/en/resources_involving_safe_adult">victim-blaming messaging</a> about images they may take of their bodies.</p>
<p>Legal interventions may be an appropriate response in some of the most serious cases of technology-facilitated sexual violence, but <a href="https://doi.org/10.1177/17416590221142762">young people need more than legal measures</a>. In reality, many are looking for various forms of support from schools, friends, <a href="https://mediasmarts.ca/sites/default/files/2023-07/report_ycwwiv_trends_recommendations.pdf">family</a>, non-profit organizations and victim-service organizations.</p>
<p>Currently, school curricula and policies across Canada address technology-facilitated sexual violence in various ways, and the approaches vary significantly among provinces and territories. In some regions, there is minimal or even no language related specifically to technology-facilitated sexual violence in the curricula and policies. </p>
<p>With technology being a consistent part of young people’s lives, it is key that school policies and curricula are updated to address the realities of young people’s increasingly digitized relationships.</p>
<p>To update school policies and curricula effectively, some researchers suggest promoting the concept of being good <a href="https://doi.org/10.1080/14681811.2023.2204223">“sexual citizens”</a> among young people. This means encouraging them to navigate their lives and relationships with a solid ethical and interpersonal foundation. This model shifts away from victim-blaming and abstinence-only messaging. Instead, it focuses on fostering healthy relationships and communication.</p>
<p>Motivating young people to think critically about online risks is an empowering approach. It helps them acknowledge the influence that stereotypes, inequalities and sexist double standards have in these discussions and how they impact individuals’ access to power and resources.</p>
<p>Relying on legal scare tactics or surveillance methods by caregivers and tech companies <a href="https://mediasmarts.ca/sites/default/files/2023-07/report_ycwwiv_trends_recommendations.pdf">undermines trust between young people and the adults in their lives</a>. It also raises concerns among youth about how platforms are using the data collected from them. </p>
<p>Instead, we need solutions based on trust and open dialogue, and for parents, educators, technology companies and policymakers to engage with young people as the first step to creating a culture shift.</p>
<p class="fine-print"><em><span>Alexa Dodge's research receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC). </span></em></p><p class="fine-print"><em><span>Christopher Dietzel receives funding from iMPACTS: Collaborations to Address Sexual Violence on Campus; Social Sciences and Humanities Research Council of Canada (SSHRC) Partnership Grant 895–2016-1026 (Project Director, Shaheen Shariff, Ph.D., James McGill Professor, McGill University).</span></em></p><p class="fine-print"><em><span>Kaitlynn Mendes receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) and the Canada Research Chairs Program.</span></em></p><p class="fine-print"><em><span>Suzie Dunn's research receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC). </span></em></p><p class="fine-print"><em><span>Estefania Reyes does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>New approaches are needed to address the scope of abuse young people can experience when online.Estefania Reyes, PhD student, Sociology, Western UniversityAlexa Dodge, Assistant Professor of Criminology, Saint Mary’s UniversityChristopher Dietzel, Postdoctoral fellow, the Sexual Health and Gender Lab, Dalhousie UniversityKaitlynn Mendes, Canada Research Chair in Inequality and Gender, Western UniversitySuzie Dunn, Assistant Professor, Law, Dalhousie UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1993242023-02-08T10:58:09Z2023-02-08T10:58:09ZHow tech companies are failing women workers and social media users – and what to do about it<figure><img 
src="https://images.theconversation.com/files/508328/original/file-20230206-25-p6o6gc.jpg?ixlib=rb-1.1.0&rect=0%2C386%2C2928%2C1302&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Lerbank-bbk22/Shutterstock</span></span></figcaption></figure><p>From Elon Musk’s <a href="https://theconversation.com/why-elon-musks-first-week-as-twitter-owner-has-users-flocking-elsewhere-193857">erratic start</a> as Twitter’s new owner to <a href="https://www.nytimes.com/2022/11/09/technology/meta-layoffs-facebook.html">Meta’s recent decision</a> to lay off more than 11,000 employees, and an ongoing <a href="https://www.ft.com/content/28f7e49f-09b3-407f-82f8-56683f5d0663">downturn for tech stocks</a>, the social media sector is once again in turmoil. </p>
<p>But while these latest shockwaves have attracted a great deal of public attention, far less is said about their repercussions for women. Big tech companies are failing women on both sides of the screen: their employees and the users of their services. This is why recent moves to <a href="https://www.theguardian.com/technology/2023/feb/04/online-safety-bill-needs-tougher-rules-on-misogyny-say-peers">regulate social media firms</a> should include specific protections for women.</p>
<p>Online abuse, as has been <a href="https://arxiv.org/abs/1902.03093">repeatedly confirmed by academic research</a> and <a href="https://www.amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-1-1/">civil rights groups</a>, often targets women users. One of Musk’s first acts after buying Twitter was to introduce verification to reduce the number of fake accounts. Such accounts are <a href="https://www.compassioninpolitics.com/three_quarters_of_those_experiencing_online_abuse_say_it_comes_from_anonymous_accounts">often cited</a> among the main causes of social media violence. But the <a href="https://www.cleanuptheinternet.org.uk/post/what-do-elon-musk-s-blue-tick-experiments-mean-for-the-uk-s-online-safety-bill">authentication process</a> (since withdrawn after protests from the Twitter community) simply relied on “certified” profiles paying a monthly fee. </p>
<p>As such, the move seemed more like a way to raise revenues than an effective online safety strategy. To make things worse, and more or less simultaneously, Musk also controversially <a href="https://www.euronews.com/next/2023/01/18/which-controversial-figures-has-elon-musk-reinstated-on-twitter">restored the accounts</a> of several high-profile figures previously banned for misogynistic discourse. This included self-defined “sexist” influencer <a href="https://www.theguardian.com/technology/2022/aug/06/andrew-tate-violent-misogynistic-world-of-tiktok-new-star">Andrew Tate</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/elon-musks-twitter-blue-gives-verification-for-a-fee-this-could-make-twitter-even-less-safe-for-women-193967">Elon Musk's 'Twitter Blue' gives verification for a fee – this could make Twitter even less safe for women</a>
</strong>
</em>
</p>
<hr>
<p>Beyond the tycoon’s chaotic approach to leadership, these decisions indicate wider trends within the social media industry with far-reaching ramifications for women. </p>
<p>Over the last few years, in fact, platforms such as Twitter, Facebook, YouTube and TikTok have all responded to mounting public pressure by adopting more stringent guidelines against <a href="https://www.epe.admin.cam.ac.uk/five-things-you-should-know-about-digital-gender-based-violence-dgbv-and-ways-curb-it">gender-based hate speech</a>. These changes, however, have been mostly achieved through <a href="https://mckinneylaw.iu.edu/iiclr/pdf/vol32p97.pdf">self-regulation</a> and voluntary partnerships with the public sector. This approach leaves companies free to reverse previous decisions in the way Musk has.</p>
<p>Besides, censoring individual internet personalities and promoting account verification doesn’t actually address the core causes of social media violence. The actual design of these platforms and the business models these companies employ play a more central role. </p>
<p>Social media platforms want to keep us all online to produce profitable data and maintain audiences for advertisements. They do this with algorithms that create an echo chamber. This means we keep seeing content similar to whatever attracted our clicks in the first place. But research shows this also facilitates the <a href="https://intpolicydigest.org/how-social-media-is-fueling-divisiveness/">circulation of “divisive” messages</a>. It also supports the <a href="https://research-information.bris.ac.uk/en/publications/from-individual-perpetrators-to-global-mobilisation-strategies-th">spread of online sexism</a>, and pushes users who view problematic materials into a “<a href="https://www.theguardian.com/society/2022/oct/30/global-incel-culture-terrorism-misogyny-violent-action-forums">black hole</a>” of related updates.</p>
<p>While the platforms themselves have become problematic for the women who use them, many of the companies behind them are also failing the women workers who build and manage online social media networks.</p>
<h2>Tech company redundancies</h2>
<p>Social media companies’ treatment of employees should also be examined through a gender lens, particularly more recently as they <a href="https://www.businessinsider.com/economic-downturn-tech-industry-layoffs-stock-plunge-funding-slowdown-2022-6?r=US&IR=T">react to a market downturn</a> with mass layoffs and other cost-cutting strategies.</p>
<p>A particularly at-risk category (which I have examined, among others, in my <a href="https://septemberpublishing.org/product/the-threat-why-digital-capitalism-is-sexist-and-how-to-resist/">recently published book</a>) is that of <a href="https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona">social media moderators</a>. These employees are tasked with removing content that violates community standards, and are constantly exposed to misogynistic hate speech, images of sexual violence and non-consensual pornography. <a href="https://www.washingtonpost.com/technology/2020/05/12/facebook-content-moderator-ptsd/">Female staff</a> tend to feel especially affected, and many <a href="https://www.theverge.com/2020/5/12/21255870/facebook-content-moderator-settlement-scola-ptsd-mental-health">develop mental health issues</a>, including depression, anxiety and post-traumatic stress disorder, as a result.</p>
<p>Social media firms and their international subcontractors (to which a large part of moderation operations is outsourced) make other choices that infringe on employees’ rights, particularly those of female moderators. One of the latest has been <a href="https://www.theguardian.com/business/2021/mar/26/teleperformance-call-centre-staff-monitored-via-webcam-home-working-infractions">placing AI-powered cameras</a> in the homes of moderators who work remotely. This is a particularly brutal intrusion for women, since they already often face harassment or safety issues in more public spaces.</p>
<p>Online abuse and workers’ treatment concern people of all genders. Women, however, pay a unique price for social media violence. Recent <a href="https://onlineviolencewomen.eiu.com/">research from The Economist</a> shows that fear of further attacks pushed nine out of ten female victims surveyed to alter their digital habits – 7% even quit their jobs.</p>
<figure class="align-center ">
<img alt="Woman looks at laptop; home in background; remote working." src="https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/508333/original/file-20230206-15-ysb1s7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Moderators delete posts that violate community standards on social media and so are regularly exposed to disturbing content.</span>
<span class="attribution"><span class="source">fizkes/Shutterstock</span></span>
</figcaption>
</figure>
<h2>Specific solutions to online hate</h2>
<p>Just as women workers and users encounter specific issues as a result of social media policies – or lack thereof – the interventions designed to improve their safety and wellbeing should also be specific.</p>
<p>My book looks at how digital capitalists – including but not limited to social media corporations – fail female users and workers, and <a href="https://gen-pol.org/2019/11/when-technology-meets-misogyny-multi-level-intersectional-solutions-to-digital-gender-based-violence/">how to remedy this</a>. Among the reforms I suggest are interventions to make platforms more accountable. </p>
<p>The <a href="https://bills.parliament.uk/bills/3137">UK Online Safety Bill</a>, for example, is set to give regulators the power to fine or prosecute companies that fail to remove harmful material. It is important, though, that policy change in this area specifically identifies women as a protected category, which the bill <a href="https://demos.co.uk/blog/the-online-safety-bill-will-it-protect-women-online/">currently fails to do</a>. Transparency commitments for platforms’ algorithms and regulation of data-mining business models could also help, but these are not yet – or not fully – integrated into most national and international legislation.</p>
<p>And since workers must be protected as much as technology users, it is vital that <a href="https://www.wired.co.uk/article/facebook-content-moderators-ireland">they can organise via trade unions</a>, and that there is a push to ensure employers respect their duty of care towards the workforce. This might involve prohibiting invasive workplace surveillance, for example.</p>
<p>There is one solution to both issues: it is time for social media giants to implement specific strategies to safeguard women on both sides of the screen.</p><img src="https://counter.theconversation.com/content/199324/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Lilia Giugni is affiliated with GenPol - Gender & Policy Insights, a UK-based feminist think tank, and with the Royal Society of Arts.</span></em></p>Women need better protection from online hate and misogyny, both while using social media and when working for technology companies.Lilia Giugni, Research Associate, University of BristolLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1975152023-01-23T13:23:09Z2023-01-23T13:23:09ZOnline racial harassment leads to lower academic confidence for Black and Hispanic students<figure><img src="https://images.theconversation.com/files/505447/original/file-20230119-13-j3f6us.jpg?ixlib=rb-1.1.0&rect=45%2C108%2C5961%2C3899&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Students of color become less confident in their academic abilities when they encounter racially demeaning content online.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/teenager-sending-email-from-smart-phone-in-his-bed-royalty-free-image/537461694?phrase=online%20harassment&adppopup=true">ljubaphoto / Getty Images</a></span></figcaption></figure><p><em>The <a href="https://theconversation.com/us/topics/research-brief-83231">Research Brief</a> is a short take about interesting academic work.</em> </p>
<h2>The big idea</h2>
<p>Online racial discrimination or harassment has a <a href="https://doi.org/10.1007/s10964-022-01689-z">negative effect on the academic and emotional well-being</a> of students of color. That is the key finding from a study I published recently in the Journal of Youth and Adolescence.</p>
<p>For the study, I surveyed 356 Black and Hispanic teens across the U.S. I analyzed their responses to questions about their social media use and experiences. I also asked about their mental health and beliefs about their academic potential. The adolescents were 16 years old on average.</p>
<p>Girls in this study had, on average, one to three more social media accounts than boys, and reported depression scores four points higher – suggesting more depressive symptoms among girls. Black teens reported social media activity three points greater than that of Hispanic teens, and also reported almost 10% more online experiences of discrimination than their Hispanic counterparts.</p>
<p>Black and Hispanic teens who used social media more were more likely to encounter online racial harassment or discrimination – whether as direct victims or as observers of their own racial group, or another, being demeaned or discredited. <a href="https://cyberpsychology.eu/article/view/4237/3282">Brendesha Tynes</a>, a researcher at the University of Southern California, describes online discrimination as “disparaging remarks, symbols, images, behaviors that inflict harm through the use of computers, cell phones and other electronic devices.”</p>
<p>Additionally, students who observed more online racial harassment or discrimination suffered more depression and anxiety than those with fewer of these negative online experiences. Higher levels of depression and anxiety undermined Black and Hispanic adolescents’ confidence in their academic abilities. </p>
<p>Students experienced more depression and anxiety not only when they personally, or members of their own racial or ethnic group, were targeted, but also when they observed other people and other racial or ethnic groups being targeted.</p>
<h2>Why it matters</h2>
<p>When teens encountered online discrimination during their social media use, they held fewer positive beliefs about their academic skills. This is noteworthy because, absent such discrimination, teens who used social media more often had more positive perceptions of their academic skills and abilities than those who used it less.</p>
<p>Online racial discrimination and harassment represent a unique risk for teenagers of color. Not only are they <a href="https://www.pewresearch.org/internet/2016/08/15/blacks-more-likely-than-whites-to-see-and-post-race-related-content-on-social-media/">more likely to see and post more race-related content</a>, but when this race-related content is negative it has <a href="https://www.apa.org/science/about/psa/2015/12/online-racial-discrimination">harmful effects on their mental health, academics and overall behavior</a>.</p>
<p>If society has a better understanding of how online racial discrimination and harassment affects teenagers’ mental health and academic well-being, then schools, parents and youth agencies could be better able to help reduce the harm.</p>
<h2>What other research is being done</h2>
<p>My lab and other researchers are conducting studies to determine other effects that online harassment may have on young people of color. For instance, I am currently exploring whether these negative online experiences may influence young people to engage in social and political activism. This includes protests, voting, canvassing, writing to legislators and community organizing.</p><img src="https://counter.theconversation.com/content/197515/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Alvin Thomas does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Depression and anxiety often follow when teenagers see or experience racial hostilities online.Alvin Thomas, Assistant Professor, Phyllis Northway Faculty Fellow, University of Wisconsin-MadisonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1883172022-08-12T09:04:07Z2022-08-12T09:04:07ZNearly 70% of Premier League footballers are abused on Twitter – we used an AI to sift through millions of tweets<p>As the new Premier League football season gets underway, a few things are certain. There will be goals, drama and excitement, and unfortunately, players will be subjected to vile abuse on social media.</p>
<p>My colleagues at <a href="https://www.turing.ac.uk/">the Alan Turing Institute</a> and I have published a <a href="https://www.turing.ac.uk/research/publications/tracking-abuse-twitter-against-football-players-2021-22-premier-league-season">report</a>, commissioned by <a href="https://www.ofcom.org.uk/home">Ofcom</a>, in which we found that seven out of ten Premier League footballers face abuse on Twitter. One in 14 receives abuse every day. </p>
<p>These are stark statistics, with huge implications for player welfare. Other analysis has revealed a high rate of online abuse, <a href="https://www.thepfa.com/news/2021/8/4/online-abuse-ai-research-study-season-2020-21">particularly racist abuse</a>, of footballers that has gone largely <a href="https://journals.sagepub.com/doi/10.1177/2167479517745300">unchallenged</a> by football governing organisations. Mental health is increasingly a <a href="https://www.thepfa.com/players/wellbeing/mental-health-and-football">concern in football</a>, and there is <a href="https://www.liebertpub.com/doi/full/10.1089/cyber.2020.0253">plenty of evidence</a> that online abuse can lead to a range of mental health problems, from depression to suicidal thoughts.</p>
<p>Our report is one of the first to use artificial intelligence (AI) to systematically detect and track online abuse against footballers at scale. This is almost impossible to do manually because of the sheer size and complexity of social media. </p>
<p>We focused our analysis on Twitter because it is widely used by footballers and fans, and it makes its data freely available to researchers. In total, we collected 2.3 million tweets that mentioned or directly replied to tweets from 618 Premier League footballers during the first half of the 2021-22 season.</p>
<p>At the heart of our analysis is a new machine learning model developed by the Turing’s <a href="https://onlinesafety.turing.ac.uk/">online safety team</a> as part of our <a href="https://onlinesafety.turing.ac.uk/online-harms-observatory/">Online Harms Observatory</a>. This model is able to automatically assess whether or not a tweet is abusive by analysing its language. </p>
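<p>As a vastly simplified illustration of the general idea – this is not the Turing model, whose details are in the report – a language-based abuse detector assigns each tweet a score from its words and flags tweets above a threshold. The word weights and threshold below are invented for the sketch:</p>

```python
import re

# Toy sketch only: the real system is a trained machine learning classifier;
# the per-word weights and threshold here are invented for illustration.
ABUSE_WEIGHTS = {"useless": 1.0, "pathetic": 1.0, "idiot": 1.5,
                 "brilliant": -1.0, "great": -1.0}
THRESHOLD = 1.0

def abuse_score(tweet: str) -> float:
    """Sum the (invented) weights of each word in a lowercased tweet."""
    tokens = re.findall(r"[a-z']+", tweet.lower())
    return sum(ABUSE_WEIGHTS.get(tok, 0.0) for tok in tokens)

def is_abusive(tweet: str) -> bool:
    """Flag a tweet whose cumulative score reaches the threshold."""
    return abuse_score(tweet) >= THRESHOLD

print(is_abusive("What a brilliant goal!"))       # False
print(is_abusive("You useless, pathetic idiot"))  # True
```

<p>A trained model learns these weights (or far richer representations) from thousands of labelled examples rather than having them set by hand, which is why benchmarking against human-labelled tweets, as described below, matters.</p>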
<p>To provide a benchmark for our AI model and a more in-depth breakdown of the tweet content, we also hand-labelled 3,000 tweets, categorising them as positive, neutral, critical or abusive. Critical tweets were those that criticised a player’s actions on or off the pitch, but not in such a way that could be deemed abusive. </p>
<p>We acknowledge that categorising tweets in this way is <a href="https://aclanthology.org/2022.naacl-main.13/">to some degree subjective</a>, but we sought to reduce human bias as much as possible by consistently applying the same definitions and guidelines to all tweets.</p>
<h2>What did we find?</h2>
<p>Of the 3,000 tweets that we hand-labelled, the majority (57%) were positive. Tweets routinely expressed admiration, praise and support for the players, often using emojis, exclamation marks and other indicators of intense positive emotion. A smaller proportion of tweets were labelled as critical (12.5%), neutral (27%) or abusive (3.5%).</p>
<p>Our machine learning model, applied to all 2.3 million tweets, found that 2.6% contained abuse. This might sound like a low percentage, but it represents almost 60,000 abusive tweets over just five months.</p>
<p>Abuse is widespread: 68% of players received at least one abusive tweet during this period. But players have very different experiences online: just 12 players received half of all abuse. Cristiano Ronaldo, Harry Maguire and Marcus Rashford received the most abusive tweets. </p>
<p>Abuse also varied hugely over the course of the season, with big peaks following key events. For instance, the number of abusive tweets spiked on August 27 2021, when <a href="https://www.bbc.co.uk/sport/football/58359561">Manchester United re-signed Cristiano Ronaldo</a>, and November 7 2021, when <a href="https://www.manchestereveningnews.co.uk/sport/football/football-news/harry-maguire-man-united-city-22093728">Harry Maguire sent a tweet</a> apologising after Manchester United lost to Manchester City.</p>
<p>We found that about 8.5% of abusive tweets (0.2% of all tweets) attacked players’ identities by referencing a protected characteristic such as religion, race, gender or sexuality. This is a surprisingly low proportion given the concerns about racial abuse of footballers online. But we only looked at identity attacks using keywords (whereas we had a full AI solution for identifying abuse), and did not look specifically at the experiences of non-white players.</p>
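<p>The keyword approach described above can be sketched roughly as follows. The keyword set here is a deliberately mild placeholder; a real lexicon, curated by researchers, would be far larger and would contain terms we won’t reproduce:</p>

```python
import re

# Illustrative sketch of keyword-based identity-attack flagging. The keyword
# set is a harmless placeholder, not the report's actual lexicon.
IDENTITY_KEYWORDS = {"religion", "race", "gender", "sexuality"}

def references_protected_characteristic(tweet: str) -> bool:
    """True if an (already abusive) tweet contains any identity keyword."""
    tokens = set(re.findall(r"[a-z]+", tweet.lower()))
    return not tokens.isdisjoint(IDENTITY_KEYWORDS)

# Hypothetical sample of tweets already flagged as abusive:
abusive = ["a disgrace to your race, go home",
           "terrible performance, just retire"]
share = sum(references_protected_characteristic(t) for t in abusive) / len(abusive)
print(f"{share:.0%} of abusive tweets are identity attacks")  # prints "50% ..."
```

<p>Keyword matching is cheap but blunt – it misses attacks phrased without the listed terms and can flag innocuous mentions – which is one reason the identity-attack figure should be read more cautiously than the model-based abuse figure.</p>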
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/euro-2020-could-twitter-stop-racist-abuse-before-it-happens-164409">Euro 2020: could Twitter stop racist abuse before it happens?</a>
</strong>
</em>
</p>
<hr>
<h2>Being a good fan online</h2>
<p>Addressing online abuse is not an easy task – finding and categorising abuse is technically difficult and raises fundamental questions around free speech and privacy. But we cannot leave abuse unchallenged.</p>
<p>Some social media platforms, including Twitter, are already taking steps to improve their trust and safety processes, but more can be done. This may include amplifying and promoting content that is not abusive; giving additional, practical support and advice to players (and others) who are receiving large amounts of abuse; and making more use of properly governed machine learning tools to automatically detect and take action against abuse. Ultimately, platforms should shoulder most of the responsibility for cleaning up their services.</p>
<figure class="align-center ">
<img alt="Three men cheer while gathered around a mobile phone, they are sitting at a bar with a pint of beer" src="https://images.theconversation.com/files/478060/original/file-20220808-22-kqvdqp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/478060/original/file-20220808-22-kqvdqp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/478060/original/file-20220808-22-kqvdqp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/478060/original/file-20220808-22-kqvdqp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/478060/original/file-20220808-22-kqvdqp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/478060/original/file-20220808-22-kqvdqp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/478060/original/file-20220808-22-kqvdqp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Getting emotional about football online shouldn’t lead to abuse of individual players.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/men-watching-football-on-smartphone-drinking-1275964108">Prostock-studio / Shutterstock</a></span>
</figcaption>
</figure>
<p>In general, <a href="https://www.ofcom.org.uk/research-and-data/internet-and-on-demand-research/internet-use-and-attitudes/internet-users-experience-of-harm-online">abusive content is underreported</a>. Ofcom polled the public about their <a href="https://www.ofcom.org.uk/news-centre/2022/seven-in-ten-premier-league-footballers-face-twitter-abuse">experiences of players being targeted online</a>, finding that more than a quarter of teens and adults who go online (27%) saw abuse directed at a footballer last season. Among those who came across abuse, more than half (51%) said they found the content extremely offensive and around 30% didn’t take any action in response.</p>
<p>There’s absolutely nothing wrong with being emotional about football and expressing how you feel online, but we should all take care not to cross the line into abuse and intimidation. And if you see someone else being abusive, be proactive. Report it and show that this content has no place in football (or anywhere else). Football is a beautiful game, and we can all help to keep it that way.</p>
<hr>
<p><a href="https://theconversation.com/au/topics/social-media-and-society-125586" target="_blank"><img src="https://images.theconversation.com/files/479539/original/file-20220817-20-g5jxhm.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=144&fit=crop&dpr=1" width="100%"></a></p><img src="https://counter.theconversation.com/content/188317/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Bertram Vidgen receives research funding from Ofcom, DCMS and the EPSRC. He is CEO and co-founder of Rewire, a startup building socially responsible AI for online safety. He has advised Parliament, DCMS and a range of large tech companies. </span></em></p><p class="fine-print"><em><span>Angus Redlarski Williams receives research funding from Ofcom, DCMS and the EPSRC.</span></em></p>One in 14 Premier League footballers receives online abuse every day.Bertie Vidgen, Head of Online Safety, Alan Turing InstituteAngus Redlarski Williams, Data Scientist, Online Harms, Alan Turing InstituteLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1866472022-07-14T04:25:36Z2022-07-14T04:25:36ZSendit, Yolo, NGL: anonymous social apps are taking over once more, but they aren’t without risks<figure><img src="https://images.theconversation.com/files/473783/original/file-20220713-20-pkfvpq.jpeg?ixlib=rb-1.1.0&rect=170%2C161%2C5820%2C3826&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Have you ever told a stranger a secret about yourself online? Did you feel a certain kind of freedom doing so, specifically because the context was removed from your everyday life? Personal disclosure and anonymity have long been a potent mix laced through our online interactions. </p>
<p>We’ve recently seen this through the resurgence of anonymous question apps targeting young people, including Sendit and NGL (which stands for “not gonna lie”). The latter has been installed 15 million times globally, according to recent <a href="https://techcrunch.com/2022/07/11/anonymous-social-ngl-tops-15m-installs-2-4m-in-revenue-as-users-complain-about-being-scammed/">reports</a>.</p>
<p>These apps can be linked to users’ Instagram and Snapchat accounts, allowing them to post questions and receive anonymous answers from followers.</p>
<p>Although they’re trending at the moment, it’s not the first time we’ve seen them. Early examples include ASKfm, launched in 2010, and Spring.me, launched in 2009 (as “Fromspring”).</p>
<p>These platforms have a troublesome history. As a sociologist of technology, I’ve studied human-technology encounters in contentious environments. Here’s my take on why anonymous question apps have once again taken the internet by storm, and what their impact might be.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/473782/original/file-20220713-14-7p7h1u.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A series of screens advertising various features of the 'NGL' app." src="https://images.theconversation.com/files/473782/original/file-20220713-14-7p7h1u.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/473782/original/file-20220713-14-7p7h1u.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=256&fit=crop&dpr=1 600w, https://images.theconversation.com/files/473782/original/file-20220713-14-7p7h1u.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=256&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/473782/original/file-20220713-14-7p7h1u.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=256&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/473782/original/file-20220713-14-7p7h1u.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=322&fit=crop&dpr=1 754w, https://images.theconversation.com/files/473782/original/file-20220713-14-7p7h1u.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=322&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/473782/original/file-20220713-14-7p7h1u.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=322&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The app NGL is targeted at ‘teens’ on the Google app store.</span>
<span class="attribution"><a class="source" href="https://play.google.com/store/apps/details?id=com.nglreactnative&hl=en_US&gl=US">Screenshot/Google Play Store</a></span>
</figcaption>
</figure>
<h2>Why are they so popular?</h2>
<p>We know teens are drawn to social platforms. These networks connect them with their peers, support their journeys towards forming identity, and provide them space for experimentation, creativity and bonding.</p>
<p>We also know they manage online disclosures of their identity and personal life through a technique sociologists call “audience segregation”, or “code switching”. This means they’re likely to <a href="https://oxford.universitypressscholarship.com/view/10.1093/oso/9780199381265.001.0001/oso-9780199381265-chapter-3">present themselves differently</a> online to their parents than they are to their peers. </p>
<p>Digital cultures have long used <a href="https://www.tandfonline.com/doi/abs/10.1080/1369118X.2015.1093531">online anonymity</a> to separate real-world identities from online personas, both for privacy and in response to online surveillance. And research has shown online anonymity <a href="https://spssi.onlinelibrary.wiley.com/doi/full/10.1111/1540-4560.00247">enhances self-disclosure and honesty</a>.</p>
<p>For young people, having online spaces to express themselves away from the adult gaze is important. Anonymous question apps provide this space. They promise to offer the very things young people seek: opportunities for self-expression and authentic encounters.</p>
<h2>Risky by design</h2>
<p>We now have a generation of kids growing up with the internet. On one hand, young people are hailed as pioneers of the digital age – on the other, we fear for them as its innocent victims. </p>
<p>A recent <a href="https://techcrunch.com/2022/06/29/anonymous-social-apps-shift-their-attention-to-instagram-in-the-wake-of-snapchats-ban/%22%22">TechCrunch</a> article chronicled the rapid uptake of anonymous question apps by young users, and raised concerns about transparency and safety. </p>
<p>NGL <a href="https://www.businessinsider.com/ngl-anonymous-instagram-q-and-a-app-surging-in-popularity-2022-7">exploded in popularity</a> this year, but hasn’t solved the <a href="https://www.nbcnews.com/tech/internet/ngl-anonymous-message-app-instagram-tests-link-bullying-rcna36152">issue of</a> hate speech and bullying. Anonymous chat app <a href="https://arstechnica.com/information-technology/2017/04/yik-yak-is-dead-long-live-yik-yak/">YikYak</a> was shut down in 2017 after becoming littered with hateful speech – but has <a href="https://techcrunch.com/2021/08/16/yik-yak-is-back/">since returned</a>. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/473781/original/file-20220713-26-tsnljj.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A screenshot of a Tweet from @Mistaaaman" src="https://images.theconversation.com/files/473781/original/file-20220713-26-tsnljj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/473781/original/file-20220713-26-tsnljj.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=296&fit=crop&dpr=1 600w, https://images.theconversation.com/files/473781/original/file-20220713-26-tsnljj.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=296&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/473781/original/file-20220713-26-tsnljj.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=296&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/473781/original/file-20220713-26-tsnljj.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=372&fit=crop&dpr=1 754w, https://images.theconversation.com/files/473781/original/file-20220713-26-tsnljj.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=372&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/473781/original/file-20220713-26-tsnljj.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=372&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Anonymous question apps are just one example of anonymous online spaces.</span>
<span class="attribution"><a class="source" href="https://twitter.com/Mistaaaman/status/1126585149561421824">Screenshot/Twitter</a></span>
</figcaption>
</figure>
<p>These apps are designed to hook users in. They leverage certain platform principles to provide a highly engaging experience, such as interactivity and gamification (wherein a form of “play” is introduced into non-gaming platforms).</p>
<p>Also, given their experimental nature, they’re a good example of how social media platforms have historically been developed with a “move fast and break things” attitude. This approach, first articulated by Meta CEO Mark Zuckerberg, has arguably reached its <a href="https://hbr.org/2019/01/the-era-of-move-fast-and-break-things-is-over">use-by date</a>.</p>
<p>Breaking things in real life is not without consequence. Similarly, breaking away from important safeguards online is not without social consequence. Rapidly developed social apps can have harmful <a href="https://www.mdpi.com/1660-4601/15/11/2471">consequences</a> for young people, including cyberbullying, cyber dating abuse, image-based abuse and even online grooming. </p>
<p>In May 2021, <a href="https://techcrunch.com/2021/08/03/anonymous-snapchat-app-sendit-surges-with-3-5m-installs-after-snap-bans-yolo-and-lmk/">Snapchat suspended</a> integrated anonymous messaging apps Yolo and LMK, after <a href="https://www.scribd.com/document/507515040/Snap-Lawsuit">being</a> <a href="https://www.courthousenews.com/wp-content/uploads/2022/01/rodriguez-meta-snap-complaint.pdf">sued</a> by the distraught parents of teens who committed suicide after being bullied through the apps. </p>
<p>Yolo’s developers <a href="https://arstechnica.com/tech-policy/2021/05/snap-cuts-off-yolo-lmk-anonymous-messaging-apps-after-lawsuit-over-teens-death/">overestimated</a> the capacity of their automated content moderation to identify harmful messages. </p>
<p>In the wake of these suspensions, Sendit soared through <a href="https://techcrunch.com/2022/03/17/following-suicides-and-lawsuits-snapchat-restricts-apps-building-on-its-platform-with-new-policies/">the app store charts</a> as Snapchat users sought a replacement. </p>
<p>Snapchat then <a href="https://www.snap.com/en-US/safety-and-impact/post/announcing-new-policies-for-snaps-developer-platform">banned</a> anonymous messaging from third-party apps in March this year, in a bid to limit bullying and harassment. Yet it <a href="https://www.youtube.com/watch?v=7jW-IRuXj4g">appears</a> Sendit can still be linked to Snapchat as a third-party app, so enforcement of the ban seems inconsistent.</p>
<h2>Are kids being manipulated by chatbots?</h2>
<p>It also seems these apps may feature automated <a href="https://www.sciencedirect.com/science/article/pii/S2666827020300062">chatbots</a> parading as anonymous responders to prompt interactions – or at least that’s what staff at TechCrunch found. </p>
<p>Although chatbots can be harmless (or even helpful), problems arise if users can’t tell whether they’re interacting with a bot or a person. At the very least it’s likely the apps are not effectively screening bots out of conversations. </p>
<p>Users can’t do much either. If responses are <a href="https://screenrant.com/ngl-link-qna-instagram-anonymous-explained/">anonymous</a> (and don’t even have a profile or post history linked to them), there’s no way to know if they’re communicating with a real person or not.</p>
<p>It’s difficult to confirm whether bots are widespread on anonymous question apps, but we’ve seen them cause huge problems on other platforms – opening avenues for deception and exploitation.</p>
<p>For example, in the case of <a href="https://journals.uic.edu/ojs/index.php/fm/article/view/6426/5525">Ashley Madison</a>, a dating and hook-up platform that was hacked in 2015, bots were used to chat with human users to keep them engaged. These bots used fake profiles created by Ashley Madison employees. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/anorexia-coach-sexual-predators-online-are-targeting-teens-wanting-to-lose-weight-platforms-are-looking-the-other-way-162938">'Anorexia coach': sexual predators online are targeting teens wanting to lose weight. Platforms are looking the other way</a>
</strong>
</em>
</p>
<hr>
<h2>What can we do?</h2>
<p>Despite all of the above, <a href="https://dl.acm.org/doi/abs/10.1145/3134711">some research</a> has found many of the risks teens encounter online have only brief negative effects, if any. This suggests we may be overemphasising the risks young people face online.</p>
<p>At the same time, implementing parental controls to mitigate online risk is often in tension with young people’s <a href="https://journals.sagepub.com/doi/abs/10.1177/1461444816686318">digital rights</a>. </p>
<p>So the way forward isn’t simple. And just banning anonymous question apps isn’t the solution.</p>
<p>Rather than avoid anonymous online spaces, we’ll need to trudge through them together – all the while demanding as much accountability and transparency from tech companies as we can.</p>
<p>For parents, there are some <a href="https://www.esafety.gov.au/parents/resources">useful resources</a> on how to help children and teens navigate tricky online environments in a sensible way.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ending-online-anonymity-wont-make-social-media-less-toxic-172228">Ending online anonymity won't make social media less toxic</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/186647/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Alexia Maddox does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Anyone who has trawled through an internet forum will have seen how anonymity can change people. What happens when young people are thrown into the mix?Alexia Maddox, Research Fellow, Blockchain Innovation Hub, RMIT, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1824242022-06-16T02:24:51Z2022-06-16T02:24:51ZWho really gets fired over social media posts? We studied hundreds of cases to find out<figure><img src="https://images.theconversation.com/files/467642/original/file-20220608-22-5fh727.jpg?ixlib=rb-1.1.0&rect=26%2C116%2C5937%2C3835&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>What you say and do on social media can affect your employment; it can prevent you from getting hired, stall career progression and may even get you fired. Is this fair – or an invasion of privacy?</p>
<p>Our recent <a href="https://journals.sagepub.com/doi/10.1177/20563051221077022">research</a> involved a study of 312 news articles about people who had been fired because of a social media post.</p>
<p>These included stories about posts people had made themselves, such as a teacher who was fired after they came out as bisexual on Instagram, or a retail employee let go over a racist post on Facebook.</p>
<p>It also included stories about posts made by others, such as videos of police engaging in racial profiling (which led to the officers’ dismissal).</p>
<p>Racism was the most common reason people were fired in these news stories, accounting for 28% of cases. Other reasons included other forms of discriminatory behaviour, such as queerphobia and misogyny (7%); workplace conflict (17%); offensive content such as “bad jokes” and insensitive posts (16%); acts of violence and abuse (8%); and “political content” (5%).</p>
<p>We also found these news stories focused on cases of people being fired from public-facing jobs with high levels of responsibility and scrutiny. These included police/law enforcement (20%), teachers (8%), media workers (8%), medical professionals (7%), and government workers (3%), as well as workers in service roles such as hospitality and retail (13%).</p>
<p>Social media is a double-edged sword. It can be used to hold people to account for discriminatory views, comments or actions. But our study also raised important questions about privacy, <a href="https://doi.org/10.1177/0950017015613746">common</a> <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/ijsa.12067">HR practices</a> and how employers use social media to make decisions about their staff.</p>
<p>Young people in particular are expected to navigate social media use (documenting their lives, hanging out with friends, and engaging in self-expression) with the threat of future reputational harm looming.</p>
<div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;913775877208416256&quot;}"></div>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/doxxing-swatting-and-the-new-trends-in-online-harassment-40234">Doxxing, swatting and the new trends in online harassment</a>
</strong>
</em>
</p>
<hr>
<h2>Are all online posts fair game?</h2>
<p>Many believe people just need to accept the reality that what you say and do on social media can be used against you – and that one should only post content they wouldn’t mind their boss (or potential boss) <a href="https://www.newcastleherald.com.au/story/455845/logged-off-six-hunter-workers-fired-over-facebook-comments/">seeing</a>.</p>
<p>But to what extent should employers and recruiting managers respect the privacy of employees, and not use personal social media to make employment decisions?</p>
<p>Or is everything “fair game” in making hiring and firing decisions?</p>
<p>On the one hand, the capacity for using social media to hold certain people (like police and politicians) to account for what they say and do can be immensely valuable to democracy and society. </p>
<p>Powerful social movements such as <a href="https://theconversation.com/and-just-like-that-metoo-changed-the-nature-of-online-communication-174527">#MeToo</a> and <a href="https://theconversation.com/friday-essay-twitter-and-the-way-of-the-hashtag-141693">#BlackLivesMatter</a> used social media to call out structural social problems and individual bad actors.</p>
<p>On the other hand, when everyday people lose their jobs (<a href="https://doi.org/10.1016/j.rpto.2016.09.001">or don’t get hired in the first place</a>) because they’re LGBTQ+, post a photo of themselves in a bikini, or because they complain about customers in private spaces (all stories from <a href="https://journals.sagepub.com/doi/10.1177/20563051221077022">our study</a>), the boundary between professional and private lives is <a href="https://www.wiley.com/en-us/Work%27s+Intimacy-p-9780745650289">blurred</a>.</p>
<p>Mobile phones, emails, working from home, highly competitive employment markets, and the intertwining of “work” with “identity” all serve to blur this line.</p>
<p>Some workers must develop their own <a href="https://link.springer.com/article/10.1007/s10606-018-9315-3">strategies and tactics</a>, such as not friending or following workmates on some social media (which itself can lead to tensions).</p>
<p>And even when people do derive joy and fulfilment from work, they should expect to have some boundaries respected.</p>
<p>Employers, HR workers, and managers should think carefully about the boundaries between professional and personal lives; using social media in employment decisions can be more complicated than it seems. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/467644/original/file-20220608-26-2g1hpu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/467644/original/file-20220608-26-2g1hpu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/467644/original/file-20220608-26-2g1hpu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/467644/original/file-20220608-26-2g1hpu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/467644/original/file-20220608-26-2g1hpu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/467644/original/file-20220608-26-2g1hpu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/467644/original/file-20220608-26-2g1hpu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/467644/original/file-20220608-26-2g1hpu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Many believe people just need to accept the reality that what you say and do on social media can be used against you.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>A ‘hidden curriculum of surveillance’</h2>
<p>When people feel monitored by employers (current, or imagined future ones) as they use social media, this creates a “<a href="https://doi.org/10.1177/1461444818791318">hidden curriculum of surveillance</a>”. For young people especially, this can be damaging and inhibiting.</p>
<p>This hidden curriculum of surveillance works to produce compliant, self-governing citizen-employees. They are pushed to curate often highly sterile representations of their lives on social media, always under threat of employment doom.</p>
<p>At the same time, these very same social media have a clear and productive role in revealing violations of power. Bad behaviour, misconduct, racism, misogyny, homophobia, transphobia, and other forms of bigotry, harassment, and violence have all been exposed by social media.</p>
<p>So this surveillance can be both bad and good: invasive in some cases and for some people (especially young people, whose digitally-mediated lives are managed through this prism of future impact), but also liberating in other scenarios and for other actors, enabling justice, accountability, and transparency.</p>
<p>Social media can be an <a href="https://doi.org/10.1177/0893318914541966">effective way for people to find work</a>, for <a href="https://psycnet.apa.org/record/2016-30476-002">employers to find employees</a>, to present <a href="https://sajhrm.co.za/index.php/sajhrm/article/view/861">professional profiles on sites like LinkedIn</a> or portfolios of work on platforms like Instagram, but these can also be personal spaces even when they’re not set to private.</p>
<p>How we get the balance right between using social media to hold people to account versus the risk of invading people’s privacy depends on the context, of course, and is ultimately about power.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/as-use-of-digital-platforms-surges-well-need-stronger-global-efforts-to-protect-human-rights-online-135678">As use of digital platforms surges, we'll need stronger global efforts to protect human rights online</a>
</strong>
</em>
</p>
<hr>
<p class="fine-print"><em><span>Brady Robards receives funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Darren Graf does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>How we get the balance right between using social media to hold people to account versus the risk of invading people’s privacy depends on the context, of course, and is ultimately about power.Brady Robards, Senior Lecturer in Sociology, Monash UniversityDarren Graf, Assistant researcher, Monash UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1815762022-04-21T19:12:38Z2022-04-21T19:12:38ZIf Elon Musk succeeds in his Twitter takeover, it would restrict, rather than promote, free speech<figure><img src="https://images.theconversation.com/files/459170/original/file-20220421-16-bc4elu.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4626%2C3074&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A lawsuit filed on April 12 alleges that Tesla CEO Elon Musk illegally delayed disclosing his stake in Twitter so he could buy more shares at lower prices.</span> <span class="attribution"><span class="source">(AP Photo/Susan Walsh, File)</span></span></figcaption></figure><p>On April 25, following several weeks of speculation, Twitter announced that <a href="https://www.theguardian.com/technology/2022/apr/25/twitter-elon-musk-buy-takeover-deal-tesla">it had reached an agreement to sell the company to Tesla CEO and multi-billionaire Elon Musk</a>. 
In mid-April, Musk made public <a href="https://www.bloomberg.com/news/articles/2022-04-21/musk-is-exploring-launching-a-tender-offer-for-twitter">his desire to acquire Twitter</a>, make it a private company, and <a href="https://www.bloomberg.com/news/articles/2022-04-14/elon-musk-launches-43-billion-hostile-takeover-of-twitter">overhaul its moderation policies</a>. </p>
<p>Citing ideals of free speech, Musk claimed that “<a href="https://www.washingtonpost.com/technology/2022/04/18/musk-twitter-free-speech/">Twitter has become kind of the de facto town square, so it’s just really important that people have the, both the reality and the perception that they are able to speak freely within the bounds of the law</a>.”</p>
<p>While making Twitter free for all “within the bounds of the law” seems like a way to ensure free speech in theory, in practice, this action would actually serve to suppress the speech of Twitter’s most vulnerable users.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/_NbpH9GdBcQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">CBC’s The National looks at Elon Musk’s attempt at a hostile takeover of Twitter.</span></figcaption>
</figure>
<p>My team’s research into online harassment shows that when platforms fail to moderate effectively, the most marginalized people may withdraw from posting to social media as a way to keep themselves safe.</p>
<h2>Withdrawal responses</h2>
<p>In <a href="https://harassment.thedlrgroup.com/">various research projects since 2018</a>, we have interviewed scholars who have experienced online harassment, surveyed academics about their experiences with harassment, conducted in-depth reviews of literature detailing how knowledge workers experience online harassment, and reached out to institutions that employ knowledge workers who experience online harassment. </p>
<p>Overwhelmingly, throughout our various projects, we’ve noticed some common themes:</p>
<ul>
<li>Individuals are targeted for online harassment on platforms like Twitter simply because they are women or members of a minority group (racialized, gender non-conforming, disabled or otherwise marginalized). The topics people post about matter less than their identities in predicting the intensity of online harassment people are subjected to.</li>
<li>Men who experience online harassment often face a different type of harassment than women or marginalized people do. Women, for example, tend to experience more sexualized harassment, such as rape threats.</li>
<li>When people experience harassment, they seek support from their organizations, social media platforms and law enforcement, but often find the support they receive is insufficient.</li>
<li>When people do not receive adequate support from their organizations, social media platforms and law enforcement, they adopt strategies to protect themselves, including withdrawing from social media.</li>
</ul>
<p>This last point is important, because our data shows that there is a very real risk of losing ideas in the unmoderated Twitter space that Musk says he wants to build in the name of free speech. </p>
<p>In other words, what Musk is proposing would likely make speech on Twitter less free than it is now, because people who cannot rely on social media platforms to protect them from online harassment tend to leave when the consequences become psychologically or socially destructive.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a woman holding a mobile phone has her forehead on a table" src="https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/459166/original/file-20220421-25-n2wpxy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Research shows that when people receive online harassment on a social media platform, they are likely to withdraw from using it.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Arenas for debate</h2>
<p>Political economist John Stuart Mill famously wrote about <a href="https://www.michaelrectenwald.com/essays/john-stuart-mill-the-marketplace-of-ideas-and-minority-opinion">the marketplace of ideas</a>, suggesting that in an environment where ideas can be debated, the best ones will rise to the top. This is often used to justify the view that social media platforms like Twitter should do away with moderation in order to encourage constructive debate. </p>
<p>This implies that bad ideas should be taken care of by a sort of invisible hand, in which people will only share and engage with the best content on Twitter, and the toxic content will be a small price to pay for a thriving online public sphere.</p>
<p>The assumption that good ideas would edge out the bad ones runs counter both to Mill’s original writing and to the actual lived experience of posting to social media for people in minority groups. </p>
<p>Mill advocated that <a href="https://chicagounbound.uchicago.edu/roundtable/vol3/iss1/4">minority ideas be given artificial preference</a> in order to encourage constructive debate on a wide range of topics in the public interest. Importantly, this means that moderation of online harassment is key to a functioning marketplace of ideas.</p>
<h2>Regulation of harassment</h2>
<p>The idea that we need some sort of online regulation of harassing speech is borne out by our research. Our research participants repeatedly told us that the consequences of online harassment were extremely damaging, ranging from burnout and an inability to complete their work to emotional and psychological trauma and even social isolation. </p>
<p>When targets of harassment experienced these outcomes, they often also experienced economic impacts, such as issues with career progression after being unable to complete work. Many of our participants tried reporting the harassment to social media platforms. If the support they received from the platform was dismissive or unhelpful, they felt less likely to engage in the future.</p>
<p>When people disengage from Twitter due to widespread harassment, we lose those voices from the very online public sphere that Musk says he wants to foster. In practice, this means that women and marginalized groups are most likely to be the people who are excluded from Musk’s free speech playground. </p>
<p>Given that our research participants have told us that they already feel Twitter’s approach to online harassment is limited at best, I would suggest that if we really want a marketplace of ideas on Twitter, we need more moderation, not less. For this reason, I’m happy that <a href="https://nypost.com/2022/04/15/twitter-pushes-back-on-elon-musks-takeover-with-poison-pill/">the Twitter Board of Directors is attempting to resist Musk’s hostile takeover</a>.</p>
<p class="fine-print"><em><span>Jaigris Hodson receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) Canada Research Chairs Program. </span></em></p>Elon Musk’s attempt to take over Twitter uses free speech as the motivation, but research shows that unregulated online spaces result in increased harassment for marginalized users.Jaigris Hodson, Associate Professor of Interdisciplinary Studies, Royal Roads UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1779432022-03-07T10:05:26Z2022-03-07T10:05:26ZSix things social media users and businesses can do to combat hate online<figure><img src="https://images.theconversation.com/files/449784/original/file-20220303-11-1il281n.jpg?ixlib=rb-1.1.0&rect=34%2C5%2C3888%2C2578&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/qANvvc543Tg">Lala Azizli/Unsplash</a></span></figcaption></figure><p>Online hostility has become a bigger problem over recent years, particularly with people spending more time on <a href="https://www.statista.com/topics/7863/social-media-use-during-coronavirus-covid-19-worldwide/">social media</a> during the COVID-19 pandemic. A US survey found <a href="https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/">four in ten Americans</a> have experienced harassment online – with three-quarters reporting that the most recent abuse happened on social media.</p>
<p>When online hostility occurs repeatedly, it can be classified into a range of behaviours such as <a href="https://www.emerald.com/insight/content/doi/10.1108/INTR-08-2020-0462/full/html">trolling</a>, <a href="https://www.tandfonline.com/doi/full/10.1080/01972243.2021.1981507">bullying</a> and <a href="https://link.springer.com/article/10.1007/s10551-013-1806-z">harassment</a>.</p>
<p>More severe forms of online hostility can have <a href="https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/">real-world consequences</a> for those affected, such as mental and emotional distress.</p>
<p>Debates about who should be responsible for the management of online hostility have been taking place over the last decade, but with little agreement. I would argue that three different sectors need to be involved: social media platforms, the companies that host business pages on social media, and users themselves.</p>
<p>The foundation of online hostility moderation lies with social media platforms. They must continuously update their processes and features to minimise the problem. We regularly hear that social media platforms are not doing enough <a href="https://www.bbc.co.uk/news/technology-60264178">to counter online hostility</a>, and this may be true. In particular, I believe platforms could do more to educate companies and people about the available features designed to address hostility, and how to implement these appropriately. </p>
<h2>What you can do</h2>
<p>While social media platforms and businesses each play crucial roles in moderation, it’s social media users who experience hostility first-hand, either as observers or victims. </p>
<p>There is no one-size-fits-all approach to responding to online hostility, but here are three courses of action you might consider.</p>
<p><strong>1. Defend the victims</strong></p>
<p>Providing support to the victims of hostility by challenging the aggressor and asking them to stop could be a viable option in less severe instances of online hostility. Recent <a href="https://kar.kent.ac.uk/92684/">research</a> has shown that this can make the victim feel satisfied with the online brand community (for example, the Facebook fanpage) where the hostility occurred.</p>
<p>While this can be an effective way to combat hostility, and can make the victim feel supported, there’s also a risk that it can escalate the situation, with the aggressor continuing to attack the victim, or attacking you. In this case, the two options below may be better.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/social-media-helps-reveal-peoples-racist-views-so-why-dont-tech-firms-do-more-to-stop-hate-speech-140997">Social media helps reveal people's racist views – so why don't tech firms do more to stop hate speech?</a>
</strong>
</em>
</p>
<hr>
<p><strong>2. Hide, mute or block hostile content</strong></p>
<p>Hiding, muting or blocking <a href="https://www.facebook.com/help/207042374708584/?helpref=related_articles">hostile content</a> or users could be appropriate where users feel less comfortable responding, but don’t want to continue to be exposed to harmful content. </p>
<p>This isn’t just for victims. We know harassment doesn’t have to be <a href="https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/">experienced directly</a> to be upsetting. This option puts the user in control of the situation and allows them to either temporarily or permanently block hostility (depending on whether it’s a one-off or happening frequently).</p>
<p><strong>3. Report hostile content</strong></p>
<p>In instances of severe and repeated hostility, <a href="https://help.twitter.com/en/safety-and-security/report-abusive-behavior">reporting content and users</a> to companies or platforms is a suitable option. This requires the user to describe the incident and type of hostility that has occurred.</p>
<figure class="align-center ">
<img alt="A woman looks at her smartphone, appears unhappy." src="https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=405&fit=crop&dpr=1 600w, https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=405&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=405&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=509&fit=crop&dpr=1 754w, https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=509&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/449787/original/file-20220303-21-hdp8az.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=509&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Experiences of online hostility can affect a person’s mental wellbeing.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/unhappy-african-american-lady-looking-her-1823290100">Prostock-studio/Shutterstock</a></span>
</figcaption>
</figure>
<h2>What businesses can do</h2>
<p>Companies that manage social media pages can also block and report content and users, but they have other tools at their disposal, too.</p>
<p>For example, social media platforms enable companies to self-moderate their business pages by blocking offensive words from appearing. Businesses and brands that manage <a href="https://www.facebook.com/help/1182883832161405">a Facebook page</a> can choose up to 1,000 keywords to block in any language (these can include words, phrases and even emojis). If a user posts a comment containing one of the blocked words, their post will not be shown unless the page’s administrator chooses to publish it.</p>
<p>While these tools may help to a degree, automated platform features alone are not enough. Technology is increasingly sophisticated, but it’s difficult for machines to determine whether a particular comment or post is appropriate or not, regardless of the language used. Platforms also rely on human moderators, but these are <a href="https://fortune.com/2018/03/22/human-moderators-facebook-youtube-twitter/">a finite resource</a>.</p>
<p>As part of my research into hostility moderation, I have looked at the <a href="https://www.tandfonline.com/doi/abs/10.1080/0267257X.2017.1329225">different strategies</a> which companies and brands are choosing to adopt. These include:</p>
<ol>
<li><p><strong>Impartial or neutral strategies</strong> mean the companies do not take a particular side during incidents, but provide further information on the topic at the root of the hostility.</p></li>
<li><p><strong>Cooperative moderation strategies</strong> involve reinforcing positive comments and interactions by acknowledging those users who support others during incidents of hostility. </p></li>
<li><p><strong>Authoritative strategies</strong> focus on moderating hostility by referring to the business page engagement rules and, in more extreme instances, by temporarily or permanently blocking users from posting comments. </p></li>
</ol>
<p>My <a href="https://www.sciencedirect.com/science/article/pii/S1094996820301006">research</a> has also found that an authoritative approach to moderation, in requesting users to interact in a more civil manner, generates the most positive attitudes towards the company, and a perception that it has a level of social responsibility.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-facebook-isnt-telling-us-about-its-fight-against-online-abuse-96818">What Facebook isn't telling us about its fight against online abuse</a>
</strong>
</em>
</p>
<hr>
<p>Ultimately, we all have a role to play to address hostility online. Social media platforms are not perfect, but they have made moderation tools widely available, and we should use them where it’s warranted.</p>
<p class="fine-print"><em><span>Denitsa Dineva does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>We all have a role to play to address hostility online.Denitsa Dineva, Lecturer in Marketing and Strategy, Cardiff UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1768442022-02-15T14:59:00Z2022-02-15T14:59:00ZDon’t watch Pam and Tommy – the series turns someone’s trauma into entertainment<figure><img src="https://images.theconversation.com/files/445984/original/file-20220211-27-28j57k.jpg?ixlib=rb-1.1.0&rect=0%2C4%2C3000%2C2083&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/tommy-lee-pamela-anderson-rokbar-hollywood-503770069">Tinseltown / Shutterstock</a></span></figcaption></figure><p>Millions have watched the new series <a href="https://www.imdb.com/title/tt13659418/">Pam and Tommy</a>, retelling the mid-1990s story of celebrities Pamela Anderson and Tommy Lee and their leaked “sex tape”. People involved in the show – including the showrunner and actors – <a href="https://www.newsweek.com/pam-tommy-cast-what-has-pamela-anderson-said-sex-tape-1677235">claim</a> that they are making a “feminist statement” and suggest that the portrayal of events favours Pamela Anderson. Lily James, who plays Anderson, <a href="https://www.indiewire.com/2022/02/pam-and-tommy-lily-james-performance-society-fault-1234694359/">said</a> she hopes the show will “make people look at their own culpability in perpetuating this unhealthy viral internet behavior”.</p>
<p>However, the show contributes to this “unhealthy” behaviour by turning Anderson’s experience of a private sexual video being widely distributed and viewed without her consent into entertainment. Many have searched for – and found – the original video, watching it on mainstream porn sites, encouraging others on internet forums to watch and sharing links to wherever else it can be found. </p>
<p>The reality is that Anderson was <a href="https://ew.com/tv/pam-and-tommy-made-without-pamela-anderson/">not involved</a> with the series, and has not spoken publicly about what she thinks of it. When we peel back the “good intentions” the series is using as promotion, we are left with a show that exploits and profits from an incredibly traumatic experience in someone’s life.</p>
<p>Unfortunately, Anderson’s experiences are familiar. While rarer in the mid-1990s, distributing sexual images and videos without consent – known as <a href="https://link.springer.com/article/10.1007/s10691-017-9343-2">image-based sexual abuse</a> – is now alarmingly <a href="https://www.routledge.com/Image-based-sexual-abuse-A-study-on-the-causes-and-consequences-of-non-consensual/Henry-Mcglynn-Flynn-Johnson-Powell-Scott/p/book/9780815353836">commonplace</a>. During the pandemic, reports to the Revenge Porn Helpline <a href="https://swgfl.org.uk/magazine/revenge-porn-helpline-release-intimate-image-abuse-an-evolving-landscape/">have doubled</a>. </p>
<p>Rather than society becoming more aware of the harm of intimate image abuse, the reaction to this new series suggests we are becoming desensitised. We accept as entertainment the retelling of a story that inevitably leads to the resurfacing of the original video.</p>
<h2>Recurring harm</h2>
<p>The distribution and viewing of private sexual videos without consent can be <a href="https://journals.sagepub.com/doi/full/10.1177/0964663920947791">devastating</a> for survivors. The breach of trust and sense of violation is acute. Some describe it as a <a href="https://claremcglynn.files.wordpress.com/2019/06/shattering-lives-and-myths-final.pdf">“social rupture”</a> which divides their lives into before and after the abuse. One described it as <a href="https://theconversation.com/sexual-abuse-happens-online-too-but-current-laws-leave-too-many-victims-unprotected-119604">“torture for the soul”</a>.</p>
<p>The harms are constant and relentless, with each viewing of the image or video experienced as a <a href="https://claremcglynn.files.wordpress.com/2019/10/shattering-lives-and-myths-revised-aug-2019.pdf">new assault and abuse</a>. Actor Jennifer Lawrence, whose private images were hacked and went viral in 2014, said <a href="https://news.sky.com/story/jennifer-lawrence-says-trauma-of-having-her-nude-photos-shared-online-will-exist-forever-12475972">just last year</a> that “my trauma will exist forever”. Paris Hilton, whose private video was non-consensually shared, <a href="https://www.cosmopolitan.com/uk/entertainment/a36158300/paris-hilton-sex-tape-ptsd/">revealed</a> she has been left with PTSD and the abuse is “something that will hurt me for the rest of my life”.</p>
<figure class="align-center ">
<img alt="A young woman holding a mobile phone covers her face with her hand" src="https://images.theconversation.com/files/445985/original/file-20220211-19-1pmfxtd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/445985/original/file-20220211-19-1pmfxtd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=408&fit=crop&dpr=1 600w, https://images.theconversation.com/files/445985/original/file-20220211-19-1pmfxtd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=408&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/445985/original/file-20220211-19-1pmfxtd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=408&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/445985/original/file-20220211-19-1pmfxtd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=513&fit=crop&dpr=1 754w, https://images.theconversation.com/files/445985/original/file-20220211-19-1pmfxtd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=513&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/445985/original/file-20220211-19-1pmfxtd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=513&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">To victims of online intimate image abuse, each view or share is like experiencing an assault all over again.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/young-sad-vulnerable-girl-using-mobile-383565967">Marcos Mesa Sam Wordley / Shutterstock</a></span>
</figcaption>
</figure>
<p>With this in mind, it’s right to question the ethics of making a series like Pam and Tommy. The series recreates the video, so the producers themselves must have watched the original in making the show. And in the aftermath, users on internet forums have encouraged others to watch it and posted links showing where to find it. Many have also made degrading comments about Anderson herself. All of this is an eerie echo of the original incident.</p>
<p>The creators of the series must have been aware this would happen, and may even have known it would boost their audience. This series facilitates the continuation of image-based sexual abuse by actively dredging up a traumatic experience. In doing so, it contributes to the <a href="https://journals.sagepub.com/doi/full/10.1177/0964663920947791">constancy</a> of harms that victims experience.</p>
<h2>Fixing the problem</h2>
<p>As well as challenging shows like Pam and Tommy, more can be done to reduce the prevalence and harms of experiences like Anderson’s. While many internet platforms claim to have policies against non-consensual material, it is nevertheless <a href="https://academic.oup.com/bjc/article/61/5/1243/6208896">freely and easily accessible</a>. The availability of this material normalises and legitimises image-based sexual abuse. It is also a primary concern of those who face <a href="https://notyourporn.com/">brick walls</a> when trying to get the material removed.</p>
<p>Legislation, like the UK’s <a href="https://www.gov.uk/government/publications/draft-online-safety-bill">online safety bill</a>, should hold social media and internet companies accountable for their role in perpetuating these harms. Porn companies should be required to identify users uploading material, moderate content to remove non-consensual material and swiftly respond to take-down requests. Platforms must also secure the consent of all those in uploaded videos, and it should be <a href="https://www.endviolenceagainstwomen.org.uk/wp-content/uploads/Online-Safety-Bill-Full-Brief-final.pdf">a criminal offence</a> to upload material without consent. </p>
<p>While such steps would help to reduce the amount of non-consensual imagery online, we are fighting a losing battle if television and film producers are, in practice, trivialising these abuses. We can only hope that the actors and creators of Pam and Tommy will reflect on how they have exploited Pamela Anderson for profit, in just the same way that she was exploited when the video was originally stolen.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em>The show has been marketed as a feminist reclaiming, but at an emotional and traumatic cost to Pamela Anderson.</em></p>
<p class="fine-print"><em>Alishya Dhir, Teaching Fellow & PhD Researcher, Durham University; Clare McGlynn, Professor of Law, Durham University. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>The ‘new’ offences added to the online safety bill are not really new – and could continue to fail victims of online abuse</h1>
<figure><img src="https://images.theconversation.com/files/445707/original/file-20220210-40669-71jgd0.jpg?ixlib=rb-1.1.0&rect=8%2C0%2C5455%2C3645&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/sad-girl-feeling-upset-reading-bad-1100198840">fizkes/Shutterstock</a></span></figcaption></figure><p>The UK government has <a href="https://www.gov.uk/government/news/online-safety-law-to-be-strengthened-to-stamp-out-illegal-content">recently revealed</a> more detail about its ambitious plans <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/973939/Online_Harms_White_Paper_V2.pdf">to become</a> the “safest place in the world to go online” by way of the <a href="https://www.gov.uk/government/publications/draft-online-safety-bill">online safety bill</a>. </p>
<p>The draft bill is currently being reviewed by the government following a <a href="https://committees.parliament.uk/committee/534/draft-online-safety-bill-joint-committee/news/159784/no-longer-the-land-of-the-lawless-joint-committee-reports/">report by the joint committee</a>, with the bill expected to go before parliament in the <a href="https://www.bbc.co.uk/news/technology-59638569">coming months</a>.</p>
<p>Among a range of measures, the online safety bill is set to include three “<a href="https://www.gov.uk/government/news/online-safety-law-to-be-strengthened-to-stamp-out-illegal-content">new</a>” criminal offences to prohibit the sending of <a href="https://www.bbc.co.uk/news/technology-60264178">harmful communication</a>, genuinely threatening messages and false information.</p>
<p>But these sorts of behaviour are already prohibited by law. So why should including them in the online safety bill protect people subjected to online abuse?</p>
<h2>Harmful communications</h2>
<p>Under <a href="https://www.legislation.gov.uk/ukpga/2003/21/section/127">section 127(1)</a> of the Communications Act 2003, it’s an offence to send a communication which is grossly offensive, indecent, obscene or menacing. The Act has not been without fault. Vague terms such as “grossly offensive”, which are not clearly defined, have led to some inconsistent outcomes.</p>
<p>For instance, a person received <a href="https://s3-eu-west-2.amazonaws.com/lawcom-prod-storage-11jsxou24uy7q/uploads/2018/10/6_5039_LC_Online_Comms_Report_FINAL_291018_WEB.pdf">a fine</a> for posting an image on Snapchat of two police officers where he had drawn penises on their heads. In another case, the Crown Prosecution Service (CPS) decided not to prosecute <a href="https://www.bbc.co.uk/news/uk-wales-19661950">a footballer</a> for sending a homophobic tweet about divers Tom Daley and Peter Waterfield. The tweet was considered offensive, but not so grossly offensive that the criminal law should intervene.</p>
<p>Under the <a href="https://s3-eu-west-2.amazonaws.com/lawcom-prod-storage-11jsxou24uy7q/uploads/2021/07/Modernising-Communications-Offences-2021-Law-Com-No-399.pdf">new offence</a> expected to be included in the online safety bill, it would be unlawful to intentionally send or post a communication that is “likely to cause harm to a likely audience” (anyone who is likely to see, hear or otherwise encounter the communication) without a reasonable excuse.</p>
<p>Harm is defined as “serious distress”, though what constitutes serious distress is not clear. As for “a reasonable excuse”, the sending of a communication ending a relationship could be considered to have been sent with reasonable excuse – though again, no actual definition is given.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-age-verification-is-another-flawed-attempt-to-regulate-online-pornography-in-the-uk-176762">Why age verification is another flawed attempt to regulate online pornography in the UK</a>
</strong>
</em>
</p>
<hr>
<p>It is anticipated that <a href="https://s3-eu-west-2.amazonaws.com/lawcom-prod-storage-11jsxou24uy7q/uploads/2021/07/Modernising-Communications-Offences-2021-Law-Com-No-399.pdf">guidelines</a> will be issued to help interpret the meaning of this offence. But it’s difficult to imagine how this will overcome current issues around the terminology.</p>
<p>As an example, <a href="https://www.cps.gov.uk/legal-guidance/social-media-guidelines-prosecuting-cases-involving-communications-sent-social-media">prosecution guidelines</a> have been issued in an attempt to shed some light on the meaning of “grossly offensive”, among other points, in the current law. But in a 2018 report the <a href="https://s3-eu-west-2.amazonaws.com/lawcom-prod-storage-11jsxou24uy7q/uploads/2018/10/6_5039_LC_Online_Comms_Report_FINAL_291018_WEB.pdf">Law Commission</a> noted that these may not be enough to resolve the challenges of interpretation in an online context.</p>
<p>If we have struggled to comprehend what constitutes “grossly offensive” because there’s no definitive definition, why would “serious distress” be any different? What one police force may find harmful, another may not.</p>
<h2>Genuinely threatening communications</h2>
<p>The second offence the government wishes to <a href="https://www.gov.uk/government/news/online-safety-law-to-be-strengthened-to-stamp-out-illegal-content">prohibit</a> is the sending of “genuinely threatening communications […] where communications are sent or posted to convey a threat of serious harm”. </p>
<p>The <a href="https://www.gov.uk/government/news/online-safety-law-to-be-strengthened-to-stamp-out-illegal-content">rationale</a> here is to better capture threats of rape and serious threats of violence, while also strengthening cyberstalking <a href="https://www.legislation.gov.uk/ukpga/1997/40/section/2A">provisions</a>.</p>
<p>Yet the sending of a threatening message is already prohibited under <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/912203/6_5039_LC_Online_Comms_Report_FINAL_291018_WEB.pdf">several</a> laws, including <a href="https://www.legislation.gov.uk/ukpga/1988/27/section/1">section 1</a> of the Malicious Communications Act 1988. So if we currently have provisions in place which make this conduct unlawful, why would enacting “new” criminal law produce different results?</p>
<figure class="align-center ">
<img alt="A young man sits in front of a laptop." src="https://images.theconversation.com/files/445734/original/file-20220210-55635-d8g5sf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/445734/original/file-20220210-55635-d8g5sf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/445734/original/file-20220210-55635-d8g5sf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/445734/original/file-20220210-55635-d8g5sf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/445734/original/file-20220210-55635-d8g5sf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/445734/original/file-20220210-55635-d8g5sf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/445734/original/file-20220210-55635-d8g5sf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The UK government has recently revealed additions to its online safety bill.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/african-black-man-looking-laptop-computer-1854646072">Ancapital/Shutterstock</a></span>
</figcaption>
</figure>
<p>Issues lie not only with the law but also how the criminal justice system deals with <a href="https://www.bbc.co.uk/news/uk-58924168">complaints</a> of online threats – an element not reflected in the details we’ve seen of the bill. The use of social media to send threats of rape is <a href="https://www.amnesty.org/en/latest/news/2018/03/online-violence-against-women-chapter-3/">not new</a>. The problem lies in <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/912203/6_5039_LC_Online_Comms_Report_FINAL_291018_WEB.pdf">interpretation</a>. In some instances, threats of rape have been treated as “<a href="https://weekwoman.wordpress.com/2014/09/29/a-brief-comment-on-peter-nunn/">grossly offensive</a>” communications as opposed to “threatening”. </p>
<p>If we truly want to protect people from this form of abuse, we need to understand why threats of rape are so often not labelled as “threatening” by the justice system.</p>
<h2>False communications</h2>
<p>The government also wants to <a href="https://www.gov.uk/government/news/online-safety-law-to-be-strengthened-to-stamp-out-illegal-content">create</a> “an offence for when a person sends a communication they know to be false with the intention to cause non-trivial emotional, psychological or physical harm”. This seeks to tackle the issue of disinformation. </p>
<p>In fairness to the government, they acknowledge that the sending of false information is already <a href="https://www.legislation.gov.uk/ukpga/1988/27/section/1">prohibited by law</a>. But they are of the opinion that the threshold of criminal liability is <a href="https://www.gov.uk/government/news/online-safety-law-to-be-strengthened-to-stamp-out-illegal-content">too high</a>. </p>
<p>The proposed new offence is designed to make it easier to prosecute those who knowingly send false messages, so long as the aim in sending the communication was to create “non-trivial emotional, psychological or physical harm”. </p>
<p>So it would be lawful to send a false communication intended to cause trivial harm, but unlawful where the intended harm is non-trivial. Where the line between the two lies, or how it will be drawn, is not clear at this stage.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-to-protect-children-online-without-using-tough-rules-and-reprimands-154151">How to protect children online without using tough rules and reprimands</a>
</strong>
</em>
</p>
<hr>
<h2>Where to next?</h2>
<p>The government has <a href="https://www.gov.uk/government/news/online-safety-law-to-be-strengthened-to-stamp-out-illegal-content">acknowledged</a> that there are existing laws prohibiting these behaviours, and noted that they are seeking to strengthen the law in these areas. </p>
<p>But we don’t yet have enough information to see how these proposed offences will improve upon the laws currently in place. Issues around interpretation will continue to pose problems for the criminal justice system if we cannot understand with certainty what, for example, will constitute “serious distress”, how we define a genuine threat, or the difference between a false communication intended to cause trivial harm and one intended to cause non-trivial harm.</p>
<p>If the UK government wants the country to become the safest place in the world to go online, we need to go back to basics. We need to educate. We need to understand why the law is currently unsatisfactory.</p>
<p class="fine-print"><em><span>Laura Higson-Bliss does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em>We recently heard that ‘new’ criminal offences would be added to the UK’s online safety bill to help tackle the growing problem of online abuse.</em></p>
<p class="fine-print"><em>Laura Higson-Bliss, Lecturer in Law, Keele University. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>Legal reform is needed to protect young women from the growing threats of online sexual violence</h1>
<figure><img src="https://images.theconversation.com/files/441358/original/file-20220118-19-1qceppv.jpg?ixlib=rb-1.1.0&rect=8%2C8%2C5981%2C3978&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Online sexually violent behaviour can include sharing intimate images without consent.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure>
<p>The increase in online interaction created by COVID-19 has generated a spike in girls and young women being subjected to what’s called <a href="https://doi.org/10.1007/s11920-021-01269-1">technology-facilitated sexual violence (TFSV)</a>. The term refers to everything from sharing someone’s nude photos without their consent to sending unsolicited pictures of one’s own genitals.</p>
<p>TFSV, a range of harmful and sexually aggressive online behaviours, affects <a href="https://doi.org/10.1080/10926771.2019.1710636">88 per cent of all Canadian university undergraduate women</a>. Younger teens are also <a href="https://globalnews.ca/news/468416/death-of-nova-scotia-teen-rehtaeh-parsons-draws-comparisons-to-amanda-todd-case/">affected by these behaviours</a>.</p>
<p>Survivors have few legal options, and have recently been found to be at <a href="https://doi.org/10.3138/cjhs.2020-0044">higher risk of suicide</a>. This highlights the need for more education and legal reform around these acts, <a href="https://canlii.ca/t/sq84">which some legal experts say should be criminal</a>.</p>
<h2>Early exposures</h2>
<p>“No matter how much you think you’re protecting your child, they can still get to them,” says Heather Mackie of Vancouver, B.C., whose name has been changed to protect the identity of her daughter.</p>
<p>Two years ago, Mackie’s then 12-year-old daughter, Emma (not her real name), created an Instagram account for her fictional character on Roblox, a popular online gaming platform whose users are mostly <a href="https://www.statista.com/statistics/1190869/roblox-games-users-global-distribution-age/">under 16 years old</a>. What Emma next received in her inbox shocked her, and her mother.</p>
<p>“It was a picture of a man’s genitals,” says Mackie. Emma was visibly upset. “She deleted it and blocked him. We then deleted the account.”</p>
<p>Experiences like Emma’s are common. A recent Canadian survey of university-aged women found <a href="https://doi.org/10.1177/08862605211030018">6.4 per cent had their first experience with online sexual harassment between 12 and 14 years of age</a>.</p>
<p>Experts differ slightly in how they classify forms of TFSV, with one classification including <a href="https://doi.org/10.1007/s11920-021-01269-1">image-based sexual abuse (non-consensual sharing of victims’ images), video voyeurism and unsolicited sexual images</a>, which is what Emma received.</p>
<p>Another definition adds <a href="https://doi.org/10.3138/cjhs.2020-0044">online sexual aggression and coercion, including extortion, blackmail and bribery</a>, as well as online harassment of people based on their gender or sexuality.</p>
<p>Terminology is important. According to Rosel Kim, staff lawyer at the <a href="https://www.leaf.ca/">Women’s Legal Education and Action Fund</a> in Toronto, terms such as “cyberviolence” downplay the severity of the act. “Cyberviolence is not separate from violence,” she says.</p>
<p>Another term, “revenge porn,” <a href="https://www.huffingtonpost.co.uk/entry/why-are-we-still-calling-it-revenge-porn-victims-explain-change-in-the-laws-needed_uk_5d3594c2e4b020cd99465a99">blames its victims</a>, and is better described as a form of image-based sexual abuse.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/xk7j2NukmvE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">“Revenge porn” laws, like the one proposed in Newfoundland and Labrador, make the dissemination of intimate images illegal without consent.</span></figcaption>
</figure>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/revenge-porn-is-sexual-violence-not-millennial-negligence-126233">Revenge porn is sexual violence, not millennial negligence</a>
</strong>
</em>
</p>
<hr>
<p>Which brings us back to Emma, who a year later had a second incident. She was on the <a href="https://www.dailymail.co.uk/sciencetech/article-9977719/Drop-video-chat-app-Houseparty-close-October.html">now-defunct social networking app Houseparty</a> and witnessed a friend being harassed online.</p>
<p>“The language they used was shocking,” says Mackie, whose daughter took screenshots of the chat and reported the incident to the police liaison officer at her school. The bully had sent an image depicting anal penetration of a popular children’s cartoon character, and the rest was “mostly words telling her to go kill herself.”</p>
<p>Words that, as it turns out, can lead to real harm.</p>
<h2>Online violence and suicide</h2>
<p>“Sexual violence has been around forever, but the context has shifted (online),” says Amanda Champion, a criminology PhD candidate at Simon Fraser University in Burnaby, B.C.</p>
<p>Champion is the co-author of a 2021 study that <a href="https://doi.org/10.3138/cjhs.2020-0044">clarified the psychological link between TFSV and suicide</a>. According to her findings, TFSV victims’ public exposure makes them targets for bullying, which can lead to depression and the feeling that they’re a burden to friends and family.</p>
<p>This “perceived burdensomeness” leads victims to “believe that you’re so much of a burden that your death is worth more than your life,” which opens the door to suicide, Champion says.</p>
<p>In Canada, this process was starkly illustrated in 2012, when 15-year-old <a href="https://www.thestar.com/news/canada/2022/01/10/amanda-todds-mom-wins-fight-to-lift-publication-ban-on-her-daughters-name-and-story.html">Amanda Todd died by suicide after a nude screenshot of her was shared online without her consent</a>. A year later, 17-year-old <a href="https://newsinteractives.cbc.ca/longform/five-years-gone">Rehtaeh Parsons</a>, who was allegedly raped and then bullied over shared photos of the assault, also ended her own life.</p>
<p>In light of these stories, lawyers have been pondering how to hold perpetrators accountable for TFSV while protecting survivors.</p>
<h2>Legal options</h2>
<p>In Canada, not a lot of people realize they can report TFSV to the police, says Suzie Dunn, a law and technology professor at Dalhousie University in Halifax. “It’s downplayed by society and even by police. People are still conceptualizing whether or not these are true harms,” she says.</p>
<p>When it comes to legal options, Kim and Dunn say the key is understanding what TFSV victims’ goals are. “Maybe they want images to be taken down, or an apology — not necessarily to put a person in jail,” says Kim.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/441434/original/file-20220119-13-1h5qyra.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A young woman wearing glasses, jeans and a hoodie sits alone on a bench with her arms resting on her knees" src="https://images.theconversation.com/files/441434/original/file-20220119-13-1h5qyra.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/441434/original/file-20220119-13-1h5qyra.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/441434/original/file-20220119-13-1h5qyra.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/441434/original/file-20220119-13-1h5qyra.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/441434/original/file-20220119-13-1h5qyra.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/441434/original/file-20220119-13-1h5qyra.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/441434/original/file-20220119-13-1h5qyra.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Sometimes, the victims of TFSV are unclear what legal options are available to them.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>Under the Canadian <a href="https://www.justice.gc.ca/eng/rp-pr/other-autre/cndii-cdncii/p6.html"><em>Criminal Code</em></a>, an offender may be charged with voyeurism, obscene publication, criminal harassment, extortion or defamatory libel. However, if the only goal is to have harmful content taken down, then pursuing a criminal charge may be more trouble than it’s worth, Kim says.</p>
<p>The first barrier is convincing the police that there’s enough evidence to charge an offender. Then once in court, “you have to prove beyond a reasonable doubt” — a high burden of proof, says Kim. During trial, the accused’s lawyer may also expose a survivor to further trauma.</p>
<p>Lastly, the criminal justice system moves slowly — and without a conviction, harmful content stays up, says Dunn.</p>
<h2>The need for legal reform</h2>
<p>Dunn says Canada lags behind other nations such as Australia when it comes to education, research and legislation around TFSV. </p>
<p>Since 2015, Australia has had an <a href="https://www.esafety.gov.au/about-us/who-we-are/about-the-commissioner">eSafety Commissioner</a>, “the world’s first government agency committed to keeping its citizens safer online.” Kim and Dunn say Canada should have a similar government-funded statutory body that advocates in this area.</p>
<p>Starting points for advocacy may include implementing more expedient <a href="https://www.cbc.ca/news/canada/nova-scotia/nova-scotia-public-feedback-cyber-protection-act-1.6308724">image take-down laws</a> and <a href="https://globalnews.ca/news/8323529/the-facebook-papers-social-media-regulation-canada/">regulating social media companies such as Facebook</a>, agree Dunn and Kim.</p>
<p>“These platforms make money through engagement. What’s engaging content is often extreme content that tends to be abusive or violent,” says Kim.</p>
<p>In last fall’s federal election, the Liberals promised to rework <a href="https://nationalpost.com/news/politics/liberals-to-introduce-bill-forcing-online-giants-to-compensate-news-outlets">online harms legislation within 100 days of Parliament’s Nov. 22 return</a> — that timer is set to expire on March 2.</p>
<p><em>Anyone with concerns about online sexual violence is encouraged to visit <a href="https://www.cybertip.ca/en/">CyberTip</a> for resources, support and information.</em></p>
<p class="fine-print"><em><span>Anthony Fong does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em>The pandemic has led to an increase in online interactions, including sexually violent behaviours. Teens as young as 12 are affected, but many victims are not aware of their options in seeking justice.</em></p>
<p class="fine-print"><em>Anthony Fong, Global Journalism Fellow, Dalla Lana School of Public Health, University of Toronto. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>‘Sextortion’ leads to financial losses and psychological trauma. Here’s what to look out for on dating apps</h1>
<figure><img src="https://images.theconversation.com/files/439324/original/file-20220104-27-15xjx2f.jpg?ixlib=rb-1.1.0&rect=6%2C0%2C4484%2C2996&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/bullying-on-phonegirl-smartphone-suffers-harassment-2026888592">Tikhonova Yana/Shutterstock</a></span></figcaption></figure><p>When malicious internet users gain people’s trust to obtain sexually explicit photos or videos, and then use these materials – specifically the threat of sharing them – to coerce them for sexual, personal or financial gain, this is called “<a href="https://www.nationalcrimeagency.gov.uk/what-we-do/crime-threats/kidnap-and-extortion/sextortion-webcam-blackmail">sextortion</a>”.</p>
<p>Recent incidents of sextortion are not entirely surprising given the increase in social interactions online during lockdowns, particularly by way of <a href="https://fortune.com/2021/02/12/covid-pandemic-online-dating-apps-usage-tinder-okcupid-bumble-meet-group/">online dating</a>. But they are concerning. Being a victim of sextortion is associated with significant financial losses (sextortion cost Americans <a href="https://www.cyberscoop.com/fbi-sextortion-scams-losses-2021/">US$8 million</a> over the first seven months of 2021, for example) as well as <a href="https://cybercrimejournal.com/Nilssonetalvol13issue1IJCC2019.pdf">trauma</a>. </p>
<p>Perpetrators of sextortion could be strangers who want to exploit vulnerable young <a href="https://www.justice.gov/psc/file/842411/download">children or adolescents</a>. They could be <a href="https://humantraffickingsearch.org/wp-content/uploads/2018/09/Sextortion_Report.pdf">former romantic partners</a> or members of <a href="https://journals.sagepub.com/doi/full/10.1177/0886260520909186">organised crime groups</a>.</p>
<p>Anyone who <a href="https://www.statista.com/statistics/617136/digital-population-worldwide/">uses the internet</a> can become a victim of sextortion. Age, gender and financial status don’t make a difference to the perpetrators – as long as the potential victim presents some vulnerability and a potential win. That means dating apps are an obvious setting in which users could be vulnerable. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/relationships-during-a-pandemic-how-dating-apps-have-adapted-to-covid-19-160219">Relationships during a pandemic: How dating apps have adapted to COVID-19</a>
</strong>
</em>
</p>
<hr>
<h2>What does sextortion look like on dating apps?</h2>
<p>A dating app user with malicious intent might first gain the trust of the victim, groom them into believing they have found a good potential match, maybe engage in <a href="https://dx.doi.org/10.3390%2Fijerph18052526">some sexting</a>, and then move to a platform that allows sharing photos and video calls (if the app doesn’t have provisions for this), while continuing the grooming process.</p>
<p>The perpetrators may ask for photos or a video for confirmation of identity or verification of the “merchandise”. If that doesn’t work, in many cases they will send a photo of themselves or someone else to <a href="https://cybercrimejournal.com/Nilssonetalvol13issue1IJCC2019.pdf">lure the victim</a> into trusting them. </p>
<p>As they become more and more “real” to the victim, they increase their demands and ask for sexually explicit photos or videos. This may not seem out of place, given that sending nude photos online has become quite common. A US survey, for example, indicated Americans send <a href="https://badgirlsbible.com/naked-ethics">1.8 million</a> nudes per day, or 20 per second.</p>
<figure class="align-center ">
<img alt="The back of a person's head looking at a computer screen in the dark." src="https://images.theconversation.com/files/439326/original/file-20220104-13-1dby4zv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/439326/original/file-20220104-13-1dby4zv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/439326/original/file-20220104-13-1dby4zv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/439326/original/file-20220104-13-1dby4zv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/439326/original/file-20220104-13-1dby4zv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/439326/original/file-20220104-13-1dby4zv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/439326/original/file-20220104-13-1dby4zv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Reports of sextortion have surged during the COVID pandemic.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/silhouette-mans-head-front-computer-monitor-1059872750">tommaso79/Shutterstock</a></span>
</figcaption>
</figure>
<p>While a perpetrator may use the images to blackmail the victim as soon as they receive them, sextortion doesn’t always occur in the first instance. Often people who receive these photos will send them to others. As a result, a victim’s photo can circulate online indefinitely. </p>
<p>Such private material could end up in the hands of a sex offender who could use it to coerce the victim and ask for more material. It could end up in the hands of an organised crime group, who could extort the victim for money or other valuable items. Or it could end up in the online market where people buy explicit photos and videos, which could then be used for sextortion.</p>
<p>In reality, anyone who has ever sent a nude photo or a sexually explicit video to someone online is a potential future victim of sextortion. We can never be certain of how such material is handled, stored, used and even disseminated.</p>
<h2>Consequences and prevention</h2>
<p>Living in continued fear of being publicly exposed and humiliated, some victims may relocate or stop using social media and other relevant apps in an attempt to escape their extortionist.</p>
<p>Victimisation by sextortion can lead to psychological suffering, and serious mental health issues such <a href="https://scholarworks.waldenu.edu/cgi/viewcontent.cgi?article=7863&context=dissertations">as depression and anxiety</a>. Unfortunately for some people, continued experiences of sextortion can lead to self-harm and even death by suicide.</p>
<p>In <a href="https://cybercrimejournal.com/Nilssonetalvol13issue1IJCC2019.pdf">our research</a> published in 2019, my colleagues and I looked at three cases in which a victim of ongoing sextortion died by suicide. We found common themes in these cases included fear, helplessness, hopelessness, shame, humiliation, self-blame and general distress.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/12-ways-to-keep-your-kids-safe-from-sexual-encounters-online-97655">12 ways to keep your kids safe from sexual encounters online</a>
</strong>
</em>
</p>
<hr>
<p>To <a href="https://www.ncsc.gov.uk/guidance/sextortion-scams-how-to-protect-yourself">protect yourself</a> against sextortion, the first step is to be aware of the risks, particularly if you’re using online dating platforms. Refrain from sharing sexually explicit photos or videos of yourself with strangers. </p>
<p>I would go slightly further and suggest that sharing this kind of material should also be avoided in a new relationship. While sexting can have <a href="https://www.apa.org/news/press/releases/2015/08/common-sexting">positive effects</a> on a relationship (such as increasing intimacy and sexual satisfaction), given the risks, a couple should establish a high level of trust before sharing such material online.</p>
<p>And what if you think you’re being targeted? Unfortunately, victims of sextortion often attempt to ignore or comply with the perpetrator’s demands, and don’t report the incidents. This can lead to repeated threats, while the demands increase. And victims can never be certain that the material will not become public even if they comply with the perpetrator’s demands.</p>
<p>If you suspect that you are a victim of sextortion, you should not hesitate to report the incident to the relevant authorities in your region.</p>
<p class="fine-print"><em><span>Calli Tzani does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Attackers gain the trust of vulnerable individuals to obtain sexually explicit photos or videos via the internet, and then use these materials to blackmail victims.Calli Tzani, Senior Lecturer in Investigative Psychology, University of HuddersfieldLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1194422021-10-06T12:34:52Z2021-10-06T12:34:52ZCyberbullying among teens: our research shows online abuse and school bullying are often linked<figure><img src="https://images.theconversation.com/files/424520/original/file-20211004-23-lj7llb.jpg?ixlib=rb-1.1.0&rect=46%2C0%2C5171%2C3444&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/unhappy-teenage-girl-being-bullied-by-1498027142">Daisy Daisy/Shutterstock</a></span></figcaption></figure><p>Over recent years, England has faced a concerning <a href="https://www.bbc.com/news/education-48692953">rise in cyberbullying</a> compared to other countries. This issue <a href="https://www.stompoutbullying.org/blog/Cyberbullying-During-COVID-19">has been compounded</a> by an increase in digital activity among teenagers during COVID-19 lockdowns.</p>
<p><a href="https://www.liebertpub.com/doi/abs/10.1089/cyber.2012.0275">Cyberbullying</a>, sometimes called online harassment or abuse, refers to behaviours where a person repeatedly causes harm to others using electronic devices and technologies. The modern abundance of devices with internet access makes it easier for cyberbullies to remain anonymous and create multiple accounts with different identities, giving them the freedom to attack multiple social media users simultaneously, often without obstruction.</p>
<p>There are <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/621070/Health_behaviour_in_school_age_children_cyberbullying.pdf">numerous means</a> of victimisation. These include making posts on social media intended to threaten or humiliate someone, publishing videos or photos that embarrass or intimidate, and “doxxing” — posting someone’s personal or private information, such as where they live, online.</p>
<p>All this harassment leaves victims feeling isolated, scared and depressed. We know that tragically, many victims <a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00367/full">contemplate suicide</a>.</p>
<p>We have investigated school bullying and cyberbullying among young people in the UK, and have recently published <a href="https://onlinelibrary.wiley.com/doi/10.1002/jip.1578">our findings</a>. Some 408 people aged 16-30 took part in this research project, which involved completing a survey online. The majority (351, or almost 90%) were still at school at the time they took part. </p>
<p>Some 37% of participants reported they had experienced cyberbullying. The victims primarily classified the perpetrators as their classmates, followed by students the victims perceived as “popular” in school, older boys or girls, and people unknown to them.</p>
<p>Victimisation took place on various platforms, with Facebook the most commonly reported (74%), followed by Twitter (17%), Snapchat (9%) and Instagram (9%). Common forms of victimisation included the spreading of malicious rumours (49% of participants who experienced cyberbullying said they were subject to rumours), threats (44%), and exclusion from a group, such as chat rooms or online games (29%).</p>
<figure class="align-center ">
<img alt="A teenage boy lies on his bed using his smartphone, with a laptop open." src="https://images.theconversation.com/files/424532/original/file-20211004-21-qy0s6d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/424532/original/file-20211004-21-qy0s6d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=395&fit=crop&dpr=1 600w, https://images.theconversation.com/files/424532/original/file-20211004-21-qy0s6d.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=395&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/424532/original/file-20211004-21-qy0s6d.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=395&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/424532/original/file-20211004-21-qy0s6d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=497&fit=crop&dpr=1 754w, https://images.theconversation.com/files/424532/original/file-20211004-21-qy0s6d.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=497&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/424532/original/file-20211004-21-qy0s6d.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=497&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Cyberbullying can take many forms.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/young-boy-studies-lying-on-bed-1905630148">Ollyy/Shutterstock</a></span>
</figcaption>
</figure>
<p>Although our sample size was relatively small, and the majority of respondents were female, these findings are concerning, suggesting schoolmates cyberbully each other on a large scale. Importantly, victims reported that the online incidents they were subjected to occurred most commonly because of arguments in real-life settings. So it’s clear cyberbullying and bullying at school are often interconnected.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/anonymous-apps-risk-fuelling-cyberbullying-but-they-also-fill-a-vital-role-119836">Anonymous apps risk fuelling cyberbullying but they also fill a vital role</a>
</strong>
</em>
</p>
<hr>
<h2>When school bullying and cyberbullying collide</h2>
<p>Nowadays, more and more schools and teachers tolerate students using mobile phones <a href="https://www.oxfordlearning.com/should-cell-phones-be-allowed-classrooms/">at school</a>. And although social media and access to the internet can be a useful <a href="http://www.bestmastersineducation.com/social-media/">educational tool</a>, there are students who use technology to victimise their classmates or others. </p>
<p>Other <a href="https://journals.sagepub.com/doi/abs/10.1177/0886260514540324?journalCode=jiva">researchers have shown</a> that cyberbullying can occur alongside verbal aggression and violent behaviour, and vice versa. An escalation of bullying in the schoolyard to bullying online, or vice versa, could be at the hands of the perpetrator, or of the victim seeking revenge. As one of our participants said:</p>
<blockquote>
<p>He bullied me at school, shoved my things around for no reason and laughed at me with his friends, made fun of my clothes and the way I speak. I could not take it anymore, so one day I created a fake Facebook account and badgered him with texts and posts. I am not sorry, he deserved it for what he was doing to me at school.</p>
</blockquote>
<p>Another participant told us: </p>
<blockquote>
<p>She [a girl at the participant’s school] kept telling people to ignore me and not like my posts. She would embarrass me whenever I would upload a photo and she would share my photos with others just for a laugh […] So one day I just went up to her and told her to leave me alone but she laughed and that made me so angry so I pushed her down. Now she does it even more and her friends joined in as well. I don’t know how to stop this.</p>
</blockquote>
<figure class="align-center ">
<img alt="A girl holding a smartphone sits on the floor burying her head." src="https://images.theconversation.com/files/424536/original/file-20211004-17-17xpsgv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/424536/original/file-20211004-17-17xpsgv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=423&fit=crop&dpr=1 600w, https://images.theconversation.com/files/424536/original/file-20211004-17-17xpsgv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=423&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/424536/original/file-20211004-17-17xpsgv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=423&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/424536/original/file-20211004-17-17xpsgv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=531&fit=crop&dpr=1 754w, https://images.theconversation.com/files/424536/original/file-20211004-17-17xpsgv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=531&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/424536/original/file-20211004-17-17xpsgv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=531&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">An incident of bullying at school can lead to cyberbullying.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/depressed-indian-teen-girl-cellphone-crying-2037489563">Prostock-studio/Shutterstock</a></span>
</figcaption>
</figure>
<h2>What should we do?</h2>
<p>In today’s digital age, bullying among children and young people no longer stops when the school bell rings. But it appears that protective policies have progressed at a much slower pace than the means of cyberbullying have advanced.</p>
<p>It’s clear from our study, and <a href="https://doi.org/10.1016/j.chb.2014.07.035">other research in this area</a>, that Facebook is a particularly risky platform for cyberbullying. With Facebook founder and CEO Mark Zuckerberg having <a href="https://news.sky.com/story/facebooks-mark-zuckerberg-denies-claims-social-media-giant-prioritises-profit-over-user-safety-12427008">firmly denied criticism</a> the company prioritises profit over users’ safety, it would be timely to see further protections put in place for users.</p>
<p>For example, Facebook should work to shorten the response time when online harassment is reported. Although we understand reports are usually reviewed <a href="https://iheartmob.org/resources/safety_guides/facebook_guide">within 48 hours</a>, this can still allow ample time for further dissemination of abusive material.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-cyberbullies-overtly-and-covertly-target-their-victims-90448">How cyberbullies overtly and covertly target their victims</a>
</strong>
</em>
</p>
<hr>
<p>Separately, it’s crucial that schools and policymakers pay as much attention to cyberbullying as they do to traditional bullying, and to the way the two interact. While there are <a href="https://thenorthernquota.org/campaigns/campaign-launched-raise-awareness-affects-cyberbullying-can-have-young-people">multiple campaigns</a> seeking to <a href="https://www.cybersmile.org/">raise awareness</a> about cyberbullying, it’s possible that teenagers in the UK would benefit from a more intense and sustained campaign informing parents how to protect their children.</p>
<p>Such a campaign could include expert advice for parents on how to monitor their child’s online behaviour in a constructive way, how to support their child in the event they do fall victim to cyberbullying, and how to manage the situation if their child is perpetrating cyberbullying. Regularly educating children at school about the consequences of school bullying and cyberbullying is also important.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>We explored experiences of cyberbullying among young people in the UK. This is what we found.Calli Tzani, Senior Lecturer in Investigative Psychology, University of HuddersfieldJohn Synnott, Senior Lecturer in Investigative and Forensic Psychology, University of HuddersfieldMaria Ioannou, Reader in Investigative & Forensic Psychology, University of HuddersfieldLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1662532021-09-28T11:57:01Z2021-09-28T11:57:01ZSocial media gives support to LGBTQ youth when in-person communities are lacking<figure><img src="https://images.theconversation.com/files/423023/original/file-20210923-21-1fxtf5g.jpg?ixlib=rb-1.1.0&rect=0%2C4%2C3000%2C2991&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Social media can provide ways for LGBTQ youth to learn more about, and stay connected to, their identities.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/celebrating-pride-on-social-media-royalty-free-illustration/1250449474">miakievy/DigitalVision Vectors via Getty Images</a></span></figcaption></figure><p>Teens today have <a href="https://theconversation.com/yes-online-communities-pose-risks-for-young-people-but-they-are-also-important-sources-of-support-158276">grown up on the internet</a>, and social media has served as a space where LGBTQ youth in particular can develop their identities.</p>
<p>Scholarship about the online experiences of LGBTQ youth has traditionally focused on <a href="https://dx.doi.org/10.1007%2Fs40653-017-0175-7">cyberbullying</a>. But understanding both the risks and the benefits of online support is key to helping LGBTQ youth thrive, both on- and offline.</p>
<p>I am a <a href="https://scholar.google.com/citations?user=ZuHbDP0AAAAJ&hl=en">senior research scientist</a> studying the benefits and challenges of <a href="https://www.wcwonline.org/Youth-Media-Wellbeing-Research-Lab/youth-media-wellbeing-research-lab">teen social technology and digital media use</a>. My colleagues, <a href="https://wellesley.academia.edu/RachelHodes">Rachel Hodes</a> and <a href="https://www.wcwonline.org/Research-Associates/amanda-richer">Amanda Richer</a>, and I recently <a href="https://doi.org/10.2196/26207">conducted a study</a> on the social media experiences of LGBTQ youth, and we found that online networks can provide critical resources for them to explore their identities and engage with others in the community.</p>
<h2>Beyond cyberbullying</h2>
<p>The increased risk of cyberbullying that LGBTQ youth face is well-documented. LGBTQ youth are <a href="https://www.glsen.org/news/out-online-experiences-lgbt-youth-internet">almost three times more likely</a> to be <a href="https://dx.doi.org/10.1016%2Fj.chiabu.2014.08.006">harassed online</a> than their straight, cisgender peers. This can result in increased rates of <a href="https://doi.org/10.1080/19361653.2011.649616">depression and suicidal thoughts</a>: 56% of sexual minorities experience depression, and 35% experience suicidal thoughts as a direct result of cyberbullying.</p>
<p>However, the digital landscape may be shifting.</p>
<p>Our 2019 survey of 1,033 children ages 10 to 16 found <a href="https://doi.org/10.2196/26207">no difference</a> between the amount of cyberbullying reported by straight versus sexual minority youth residing in a <a href="https://transgenderlawcenter.org/equalitymap">relatively progressive part of the U.S.</a> known for legalizing gay marriage. Some social media platforms like <a href="https://theconversation.com/theres-something-queer-about-tumblr-73520">Tumblr</a> are considered a safer haven for sexual minorities than others, especially during the <a href="https://theconversation.com/how-young-lgbtqia-people-used-social-media-to-thrive-during-covid-lockdowns-156130">COVID-19 lockdown</a>. This is despite past <a href="https://www.newsweek.com/twitter-blocked-searches-lgbt-terms-bisexual-and-called-it-error-703550">censorship of LGBTQ content</a> on certain platforms due to biases in the algorithm.</p>
<p>LGBTQ youth tend to have <a href="https://doi.org/10.2196/26207">smaller online social networks</a> than their straight peers. We found that LGBTQ youth were significantly less likely than their straight peers to engage with their online friends. Conversely, LGBTQ youth are more likely to have friends they know only online, and to perceive these online friends as <a href="https://doi.org/10.1016/j.chiabu.2014.08.006">significantly more socially supportive</a> than their in-person friends. </p>
<p>The LGBTQ youth we surveyed in our study were more likely to join an online group in order to <a href="https://doi.org/10.2196/26207">reduce social isolation or feelings of loneliness</a>, suggesting that they were able to reach out to and engage with social media networks outside of their in-person peer circles in supportive and fortifying ways.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/423024/original/file-20210923-17-8xjgek.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Person lying down with rainbow sock-clad legs resting on the back of a sofa." src="https://images.theconversation.com/files/423024/original/file-20210923-17-8xjgek.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/423024/original/file-20210923-17-8xjgek.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/423024/original/file-20210923-17-8xjgek.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/423024/original/file-20210923-17-8xjgek.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/423024/original/file-20210923-17-8xjgek.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/423024/original/file-20210923-17-8xjgek.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/423024/original/file-20210923-17-8xjgek.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">LGBTQ youth are less likely to be friends with family members online and more likely to join social media sites their parents would disapprove of.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/teenager-lyiing-down-with-her-legs-resting-on-the-royalty-free-image/1324272422">Vladimir Vladimirov/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>Despite living in an area with higher levels of acceptance toward sexual minorities, our study participants felt a need to keep parts of their identities separate and hidden online. They were less likely than non-LGBTQ kids to be friends with family members online and more likely to join social media sites their parents would disapprove of. And about 39% said they had no one to talk to about their sexual orientation at all.</p>
<h2>Not just surviving, but thriving online</h2>
<p>Despite the risk of online harassment and isolation, social media can give LGBTQ youth space to explore their sexual identities and promote <a href="https://doi.org/10.1016/j.chb.2016.07.051">mental well-being</a>.</p>
<p>In 2007, Australian researchers conducted one of the earliest studies on how <a href="https://doi.org/10.1177%2F1363460707072956">internet communities serve as safe spaces for LGBTQ youth</a> who face hostile environments at home. Their surveys of 958 youth ages 14 to 21 found that the anonymity and lack of geographic boundaries in digital spaces provide an ideal practice ground for coming out, engaging with a communal gay culture, experimenting with nonheterosexual intimacy and socializing with other LGBTQ youth.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/423031/original/file-20210923-23-ggu04o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Illustration phone with rainbow heart on the screen, surrounded by positive reaction symbols." src="https://images.theconversation.com/files/423031/original/file-20210923-23-ggu04o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/423031/original/file-20210923-23-ggu04o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=814&fit=crop&dpr=1 600w, https://images.theconversation.com/files/423031/original/file-20210923-23-ggu04o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=814&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/423031/original/file-20210923-23-ggu04o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=814&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/423031/original/file-20210923-23-ggu04o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1023&fit=crop&dpr=1 754w, https://images.theconversation.com/files/423031/original/file-20210923-23-ggu04o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1023&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/423031/original/file-20210923-23-ggu04o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1023&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Some LGBTQ youth use social media to engage with and support social causes.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/diversity-on-social-media-royalty-free-illustration/1325416830">gobyg/DigitalVision Vectors via Getty Images</a></span>
</figcaption>
</figure>
<p>The internet also <a href="https://doi.org/10.1177%2F1363460707072956">provides critical resources</a> about LGBTQ topics. LGBTQ youth may <a href="https://doi.org/10.1016/j.chb.2016.06.009">use online resources</a> to educate themselves about sexual orientation and gender identity terminology, learn about gender transition and find LGBTQ spaces in their local community. The internet can also be a useful tool to identify LGBTQ-friendly <a href="https://doi.org/10.1007/978-3-319-69638-6_4">physicians, therapists and other care providers</a>.</p>
<p>Finally, online platforms can serve as springboards for LGBTQ activism. A <a href="https://www.glsen.org/news/out-online-experiences-lgbt-youth-internet">2013 report by the Gay, Lesbian & Straight Education Network</a> surveying 1,960 LGBTQ youth ages 13 to 18 found that 77% had taken part in an online community supporting a social cause. While 68% of LGBTQ youth also volunteered in-person, 22% said they only felt comfortable getting involved online or via text. This signals that online spaces may be critical resources to foster civic engagement.</p>
<p>While social media is not without its dangers, it can often serve as a tool for LGBTQ youth to build stronger connections to both their local and virtual communities, and communicate about social issues important to them. </p>
<p class="fine-print"><em><span>Linda Charmaraman receives funding from the National Institutes of Health.</span></em></p>While online communities may not fully address the isolation LGBTQ youth face in-person, they can serve as an important source of social support and a springboard for civic engagement.Linda Charmaraman, Director of Youth, Media & Wellbeing Research Lab, Wellesley CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1613252021-05-30T11:17:43Z2021-05-30T11:17:43ZPost-secondary workplace harassment policies need to adapt to digital life<figure><img src="https://images.theconversation.com/files/402721/original/file-20210525-21-15ce8a8.jpg?ixlib=rb-1.1.0&rect=29%2C89%2C4962%2C2986&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What happens when someone outside of the university community co-ordinates a mass email campaign demanding the firing of a faculty member? University policies need to cover this. </span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Digital tools <a href="https://theconversation.com/women-scientists-get-vocal-about-top-billing-on-twitter-31906">have been part of the scholarly trade for some time</a>, but the COVID-19 pandemic has also accelerated pre-existing trends towards the digitization of higher education. Many experts expect that various technologies adopted over the past year <a href="https://teachonline.ca/tools-trends/10-lessons-post-pandemic-world">will continue to be used long after the pandemic ends</a>.</p>
<p>The use of technologies such as video conferencing, social media platforms and virtual discussion groups has heightened scholars’ online visibility. Unfortunately, it’s also opened the door to new experiences of abuse and harassment, including <a href="https://theconversation.com/zoom-bombings-disrupt-online-events-with-racist-and-misogynist-attacks-138389">zoom bombing</a> <a href="https://www.insidehighered.com/quicktakes/2021/03/02/college-basketball-analyst-allegedly-doxxed-professors">and doxxing</a> — sharing someone’s personal information online without their consent.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-is-doxxing-and-why-is-it-so-scary-95848">What is doxxing, and why is it so scary?</a>
</strong>
</em>
</p>
<hr>
<p>Most universities and colleges have policies designed to protect their community members from harassment and discrimination. However, these policies have limitations that restrict their usefulness when abuse or harassment occurs online.</p>
<p>We examined harassment and discrimination policies at Canadian universities and colleges to identify areas that require updating for research and education that is increasingly online. This <a href="https://harassment.thedlrgroup.com/">examination was informed by our work</a> speaking to and surveying <a href="https://doi.org/10.1177/1461444818781324">women scholars to identify</a> <a href="https://doi.org/10.5210/fm.v23i8.9136">the kinds of support they need</a> so that we can better understand how existing policies fall short. </p>
<p>We’ve found that where policies do address online abuse and harassment, they are largely ineffective in a world where higher education institutions are integrated into society’s fabric and engage with people beyond the halls of academia in a variety of public platforms and through social media. An update is overdue.</p>
<h2>Analog policies, digital environment</h2>
<p>We searched the public websites of 232 universities and colleges across Canada for their workplace harassment and discrimination policies.</p>
<p>We identified policies at 129 institutions (56 per cent). Of these, only 41 institutions acknowledged online abuse and harassment in some way. Next, we analyzed those 41 policies to understand how university and college community members might be protected from online abuse and harassment within the context of their institution’s policy. </p>
<p>The scope of these 41 policies fell short in two ways.</p>
<p>First, the main objective of many policies is to protect people from abuse and harassment from other members of the same institution. While this stipulation is reasonable in the context of a post-secondary institution, it fails to cover perpetrators of online abuse and harassment who are unknown, anonymous or not part of the university or college. This limited scope poses serious problems because the online abuse and harassment that academics receive <a href="https://www.chronicle.com/article/right-wing-trolls-attacked-me-my-administration-buckled">often involves people outside of or unknown to the institution</a>.</p>
<figure class="align-center ">
<img alt="An empty campus hallway." src="https://images.theconversation.com/files/402736/original/file-20210525-19-1pm5on8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/402736/original/file-20210525-19-1pm5on8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/402736/original/file-20210525-19-1pm5on8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/402736/original/file-20210525-19-1pm5on8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/402736/original/file-20210525-19-1pm5on8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/402736/original/file-20210525-19-1pm5on8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/402736/original/file-20210525-19-1pm5on8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The main objective of many policies is to protect people from abuse and harassment from others at the same institution, but online abuse often goes beyond this.</span>
</figcaption>
</figure>
<p>Second, policies that define their scope in relation to place typically limit harassment to spaces such as university-sanctioned events, events related to work and study or any other place needed to fulfil duties to the institution.</p>
<p>This provision opens up the possibility for policies to cover individuals who aren’t related to the campus community but ignores the fact that <a href="https://doi.org/10.1080/17439884.2021.1878218">scholars’ online abuse and harassment doesn’t always occur on official university online platforms</a>. Examples include media appearances or receiving harassing messages on personal social media accounts (as opposed to, say, receiving harassing messages on an institution’s learning management system).</p>
<p>Defining harassment and discrimination policies in terms of institutional personnel and people who are physically on campus or engaged in official university business excludes acts of harassment that occur outside of institutionally sanctioned platforms. While some policies might cover email, online classrooms or video conferencing apps for teaching, there are <a href="https://doi.org/10.1177/1461444818781324">many other platforms that scholars use for teaching, learning and research that fall outside the scope of current harassment policies</a>.</p>
<p>When faculty appear on radio or TV, produce TikTok videos or write for mass media to engage broader audiences, they go beyond university campuses and official institutional digital platforms. Without finding ways to address the multi-platform nature of academic work, harassment policies risk leaving scholars unprotected from online harassment.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/bB8rtjCxK98?wmode=transparent&start=17" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Video about the support women scholars seek and need after online harassment.</span></figcaption>
</figure>
<h2>Reconsider boundaries</h2>
<p>We were glad to see that the policies of 41 post-secondary institutions acknowledge that abuse and harassment can occur online, but we see further room for development and growth.</p>
<p>Harassment policies should aim to consider, for example, what happens when a perpetrator outside of the university community co-ordinates a mass email campaign to a <a href="https://www.universityaffairs.ca/features/feature-article/the-growing-problem-of-online-harassment-in-academe/">dean or department chair demanding the expulsion of a student or the firing of a faculty member</a>. These policies need to adopt frameworks that reconsider rigid boundaries of work and non-work life both on and offline.</p>
<p>Institutions need to ensure the safety and well-being of their members online as much as in physical classrooms and lecture halls. Acknowledging that work sometimes necessarily occurs outside of university campuses and on public virtual platforms that are not officially tied to the university would be a great step.</p>
<p>Additionally, to truly address online abuse and harassment, policies and procedures will need to account for its presence and devise ways to respond that go beyond the predominant approach of disciplining offenders. Such policies need to protect and support scholars. </p>
<figure class="align-center ">
<img alt="Abstract floating icons of various connected devices floating over a city scape." src="https://images.theconversation.com/files/402733/original/file-20210525-17-83eskx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/402733/original/file-20210525-17-83eskx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=433&fit=crop&dpr=1 600w, https://images.theconversation.com/files/402733/original/file-20210525-17-83eskx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=433&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/402733/original/file-20210525-17-83eskx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=433&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/402733/original/file-20210525-17-83eskx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=544&fit=crop&dpr=1 754w, https://images.theconversation.com/files/402733/original/file-20210525-17-83eskx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=544&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/402733/original/file-20210525-17-83eskx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=544&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Acknowledging that work sometimes necessarily occurs outside of university campuses and on public virtual platforms would be a great step towards more effective harassment policies.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>What should change</h2>
<p>When we <a href="https://harassment.thedlrgroup.com/peer-reviewed-publications/">interviewed scholars who experienced harassment</a>, they suggested they could benefit from support from their institution’s IT department or human resources. Policies that enable these departments to support people experiencing harassment would be helpful in situations where the perpetrator isn’t connected to the university. </p>
<p>This change in focus will help create a safer work environment for scholars whose abusers cannot be identified and whose abuse stems from off-campus harassment or virtual harassment on public platforms.</p>
<p>Creating or revising harassment policies to account for digital environments is no easy task. However, <a href="https://doi.org/10.1080/14680777.2021.1883086">given the wide variety of adverse effects that harassment has on scholars</a>, it is important to foster safe work environments that are conducive to students and faculty thriving. </p>
<p>We recommend universities and colleges develop procedural frameworks for working through the inevitable challenges created by new technologies and modes of work and learning. We cannot know what risks new technologies will bring, but we can create policies that allow for more flexibility in scope and definition to accommodate multiple modes of work and education.</p>
<p><em>This is adapted from an article originally published by <a href="https://academicmatters.ca/analog-policies-in-a-digital-world-how-workplace-harassment-policies-need-to-adapt-to-an-increasingly-digital-education-environment">Academic Matters</a></em>.</p>
<p class="fine-print"><em><span>Jaigris Hodson receives funding from the Canada Research Chairs program, SSHRC, and CIHR.</span></em></p><p class="fine-print"><em><span>Chandell Gosse receives funding from the Social Sciences and Humanities Research Council. </span></em></p><p class="fine-print"><em><span>George Veletsianos receives funding from the Canada Research Chairs program, SSHRC, and CIHR.</span></em></p>Where policies do address online abuse and harassment, they’re largely ineffective in a world where academics engage with people in a variety of public platforms and through social media.Jaigris Hodson, Associate Professor of Interdisciplinary Studies, Royal Roads UniversityChandell Gosse, Postdoctoral Research Associate, Interdisciplinary Studies, Royal Roads UniversityGeorge Veletsianos, Professor and Canada Research Chair in Innovative Learning and Technology, Royal Roads UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1435202020-08-21T12:38:10Z2020-08-21T12:38:10ZHere’s what it’ll take to clean up esports’ toxic culture<figure><img src="https://images.theconversation.com/files/353939/original/file-20200820-18-1i1cwq0.jpg?ixlib=rb-1.1.0&rect=7%2C29%2C4985%2C3787&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">College videogame team members practice League of Legends.</span> <span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/LeagueofLegendsEsports101/82e5316f00e046da906c96a16d9f07d7/photo?Query=League%20of%20legends&mediaType=photo&sortBy=&dateRange=Anytime&totalCount=225&currentItemNo=13">AP Photo/M. Spencer Green</a></span></figcaption></figure><p>In day-to-day life, you probably haven’t had someone yell at you, “Get back in the kitchen and make me a sandwich!” If you’re a woman who plays online video games, though, statements like this, and worse, are all too common. </p>
<p>As COVID-19 has driven much of life online and fueled a <a href="https://www.washingtonpost.com/video-games/2020/05/12/video-game-industry-coronavirus/">boom in online gaming</a>, harassment in these and other internet spaces <a href="https://webfoundation.org/2020/07/theres-a-pandemic-of-online-violence-against-women-and-girls/">has increased</a>. <a href="https://www.statista.com/statistics/232383/gender-split-of-us-computer-and-video-gamers/">Forty-one percent</a> of computer and videogame players are female, down from 46% in 2019.</p>
<p>Despite its digital nature, online harassment can have <a href="https://repository.law.umich.edu/mlr/vol108/iss3/3/">real-world consequences for victims</a>, including emotional and physical distress. This has left online gaming companies and players scrambling for better community management techniques to prevent harassment. As a <a href="https://scholar.google.com/citations?user=7IEXEiwAAAAJ&hl=en">researcher who studies gaming</a>, I’ve found that the right cultural norms can result in healthy online communities, even in the highly competitive world of esports.</p>
<p>The stakes are high. Competitive video gaming, or esports, now exceeds <a href="https://www.forbes.com/sites/jamesayles/2019/12/03/global-esports-revenue-reaches-more-than-1-billion-as-audience-figures-exceed-433-million/#7c218d871329">US$1 billion</a> in yearly revenue. Professional, collegiate and high school leagues are expanding, especially as COVID-19 has <a href="https://www.theguardian.com/sport/2020/apr/11/esports-ride-crest-of-a-wave-as-figures-rocket-during-covid-19-crisis">decreased opportunities for traditional sports</a>. </p>
<h2>History of harassment</h2>
<p>Recent stories from <a href="https://www.nytimes.com/2020/06/23/style/women-gaming-streaming-harassment-sexism-twitch.html">The New York Times</a>, <a href="https://www.wired.com/story/twitch-streaming-metoo-reckoning-sexual-misconduct-allegations/">Wired</a>, <a href="https://www.insider.com/twitch-sexual-assault-misconduct-allegations-video-gaming-community-streamers-harassment-2020-7">Insider</a> and others have highlighted how pervasive sexism, racism, homophobia and other forms of discrimination are in online spaces. However, these issues are hardly new. Similar problems arose in 2014’s <a href="https://www.washingtonpost.com/news/the-intersect/wp/2014/10/14/the-only-guide-to-gamergate-you-will-ever-need-to-read/">GamerGate</a> Twitter-based campaign of harassment of female gamers, designers and journalists. </p>
<p>Sexism was also common before GamerGate. For example, professional gamer Miranda Pakozdi quit her team following <a href="https://www.nytimes.com/2012/08/02/us/sexual-harassment-in-online-gaming-stirs-anger.html?_r=1">sexual harassment</a> from her coach in 2012; the coach, Aris Bakhtanians, famously stated that <a href="https://kotaku.com/competitive-gamers-inflammatory-comments-spark-sexual-h-5889066">“sexual harassment is part of [the fighting game] culture”</a> and that it could not be removed. </p>
<p>Others have suggested that the <a href="https://doi.org/10.1016/j.chb.2006.09.001">anonymity</a> of online game spaces, combined with gamers’ <a href="https://syslab.cs.washington.edu/papers/lol-chi15.pdf">competitive natures</a>, increases the likelihood of toxic behavior. Survey data from the <a href="https://www.adl.org/media/14643/download">Anti-Defamation League</a> suggests that at least 37% of female gamers have faced gender-based harassment.</p>
<p>However, positive online communities exist, and a study by lawyer and former Microsoft user experience designer <a href="https://www.osborneclarke.com/lawyers/rebecca-chui/">Rebecca Chui</a> found that <a href="https://doi.org/10.4101/jvwr.v7i2.7073">anonymous online communities are not inherently toxic</a>. Rather, a culture of harassment requires community norms that allow for it. This suggests that online bad behavior can be addressed effectively. The question is how.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/353945/original/file-20200820-20-1f601qz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="An arena full of people watching an international videogame tournament" src="https://images.theconversation.com/files/353945/original/file-20200820-20-1f601qz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/353945/original/file-20200820-20-1f601qz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/353945/original/file-20200820-20-1f601qz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/353945/original/file-20200820-20-1f601qz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/353945/original/file-20200820-20-1f601qz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/353945/original/file-20200820-20-1f601qz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/353945/original/file-20200820-20-1f601qz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Online video gaming, or esports, has grown to have professional, collegiate and scholastic leagues, and international tournaments like this one in Paris in 2019.</span>
<span class="attribution"><a class="source" href="https://newsroom.ap.org/detail/FranceLeagueofLegendsFinals/6bf32ff641ad4aa1985d33b0b5eddec9/photo?Query=League%20of%20legends&mediaType=photo&sortBy=&dateRange=Anytime&totalCount=225&currentItemNo=7">AP Photo/Thibault Camus</a></span>
</figcaption>
</figure>
<h2>Players’ coping strategies</h2>
<p>In my interview-based research with female gamers, I’ve found that players have <a href="https://doi.org/10.1177%2F1555412015587603">many strategies for avoiding or managing online harassment</a>. For instance, some play only with friends or avoid using voice chat to hide their gender. Other gamers get really good at their favorite games, to shut down harassment through skill. Research by other media scholars, such as <a href="https://adanewmedia.org/2013/06/issue2-gray/">Kishonna Gray</a> and <a href="https://doi.org/10.1177%2F0731121419837588">Stephanie Ortiz</a>, has found similar results across race and sexuality.</p>
<p>These strategies have significant downsides, however. For example, ignoring toxicity or brushing it off allows it to persist. Pushing back against harassers often results in further harassment. </p>
<p>They can also put the burden of challenging harassment on the victim, rather than on the perpetrator or community. This can drive victims out of online spaces. As my interviewees gained responsibilities in their jobs or families, for instance, they no longer had the time or energy to manage harassment and stopped gaming. My research suggests that game companies need to intervene in their communities to keep players from having to go it alone.</p>
<h2>How companies can intervene</h2>
<p>Game companies are becoming increasingly invested in community management strategies. Large publisher Electronic Arts held a <a href="https://www.cnet.com/news/gaming-can-be-toxic-toward-women-and-minorities-electronic-arts-wants-to-help-fix-that/">community management summit</a> in 2019, and companies like <a href="https://www.cnet.com/news/microsofts-xbox-team-has-a-plan-to-fight-toxic-gamers/">Microsoft</a> and <a href="https://www.pcmag.com/news/intel-levels-up-ai-to-battle-toxicity-in-online-games">Intel</a> are developing new tools for managing online spaces. A group of game development companies even recently formed the <a href="https://fairplayalliance.org/about/">Fair Play Alliance</a>, a coalition working to address harassment and discrimination in gaming.</p>
<p>It’s important that interventions be rooted in the experiences of players, however. Right now, many companies intervene through practices like banning or blocking harassers. For instance, the live-streaming platform Twitch recently banned several prominent streamers following allegations that they had committed sexual harassment. </p>
<p>This is a start, but harassers who are blocked or banned often create new accounts and return to their previous behaviors. Blocking also manages harassment after it occurs, rather than stopping it at the source. Thus blocking should be combined with other potential approaches.</p>
<p>First, companies should expand the tools they provide players to manage their online identities. Many participants avoided voice chat to limit gender harassment. This at times made it difficult to compete. Games like Fortnite, League of Legends and Apex Legends, however, have instituted <a href="https://www.pcgamer.com/apex-legends-ping-system-is-a-tiny-miracle-for-fps-teamwork-and-communication/">“ping” systems</a>, where players can communicate essential game information rapidly, without requiring voice. Similar tools could be built into many other online games. </p>
<p>Another option my interviewees suggested is to make it easy for players to group with friends, so they have someone on their side to guard against harassment. Grouping mechanisms work particularly well when matched to the needs of their specific game. For instance, in games like Overwatch and League of Legends, players need to take on different roles to balance their team. Abuse can occur when randomly assigned teammates all want to play the same character. </p>
<p>Overwatch recently introduced a <a href="https://us.forums.blizzard.com/en/overwatch/t/guide-how-to-use-the-looking-for-group-system/127114">new grouping system</a> that allows players to choose their characters, then be matched with players who have chosen different roles. This appears to <a href="https://www.theguardian.com/games/2018/aug/17/tackling-toxicity-abuse-in-online-video-games-overwatch-rainbow-seige">reduce abusive in-game chat</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/351423/original/file-20200805-477-13giwws.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Screenshot of videogame League of Legends showing clasped hands" src="https://images.theconversation.com/files/351423/original/file-20200805-477-13giwws.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/351423/original/file-20200805-477-13giwws.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=377&fit=crop&dpr=1 600w, https://images.theconversation.com/files/351423/original/file-20200805-477-13giwws.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=377&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/351423/original/file-20200805-477-13giwws.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=377&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/351423/original/file-20200805-477-13giwws.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=474&fit=crop&dpr=1 754w, https://images.theconversation.com/files/351423/original/file-20200805-477-13giwws.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=474&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/351423/original/file-20200805-477-13giwws.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=474&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An example of in-game commendations for positive behavior in League of Legends.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/15838163@N00/9375189766">Daniel Garrido/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Finally, companies should work to change their basic cultural norms. For example, League of Legends publisher Riot Games once instituted a “<a href="https://www.vox.com/2015/7/7/11564110/doing-something-about-the-impossible-problem-of-abuse-in-online-games">Tribunal</a>” system where players could view incident reports and vote on whether the behavior was acceptable in the League community. </p>
<p>Although Riot Games unfortunately closed the Tribunal shortly after its release, including community members in any solution is a good idea. Companies should also develop clear community guidelines, encourage positive behavior through tools like in-game accolades, and respond to ongoing issues rapidly and decisively.</p>
<p>If esports continue to expand without game companies addressing the toxic environments in their games, abusive and exclusionary behaviors are likely to become entrenched. To avoid this, players, coaches, teams, leagues, game companies and live-streaming services should invest in better community management efforts.</p>
<p class="fine-print"><em><span>Amanda Cote does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Combating sexism and other forms of harassment in online videogames comes down to community standards.Amanda Cote, Assistant Professor of Media Studies/Game Studies, University of OregonLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1356782020-04-08T06:51:54Z2020-04-08T06:51:54ZAs use of digital platforms surges, we’ll need stronger global efforts to protect human rights online<figure><img src="https://images.theconversation.com/files/325970/original/file-20200407-104477-x3zqc2.jpg?ixlib=rb-1.1.0&rect=92%2C170%2C4647%2C2984&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>As millions of people are moving work and social interactions online to protect themselves from COVID-19, existing online safety measures may not be enough to deal with a surge in harassment and abuse.</p>
<p>Concerns about rising levels of scamming and harassment prompted online safety organisation <a href="https://www.netsafe.org.nz/">NetSafe</a> <a href="https://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=12318228">to issue a warning</a> to users to maintain vigilance. This abuse has included threats of violence and explicit <a href="https://www.aljazeera.com/news/2020/04/anti-asian-hate-continues-spread-online-covid-19-pandemic-200405063015286.html">racism and xenophobia</a>. </p>
<p>Online abuse breaches several human rights. We argue that governments have obligations under international law and should establish a digital human rights charter, with special protections built in for women and children.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/zoombombers-want-to-troll-your-online-meetings-heres-how-to-stop-them-135311">'Zoombombers' want to troll your online meetings. Here's how to stop them</a>
</strong>
</em>
</p>
<hr>
<h2>Cyber violence against women</h2>
<p>Online platforms replicate culture with all its offline risks and inequalities. </p>
<p>Offline, discrimination against women permeates <a href="http://docstore.ohchr.org/SelfServices/FilesHandler.ashx?enc=6QkG1d%2fPPRiCAqhKb7yhsqMFgv33OTgoZv7ZAgL6thAQ9IftfPs3g9t3r4w6hFnRBqTwEr%2biim0%2bsAlJpAatSmEIaiBa2tDiXsJJkM5ckb%2fmDeJMOEw4XS%2fWDcWV%2fXkK">all aspects of our society</a>, including the family, education, the workplace, the legal system and government. Discrimination manifests in different ways, including <a href="http://docstore.ohchr.org/SelfServices/FilesHandler.ashx?enc=6QkG1d%2fPPRiCAqhKb7yhsldCrOlUTvLRFDjh6%2fx1pWAeqJn4T68N1uqnZjLbtFua2OBKh3UEqlB%2fCyQIg86A6bUD6S2nt0Ii%2bndbh67tt1%2bO99yEEGWYpmnzM8vDxmwt">violence against women</a>. </p>
<p>These unequal gender dynamics <a href="https://documents-dds-ny.un.org/doc/UNDOC/GEN/G18/184/58/PDF/G1818458.pdf?OpenElement">repeat online</a>, resulting in women being subjected to sexist, misogynistic and violent content. In 2018, a UN women’s human rights expert <a href="https://documents-dds-ny.un.org/doc/UNDOC/GEN/G18/184/58/PDF/G1818458.pdf?OpenElement">recognised cyber violence</a> as a specific form of violence against women. </p>
<p>In a <a href="https://www.amnesty.org.nz/amnesty-reveals-alarming-impact-online-abuse-against-women">2017 Amnesty International survey</a>, nearly a quarter (23%) of women surveyed across eight developed countries said they had experienced online abuse or harassment more than once. Of those women, 41% felt their physical safety was threatened on at least one occasion. </p>
<p>In New Zealand, a third of women reported being victims of online harassment. Of those who experienced abuse online:</p>
<ul>
<li>75% had trouble sleeping well</li>
<li>49% felt their personal safety was at risk</li>
<li>32% felt the safety of their families was at risk </li>
<li>72% were less able to focus on everyday tasks</li>
<li>70% experienced lower self-esteem or loss of self-confidence</li>
<li>two-thirds felt a sense of powerlessness. </li>
</ul>
<p>Almost half (49%) reduced their use of social media or left platforms altogether. </p>
<p>The UN’s Human Rights Council identified <a href="https://undocs.org/en/A/HRC/38/47">widespread online violence against women</a> as a significant reason for the <a href="https://undocs.org/A/HRC/35/9">global digital divide between men and women</a>. </p>
<p>Online violence against women by (mostly) men is especially persistent on social media platforms like <a href="https://www.theguardian.com/technology/2019/mar/04/facebook-women-abuse-harassment-social-media-amnesty">Facebook</a>, <a href="https://www.newsroom.co.nz/2019/02/20/449770/twitters-huge-fail-on-online-abuse">Twitter</a> and <a href="https://www.theatlantic.com/technology/archive/2018/10/instagram-has-massive-harassment-problem/572890/">Instagram</a>. It includes online harassment, cyberstalking, “doxing” (where private information is shared by others online) and <a href="https://www.sciencedirect.com/science/article/pii/S0747563218305454?via%3Dihub">revenge pornography</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/meet-sara-sharon-and-mel-why-people-spreading-coronavirus-anxiety-on-twitter-might-actually-be-bots-134802">Meet ‘Sara’, ‘Sharon’ and 'Mel': why people spreading coronavirus anxiety on Twitter might actually be bots</a>
</strong>
</em>
</p>
<hr>
<h2>Obligations of governments and online platforms</h2>
<p>Cyber violence breaches international human rights laws, including the <a href="https://www.ohchr.org/en/professionalinterest/pages/ccpr.aspx">right to freedom of expression</a> (fewer women are likely to share their opinions or thoughts online), the <a href="https://www.ohchr.org/en/professionalinterest/pages/ccpr.aspx">right to be free from discrimination and violence</a>, the <a href="https://undocs.org/A/HRC/35/9">right to information about health</a> (including potentially life-saving updates about COVID-19) and the <a href="https://www.ohchr.org/EN/Issues/DigitalAge/Pages/DigitalAgeIndex.aspx">right to privacy</a>. </p>
<p>International human rights law applies both <a href="https://undocs.org/A/RES/68/167">offline and online</a>. </p>
<p>Social media platforms have created <a href="https://help.twitter.com/en/rules-and-policies/twitter-rules">community standards</a> to protect users’ human rights, but they may not be evolving fast enough during disruptive times such as those we are experiencing now. The massive increase in use is likely to amplify the <a href="https://www.theguardian.com/lifeandstyle/2016/nov/22/emotional-violence-cyberbullying-trolling-bullying-racism-misogyny">dark side of social media</a>. </p>
<p>Governments around the world have been slow to use their legislative powers to regulate online platforms. The live streaming of the Christchurch mosque attacks on March 15 2019 highlighted the platforms’ failure to control the spread of hateful content. </p>
<p>An <a href="https://www.christchurchcall.com/">international agreement</a> to eliminate violent extremist content online has been difficult to achieve. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/christchurchs-legacy-of-fighting-violent-extremism-online-must-go-further-deep-into-the-dark-web-133159">Christchurch's legacy of fighting violent extremism online must go further – deep into the dark web</a>
</strong>
</em>
</p>
<hr>
<h2>Protecting rights and lives online</h2>
<p>While platforms remain global with “one size fits all” community standards, governments have different responses to restricting individual freedom of expression. </p>
<p>Governments should consider establishing an international charter on digital human rights, which all social media platforms could adopt. Such a charter would enable a coherent and consistent response to cyber violence, in a world that is now almost exclusively online. </p>
<p>There are some practical steps we can all take. These steps include <a href="https://help.twitter.com/en/safety-and-security/report-abusive-behavior">reporting online violations</a>, <a href="https://help.twitter.com/en/using-twitter/blocking-and-unblocking-accounts">blocking</a> people or groups, and closely monitoring connections. </p>
<p>If you are experiencing serious online bullying, harassment, revenge porn or other forms of abuse and intimidation, <a href="https://www.police.govt.nz/advice-services/cybercrime-and-internet/harmful-digital-communications-hdc">contact police who may take action</a> under the <a href="http://www.legislation.govt.nz/act/public/2015/0063/latest/whole.html">Harmful Digital Communications Act 2015</a>.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>During the COVID-19 pandemic, online platforms might seem to be safer places to work and socialise, but online abuse is expected to rise – and women are at a higher risk.Cassandra Mudgway, Senior Lecturer in Law, Auckland University of TechnologyKate Jones, Senior lecturer in Digital Marketing & Social Media, Auckland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1130302019-05-07T20:08:43Z2019-05-07T20:08:43ZHow highly sexualised imagery is shaping ‘influence’ on Instagram - and harassment is rife<figure><img src="https://images.theconversation.com/files/272344/original/file-20190502-103060-t5un39.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The body plays a crucial role in Instagram influencers' selfies.</span> <span class="attribution"><a class="source" href="https://pixabay.com/photos/adult-body-bra-woman-lingerie-1869735/">https://pixabay.com/photos/adult-body-bra-woman-lingerie-1869735/</a></span></figcaption></figure><p>Australians are some of the most active social media users in the world and Instagram is <a href="https://www.socialmedianews.com.au/social-media-statistics-australia-january-2019/">particularly popular</a>. One in three of us have an account, with more than 9,000,000 monthly active users. The rise of Instagram reflects our increasingly visual culture, with <a href="https://www.sensis.com.au/about/our-reports/sensis-social-media-report">45% of Australians having taken a selfie</a> and uploaded it to social media. </p>
<p>But Instagram isn’t just a place for personal photos, it’s big business. The platform is the birthplace and breeding ground of influencer marketing: a relatively new, multi-billion dollar industry, <a href="http://mediakix.com/2018/03/influencer-marketing-industry-ad-spend-chart/#gs.48ychl">projected to grow</a> from US$6 billion in 2018 to US$10 billion in 2020.</p>
<p>Influencers generate digital content and gain the attention of a “following” on social media through representations of their everyday lives, in which various commodities and brands play a vital role. The larger the audience, and the more attention they receive, the greater the monetisation potential. </p>
<p>Most influencers are women who aspire to build a personal brand. There are <a href="https://cdn2.hubspot.net/hubfs/4030790/InfluencerDB-State-of-the-Industry-2018.pdf">more than 558,000 influencers</a> on Instagram who have more than 15,000 followers. </p>
<p>Theirs is a precarious form of work, with none of the traditional workplace protections, and they can spend an <a href="https://journals.sagepub.com/doi/abs/10.1177/1329878X16665177">extraordinary amount of time and effort</a> generating the “perfect shot” to upload. It’s a job that is always “on”, with the platform functioning 24/7 and delivering a constant stream of notifications. </p>
<p>The body plays a crucial role in influencers’ selfies. Conforming to <a href="https://journals.sagepub.com/doi/abs/10.1177/2056305115604337">rigid standards of attractiveness and femininity</a> is fundamental to their gaining attention. This means a lot of work for them but also has broader cultural effects in <a href="https://theconversation.com/social-media-the-bikini-bridge-and-the-viral-contagion-of-body-ideals-87262">shaping attitudes about body image</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/social-media-the-bikini-bridge-and-the-viral-contagion-of-body-ideals-87262">Social media, the 'bikini bridge' and the viral contagion of body ideals</a>
</strong>
</em>
</p>
<hr>
<p>One of the easiest ways for women to gain attention on social media is through a highly sexualised aesthetic, which is increasingly “<a href="https://journals.sagepub.com/doi/abs/10.1177/2374623816643281">pornified</a>”, i.e., borrowing a “look” associated with mainstream, pornographic imagery.</p>
<p>We analysed <a href="https://onlinelibrary.wiley.com/doi/10.1111/gwao.12354">172 female influencers’ social media pages over a period of four months</a>. They ranged from women who willingly promote brands with no remuneration to those who market themselves as a personal brand. Our sample of influencers was drawn internationally and sourced from “shoutout pages”, which act as virtual currency to build popularity and thus gain attention. We analysed the images, interactions, and comments of the influencers studied.</p>
<p>We found a continuum of pornified self-representations by these social media influencers on Instagram. This ranged from “softer” references – where influencers pose to highlight sexualised body parts and employ “porn chic” gestures such as gently pulling their hair, touching their parted lips and squatting with legs spread to the camera – to images that were hard to differentiate from mainstream commercial pornography. </p>
<p>Here, pornified representations grab viewers’ attention with the goal of being monetised to sell products, such as protein powder, gummy vitamins, or detox tea.</p>
<p>Our data does not show an enormous breadth of ways in which women might wish to be sexual, but rather a fairly monotonous repetition of “sexiness” and sexual availability that is shaped by porn chic.</p>
<p>The women in our study ranged from having hundreds of followers to millions.
Those with a higher number of followers were associated with a more explicitly pornified aesthetic – sometimes using the Instagram platform to redirect viewers to more direct paid-access pornography on external sites such as OnlyFans.com or private messaging applications like Snapchat and WhatsApp. </p>
<p>But all of this monetised attention comes at a cost of significant sexual harassment, which <a href="https://www.theatlantic.com/technology/archive/2018/10/instagram-has-massive-harassment-problem/572890/">many argue is poorly policed</a> by Instagram. Monitoring the women’s social media feeds, we found that many female influencers are subject to sexually aggressive comments, objectifying messages from followers, and a lack of privacy in their personal lives. Influencers in our study were subject to sexual solicitation and even physical threats. </p>
<p>Such comments range from: “I would love your art work but it’s your bum I want”, or “love to spank and kiss your gorgeous ass”, to the more aggressive “Turn around before I take out my dick and beat you…”</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/264290/original/file-20190318-28487-4u8fb6.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/264290/original/file-20190318-28487-4u8fb6.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/264290/original/file-20190318-28487-4u8fb6.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=286&fit=crop&dpr=1 600w, https://images.theconversation.com/files/264290/original/file-20190318-28487-4u8fb6.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=286&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/264290/original/file-20190318-28487-4u8fb6.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=286&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/264290/original/file-20190318-28487-4u8fb6.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=359&fit=crop&dpr=1 754w, https://images.theconversation.com/files/264290/original/file-20190318-28487-4u8fb6.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=359&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/264290/original/file-20190318-28487-4u8fb6.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=359&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Sexually aggressive comment on an influencer’s posted image.</span>
<span class="attribution"><span class="source">Instagram.com</span></span>
</figcaption>
</figure>
<p>Rebuking harassment or deleting hostile comments also comes at a cost for influencers. More engagement, including these kinds of comments, results in more potential attention for a given post. This, in turn, is directly tied to their ability to monetise influence. </p>
<p>Hence, we found that reading such harassment, and making the choice to not delete it, simply becomes “part of the job”. The fear of losing a partnership with a brand is always a concern, so influencers build their audience by engaging with followers in upbeat, positive and convivial ways. </p>
<p>The intensity, volume and public nature of this harassment makes social media influencers particularly vulnerable. They do not have the support of a traditional workplace and employer in dealing with these constant and inescapable interactions. </p>
<p>As well as having negative consequences for the influencers themselves, the trend towards a pornified aesthetic also has consequences for gender equality, more broadly. It’s an aesthetic that positions women and girls as existing for men’s sexualised consumption.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A study of Instagram influencers has found most employ a highly sexualised aesthetic drawn from mainstream adult film. And many are subject to sexual harassment, ranging from aggressive comments to physical threats.Jenna Drenten, Assistant Professor of Marketing, Loyola University ChicagoLauren Gurrieri, Senior Lecturer in Marketing, RMIT UniversityMeagan Tyler, Senior lecturer, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1131482019-03-08T13:17:06Z2019-03-08T13:17:06ZFrance’s ‘everyday sexism’ starts at school<figure><img src="https://images.theconversation.com/files/262702/original/file-20190307-82695-y5guko.jpg?ixlib=rb-1.1.0&rect=0%2C68%2C1917%2C1270&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Smartphones have put the tools for bullying and voyeurism in the pockets of schoolchildren.</span> <span class="attribution"><a class="source" href="https://pixabay.com/fr/photos/potins-filles-groupe-portrait-532012/">Baruska/Pixabay</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>In France, the #MeToo movement has a livelier name: <a href="https://theconversation.com/balancetonporc-the-story-behind-pigs-and-lust-92491">#BalanceTonPorc, or “Name and Shame Your Pig</a>.” It inspired hundreds of women to denounce sexual harassment on the streets and in the boardroom. </p>
<p>The latest workplace harassment scandal, exposed online in mid-February, involved France’s so-called “LOL League” – an anonymous <a href="https://www.independent.co.uk/voices/ligue-du-lol-metoo-france-media-sexism-women-britain-facebook-a8783461.html">boys club in the media industry</a> – that in 2010 began bullying female colleagues online. </p>
<p>Prominent news editors, including the <a href="https://www.slate.fr/story/174261/ligue-du-lol-publicite-agence-communication-sexisme">managing editor of the French edition of Slate</a> and of the venerable news daily Libération, had targeted female colleagues on Twitter and on Facebook, mocking their looks and ethnic origins or making sexist and racist slurs. </p>
<p>Many of these men had since climbed the hierarchy of the news business and, as #MeToo unfolded in 2017 and 2018, claimed that they were feminists.</p>
<h2>France’s everyday sexism</h2>
<p>Sexism is an everyday occurrence in France, where <a href="https://www.huffingtonpost.fr/2019/03/05/des-publicites-le-temps-des-cerises-jugees-sexistes-retirees_a_23685290/">clothing advertisements</a> still use sex to sell products and men comment on women’s looks at the office and on the streets. Lewd comments are often defended as “just flirting.” </p>
<p>Flirtation was the argument some prominent French women – including actor Catherine Deneuve – used to denounce #MeToo. In <a href="https://www.theguardian.com/commentisfree/2018/jan/10/catherine-deneuve-let-me-explain-why-metoo-is-nothing-like-a-witch-hunt">an open letter</a> published in January 2018, 100 women denounced #MeToo as a puritan witch hunt, driven by a hatred of men, and <a href="https://www.newyorker.com/news/daily-comment/why-did-catherine-deneuve-and-other-prominent-frenchwomen-denounce-metoo">claimed that French culture was simply different</a>, more sexually expressive, than American culture.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/262728/original/file-20190307-82681-5dc2b0.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/262728/original/file-20190307-82681-5dc2b0.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=567&fit=crop&dpr=1 600w, https://images.theconversation.com/files/262728/original/file-20190307-82681-5dc2b0.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=567&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/262728/original/file-20190307-82681-5dc2b0.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=567&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/262728/original/file-20190307-82681-5dc2b0.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=712&fit=crop&dpr=1 754w, https://images.theconversation.com/files/262728/original/file-20190307-82681-5dc2b0.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=712&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/262728/original/file-20190307-82681-5dc2b0.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=712&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A recent ad by the French brand Temps des Cerises triggered outrage when it updated France’s revolutionary motto, ‘Liberty, Equality, Fraternity’ by replacing the last word with the phrase ‘nice butt.’</span>
</figcaption>
</figure>
<p>Feminists across France rushed to defend #MeToo’s goals of exposing workplace misogyny and holding sexual harassers accountable. By summer 2018, thanks to their efforts, France passed a law <a href="https://www.huffingtonpost.com/entry/france-street-harassment-crime_us_5b634939e4b0b15abaa0f552">criminalizing street harassment</a>.</p>
<p>But the French backlash against #MeToo demonstrated a certain confusion in the country between what constitutes sexual freedom and what constitutes abuse.</p>
<p>In France, <a href="https://m.centre-hubertine-auclert.fr/sites/default/files/fichiers/actes-251114-cybersexisme-web_0.pdf">studies show</a>, the tendency to pass off sexual harassment as harmless flirtation starts as early as <a href="https://journals.openedition.org/rechercheseducations/1561#tocto2n5">primary school</a>. </p>
<h2>Sexual violence at school</h2>
<p>Children do not always understand the difference between an intimate touch that is a consensual act of sexual discovery and a nonconsensual, inappropriate touch. And violence and harassment that take other forms – insults, mockery and rumors – are particularly hard for children to identify, and for adults to detect.</p>
<p>Both boys and girls may be subjected to sexism or sexual violence in the guise of a joke or of a game. </p>
<p>Recently, students in a school near Paris were all playing a game they’d learned on the social network Snapchat. It consisted of touching the other children’s private parts to earn “points.” During “<a href="http://www.leparisien.fr/essonne-91/epinay-agression-sexuelle-en-marge-de-la-journee-de-la-fesse-au-college-21-02-2019-8017392.php">Ass Day</a>,” as the students called it, boys and girls allowed each other to touch or squeeze their genitals. </p>
<p>Not participating in this “game” was not really an option. </p>
<p>One female student told a school guard she did not consent to being touched. He looked the other way, and the groping continued. Even after school, the girl was trailed by boys trying to touch her on her way home. </p>
<p>She informed her parents, who went public with the story.</p>
<h2>Cyberbullying</h2>
<p>Other common coerced sexual encounters between students include forced kisses and voyeurism, especially spying on the boys’ or girls’ bathroom. </p>
<p><a href="https://www.centre-hubertine-auclert.fr/sites/default/files/fichiers/etude-cybersexisme-web.pdf">Smartphones and social networks</a> put the tools of voyeurism in kids’ pockets.</p>
<p>Online variations of the bathroom peeping Tom include <a href="https://eviolence.hypotheses.org/glossaire/U">upskirting</a> – when kids snap pictures up a girl’s skirt – and <a href="https://eviolence.hypotheses.org/glossaire/c">creepshotting</a>, or taking a picture of a woman’s cleavage <a href="https://www.theawl.com/2016/07/are-there-ethics-in-creepshotting/">without her knowledge</a>.</p>
<p>“<a href="https://theconversation.com/global/search?utf8=%E2%9C%93&q=revenge+porn">Revenge porn</a>” – when angry friends, school bullies or ex-partners post sexually explicit photos of a person without their consent – is another danger children face on social networks.</p>
<p>In January 2018, about 50 high school girls in the eastern French region of <a href="https://www.francetvinfo.fr/societe/harcelement-sexuel/video-affaire-de-nudes-a-strasbourg-des-selfies-intimes-d-adolescentes-exhibes-sur-les-reseaux-sociaux_2622194.html">Strasbourg</a> discovered nude pictures of themselves – previously shared only with friends or boyfriends – published on Snapchat and Facebook groups linked to the school. </p>
<p>Boys are not the only ones to ridicule girls for their sexuality, a form of harassment known as <a href="https://theconversation.com/global/topics/slut-shaming-28126">slut shaming</a>. In a bid to earn male approval and popularity, girls, too, <a href="https://www.deboecksuperieur.com/ouvrage/9782804175948-les-ados-dans-le-cyberespace">post revenge porn and circulate upskirts</a> at the expense of their female classmates.</p>
<h2>Stereotypes create violence</h2>
<p><a href="https://www.egalite-femmes-hommes.gouv.fr/wp-content/uploads/2012/11/LUTTE-VIOLENCES-guide-Comportements_sexistes.pdf">Tension and aggressive behavior</a> among teens are attributable to various factors in their development: puberty, identity building, peer group influence, seduction games. </p>
<p>But gender stereotypes are at the heart of this problem, too. </p>
<p>Stereotypes about how men and women should behave are conveyed by the media, at home and <a href="https://www.lexpress.fr/actualite/5-exemples-de-sexisme-ordinaire-a-l-ecole_1318832.html">in the classroom</a>. In France, <a href="https://etudiant.lefigaro.fr/article/-paye-ton-bahut-denonce-le-sexisme-au-college-et-au-lycee_ffa3944a-dbeb-11e6-8620-c271acfe3201/">teachers’</a> own internalized sexism may unintentionally lead them to enforce social norms about “flirtation” and the stereotypical roles of boys and girls.</p>
<p>This hurts boys, too. In France, where men are expected to display their <a href="https://eviolence.hypotheses.org/files/2017/09/Sigol%C3%A8ne-Couchot-Schiex-280917.pdf">sexual dominance</a>, boys considered insufficiently “<a href="https://www.cairn.info/revue-agora-debats-jeunesses-2012-1-page-67.htm">manly</a>” can become victims of bullying and sexual violence. </p>
<p>“The socialization of boys draws two distinct groups,” <a href="http://prevenance-asso.fr/wp-content/uploads/2018/06/Les-violences-sexistes-%C3%A0-l%E2%80%99%C3%A9%20School-une-oppression-viriliste.pdf">says the French educator Eric Debarbieux</a>. “Those who manage to show their strength, to be the strongest, the most virile; and others who risk being downgraded to the category of sub-human, or ‘fags.’” </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/d9eo8azCrTs?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Quebec film 1:54 denounces homophobic harassment in schools (Yan England, 2016).</span></figcaption>
</figure>
<h2>Better sex education</h2>
<p>Schools in France mostly address gender-based behavior and sexual violence during <a href="http://eduscol.education.fr/cid46864/les-enjeux-de-l-education-a-la-sexualite.html">sex education</a> classes. </p>
<p>The national sex ed curriculum, in place <a href="https://journals.openedition.org/edso/951">since 1973</a>, is now the <a href="https://www.lemonde.fr/m-moyen-format/article/2017/02/17/l-education-sexuelle-un-sujet-devenu-sensible-en-ile-de-france_5081339_4497271.html">subject of debate in the country</a>. </p>
<p>Teachers discuss public health issues, relationships between girls and boys, the culture of equality, sexual violence, pornography and gender and homophobic prejudices.</p>
<p>Efforts since #MeToo to make French sex education more progressive – starting it at a younger age, for example, or to teach French <a href="https://www.lexpress.fr/actualite/societe/theorie-du-genre-ou-egalite-entre-les-sexes_1836853.html">elementary school children more about gender, sex and identity</a> – have proven <a href="https://www.franceinter.fr/societe/education-sexuelle-ce-qui-est-vraiment-enseigne-a-vos-enfants">controversial</a>. Last September, the French government found itself debunking accusations that it <a href="https://www.lejdd.fr/Societe/Education/education-sexuelle-a-lecole-ce-que-vont-apprendre-nos-enfants-3748817">wanted to teach toddlers how to masturbate</a>.</p>
<p>But if French schools contribute to the national confusion about the difference between flirting and harassment, they also need to do more to develop students’ ability to think critically about gender roles as they are <a href="http://www.lecrips-idf.net/professionnels/dossier-thematique/egalite-filles-garcons/influence-medias.htm">conveyed by the media</a>, film, television and advertising.</p>
<p>Critically, younger children must be taught about consent, which will help them distinguish between seduction and aggression. France’s anti-#MeToo women wanted to protect the “<a href="https://www.worldcrunch.com/opinion-analysis/full-translation-of-french-anti-metoo-manifesto-signed-by-catherine-deneuve">freedom to bother</a>,” which they say is necessary to give women the “freedom to say ‘no.’” </p>
<p>But children need to know, too, that they <a href="https://www.liberation.fr/debats/2018/01/14/de-la-liberte-d-importuner-au-droit-de-ne-pas-l-etre_1622400">have the right to not be bothered</a>.</p>
<p class="fine-print"><em><span>Bérengère Stassin does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than her research institution.</span></em></p>France’s #MeToo backlash has revealed just how deeply rooted sexism is in the country. Disguised as flirtation or child’s play, sexual harassment begins as early as elementary school.Bérengère Stassin, Associate Professor in Information and Communication Sciences, member of CREM, Université de LorraineLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1102722019-02-03T19:21:06Z2019-02-03T19:21:06ZOnline trolling used to be funny, but now the term refers to something far more sinister<figure><img src="https://images.theconversation.com/files/256550/original/file-20190131-108351-w5ujdy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The definition of "trolling" has changed a lot over the last 15 years.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/download/confirm/722420158?size=huge_jpg">Shutterstock</a></span></figcaption></figure><p>It seems like internet trolling happens everywhere online these days – and it’s showing no signs of slowing down. </p>
<p>This week, the British press and Kensington Palace officials have <a href="https://www.abc.net.au/news/2019-01-30/british-press-urges-end-to-abuse-of-duchesses-meghan-and-kate/10760822">called for</a> an end to the merciless online trolling of Duchesses Kate Middleton and Meghan Markle, which reportedly includes racist and sexist content, and even threats.</p>
<p>But what exactly is internet trolling? How do trolls “behave”? Do they intend to harm, or amuse?</p>
<p>To find out how people define trolling, <a href="https://home.liebertpub.com/publications/cyberpsychology-behavior-brand-social-networking/10/overview">we conducted a survey</a> with 379 participants. The results suggest there is a difference in the way the media, the research community and the general public understand trolling. </p>
<p>If we want to reduce abusive online behaviour, let’s start by getting the definition right.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-empathy-can-make-or-break-a-troll-80680">How empathy can make or break a troll</a>
</strong>
</em>
</p>
<hr>
<h2>Which of these cases is trolling?</h2>
<p>Consider the comments that appear in the image below:</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=734&fit=crop&dpr=1 600w, https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=734&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=734&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=922&fit=crop&dpr=1 754w, https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=922&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/256236/original/file-20190130-108358-hp05wo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=922&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Screenshot</span></span>
</figcaption>
</figure>
<p>Without providing any definitions, we asked if this was an example of internet trolling. Of participants, 44% said yes, 41% said no and 15% were unsure.</p>
<p>Now consider this next image:</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=394&fit=crop&dpr=1 600w, https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=394&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=394&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=495&fit=crop&dpr=1 754w, https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=495&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/256549/original/file-20190131-112389-c3uu76.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=495&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Screenshot</span></span>
</figcaption>
</figure>
<p>Of participants, 69% said this was an example of internet trolling, 16% said no, and 15% were unsure.</p>
<p>These two images depict very different online behaviour. The first image depicts mischievous and comical behaviour, where the author perhaps intended to amuse the audience. The second image depicts malicious and antisocial behaviour, where the author may have intended to cause harm.</p>
<p>There was more consensus among participants that the second image depicted trolling. That aligns with a more common definition of internet trolling <a href="https://scottbarrykaufman.com/wp-content/uploads/2014/02/trolls-just-want-to-have-fun.pdf">as destructive and disruptive online behaviour</a> that causes harm to others. </p>
<p>But this definition has only really evolved in more recent years. Previously, internet trolling was defined very differently.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/we-researched-russian-trolls-and-figured-out-exactly-how-they-neutralise-certain-news-100994">We researched Russian trolls and figured out exactly how they neutralise certain news</a>
</strong>
</em>
</p>
<hr>
<h2>A shifting definition</h2>
<p>In 2002, one of the earliest definitions of internet “trolling” <a href="https://www.tandfonline.com/doi/pdf/10.1080/01972240290108186">described the behaviour as</a>: </p>
<blockquote>
<p>luring others online (commonly on discussion forums) into pointless and time-consuming activities. </p>
</blockquote>
<p>Trolling often started with a message that was intentionally incorrect, but not overly controversial. By contrast, internet “flaming” <a href="https://www.sciencedirect.com/science/article/pii/S0167923602001902">described online behaviour with hostile intentions</a>, characterised by profanity, obscenity, and insults that inflict harm on a person or an organisation. </p>
<p>So modern-day definitions of internet trolling seem more consistent with the definition of flaming than with the initial definition of trolling. </p>
<p>To highlight this intention to amuse compared to the intention to harm, communication researcher <a href="https://www.researchgate.net/profile/Jonathan_Bishop4/publication/259229799_Representations_of_'trolls'_in_mass_media_communication_A_review_of_media-texts_and_moral_panics_relating_to_'internet_trolling'/links/0046352a85d257a299000000/Representations-of-trolls-in-mass-media-communication-A-review-of-media-texts-and-moral-panics-relating-to-internet-trolling.pdf">Jonathan Bishop suggested</a> we differentiate between “kudos trolling” to describe trolling for mutual enjoyment and entertainment, and “flame trolling” to describe trolling that is abusive and not intended to be humorous. </p>
<h2>How people in our study defined trolling</h2>
<p>In our study, which has been accepted for publication in the journal <a href="https://home.liebertpub.com/publications/cyberpsychology-behavior-brand-social-networking/10/overview">Cyberpsychology, Behavior, and Social Networking</a>, we recruited 379 participants (60% women) to complete an anonymous online questionnaire in which they provided short-answer responses to the following questions:</p>
<ul>
<li><p>how do you define internet trolling? </p></li>
<li><p>what kind of behaviours constitute internet trolling?</p></li>
</ul>
<p>Here are some examples of how participants responded:</p>
<blockquote>
<p>Where an individual online verbally attacks another individual with intention of offending the other (female, 27)</p>
<p>People saying intentionally provocative things on social media with the intent of attacking / causing discomfort or offence (female, 26)</p>
<p>Teasing, bullying, joking or making fun of something, someone or a group (male, 29)</p>
<p>Deliberately commenting on a post to elicit a desired response, or to purely gratify oneself by emotionally manipulating another (male, 35)</p>
</blockquote>
<p>Based on participant responses, we suggest that internet trolling is now more commonly seen as an intentional, malicious online behaviour, rather than a harmless activity for mutual enjoyment. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=556&fit=crop&dpr=1 600w, https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=556&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=556&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=698&fit=crop&dpr=1 754w, https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=698&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/256250/original/file-20190130-108370-9e2xj7.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=698&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A word cloud representing how survey participants described trolling behaviours.</span>
</figcaption>
</figure>
<h2>Researchers use ‘trolling’ as a catch-all</h2>
<p>Clearly there are discrepancies in the definition of internet trolling, and this is a problem.</p>
<p>Research does not differentiate between kudos trolling and flame trolling. Some members of the public might still view trolling as a kudos behaviour. For example, one participant in our study said:</p>
<blockquote>
<p>Depends which definition you mean. The common definition now, especially as used by the media and within academia, is essentially just a synonym to “asshole”. The better, and classic, definition is someone who speaks from outside the shared paradigm of a community in order to disrupt presuppositions and try to trigger critical thought and awareness (male, 41)</p>
</blockquote>
<p>Not only does the definition of trolling differ from researcher to researcher, but there can also be discrepancy between the researcher and the public. </p>
<p>As a term, internet trolling has deviated significantly from its early 2002 definition and become a catch-all for all antisocial online behaviours. The lack of a uniform definition of internet trolling leaves all research on trolling open to validity concerns, which could leave the behaviour largely unchecked.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/our-experiments-taught-us-why-people-troll-72798">Our experiments taught us why people troll</a>
</strong>
</em>
</p>
<hr>
<h2>We need to agree on the terminology</h2>
<p>We propose replacing the catch-all term of trolling with “cyberabuse”.</p>
<p>Cyberbullying, cyberhate and cyberaggression are all different online behaviours with different definitions, but they are often referred to uniformly as “trolling”. </p>
<p>It is time to move away from the term trolling to describe these serious instances of cyberabuse. While it may have been empowering for the public to picture internet “trolls” as ugly creatures living under a bridge, this imagery may have begun to downplay the seriousness of their online behaviour. </p>
<p>Continuing to use the term trolling, a term that initially described a behaviour that was not intended to harm, could have serious consequences for managing and preventing the behaviour.</p>
<p class="fine-print"><em><span>Evita March does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Some people still think “trolling” refers to harmless fun. If we want to reduce abusive online behaviour, let’s start by getting our definitions right.Evita March, Senior Lecturer in Psychology, Federation University AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1009372018-09-09T16:30:29Z2018-09-09T16:30:29ZMaking society civil again<figure><img src="https://images.theconversation.com/files/234885/original/file-20180904-45135-k1kyl1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Eroding civility is not just a U.S. phenomenon. We need to learn how to speak to each other, no matter what our politics. </span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>The United States media has been awash <a href="https://www.vox.com/policy-and-politics/2018/6/25/17500988/sarah-sanders-red-hen-civility">with debates about civility</a> in recent months after a number of officials in Donald Trump’s administration have been heckled and shamed in public places. </p>
<p>Commentators have claimed the cause of incivility stems from everything from <a href="https://www.washingtonpost.com/news/posteverything/wp/2018/06/28/heres-how-political-science-explains-the-gops-obsession-with-civility/?noredirect=on&utm_term=.1de1fcf1cd84">political orientation</a> to <a href="https://www.nytimes.com/2018/06/20/us/politics/trump-language-immigration.html">Donald Trump’s leadership</a> and the way we communicate on social media. The recent <a href="https://www.reuters.com/article/us-usa-mccain-flags/white-house-wobbles-on-us-flag-after-mccain-death-idUSKCN1LC275">White House wavering on flag-lowering protocol following the death of Sen. John McCain</a> has only reinforced the ubiquity of this issue, as did <a href="http://www.chicagotribune.com/news/nationworld/politics/ct-mccain-funeral-dc-20180901-story.html">high-profile speakers calling for a return to civility at his funeral</a>.</p>
<p>But eroding civility is not just a modern American affliction;
<a href="https://www.huffingtonpost.ca/clare-beckton/canada-civility_b_7596622.html">Canada</a>, <a href="https://www.telegraph.co.uk/news/yourview/1562050/How-can-we-combat-the-culture-of-incivility.html">the U.K.</a> and others are not immune. </p>
<p>Respect and civility ultimately reflect our social competency. Their decline can be attributed to a number of factors in our modern world: Abrupt encounters between different beliefs (e.g., through immigration and <a href="https://www.thestar.com/opinion/contributors/thebigdebate/2018/07/17/does-canada-have-a-refugee-crisis-no.html">refugee “crises”</a>), <a href="https://www.cbc.ca/news/indigenous/david-suzuki-foundation-first-nations-water-report-1.4525456">the disbelief and denial that social inequalities still persist</a>, <a href="https://www.theguardian.com/technology/2017/dec/11/facebook-former-executive-ripping-society-apart">social media algorithms that only expose us to beliefs that are similar to our own</a> and the rise of <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5791909/">both real</a> and <a href="https://phys.org/news/2018-08-fake-social-media-derail-booming.html">artificial online trolls</a>. </p>
<h2>The microcosm: Incivility in groups</h2>
<p>Whether intentional or instinctual, <a href="https://link.springer.com/article/10.1007/s11211-008-0067-y">human</a> and <a href="https://books.google.ca/books?hl=en&lr=&id=9m1HzazNOHsC&oi=fnd&pg=PA84&ots=tC0ruRGdpl&sig=wCnn1hiTm__oJ0PesdMdA0xL0i4#v=onepage&q&f=false">non-human</a> animals alike act in a way that ensures equitable exchanges within their group.</p>
<p>We seek balance. If we are treated in a respectful manner, we want to return the favour. If <a href="http://doi.apa.org/journals/apl/92/4/1159.html">we feel slighted, we typically want reprisal</a>. This is the catalyst for <a href="https://www.forbes.com/sites/audreymurrell/2018/07/16/stopping-the-downward-spiral-of-workplace-incivility/#30bb18c354ef">the spiral of incivility.</a></p>
<p>Incivility has become a persistent concern in workplaces around the world (e.g., <a href="https://hbr.org/2013/01/the-price-of-incivility">U.S.</a> and <a href="https://www.ncbi.nlm.nih.gov/pubmed/28302927">Japan</a>). It reflects more general tendencies driven by features of individual psychology in group settings. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/234903/original/file-20180904-45151-cc39wl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/234903/original/file-20180904-45151-cc39wl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=403&fit=crop&dpr=1 600w, https://images.theconversation.com/files/234903/original/file-20180904-45151-cc39wl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=403&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/234903/original/file-20180904-45151-cc39wl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=403&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/234903/original/file-20180904-45151-cc39wl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=507&fit=crop&dpr=1 754w, https://images.theconversation.com/files/234903/original/file-20180904-45151-cc39wl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=507&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/234903/original/file-20180904-45151-cc39wl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=507&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">At work or at home, if we are treated in a respectful manner, we want to return the favour. If we feel slighted, we want reprisal.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>Whether at work, at a restaurant, or at home, our expectations will ultimately depend on the kind of relationship we believe we share with those around us: Communal sharing in a family, equality with a co-worker, deference to a boss or even proportional cost and benefit in a market economy. </p>
<p>All of these expectations reflect <a href="https://www.iep.utm.edu/r-models/">possible models of fair interpersonal exchange that we might reference</a>. Crucially, violating their norms can make us <a href="https://www.tandfonline.com/doi/full/10.1080/1047840X.2012.670782?casa_token=8Stb0QpX-zQAAAAA%3A359aqIJNk81foDdx7CYfA6lfldm5OhQp0z0QJ9tbVkJJvi4Ig_tvVkBIAtCSYVHhlPtZICcUxAyX0w&">feel justified in engaging in verbal and nonverbal aggression</a>.</p>
<p>Rather than being unethical or disrespectful, others simply might not share the same beliefs about what is appropriate in a given context: For example, as children grow older, the expectation of deference to a parent can turn into an expectation of equality — one that is not yet shared by the parent. </p>
<p>Civility requires that we make a concerted effort to understand each other. <a href="http://journals.sagepub.com/doi/abs/10.1207/s15327957pspr0104_5">Despite our confidence in knowing the intentions of others, our accuracy can be quite low</a>.</p>
<h2>Depersonalizing ourselves, others online</h2>
<p>All we truly know of each other are sundry fragments that are hastily gathered in a moment. <a href="https://www.psychologicalscience.org/observer/snap-judgment-science">Social judgments are made fast and furiously</a>. Yet, understanding others is a <a href="https://www.psychologytoday.com/ca/blog/between-cultures/201606/understanding-others">multi-faceted</a> competency that requires <a href="https://www.psychologytoday.com/ca/blog/socioemotional-success/201707/theory-mind-understanding-others-in-social-world">time to develop</a>. </p>
<p>In an online setting, where many social cues are modest or absent, we are left with the written word. <a href="http://journals.sagepub.com.proxy.library.carleton.ca/doi/10.1177/1529100610390861">Without nonverbal cues, discerning their meaning can be a daunting task</a>. Online posts have become <a href="https://www.livescience.com/9695-rorschach-test-discredited-controversial.html">the Rorschach tests of our time</a>: just as ambiguous, and just as inaccurate in predicting behaviour.</p>
<p>Making matters worse, when we feel like we are one of the crowd, we tend to misbehave. <a href="http://psycnet.apa.org/record/1976-20842-001">Anonymity</a>, <a href="http://socialpsychonline.com/2015/12/being-a-good-samaritan-psychology-of-helping/">a lack of time, and stress</a> can reduce helpful behaviour and increase antisocial behaviour. </p>
<p><a href="https://www.ncbi.nlm.nih.gov/pubmed/15257832">In online spaces, we feel disinhibited.</a> Online <a href="https://www.sciencedirect.com/science/article/pii/S0747563210001627">communities</a> and <a href="https://www.sciencedirect.com/science/article/pii/S0191886917300260">dating sites</a> are replete with uncivil behaviour. Rather than living in a community with repercussions, we practice avoidance. Rather than constructively confronting perceived inaccuracies we find in ourselves, we might <a href="https://www.psychologytoday.com/ca/blog/do-the-right-thing/201709/polarization-groups-never-ends-well">run further away from one another and toward the fringes</a>. </p>
<p>In the short run, we might preserve a fragile sense of self as a good and competent individual. In the long run, this isolation only reinforces perceived differences and places us in a bubble.</p>
<h2>Losing contact with our leaders</h2>
<p><a href="https://www.annualreviews.org/doi/abs/10.1146/annurev-psych-010416-044153">Power can alter our behaviour. It can change what people want and how they attain their goals</a>. </p>
<p><a href="https://www.scientificamerican.com/article/the-new-psychology-of-leadership-2007-08/">Leaders believe that they must symbolically represent the group and its values</a>. If those with power feel it’s their duty to adhere to the values of the group, they will. If certain values are deemed irrelevant, they will be ignored: A leader might focus on a group’s finances and neglect its ethics.</p>
<p>Over the long term, leaders can trap themselves if these values are not realistically attainable over the course of their tenure. This <a href="https://www.psychologytoday.com/us/blog/neuronarrative/201009/power-makes-the-hypocrite-bolder-and-smugger?amp=">moral hypocrisy</a> places them in a precarious position. The higher the pedestal, the greater the fall. And people will push.</p>
<p>Wanting a world without ambiguity, followers often resort to rationalizing inconsistencies and can <a href="https://thedecisionlab.com/bias/reactive-devaluation/">dismiss proposals from other groups</a>, something that <a href="http://journals.sagepub.com/doi/abs/10.1177/0022002702046004003">can translate into real-world consequences</a>.</p>
<h2>Choosing the course of history</h2>
<p>History is a willing tutor if we’re prepared to listen with a critical ear. <a href="https://www.theguardian.com/science/2005/aug/25/controversiesinscience">When we come together to fight a common enemy, we can push back empires. When we lose common ground, our societies shatter.</a> </p>
<p>A reading of <a href="https://books.google.ca/books?hl=en&lr=&id=Sb40EosBr90C&oi=fnd&pg=PT11&dq=american+nations+woodard+regional+cultures&ots=l5H4Ko5zkH&sig=RFeXrOTBTIm483twUvof2grexKY#v=onepage&q=american%20nations%20woodard%20regional%20cultures&f=false">the history of North America</a> reflects an uneasy plurality. Whether historically or presently, <a href="https://blogs.scientificamerican.com/literally-psyched/revisiting-the-robbers-cave-the-easy-spontaneity-of-intergroup-conflict/">evidence suggests that tensions can be reduced when faced with common threats</a>. Leaders can and do manipulate this to increase cohesion within the majority. However, there is a price to pay.</p>
<p><a href="https://www.cbc.ca/news/canada/manitoba/apology-to-japanese-canadians-leaves-great-legacy-1.1865829">In the Second World War, Japanese-Canadians paid the price</a>. Now, an increase in hate crimes might suggest <a href="https://globalnews.ca/news/3523535/hate-crimes-canada-muslim/">Canadian Muslims are footing the bill</a>. The <a href="https://www.theguardian.com/politics/2018/may/11/uk-has-seen-brexit-related-growth-in-racism-says-un-representative">U.K.</a> and the <a href="https://www.usatoday.com/story/news/2018/07/17/hate-crimes-up-america-10-largest-cities/776721002/">U.S.</a> have their own variants.</p>
<p>Unless we want to become another failed stratum in the sediment of history like Rome, we must choose our responses wisely. When our barbarians are at the gates, will we be prepared?</p>
<p>The greatest threats are not as simple as identifiable countries or peoples. Instead, our common adversaries are largely self-made. Antibiotic-resistant diseases, climate change, workforces ill-equipped for seismic technological shifts and overly simplified rhetoric imperil us. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/234907/original/file-20180904-45181-1ezyfuf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/234907/original/file-20180904-45181-1ezyfuf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=420&fit=crop&dpr=1 600w, https://images.theconversation.com/files/234907/original/file-20180904-45181-1ezyfuf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=420&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/234907/original/file-20180904-45181-1ezyfuf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=420&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/234907/original/file-20180904-45181-1ezyfuf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=527&fit=crop&dpr=1 754w, https://images.theconversation.com/files/234907/original/file-20180904-45181-1ezyfuf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=527&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/234907/original/file-20180904-45181-1ezyfuf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=527&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Man-made problems like climate change endanger us more than other countries or peoples. In this August 2018 photo, the city of Toronto grapples with major flooding after a prolonged torrential downpour.</span>
<span class="attribution"><span class="source">THE CANADIAN PRESS/Shlomi Amiga</span></span>
</figcaption>
</figure>
<p>The endemic rashness of political discourse can no longer be tolerated. </p>
<p>Civility has a role to play here as we challenge ourselves and others. We must be humble with the limits of our knowledge. <a href="http://www.pewresearch.org/fact-tank/2018/06/18/qa-telling-the-difference-between-factual-and-opinion-statements-in-the-news/">In an age when fact and opinion have become blurred for many</a>, we must approach absolute statements with caution. This requires deliberation and respectful exchange. The more reasoned the arguments we take into consideration, the better off we will be.</p>
<p>Equally important, civility does not imply that all opinions have equal merit. Instead, we must invest time and effort in our response and avoid being stuck between reactive gut feelings and indifference. We must reflect on how we will be judged and remembered when the dust of history settles upon us.</p>
<p>In an irrevocably globalized world, civility is likely more important now than it has ever been.</p>
<p class="fine-print"><em><span>Jordan Richard Schoenherr does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Eroding civility is not just an American phenomenon; it’s global. But it’s time for a return to civility as we reflect on how we will be judged and remembered when the dust of history settles upon us.Jordan Richard Schoenherr, Adjunct Research Professor, Department of Psychology, Carleton UniversityLicensed as Creative Commons – attribution, no derivatives.