We should help protect those who protect us from abuse online. Shutterstock/hitmanphoto

‘Haters gonna hate’ is no consolation for online moderators

When trolls strike in website comment sections and across social media, we tend to look to curtail the perpetrators and help their targets. But what of the moderators – the often nameless and invisible people caught in the middle trying to police the flow of abusive, and often very offensive, material?

Unlike in professions such as forensic investigation, there currently appears to be no research into the impact of all this material on moderators. It’s time there was.

As someone who once enjoyed hate-reading comment sections, I thought I was prepared when, in 2011, I started moderating comments on social media at the ABC.

I was so wrong. My expected post-outrage glow was instead replaced with a desire to shower in scalding water – a lot.

This is largely because of the following:

  1. you have to read all the comments – even the ones no-one else sees because they’re so bad you’ve removed them
  2. you have to do that until it’s time to go home
  3. you have to do that every time you go to work.

As a moderator I’ve dealt with the following: rape threats, racial slurs, hate speech against people who identify as LGBTI or are non-Christian or immigrants or refugees, and that old staple – misogyny that makes the Middle Ages look enlightened.

I’ve also been abused for not taking comments down fast enough and for deleting or hiding them. I’ve been called a left-wing ideologue, a right-wing apparatchik and a “f—ing moron”, and told to (insert sexual act of choice here).

Is it possible for moderators to simply ignore the abuse as the ‘Haters gonna hate’ meme advises? Flickr/v i p e z, CC BY-NC-ND

But it’s not like it was personal – “haters gonna hate” as the meme goes – and I should probably just get over it, right?

The moderator toll

It’s very easy to think that trolling is only a problem for those who are specifically targeted. We rightly question the impact on victims’ mental health, as in the case of Charlotte Dawson earlier this year. We talk about the personality traits of trolls and offer advice on the best ways to deal with them.

As recent articles on sites such as Salon and Wired note, there is a toll on moderators, and it’s often high.

In an open letter to parent company Gawker Media after months of inaction on their “rape gif problem”, staffers from feminist site Jezebel explicitly noted:

In refusing to address the problem, Gawker’s leadership is prioritizing theoretical anonymous tipsters over a very real and immediate threat to the mental health of Jezebel’s staff […]

The problem is getting worse as organised groups targeting specific content get involved. When that occurs, the constant river of comments can become a flash flood.

Earlier this year, for example, a post (now deleted) on the ABC News Facebook wall was targeted by an anti-mosque group. Within minutes of their call to arms the comments were flooded with hate speech, much of it cut-and-pasted.

It was almost impossible to keep up with the flow of horribly racist, religiously intolerant rants in the comments. It was almost a week before things settled down.

For all the moderators involved it was exhausting and dispiriting.

Recently we’ve seen anti-Muslim groups heavily trolling companies that make halal food. There’s been similar targeting of posts on Gamergate, and it’s an expected feature of articles on feminist issues, climate change or asylum seekers.

What’s to be done?

Most of the suggested solutions are aimed at trying to end trolling and incivility. Admirable though this is, it’s doomed to failure.

Pandora’s box is well and truly open on this one, and that means we need to think damage control.

Tweaking the algorithms that automatically block offensive content, where you can, is a potential solution – but algorithms can be gamed, as the sketch below shows. In the end, as a recent article in Mashable points out:

[…] new rules don’t necessarily mean moderators will see less awful content.
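To make the gaming point concrete, here is a minimal sketch of the kind of keyword filter many comment systems build on. The blocklist, substitution table and function names are invented for illustration, not any platform’s actual code, but the sketch shows why tweaks only go so far: each normalisation step catches one evasion and invites the next.

```python
import re
import unicodedata

# Illustrative blocklist -- real systems use large, frequently updated
# lists plus machine-learning classifiers.
BLOCKLIST = {"badword"}

# Common character substitutions used to dodge naive filters,
# e.g. "b@dw0rd" for "badword".
SUBSTITUTIONS = str.maketrans({"@": "a", "0": "o", "1": "i", "3": "e", "$": "s"})

def normalise(comment: str) -> str:
    """Reduce a comment to a canonical form before matching."""
    text = unicodedata.normalize("NFKD", comment).lower()
    text = text.translate(SUBSTITUTIONS)
    # Collapse repeated letters ("baaadword" -> "badword") -- a crude
    # counter to padding tricks that also mangles legitimate words.
    return re.sub(r"(.)\1+", r"\1", text)

def is_blocked(comment: str) -> bool:
    """True if any normalised word appears on the blocklist."""
    words = re.findall(r"[a-z]+", normalise(comment))
    return any(word in BLOCKLIST for word in words)

print(is_blocked("b@dw0rd"))        # True -- substitution is caught
print(is_blocked("b a d w o r d"))  # False -- spacing defeats the filter
```

Every rule added here can be routed around with deliberate spacing, zero-width characters or text rendered as images – which is why new rules don’t necessarily mean moderators see less awful content.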

Steering clear of posting troll-bait items, while tempting, is self-censorship and a very slippery slope indeed. We could get rid of comments on those articles, but the jury’s still out on that tactic.

I’m sceptical about how well ending anonymity will work. That assumes people don’t do things like make fake Facebook profiles – I’m looking at you, United States Drug Enforcement Administration.

We need research into the potential psychological impacts of moderation on those who do it. It’s very difficult to come up with effective strategies without it.

Platforms such as Facebook and, particularly, Twitter need to be better about responding to and acting on concerns when they are raised.

Employers can take a proactive approach: have regular and open conversations with moderators and take their concerns seriously. Offer support and resource them properly. Track the volume of comments – when they occur, on which sites and posts, and how they are enabled – and address spikes appropriately.
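What might that tracking look like in practice? Here is a minimal sketch, assuming a simple in-house log of moderation actions; the record fields and the spike threshold are invented for illustration rather than taken from any real moderation tool.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ModerationAction:
    """One logged moderation decision (illustrative schema)."""
    timestamp: datetime
    site: str      # e.g. "facebook", "website-comments"
    post_id: str
    action: str    # e.g. "removed", "hidden", "approved"

def load_by_post(actions: list[ModerationAction]) -> Counter:
    """Count actions per (site, post, hour) bucket."""
    return Counter(
        (a.site, a.post_id, a.timestamp.strftime("%Y-%m-%d %H:00"))
        for a in actions
    )

def flag_spikes(actions: list[ModerationAction], threshold: int = 50) -> dict:
    """Return buckets where moderation load passed the chosen threshold --
    the 'flash floods' that call for extra staffing or closing comments."""
    return {
        bucket: count
        for bucket, count in load_by_post(actions).items()
        if count >= threshold
    }
```

Data like this would let an employer see which posts draw organised pile-ons, roster extra moderators for predictable flashpoints, and close comments early when a thread passes the threshold.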

Still, for all the bile, there are the diamonds – comments that reaffirm your belief in humankind and leave you floating on air. For those, on behalf of moderators past and present, thank you.


Jennifer Beckett will be answering questions between 10am and 11am AEDT today, December 8. Ask your question in our comments (below) and please take note of our Community standards on comments.
