
Social media helps reveal people’s racist views – so why don’t tech firms do more to stop hate speech?

This article contains examples of racist, Islamophobic and threatening language

Twitter has finally permanently removed right-wing commentator Katie Hopkins from its platform for violating its “hateful conduct” policy. Many would ask why it took so long for Twitter to ban someone with such a long record of offensive comments.

Yet for every right-winger like Hopkins, there are many more people on social media who don’t command such a large following and might be seen in some respects as ordinary people, but who are in fact just as dangerous. They may not share the motivation of the far right, but they still express and incite racial and religious hatred, often through social creativity and online manipulation.

As Black Lives Matter continues to draw attention to racism – and trigger pushback from people using social media to express sentiments against people of colour – it’s time internet companies did more to tackle all forms of bigotry.

A few years ago, I conducted research on online Islamophobia following the 2013 Woolwich terror attack, identifying eight types of offender on Twitter who could be classed as racist. Most were not members of a far-right group. They included builders, plumbers, teachers and even local councillors. But many used the cover of social media to spread their own conspiracy theories and an “us and them” narrative.

Some people who fall into these categories still make very explicitly bigoted and even threatening comments. They can be people like Rhodenne Chand, a man of Indian origin who wasn’t a member of any far-right group but was jailed for posting a series of Islamophobic tweets after the 2017 Manchester Arena attack. These included the claim he wanted to “slit a Muslim throat”.

Meanwhile, rugby-playing student Liam Stacey was jailed for making racist tweets about footballer Fabrice Muamba. Both these cases show that you don’t have to be a far-right neo-Nazi with a hatred for all things multicultural to make bigoted and indeed criminal statements. You can simply be someone who buys into the racist views and fake news spread on social media.

Joining in

You also don’t need to be this obviously racist to enact or encourage prejudiced behaviour online. My research showed some people simply join in with conversations targeting vulnerable figures. Others post messages that don’t say anything specifically racist but that they know will inflame racial tensions.

For example, I encountered a post asking: “What is your typical British breakfast?”. Out of context it seems harmless, yet it led to a spiral of hateful comments about Muslims:

For every sausage eaten or rasher of bacon we should chop of a Muslims head [sic].

Muslims are not human.

One day we will get you scum out.

Muslim men are pigs … I am all for annihilation of all Muslims.

In this way, social media acts as an amplifying echo chamber for such hateful rhetoric and racist views. It makes the way some people imagine the world seem more real. And it reinforces how they see the internet as a place where it’s acceptable to post comments with racially motivated language, often with the caveat that they are not racist but simply hate an ideology.

This can be seen as a form of social creativity where people shape their online behaviour to try to position their in-group (a social group with which they identify) as dominant in society. Another phrase I would use to describe them is the “virtual cyber mob”.

You don’t have to be a stereotypical neo-Nazi to make bigoted comments online. fizkes/Shutterstock

As I’ve continued to research social media over the years, I’ve seen how such behaviour has become normalised even as its focus has changed. In 2019, I led an independent government-commissioned research project to see how people were using social platforms to spread racist views. This time we noted many posts centred around the media, fake news and conspiracy theories.

As part of the study, we collected hundreds of tweets posted in response to the 2019 terrorist attack against a mosque in Christchurch, New Zealand. Many portrayed it in terms of media bias against victims of other attacks. For example:

A few dead muslims compared to millions of slaughtered innocents at the hands of islamic barbarians. #islamisevil #NewZealandTerroristAttack [sic]

Let us not forget the thousands upon thousands of victims killed by the real ‘terrorists’, propagating the Islamic ideology. #AntiIslamic #IslamIsEvil #EndIslam #Muslims

It’s important to recognise that these comments on social media reflect wider attitudes that are endemic in the offline world. Social media can appear to act as a megaphone for racists, but these opinions are much more mainstream than you think. As a society we need to grapple with how these ideas have become normalised, and challenge and expose them.

Social media companies including Facebook, Twitter and now TikTok have taken active steps to block and remove those people clearly linked with the far right. But this is only a starting point. More needs to be done to identify other individuals who are less obviously spreading hatred, often under the protection of anonymity. Only then can we effectively change attitudes and reduce social media’s significant capacity for harm.
