The severity of abuse conducted online during 2017’s general election has brought the issue into sharp focus for politicians, some of whom have urged the prime minister to legislate against Facebook, Twitter and Google to make them liable for content posted on their sites.
Complaints about online harassment in the UK continue to rise. A recent response to a freedom of information request from the BBC revealed that, on average, the police receive 200 reports of online abuse each day – which has been described by Essex Police chief constable, Stephen Kavanagh, as just “the tip of the iceberg”.
A report published recently by the Committee on Standards in Public Life made several recommendations, including bringing in a new law “to shift the liability for illegal content online towards social media companies”.
Silicon Valley giants such as Facebook and Google are currently protected under the European Union’s e-Commerce Directive (2000/31/EC), which states that such companies operate as “information society services”. Put simply, such services are defined as passive hosts rather than active publishers of content.
It means that social networks like Twitter are exempt from prosecution when users post illegal content, such as racist tweets, on their sites. They are only expected to remove such content after other users complain about the post. The committee said in its report:
The EU’s e-Commerce Directive is the reason that the social media companies do not search proactively for illegal content in order to remove it. The notice and takedown model incentivises service providers to avoid actively monitoring or taking preventative measures against illegal content so that they benefit from the hosting exemption.
But even with the supposed protection of the EU’s e-Commerce Directive, social networks haven’t entirely escaped regulation that forces them to act against illegal posts. In June 2017, Germany enacted legislation to fine social media firms with more than 2m registered users if they fail to remove illegal content within 24 hours. Under those measures – which carry penalties of up to £44m – the content needs to be “clearly illegal”, which in the case of online abuse is not always easy to determine.
If the UK decides, post-Brexit, to abandon the e-Commerce Directive, it can develop an entirely new legal framework that could adequately tackle the proliferation of illegal posts on social networks, by holding these firms more directly accountable for comments posted on their sites.
Combating online abuse is a huge challenge
There is no quick fix when it comes to online abuse; indeed, there is probably more than one way to help overcome this problem in our society. Each year, it seems, parliament creates a select committee to examine harassment on social networks, yet it appears no closer to tackling the issue.
Authorities in England and Wales currently use several laws to prosecute those who abuse others online. These include, but are not limited to, the Malicious Communications Act 1988, the Protection from Harassment Act 1997 and the Communications Act 2003; none is without its faults.
Arguably, the law is struggling to keep up with how people communicate online. Specific regulation could tighten the fragmented approach the UK currently takes in controlling the problem of online abuse. Legislation that is more precise than the Communications Act – including, for instance, a working definition of the term “grossly offensive” – could act as a deterrent within society.
Education, education, education
Online abuse can’t be curtailed by regulation alone. Social media dominates much of society today: more than 2 billion people use Facebook on a monthly basis, according to the company’s latest statistics. Given the popularity of social networks, more should be done to educate people about how they behave online. A green paper issued by the government on its internet safety strategy recommended that compulsory lessons be introduced, including advice on how to behave online. Recently, YouTube vlogger Jack Maynard found out the hard way how past tweets can come back to haunt you.
Social networks need to take more responsibility for what is posted on their sites and, sadly, the only way this is likely to happen is through regulation. It’s been well documented that the likes of Facebook and Twitter are slow at removing hateful and illegal content from their sites.
But those who post abusive messages online also need to take responsibility for their actions. It starts with educating young people about social media and the consequences of their actions.
Any legislation enacted will need to take into account our rights to freedom of expression, but there is clearly a difference between voicing an opinion and being abusive.