
AI can help silence the trolls, but tackling online abuse is ultimately a human choice

“Apparently you’ve got some kind of a troll problem?” Sony Pictures/Columbia Pictures

Leslie Jones, the actress and comedian who plays Patty Tolan in the all-female reboot of Ghostbusters, has become the latest celebrity on Twitter to be subjected to torrents of abuse. She is yet another in a long list of people, overwhelmingly women, who have been abused online. Yet again attention has turned to what steps Twitter is taking to tackle abusive trolling.

Sifting abusive posts from the roughly 500m sent per day on Twitter alone is quite a task. The hope is that the same computers and software that allow us to communicate, share information and receive recommendations based on our preferences will also help us keep the trolls at bay. Over the past 20 years, increasingly sophisticated algorithms have been developed to identify unwanted and abusive messages. Originally developed to combat spam email, the same techniques are now being deployed against abusive messages.

These pattern-detecting algorithms are often referred to as “machine learning” because the software is fed examples from which it learns to identify similar messages. During a training phase the machine learns which features of a text are undesirable (however that is defined) and which are acceptable; any piece of text will typically contain a mixture of such features. Once trained, the machine can form a judgement when shown a new piece of text, based on the accumulated evidence from the examples it has seen before.
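The train-then-judge process described above can be sketched with a toy Naive Bayes classifier, one of the standard approaches for this kind of task. This is an illustrative example, not any platform's real system: the class labels, example messages and smoothing choices are all assumptions made for the sketch.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Toy bag-of-words Naive Bayes text classifier."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.class_counts = Counter()            # label -> number of examples
        self.vocab = set()

    def train(self, text, label):
        # Training phase: count which words appear under which label.
        words = text.lower().split()
        self.word_counts[label].update(words)
        self.class_counts[label] += 1
        self.vocab.update(words)

    def classify(self, text):
        # Judgement phase: accumulate log-evidence for each label,
        # with add-one smoothing so unseen words don't zero out a label.
        total = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            score = math.log(self.class_counts[label] / total)
            n = sum(self.word_counts[label].values())
            for word in text.lower().split():
                p = (self.word_counts[label][word] + 1) / (n + len(self.vocab))
                score += math.log(p)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

nb = NaiveBayes()
nb.train("you are an idiot", "abusive")
nb.train("go away idiot troll", "abusive")
nb.train("great show last night", "ok")
nb.train("loved the new film", "ok")
print(nb.classify("what an idiot"))  # prints "abusive"
```

In a real deployment the training set would contain many thousands of human-labelled messages, and the features would go well beyond single words.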

These algorithms can be used to filter unwanted messages before we see them, and before they cause hurt or alarm. The hard part is defining the underlying learning model, deciding which features to identify, and determining how that evidence is to be accumulated and scored. How tricky this can be was demonstrated by Microsoft, whose Twitter chatbot went rogue after pranksters subverted the machine learning process by teaching it to swear and make abusive remarks.
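The interception step itself is conceptually simple once a classifier exists: score each message and hold back anything over a threshold before it is displayed. A minimal sketch, in which `toy_score` is an entirely hypothetical stand-in for a trained model's abuse probability and the 0.8 threshold is an arbitrary illustrative choice:

```python
from typing import Callable, List

def filter_feed(messages: List[str],
                score_fn: Callable[[str], float],
                threshold: float = 0.8) -> List[str]:
    """Return only the messages safe to display; hold back the rest."""
    visible = []
    for msg in messages:
        if score_fn(msg) < threshold:
            visible.append(msg)   # below threshold: show it
        # else: hide from the recipient, or queue for human review
    return visible

# Stand-in scorer: flags messages containing a word from a tiny blocklist.
BLOCKLIST = {"idiot", "loser"}
def toy_score(msg: str) -> float:
    return 1.0 if BLOCKLIST & set(msg.lower().split()) else 0.0

feed = ["great show!", "you idiot", "see you tomorrow"]
print(filter_feed(feed, toy_score))  # ['great show!', 'see you tomorrow']
```

The design choice of where to set the threshold is itself a human judgement: too low and legitimate speech is suppressed, too high and abuse gets through.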

AI assistance

There are a number of approaches available to developers. This is a very active research area, as machine-learning algorithms can be applied to many kinds of task, from helping computers make sense of what they see, to guiding robotic movement, to analysing text in bulk.

Commonly used approaches include Naive Bayes classifiers, logistic regression classifiers, perceptrons, support vector machines, and various kinds of multi-layered neural networks (MLNNs). MLNNs are causing the most excitement at present, not least because they roughly simulate the way real brains might work, but also because successfully trained neural nets are currently the best performers on many of these tasks. However, each of these approaches has its niche, depending on the precise task, the amount of training data available for learning, the time available for training, and the hardware requirements.
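Of the approaches listed above, the perceptron is the simplest to show in full: weights move toward words seen in abusive examples and away from words seen in acceptable ones. Again this is an illustrative sketch over toy data of my own invention, not a production system.

```python
from collections import defaultdict

def train_perceptron(examples, epochs=10):
    """examples: (text, label) pairs with label +1 (abusive) or -1 (acceptable)."""
    weights = defaultdict(float)
    bias = 0.0
    for _ in range(epochs):
        for text, label in examples:
            words = text.lower().split()
            activation = bias + sum(weights[w] for w in words)
            if activation * label <= 0:   # misclassified: nudge the weights
                for w in words:
                    weights[w] += label
                bias += label
    return weights, bias

def predict(weights, bias, text):
    score = bias + sum(weights[w] for w in text.lower().split())
    return 1 if score > 0 else -1

data = [("you idiot", 1), ("total loser", 1),
        ("nice goal", -1), ("see you soon", -1)]
w, b = train_perceptron(data)
print(predict(w, b, "what an idiot"))  # prints 1 (flagged as abusive)
```

The perceptron trains quickly and is easy to inspect, which is partly why the choice between these methods comes down to the trade-offs of data, time and hardware mentioned above rather than one being universally best.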

Banning is the ultimate end for persistent trolls. blakeburris, CC BY-SA

When it comes to intercepting trolls, machines can currently do a reasonably good job of spotting the obvious. They can spot typical patterns of insult. They might even do a reasonable job of differentiating between, say, a message containing an out-and-out racial insult and a message that employs racial slur words in an (arguably) benign manner: many short messages use slur words not as insults but as slang among members of a particular social group.

But language use is subtle, and the current generation of systems struggles to identify sarcasm, reading-between-the-lines implication, or other linguistic sleight of hand. The problem is that machines are simple-minded, reasoning only over the form of words rather than their underlying meanings (however those are defined).

Over to you, human

Nonetheless, given the current state of the art, why is there still so much blatant abuse online? Ultimately, this is down to the choices we make rather than the limitations of technology. While the virtual spaces we inhabit feel public, all are in fact someone else’s private virtual real estate. The computer servers on which websites reside might be anywhere in the world, with what is allowed defined by laws that differ from country to country. Within those quite broad limits, the owner may define further restrictions. Ultimately, some site operators feel the benefits of free speech outweigh the benefits of a life free from abuse.

Of course, to remain popular the owners of these services need to be mindful of their users’ views, but in many domains only the tolerant and thick-skinned need apply. The racist remarks, harassment and abuse directed towards Leslie Jones demonstrate how social media can degenerate into an echo chamber of hateful outbursts.

Ultimately, we need to make choices about how we behave to each other online. In the meantime, technologists will continue to develop algorithms and gradually improve machines’ ability to analyse, filter, and protect us from some of the worst aspects of ourselves.
