UK election 2019: after fake Keir Starmer clip, how much of a problem are doctored videos?

Keir Starmer was recently made to look stupid in a video edited by the Conservative party. GMB

The Conservative party created a furore when it released an edited video of Labour MP Keir Starmer looking lost for words when discussing his party’s Brexit policy on ITV’s Good Morning Britain with Piers Morgan – when in fact he wasn’t lost for words in the real interview at all. BBC broadcaster Andrew Neil has also been chastised for retweeting a doctored video of the SNP’s Ian Blackford, showing him flustered over the SNP’s record on Scottish health issues.

James Cleverly, the Conservative party chairman, defended the Starmer video release, claiming it was simply edited for brevity, for easy sharing on social media and because, in his view, Starmer did not answer the question posed. So what’s the fuss? Isn’t this just the normal back and forth of electoral politics?

On the one hand, one might argue that the edited version simply represents the opponent in their “true light”. They might actually have responded, but their answer was so vague and meaningless that they might as well have said nothing.

American comedian Stephen Colbert coined the term “truthiness” in his political satire programme, The Colbert Report. Truthiness is a feeling that comes from the gut, rather than from the facts. Might Starmer’s manufactured silence simply represent truthiness? Are the Conservatives telling people, particularly their would-be voters, what they feel and want to hear – that Labour’s policy, in their view, is bland and incoherent?

The fact is that negative advertising like this is often not perceived as negative when viewed by a party’s own supporters. They display what psychologists call “confirmation bias” and motivated reasoning. Because they don’t like what Starmer stands for, they are quite happy to believe he doesn’t know what he’s talking about – whether this video represents the truth or not. This kind of campaign communication is about getting existing supporters fired up and out to vote, rather than trying to change anyone’s mind.

What are the limits?

But the issue is: where does the doctoring of videos in electoral contests stop? What are the limits?

A more malicious problem is “deepfake” technology, where synthetic video depicts events that never happened, at a production quality that makes the fakery difficult to detect. Up until now, creating deepfakes has mostly been the preserve of cyberbullies and pornographers. But the political dimension was recently highlighted in the BBC TV programme, The Capture, in which the intelligence services frame a soldier for a murder he didn’t commit so that they can continue using deepfake technology to frame and pursue suspected terrorists.

The states of California and Texas are so worried about deepfake technology and its potential influence on elections that they have rushed through legislation making it illegal to disseminate a malevolent deepfake about a politician in the run-up to an election.

It’s no surprise they are concerned. If videos can be made showing politicians saying and doing things they have not done – having extramarital affairs, using racist language or taking drugs – then deepfakes can create public panic and moral outrage, and thereby damage a politician’s chances in an election.

Deepfake technology has the potential to shift so-called opposition research – where political parties dig up the dirt on their opponents, usually using their own statements against them – from a reactive, opportunistic approach to a proactive one where they simply make things up. Why wait for a politician to do something wrong when you can pay someone to doctor a video about them? Cue mayhem in their campaign.

What’s even more worrying is that this kind of technology might not just be used by domestic politicians against each other but might also be used by state adversaries or terrorist groups against legitimate politicians. What’s to stop Russia from using it to discredit politicians in Ukrainian elections, for example? Fake news shifts from a means to sow distrust in a society into a fully fledged weapon that could bring about regime change.

So there is a wider principle at stake – but what do we do about it? After all, political advertising does not have to be legal, decent, honest and truthful in the way that commercial advertising does. It’s not regulated by a body like the Advertising Standards Authority. But, in a world of deepfakes and truthiness, perhaps it is time that political advertising was regulated. The only problem will be getting political parties to agree to it.
