Populists like Donald Trump have used Twitter to enormous political advantage. But the popular social media platform is failing to bring to heel the bots and fake accounts that can interfere, and have interfered, with democracy. (AP Photo/J. David Ake)

Twitter’s struggle to thwart threats to democracy

Digital media have enabled new tactics of foreign influence to sway voters and magnify the messages of populists such as Donald Trump. These tactics undermine democratic processes and carry important new consequences for electoral politics and foreign relations.

By building networks of fake and automated accounts on Twitter, governments and businesses can artificially inflate the popularity and influence of individual accounts and create trends using hashtags. Put differently, malicious users can employ Twitter to spread and amplify disinformation to its real users.

Governments and social media companies are racing to keep up with events, and companies like Facebook and Twitter are adapting their internal policies to forestall more onerous government regulations.




Will this be enough to protect democracy and free speech?

Specifically, can Twitter limit its use as a vehicle for foreign influence by eliminating bots and fake accounts from its platform, including those attached to one of the world's most polarizing political figures, the current U.S. president?

Twitter’s policy problem

Twitter has been forced into the spotlight by Trump's personal use of the platform and its prominence in digital diplomacy, by its exploitation by the social media marketer Devumi for fraudulent marketing, by its citation in Special Counsel Robert Mueller's indictments of Russian conspirators, and by its central place in debates over new regulations for social media.

In early 2017, the U.S. intelligence community released a report concluding the Russian government executed an influence campaign to sway American voters in Trump’s favour. Not only did pro-Trump bots overwhelm pro-Clinton ones on Twitter, but they were effective in reaching pro-Trump users throughout the campaign.

This bolstered “echo chambers,” and ultimately threatened the democratic process.

Prior to facing congressional scrutiny in fall 2017, Twitter published an internal review of the 2016 election, then promised a series of platform changes covering advertising, safety and "spam" elimination. It downplayed its role in the 2016 election outcome by insisting the volume of Russian disinformation was "comparatively small" and that bots represented less than five per cent of its monthly active users.

Lack of enforcement

If Twitter is unable or unwilling to follow through on its promise to address this issue, it isn’t because of a lack of policies — it’s due to ineffective design and enforcement.

First, its impersonation policy places the burden of proof on existing users to report fake accounts rather than requiring new users to prove their authenticity from the outset. Twitter allows users to set up accounts for “parody, commentary and fan” purposes, but forbids fake engagements, including the sale of followers or likes.

Second, Twitter's ad policy prohibits hateful and inappropriate content, and requires political campaigning to comply with applicable laws. Despite promises to the contrary, its ad policy hasn't been updated since 2015, according to its update log, though the company has just announced new measures to regulate political ads.

In January 2018, Twitter published an update. It identified 3,814 accounts operated by the Internet Research Agency (IRA), an outfit linked to the Russian government. These accounts tweeted 175,000 times, reaching 1.4 million Americans during the election.

Twitter also cited improvements in its Information Quality initiative to remove malicious users abusing its services.

But a week later, the New York Times revealed Devumi sold more than 200 million followers to 39,000 clients using roughly 3.5 million fake accounts. Shortly after, Mueller indicted Russia’s IRA and its conspirators for executing an influence campaign, citing the instrumental role of social media, including Twitter, in the 2016 U.S. presidential election.

Twitter has therefore become an unwitting instrument for foreign influence and political propaganda. Not only is the IRA-run network of bots still active, but some of Devumi’s clients are prominent foreign officials, including an editor of China’s state-run news agency and an adviser to Ecuador’s president.

To compound the problem, false information spreads faster than factual information. The Parkland high school shooting illustrates how the potent combination of disinformation and bots can exacerbate partisan tensions and thwart democratic discourse. Not only did fake Russian accounts race to fill the Twittersphere with divisive content following the shooting, but a doctored video circulated in which student survivors of the shooting were seen tearing up the Constitution.

Tweeting his way to the presidency

Major events from 2013 to 2018.

Using our timeline above of major events credited with attracting bot activity, we tracked the followers and engagement of the three Twitter accounts attached to Trump. The @realDonaldTrump account should be a particular attraction for bots and fake accounts given its size, its volatility, and the expressed interest of government-affiliated Russians in Trump's political influence. The effects of Twitter's platform changes aimed at eliminating fake and automated followers should therefore be especially noticeable here.
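To give a sense of how such a series can be assembled, here is a minimal sketch of the data-collection step: it lists archived captures of a Twitter profile page via the Internet Archive's public CDX index, from which follower counts can then be read. The CDX endpoint and its parameters are real, but the profile URL, date range and the idea of parsing follower counts from each capture are illustrative assumptions, not a description of our exact pipeline.

```python
# Minimal sketch: list Wayback Machine captures of a Twitter profile so that
# follower counts can be read off each dated snapshot. The CDX API endpoint
# and parameters are the Internet Archive's public index; the profile URL and
# date range below are illustrative assumptions.
import requests

CDX_API = "http://web.archive.org/cdx/search/cdx"

def list_snapshots(profile_url: str, start: str, end: str) -> list[str]:
    """Return Wayback capture timestamps (YYYYMMDDhhmmss) for a URL."""
    params = {
        "url": profile_url,
        "from": start,              # e.g. "20150101"
        "to": end,                  # e.g. "20180401"
        "output": "json",
        "filter": "statuscode:200",
        "collapse": "timestamp:8",  # at most one capture per day
    }
    rows = requests.get(CDX_API, params=params, timeout=30).json()
    # The first row is the header; the timestamp is the second column.
    return [row[1] for row in rows[1:]]

if __name__ == "__main__":
    for ts in list_snapshots("twitter.com/realDonaldTrump", "20150101", "20180401"):
        print(f"https://web.archive.org/web/{ts}/https://twitter.com/realDonaldTrump")
```

Each printed URL points at a dated snapshot of the profile; follower counts read from those snapshots, plotted against the timeline above, yield a series like the one shown below.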

Trump's Twitter following from the primaries to the presidency, per account. Source: Twitter; Internet Archive's Wayback Machine

It's clear that Twitter's platform changes (an expanded definition of "spam" in November 2017 and enforcement of new policies on Dec. 18, 2017) have not had the expected effects. With third-party estimates suggesting that fake accounts make up about 15 per cent of Twitter's active users, and that up to 30 per cent of @realDonaldTrump's followers are fake, we would have expected to see a noticeable drop in Trump's followers and engagement after the platform changes.
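As a rough illustration of why a drop was expected, the back-of-the-envelope arithmetic below applies the 30 per cent upper-bound estimate to a hypothetical follower count; the count itself is a placeholder, not a figure from our data.

```python
# Back-of-the-envelope check of the expected effect of a fake-account purge,
# using the third-party upper-bound estimate cited above. The follower count
# is a hypothetical placeholder, not a measured figure from this article.
followers_before_purge = 47_000_000  # hypothetical count just before Dec. 18, 2017
estimated_fake_share = 0.30          # upper-bound estimate of fake @realDonaldTrump followers

expected_after_purge = followers_before_purge * (1 - estimated_fake_share)
print(f"Expected count if all estimated fakes were removed: {expected_after_purge:,.0f}")
# A purge on that scale would appear as a sharp dip in the follower time series;
# the observed series instead rises steadily from 2015 on.
```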

Instead, his followers and engagement rose steadily from 2015 on (as did those of the @WhiteHouse and @POTUS accounts, although much more slowly). This suggests one of three possibilities:

  1. The platform changes were a publicity move rather than a substantive change of practice;

  2. Despite its best intentions and efforts, the technical task of removing and blocking bots is more difficult than anticipated; or

  3. Bots are quick, adaptable and able to keep pace with any attempts to eliminate them.

What next?

Twitter's unsuccessful battle against bots and fake accounts demonstrates the problem of corporate self-regulation and suggests the need for coordinated, and likely coercive, government intervention. Companies have strong incentives to pass vague policies that provide flexibility and avoid undercutting their economic model. Because enforcement is in the hands of the corporations themselves, policies that are open to interpretation protect them from pressure to enact costly measures.

While the initial political effects of social media platforms may have been unintended, the consequences have become stark and undisputed. The upcoming mid-term elections in the U.S. in November, new data privacy rules in the European Union, and proactive measures to prevent political interference in the Irish abortion referendum and Indian elections all call attention to the question of whether social media corporations can effectively self-regulate.

The initial evidence presented here reinforces these concerns.

Privately governed social media platforms, integral to many individuals’ daily lives, have yet to be held accountable for past political interference in democratic processes. Individual companies themselves have limited incentives to enact strong measures to protect democratic processes in the future, as this would disadvantage them in an incredibly competitive and lucrative economic market.

The remaining option is government regulation, which could occur at the national or international level.

Given states’ incentives to keep regulation lax to encourage corporations to remain in their jurisdictions, the only option to prevent malicious manipulation of social media may be international cooperation for a new global information communication technology regime.
