The elusive “blue tick” on Twitter, once granted to high-profile users like politicians, celebrities and journalists to verify their identity, is now available for purchase. Twitter’s new owner, Elon Musk, has launched Twitter Blue, a monthly subscription service for US$7.99 (£7) that will give users blue tick status and, eventually, perks including fewer advertisements.
Musk claims that this change gives “power to the people”. But my ongoing research with women in the public eye suggests that charging for verified accounts is likely to further jeopardise women’s safety online.
One of the lesser-known benefits of verification, according to the women I’ve spoken to in my research, is access to a range of extra content filters. These allow users to manage their notifications so that they are only alerted when other verified accounts tweet them, and to easily block or mute groups, lists and topics.
These filters act as a shield against abusive and threatening contact from other users. Decisions around awarding verification have long been opaque, with Twitter simply declaring that a verified account must be “authentic, notable and active”.
Verification also allows users to connect with others who have a blue tick, who can provide support during episodes of abuse. Verified status does not stop online abuse, but it does enable people to maintain an online presence without experiencing the psychological trauma of seeing abusive comments on a daily (or even hourly) basis. Studies by the UN and Amnesty International have found that women are more likely than others to be the target of online abuse. Ensuring that targets do not see abuse leaves perpetrators shouting into a virtual void.
So far, there has been no official comment from Twitter about how these safeguards will be applied if account verification is opened to anyone who pays the fee. Musk has indicated that there will be a “secondary tag” on the accounts of public figures, but the details are not yet clear.
Between January and June 2020, I interviewed 50 senior women in politics, policing, journalism and academia about their experiences of online abuse. I also collected over 25 million tweets sent to 200 women with verified accounts. The tweets were frequently abusive, regularly including threats of sexual violence and comments on appearance, and often dismissed women’s contributions to online discussions.
In many fields, having an online presence has become an occupational necessity. The women with a blue tick whom I spoke to made clear that being senior in an organisation frequently marked them out as a target for increased online abuse. At the same time, the more senior a woman was, the more likely she was to have a verified Twitter account.
The women I spoke to were all active users of social media, many with verified accounts on Twitter. They shared how important the blue tick was, not to gain recognition as someone important, but rather to access the protections that come with verified status. In many cases, the filters associated with having a blue tick made the difference between remaining active on Twitter and withdrawing completely.
Power to whom?
Rather than creating a free-for-all in account verification, with safety controls sold to anyone who pays a monthly fee, there needs to be more action from social media companies to tackle online abuse. We already know that abusers often specifically target women of colour, who rely on the platform to publicise their work. As one prominent journalist based in Washington DC told me:
Tech companies need to better protect the vast majority of their users, over that of the privileged and the few. I feel like tech companies, for so long, have been worried about so-called censorship … but usually that so-called censorship they’re talking about is a very small group of extremely vocal and extremely white men.
It seems unlikely that, in the UK at least, Twitter will be able to avoid the thorny issue of regulation. The online safety bill currently promises that Ofcom will take action against firms that fail to remove illegal content, including hate crime. This can include issuing fines of up to 10% of annual worldwide turnover, or blocking their access in the UK. The Sunak government plans to rewrite the bill, leaving some to fear such measures will be watered down.
Ultimately, all of the women I spoke to felt that some kind of government oversight is inevitable if online abuse is to be dealt with in a meaningful way. A senior police officer I spoke to explained:
If social media is going to be used for responsible reasons, and that’s why it was designed, then they do need to exercise more control over the platform. Because it’s not a free platform, it is owned by a company that I think should have more regulated responsibilities for how that platform is used. Like any other profit-making company, they have responsibility for the safety of the people using it.
Let’s see if Musk agrees.