
Data surveillance is necessary, but we must have transparency

Who’s viewing your metadata? (Image: Adam Fagen)

The latest intelligence documents published by The Guardian tell us that the US National Security Agency (NSA) is harvesting up to 200 million text messages a day under the DISHFIRE programme, and that the UK Government Communications Headquarters (GCHQ) can “mine” this hugely rich source of data. These revelations come courtesy of former intelligence contractor Edward Snowden, who has exposed methods of communications interception to an unprecedented extent.

Despite the wailing and gnashing of teeth that has greeted Snowden’s revelations in the US, his disclosures have prompted the establishment of a presidential review and a speech by Barack Obama on how he proposes to change NSA policy and practice.

In the UK, however, ministers have simply repeated assertions that the system for regulating intelligence collection is “probably the best in the world”. Parliament’s Intelligence and Security Committee (ISC) reported in mid-July that GCHQ had not acted illegally by accessing the NSA PRISM programme, but said that it would look further into the adequacy of current law. Three months later, the ISC broadened its inquiry to include “the appropriate balance between our individual right to privacy and our collective right to security” and invited evidence from interested parties.

No-one has doubted the legitimacy of governments carrying out “targeted” surveillance of those against whom there is some level of suspicion. The shock and dismay have come at the discovery that governments collect seemingly everything.

Who touched my data?

The preventive logic of intelligence work has always led agencies to collect as much information as possible: “you never know what you might need tomorrow”. But achieving total surveillance has only become remotely realistic with the growth of digital communications, which provide the “electronic exhaust” of our lives. The ability to collect, store and search all this information has coincided with an increased demand for intelligence from western governments who, particularly since 9/11, have lived in fear of a “new terrorism”.

Compared with the relatively predictable security threats of the Cold War, this has given rise to great uncertainty as to where, precisely, the next attack might come from. No politician wants to risk being accused of ignoring some potential source of life-saving information – so bulk collection is less a question of necessity than a case of “we can, so we will”.

But is this proportionate? In the TEMPORA programme, GCHQ collects 1-2 billion records a day from transatlantic fibre-optic cables. About 30% of this massive volume of communications is rejected immediately, while 40,000 “selectors” chosen by GCHQ and 31,000 chosen by the NSA (based on key words and phone numbers, among other things) trawl the rest. Content remains on the system for three to five days and the metadata (information about who is calling whom, when, where and for how long) is stored for 30 days – though analysts can store “interesting” material in another database for up to five years.

The NSA has said that it “touches” 1.6% of internet traffic, and that analysts “look at” 0.00004%. Although the agencies have not earned a reputation for complete openness in recent months, these figures look realistic: if 2 billion records a day are “collected”, even the mere 32 million (1.6%) that are “touched” far exceed what can realistically be analysed. That only around 800 (0.00004%) would be “looked at” becomes plausible once one accepts that analysis ultimately requires a human being to decide what each communication means, however clever the software in use.
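As a sanity check, the NSA’s published percentages can be applied to the 2-billion-records-a-day figure cited above. This is a back-of-envelope sketch, not an official methodology; the percentages may in fact refer to global internet traffic rather than TEMPORA’s intake:

```python
# Illustrative arithmetic only: applying the NSA's stated percentages
# (1.6% "touched", 0.00004% "looked at") to TEMPORA's ~2 billion
# records a day, as the article does.

RECORDS_PER_DAY = 2_000_000_000   # upper end of GCHQ's reported intake
TOUCHED_PCT = 1.6                 # share of traffic the NSA "touches"
LOOKED_AT_PCT = 0.00004           # share that analysts "look at"

touched = RECORDS_PER_DAY * TOUCHED_PCT / 100      # 32 million records
looked_at = RECORDS_PER_DAY * LOOKED_AT_PCT / 100  # about 800 records

print(f"'touched' per day:   {touched:,.0f}")
print(f"'looked at' per day: {looked_at:,.0f}")
```

The calculation makes the gap concrete: tens of millions of records pass the automated filters each day, but only a few hundred could plausibly reach a human analyst.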

In assessing whether this scale of collection is proportionate to the threat and effective in countering it, we face the apparently irresolvable problem that evidence (rather than official assertion) of impact is impossible to obtain. On one hand, while the potential for repressive invasion of privacy is vast, evidence of its actual use barely exists outside of authoritarian regimes. On the other hand, evidence that it has prevented serious attacks is also slim: some attacks are prevented, but precisely how many (and how) is kept very secret.

So, what is to be done? Obama’s recent attempt to reassure the US public that their privacy concerns will be taken more seriously may just be rhetoric. It is inconceivable that US or UK governments will legislate to outlaw bulk collection. The best we can probably hope for is some improvement in the currently inadequate system for control and oversight of current intelligence practices – but this is not a part of the ISC inquiry’s remit.

Unless there is a serious examination of the need for public education in what governments do to gather intelligence and why they do it, public suspicion will persist. As things stand, any enlightenment is more likely to emerge from the non-governmental inquiry into internet governance just announced at Davos.
