MI5 has been pulled up in court over storing mass data obtained by surveillance and hacking in a systematic invasion of privacy described as “undoubtedly unlawful” by the Investigatory Powers Commissioner. The disclosures about Britain’s security agency came to light in mid-June during an ongoing case in the high court brought by the campaign group Liberty, which is challenging the architecture of the UK’s surveillance regime.
The revelations come in the wake of other recent high-profile cases regarding privacy and surveillance that campaigners hope could set precedents for the legal and technical powers of government and law enforcement.
In a major victory in May, the charity Privacy International won a five-year battle against the secrecy of the Investigatory Powers Tribunal, which oversees surveillance activities by the security services and other agencies. The tribunal was previously able to make decisions behind closed doors, meaning that people alleging misconduct or victimisation by the security services received only limited information. The UK supreme court ruled that the tribunal should no longer be exempt from review in UK courts, making its decisions open to public scrutiny.
The ruling ensures that no UK government can ignore the rule of law and the role of the courts. It should also make it more difficult for mass surveillance to be signed off without proper oversight. The case sets a precedent for enforcing better built-in protections for the public against blanket privacy invasions by their own government. It also helps people object to specific instances of discrimination and harm caused by surveillance, making it easier to bring cases, such as the MI5 data storage failures, into the public eye.
Facial recognition challenge
The other case now making its way through the courts, also brought by Liberty, concerns facial recognition technologies. Liberty supported a man called Ed Bridges who brought a case against South Wales Police. He argues that the way the police are testing facial recognition in public places causes harm and breaches privacy rights. This echoes a recent example of police pressuring passersby into facial recognition trials, and harassing or fining anyone who refused.
The outcome of the facial recognition case, expected later in 2019, will set a precedent for how new surveillance technologies are tested and introduced. During the trial, the police said that facial recognition “potentially has great utility”. Evidence, however, shows an overwhelming rate of false positives – including 2,000 people wrongly identified as criminals at a football match. There are also continued concerns over racial bias that are yet to be addressed.
These cases come against a backdrop of increased surveillance powers. The main enabler of this is the Investigatory Powers Act 2016, which formalised existing capabilities of the security services such as phone tapping or collecting bulk communications data. The government tried to spin the act as legislation designed to make organisations such as GCHQ more accountable. But it also made surveillance powers available to other agencies including various police and defence departments, health services, the tax office and many other government departments.
Even if we expect the security services to spy on us, we are less likely to approve of, say, the Health and Safety Executive invading our privacy by accessing our internet records without a warrant. And even if we allow brief invasions of privacy to combat security threats, there still needs to be clear regulation and oversight.
Pushing for accountability
But while privacy advocacy groups are making some progress in increasing the accountability of surveillance by UK law enforcement, as the facial recognition case shows, the issues are far from resolved. Preserving privacy, and ensuring people know when it is breached, will be an ongoing process. The legal precedents set by these court cases will be crucial, as they could pave the way for more challenges in the future.
Similar debates surrounding the accountability of surveillance are raging in the US. San Francisco has banned the use of facial recognition by city agencies, and the US Congress is also addressing the unchecked use of the technology. Even Microsoft has now deleted the largest database of faces used for training facial recognition systems. But the fact that the faces had already been used by companies in the US, China and elsewhere shows the risk of delaying action.
These debates highlight the importance of collective efforts to assert respect for privacy and other rights as a core part of public life. We are on the cusp of a positive shift in power towards open public debate and accountability about data and the way it is used against us.
Further transparency could help to counter the risks of combining existing surveillance systems – for example, if mass facial recognition and large scale phone tapping were used together unchecked, we could easily find ourselves in a total surveillance state. The current momentum could set positive precedents that could be built upon to protect privacy and prevent surveillance without accountability.