Ethics for healthcare data is obsessed with risk – not public benefits

How many times a year do we tick a website or phone app’s box saying “read and approved” – without having read the Terms of Service at all? While a user’s tick of the box is sufficient to allow businesses offering web services and smartphone apps to use “anonymised” customer data for their own purposes, the same doesn’t apply to most health research.

Consider the difference between creating that tick-box by cutting and pasting a standard legal disclaimer and writing a 40-page research ethics submission that undergoes a dozen rigorous revisions. Ethics has a bad image among many scientists and, for some, it conjures images of finger-wagging and obstacles to research projects.

Health researchers working with human participants – or their identifiable information – need to jump through lots of ethical and bureaucratic hoops. The underlying rationale is that health research poses particularly high risks to people, and that these risks need to be minimised. But does the same rationale apply to non-invasive research using digital health data? Setting aside physically invasive research, which absolutely should maintain the most stringent of safeguards, is data-based health research really riskier than other research that analyses people’s information?

Many corporations can use data from their customers for a wide range of purposes without needing research ethics approval, because their users have already “agreed” to this (by ticking a box), or because the activity itself isn’t classed as health research. But is the assumption that this is less risky justified?

Facebook and Google hold voluminous and fine-grained datasets on people. They analyse pictures and text posted by users. But they also study behavioural information, such as whether or not users “like” something or support political causes. They do this to profile users and discern new patterns connecting previously unconnected traits and behaviours. These findings are used for marketing; but they also contribute to knowledge about human behaviour.

Unintended consequences

One of us recently applied to use individual-level data that had been collected by a personalised health company, where paying consumers had consented online for their information to be used for research (without any requirement for research ethics approval). But in order to use exactly the same anonymised data, academic health researchers had to apply for ethical approval first – a process that took six months of paperwork. Despite all the time and effort it took to obtain, the approval was never used, because of an extremely costly stipulation that participants had to reconsent.

Another example involved a UK national bioresource of 100,000 blood samples collected for NHS research, whose name and purpose were slightly changed when it became a biobank. The research ethics committee decided that every participant had to provide their consent again, or else their DNA and blood samples couldn’t be used. In addition to the cost to the taxpayer, the decision is expected to result in the destruction of about 30,000 samples – some from the tiny number of people who wouldn’t want their samples to be used, but the vast majority from people who couldn’t care less.

According to empirical research, many people are happy for their data and samples to be used for health research – if it creates public benefits. How many of those who failed to reconsent would actually have agreed to the destruction of their samples? And in whose interest is this?

The institutionalisation of medical and research ethics has created a plethora of bodies and local governance groups, with increasingly onerous conditions for research involving human participants or access to potentially identifiable personal information. Many jurisdictions, such as the US, understand research as systematic investigations designed to contribute to “generalisable knowledge”.

Power imbalance

This means that most socially valuable health research carried out at universities and other reputable institutions requires research ethics approval, which is a significant obstacle. Corporations don’t face the same scrutiny – ironically, because they’re not seeking to spread knowledge.

The EU’s General Data Protection Regulation, which comes fully into effect in May, will make some improvements by giving citizens more control over the use of their data – but it will not really change this imbalance.

In the UK, the 2016 Data Security Review by Dame Fiona Caldicott, the National Data Guardian for Health and Care, made a number of very important suggestions to improve data security, increase people’s support for valuable health research and give patients more meaningful control over their data. Last year, the government approved all of the recommendations. It’s a step in the right direction, but it fails to address key structural problems that health research faces in the digital era.

Research ethics committees have been key to protecting patients from greedy drug companies and invasive experimental research. But obstructing publicly valuable research or destroying samples was never part of their mission. Ethicists aren’t to blame. As a society, we have allowed the idea of risk management to take on a life of its own. Rather than us managing risk, risk is now managing us – and a state of what we call “uber-ethics” has emerged. What’s worse, the current frameworks for research ethics are unable to deal with one of the biggest ethical challenges of the era of digital health: the power inequalities between the corporations that use data and those whose data are used – patients and citizens.

To address the quasi-monopolistic status of commercial corporations using personal data, ethics is more important than ever. But ethics must become political again – a project that supports all of us in systematically considering how specific policies, institutions, technologies and practices affect the distribution of burdens and benefits within and across societies.

How do we get there? For academics, reminding research ethics committees of the importance of facilitating socially valuable research would help. But we also need policy changes that prioritise public benefits, especially where risk is minimal. Regulators should pay more attention to whether or not a given data use has value for society. If it does, it should receive public support and be freed from many of the onerous bureaucratic requirements currently in place. Research with no societal value beyond lining the pockets of shareholders should still be allowed to proceed, but with stricter safeguards. In addition, mechanisms must be put in place to ensure that some of the profits made with people’s data flow back into the public domain.
