Facebook’s been playing with your emotions. Flickr/Paul Walsh, CC BY-NC-SA

Consent and ethics in Facebook’s emotional manipulation study

Significant concerns have been raised about the ethics of research carried out by Facebook, after it was revealed that the company manipulated the news feeds of hundreds of thousands of users.

In 2012 the social media giant conducted a study on 689,003 users, without their knowledge, to see how their own posting behaviour changed when it systematically removed either some positive or some negative posts by others from their news feeds over a single week.
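To make the design concrete, here is a minimal sketch (in Python) of the kind of probabilistic post-withholding the paper describes. Everything in it – the word lists, function names and probability – is an illustrative assumption rather than Facebook’s actual system; the published study classified posts using the LIWC text-analysis software.

```python
import random

# Illustrative sketch only – not Facebook's code. The real study used the
# LIWC tool to classify posts; this toy word-list classifier and the
# function names here are assumptions made for the example.

POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}


def is_emotional(text, vocab):
    """Crude stand-in for a sentiment classifier: any word-list hit counts."""
    return any(word in vocab for word in text.lower().split())


def filter_feed(posts, condition, omit_prob, rng=random):
    """Return a news feed with some emotional posts silently withheld.

    condition: "reduce_positive" or "reduce_negative"
    omit_prob: per-post chance that a matching post is dropped for this user
    """
    vocab = POSITIVE_WORDS if condition == "reduce_positive" else NEGATIVE_WORDS
    shown = []
    for post in posts:
        if is_emotional(post, vocab) and rng.random() < omit_prob:
            continue  # withheld: the user never knows the post existed
        shown.append(post)
    return shown


feed = ["What a wonderful day!", "Feeling sad today.", "Lunch was fine."]
print(filter_feed(feed, condition="reduce_positive", omit_prob=0.5))
```

The point to notice is that the manipulation happens silently at the feed level: the omitted posts still exist, they are simply never shown to that user.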

At first Facebook’s representatives seemed quite blasé about the anger over the study, and saw it primarily as an issue of data privacy, which they considered had been well handled.

“There is no unnecessary collection of people’s data in connection with these research initiatives and all data is stored securely,” a Facebook spokesperson said.

In the paper, published in the Proceedings of the National Academy of Sciences, the authors say they had “informed consent” to carry out the research as it was consistent with Facebook’s Data Use Policy, which all users agree to when creating an account.

One of the authors has this week defended the study process, although he did apologise for any upset it caused, saying: “In hindsight, the research benefits of the paper may not have justified all of this anxiety.”

Why all the outrage?

Filtering the news feed. Flickr/Mixy Lorenzo, CC BY

So why are Facebook, the researchers and those raising concerns in academia and the news media so far apart in their opinions?

Is this just standard questionable corporate ethics in practice or is there a significant ethical issue here?

I think the source of the disagreement is really about the consent (or lack thereof) in the study, so I will disentangle what the concerns about consent are and why they matter.

There are two main things that would normally be taken as needing consent in this study:

  1. accessing the data
  2. manipulating the news feed.

Accessing the data

This is what the researchers and Facebook focused on. They claimed that agreeing to Facebook’s Data Use Policy when you sign up constitutes informed consent. Let’s examine that claim.

We use the information we receive about you […] for internal operations, including troubleshooting, data analysis, testing, research and service improvement.

It’s worth noting that this in no way constitutes informed consent: it’s unlikely that all users have read the policy thoroughly, and while it informs you that your data may be used, it doesn’t tell you how it will be used.

But given that the data was provided to the researchers in an appropriately anonymised format, the data is no longer personal, and hence this mere consent is probably sufficient.

It’s similar to practices in other areas, such as health practice audits, which are conducted with similar mere consent.

So insofar as Facebook and the researchers are focusing on data privacy, they are right. There is nothing significant to be concerned about here, barring the misdescription of the process as “informed consent”.

Manipulating the news feed

This was not a merely observational study but instead contained an intervention – manipulating the content of users’ news feeds.

Is that the real news feed? Flickr/Steven Mileham, CC BY-NC

Informed consent is likewise lacking for this intervention, placing it squarely in the realm of interventional research without consent.

This is not to say it is necessarily unethical, since we sometimes permit such research on the grounds that worthwhile research aims cannot be achieved any other way.

Nonetheless there are a number of standards that research without consent is expected to meet before it can proceed:

1. Lack of consent must be necessary for the research

Could this research be done another way? It could be argued that it could have been done in a purely observational fashion, by simply picking out users whose news feeds were naturally more positive or negative.

Others might say that this would introduce confounding factors, reducing the validity of the study.

Let’s accept that it would have been challenging to do this any other way.

2. Must be no more than minimal risk

It’s difficult to know what risk the study posed – judging by the relatively small effect size, probably little – but we have to be cautious about reading this off the reported data, for two reasons.

First, the data is simply what people posted to Facebook, which only indirectly measures the impact – really significant effects, such as someone committing suicide, wouldn’t be captured by it.

And second, we must look at this from the perspective of before the study was conducted, when the outcomes were not yet known.

Still, for most participants the risks were probably minimal, particularly when we take into account that their news feeds may naturally have had more or fewer negative or positive posts in any given week.

3. Must have a likely positive balance of benefits over harms

While the harms caused directly by the study were probably minimal for each individual, the sheer number of participants means that in aggregate they could be quite significant.

Likewise, given the number of participants, unlikely but highly significant bad events may have occurred, such as the negative news feed being the last straw for someone’s marriage.

This will, of course, be somewhat balanced out by the positive effects of the study for participants, which likewise aggregate.

What we further need to know is what other benefits the research may have been intended to have. This is unclear, though we know Facebook has an interest in improving its news feed, which is presumably commercially beneficial.

We probably don’t have enough information to make a judgement about whether the benefits outweigh the risks of the research and the disrespect of subjects’ autonomy that it entails. I admit to being doubtful.

4. Debriefing and opportunity to opt out

Typically, in this sort of research there ought to be a debriefing once the research is complete, explaining what has been done and why, and giving the participants an option to opt out.

This clearly wasn’t done. While skipping a debrief is sometimes justified on the grounds that it would be difficult, in this case Facebook itself would seem to have the ideal social media platform for doing exactly that.

The rights and wrongs

So Facebook and the researchers were right to think that, in regard to data access, the study was ethically robust. But the academics and news media raising concerns about the study are also correct – there are significant ethical failings here regarding our norms of interventional research without consent.

Facebook claims in its Data Use Policy: “Your trust is important to us.”

If that is the case, Facebook needs to recognise the faults in how it conducted this study, and I’d strongly recommend that it seek advice from ethicists on how to make its approval processes more robust for future research.
